• Title/Summary/Keyword: profile method


Partial transmission block production for real efficient method of block and MLC (Partial transmission block 제작 시 real block과 MLC를 이용한 방법 중 효율적인 방법에 대한 고찰)

  • Choi JiMin;Park JuYoung;Ju SangGyu;Ahn JongHo
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.16 no.2
    • /
    • pp.19-24
    • /
    • 2004
  • Introduction : In the treatment of vaginal, urethral, vulvar, and anal cancers, a high dose to the femoral head must be avoided, and supplementary treatment of the inguinal lymph nodes (LN) is necessary. Supplementary inguinal LN treatment can be delivered either with a partial transmission block or with the MLC; this study compares and analyzes the two methods. Materials & Methods : For patients treated for inguinal LN with a partial transmission block and with the MLC, a solid water phantom was used to reproduce the same depths as in the patient. To analyze the error at the junction, EDR2 (Extended Dose Range, Kodak, USA) film was used, and beam profiles were obtained with a film scanner. The partial transmission block and the MLC were compared and analyzed for fabrication characteristics, accuracy, stability, and preparation time. Result : Compared with the MLC, the partial transmission block is difficult to fabricate, and fabrication takes more than one hour. Correcting the junction error with a custom block is also difficult. With the MLC, no fabrication is required; only periodic calibration of the MLC is needed, so it can be used easily. Conclusion : For inguinal LN treatment, both the partial transmission block and the MLC have their own advantages, but the partial transmission block requires a long fabrication time, and junction correction and block handling are difficult. If these problems are transferred to the MLC for treatment, more effective treatment should be possible.


Statics corrections for shallow seismic refraction data (천부 굴절법 탄성파 탐사 자료의 정보정)

  • Palmer Derecke;Nikrouz Ramin;Spyrou Andreur
    • Geophysics and Geophysical Exploration
    • /
    • v.8 no.1
    • /
    • pp.7-17
    • /
    • 2005
  • The determination of seismic velocities in refractors for near-surface seismic refraction investigations is an ill-posed problem. Small variations in the computed time parameters can result in quite large lateral variations in the derived velocities, which are often artefacts of the inversion algorithms. Such artefacts are usually not recognized or corrected with forward modelling. Therefore, if detailed refractor models are sought with model based inversion, then detailed starting models are required. The usual source of artefacts in seismic velocities is irregular refractors. Under most circumstances, the variable migration of the generalized reciprocal method (GRM) is able to accommodate irregular interfaces and generate detailed starting models of the refractor. However, where the very-near-surface environment of the Earth is also irregular, the efficacy of the GRM is reduced, and weathering corrections can be necessary. Standard methods for correcting for surface irregularities are usually not practical where the very-near-surface irregularities are of limited lateral extent. In such circumstances, the GRM smoothing statics method (SSM) is a simple and robust approach, which can facilitate more-accurate estimates of refractor velocities. The GRM SSM generates a smoothing 'statics' correction by subtracting an average of the time-depths computed with a range of XY values from the time-depths computed with a zero XY value (where the XY value is the separation between the receivers used to compute the time-depth). The time-depths to the deeper target refractors do not vary greatly with varying XY values, and therefore an average is much the same as the optimum value. However, the time-depths for the very-near-surface irregularities migrate laterally with increasing XY values and they are substantially reduced with the averaging process. 
As a result, the time-depth profile averaged over a range of XY values is effectively corrected for the near-surface irregularities. In addition, the time-depths computed with a zero XY value are the sum of both the near-surface effects and the time-depths to the target refractor. Therefore, their subtraction generates an approximate 'statics' correction, which in turn, is subtracted from the traveltimes. The GRM SSM is essentially a smoothing procedure, rather than a deterministic weathering correction approach, and it is most effective with near-surface irregularities of quite limited lateral extent. Model and case studies demonstrate that the GRM SSM substantially improves the reliability in determining detailed seismic velocities in irregular refractors.
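The subtraction the abstract describes can be sketched roughly as follows (a minimal illustration with synthetic numbers, not the authors' implementation; the function name and array layout are assumptions):

```python
import numpy as np

# Hypothetical sketch of the GRM smoothing statics method (SSM):
# time_depths[i, j] holds the GRM time-depth at receiver j computed
# with XY separation xy_values[i], where row 0 is the zero-XY case.

def grm_ssm_statics(time_depths, traveltimes):
    """Return smoothing-statics-corrected traveltimes.

    time_depths : (n_xy, n_receivers) array; row 0 is the zero-XY case.
    traveltimes : (n_receivers,) array of refraction traveltimes.
    """
    zero_xy = time_depths[0]                 # near-surface effects + target refractor
    averaged = time_depths[1:].mean(axis=0)  # averaging smooths out near-surface effects
    statics = zero_xy - averaged             # approximate 'statics' correction
    return traveltimes - statics

# toy example: a 10 ms near-surface anomaly at receiver 2 that only
# the zero-XY time-depths retain
td = np.array([[30.0, 30.0, 40.0, 30.0],
               [30.0, 30.0, 31.0, 30.0],
               [30.0, 30.0, 29.0, 30.0]])
tt = np.array([100.0, 100.0, 110.0, 100.0])
print(grm_ssm_statics(td, tt))  # anomaly at receiver 2 is removed
```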

SKU recommender system for retail stores that carry identical brands using collaborative filtering and hybrid filtering (협업 필터링 및 하이브리드 필터링을 이용한 동종 브랜드 판매 매장간(間) 취급 SKU 추천 시스템)

  • Joe, Denis Yongmin;Nam, Kihwan
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.77-110
    • /
    • 2017
  • Recently, consumption patterns have rapidly diversified and become individualized through the web and Internet-based mobile devices. As this happens, the efficient operation of offline stores, the traditional distribution channel, has become more important. To raise both the sales and the profits of stores, stores need to supply and sell the products most attractive to consumers in a timely manner. However, there is little research on which SKUs, out of many products, can increase sales probability and reduce inventory costs. In particular, for a company that sells products through many stores across multiple locations, recommending SKUs that appeal to each store's customers would help increase sales and profitability. In this study, recommender-system techniques that have been used for personalized recommendation, such as collaborative filtering and hybrid filtering, are applied to SKU recommendation at the store level for a distribution company that handles a single brand through many stores across countries and regions. We calculated the similarity between stores using the purchase data for the items each store handles, performed collaborative filtering according to each store's sales history for each SKU, and finally recommended individual SKUs to each store. In addition, the stores were classified into four clusters through principal component analysis (PCA) and cluster analysis using store profile data, and a hybrid-filtering recommender was implemented by applying collaborative filtering within each cluster; the performance of both methods was measured on actual sales data. Most existing recommender systems have been studied for recommending items such as movies and music to individual users, and industrial applications have also become popular.
Meanwhile, there has been little research on applying these recommender systems, which have mainly been studied for personalization services, to the store units of distributors handling the same brand. Whereas existing recommendation methodology has targeted the individual domain, this study expands the scope beyond individuals to the store units of a distribution company handling the same brand's SKUs through many stores across countries and regions. Moreover, whereas existing recommender systems have largely been limited to online settings, this study applies data-mining techniques to develop an algorithm suited to the offline store domain. The significance of the results is that a personalization recommendation algorithm is applied to multiple stores handling the same brand, a meaningful result is derived, and a concrete methodology that real companies can build and operate as a system is proposed. It is also the first attempt to extend the research area of recommender systems, which has focused on the personalization domain, to the sales stores of a company handling the same brand. Using the 2014 sales data, the top 100 SKUs by store sales volume were narrowed to 52 SKUs recommended by collaborative filtering and by the hybrid filtering method, and the sales results were totaled to compare the performance of the two recommendation methods.
The two methods were compared because this study defines offline collaborative filtering as the reference model, against which the hybrid filtering method, a model that reflects the characteristics of offline stores, is expected to demonstrate higher performance. Compared with the reference model, the proposed hybrid method showed higher performance than the existing recommendation method, as demonstrated using actual sales data from a large Korean apparel company. In this study, we thus propose an efficient way to extend recommender systems from the individual level to the group level, which is of value both theoretically and practically.
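The store-level collaborative filtering step described above can be illustrated with a toy sketch (made-up sales numbers, not the paper's system; the function name, the choice of cosine similarity, and the neighborhood size are assumptions):

```python
import numpy as np

# Toy sketch of store-level collaborative filtering: rows are stores,
# columns are SKUs, values are sales counts. A store is recommended SKUs
# it does not yet carry, scored by the sales of its most similar stores.

def recommend_skus(sales, store, k=1, n_rec=1):
    """Recommend n_rec SKUs for `store` from its k nearest stores
    by cosine similarity over the sales matrix."""
    norms = np.linalg.norm(sales, axis=1, keepdims=True)
    unit = sales / np.where(norms == 0, 1, norms)
    sims = unit @ unit[store]            # cosine similarity to each store
    sims[store] = -np.inf                # exclude the store itself
    neighbors = np.argsort(sims)[-k:]    # k most similar stores
    scores = sales[neighbors].sum(axis=0)
    scores[sales[store] > 0] = -np.inf   # only SKUs the store does not carry
    return list(np.argsort(scores)[-n_rec:][::-1])

sales = np.array([[5.0, 0.0, 3.0, 0.0],
                  [4.0, 1.0, 3.0, 2.0],
                  [0.0, 9.0, 0.0, 7.0]])
# store 0 resembles store 1, so it is recommended the best-selling SKU
# that store 1 carries and store 0 lacks
print(recommend_skus(sales, store=0))
```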

Scalable Collaborative Filtering Technique based on Adaptive Clustering (적응형 군집화 기반 확장 용이한 협업 필터링 기법)

  • Lee, O-Joun;Hong, Min-Sung;Lee, Won-Jin;Lee, Jae-Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.73-92
    • /
    • 2014
  • An Adaptive Clustering-based Collaborative Filtering Technique is proposed to solve the fundamental problems of collaborative filtering: the cold-start, scalability, and data-sparsity problems. Previous collaborative filtering techniques make recommendations based on a user's predicted preference for a particular item, computed from a similar-item subset and a similar-user subset that are composed from users' preferences for items. For this reason, if the density of the user-preference matrix is low, the reliability of the recommendation system decreases rapidly, and composing the similar-item and similar-user subsets becomes more difficult. In addition, as the scale of the service increases, the time needed to create these subsets grows geometrically, and the response time of the recommendation system increases with it. To solve these problems, this paper proposes a collaborative filtering technique that actively adapts conditions to the model and adopts concepts from context-based filtering. The technique consists of four major methodologies. First, the items and the users are clustered according to their feature vectors, and an inter-cluster preference between each item cluster and user cluster is then estimated. With this approach, the run time for creating a similar-item or similar-user subset can be reduced, the reliability of the recommendation system can be made higher than when only user-preference information is used, and the cold-start problem can be partially solved. Second, recommendations are made using the previously composed item and user clusters and the inter-cluster preference between each item cluster and user cluster.
In this phase, an item list is built for each user by examining the item clusters in descending order of the inter-cluster preference of the user cluster to which the user belongs, and by selecting and ranking items according to predicted or recorded preference information. With this method, the model-creation phase bears the heaviest load of the recommendation system, minimizing the load at run time; the scalability problem is thus mitigated, and a large-scale recommendation system can run collaborative filtering with high reliability. Third, missing user-preference information is predicted using the item and user clusters, which mitigates the problem caused by the low density of the user-preference matrix. Existing studies used either item-based or user-based prediction; this paper improves on Hao Ji's idea of using both, combining the predictive values of the two techniques under the conditions of the recommendation model to improve the reliability of the service. By predicting user preference from the item or user clusters, the time required for prediction can be reduced, and missing preferences can be predicted at run time. Fourth, the item and user feature vectors learn from subsequent user feedback: normalized feedback is applied to the vectors. This mitigates the problems that come with context-based filtering concepts, such as feature vectors built from user profiles and item properties, which stem from the difficulty of quantifying the qualitative features of items and users.
Therefore, the elements of the user and item feature vectors are matched one to one, and when user feedback on a particular item is obtained, it is applied to the opposite feature vector. The method was verified by comparing its performance with existing hybrid filtering techniques using two measures: MAE (Mean Absolute Error) and response time. By MAE, the technique was confirmed to improve the reliability of the recommendation system; by response time, it was found suitable for a large-scale recommendation system. This paper thus proposes an Adaptive Clustering-based Collaborative Filtering Technique with high reliability and low time complexity, but it has limitations: because the technique focuses on reducing time complexity, an improvement in reliability was not expected. Future work will improve this technique with rule-based filtering.
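The cluster-based prediction of a missing preference described above can be sketched as follows (an illustrative toy, not the paper's algorithm: cluster labels are given rather than learned from feature vectors, and the missing value is estimated as the mean observed rating between the two clusters):

```python
import numpy as np

# Toy sketch of inter-cluster preference prediction: a missing entry
# R[u, i] (NaN) is estimated from the average observed rating that
# u's user cluster gave to i's item cluster.

def intercluster_predict(R, user_cl, item_cl, u, i):
    """Predict R[u, i] from the mean rating between u's and i's clusters."""
    mask = (user_cl == user_cl[u])[:, None] & (item_cl == item_cl[i])[None, :]
    vals = R[mask]
    vals = vals[~np.isnan(vals)]          # drop unobserved entries
    return vals.mean() if vals.size else np.nan

R = np.array([[5.0,    4.0, np.nan],
              [np.nan, 5.0, 1.0],
              [1.0,    2.0, 5.0]])
user_cl = np.array([0, 0, 1])   # users 0 and 1 behave alike
item_cl = np.array([0, 0, 1])   # items 0 and 1 are similar
print(intercluster_predict(R, user_cl, item_cl, u=0, i=2))  # → 1.0
```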

Accuracy Analysis of ADCP Stationary Discharge Measurement for Unmeasured Regions (ADCP 정지법 측정 시 미계측 영역의 유량 산정 정확도 분석)

  • Kim, Jongmin;Kim, Seojun;Son, Geunsoo;Kim, Dongsu
    • Journal of Korea Water Resources Association
    • /
    • v.48 no.7
    • /
    • pp.553-566
    • /
    • 2015
  • Acoustic Doppler Current Profilers (ADCPs) can capture three-dimensional velocity vectors and bathymetry concurrently in a highly efficient and rapid manner, enabling them to document hydrodynamic and morphologic data at higher spatial and temporal resolution than other contemporary instruments. However, ADCPs inevitably leave unmeasured regions near the bottom, the surface, and the edges of a given cross-section. The velocities in those unmeasured regions are usually extrapolated or assumed when calculating flow discharge, which directly affects the accuracy of the discharge assessment. This study scrutinized a conventional extrapolation method (the 1/6 power law) for estimating the unmeasured regions, in order to quantify the accuracy of ADCP discharge measurements. For the comparative analysis, we collected spatially dense velocity data using an ADV as well as a stationary ADCP in a real-scale straight river channel, and tested the applicability of the 1/6 power law alongside the logarithmic law, another representative velocity law. The logarithmic law fitted the actual velocity measurements better than the 1/6 power law. In particular, the 1/6 power law tended to underestimate the velocity near the surface and to overestimate it near the bottom. This indicates that the 1/6 power law may fail to follow the actual flow regime, so the resulting discharge estimates in the unmeasured top and bottom regions can be biased. Therefore, the logarithmic law should be considered as an alternative, especially for stationary ADCP discharge measurement. In addition, it was found that the ADCP should be operated in water at least 0.6 m deep at the left and right edges to better estimate the edge discharges.
In the future, a similar comparative analysis may be required for the moving-boat ADCP discharge measurement method, which is more widely used in the field.
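The two extrapolation laws compared above can be written out as follows (illustrative parameter values, not the paper's data; the normalization of the power law assumes the standard depth-averaged form):

```python
import numpy as np

# The two velocity-profile laws compared in the study: z is height above
# the bed, H the flow depth, u_mean the depth-averaged velocity.

def power_law(z, H, u_mean, exponent=1 / 6):
    """1/6 power law: u(z) = u_mean * (1 + m) * (z/H)**m, so that the
    depth average of u(z) over 0..H equals u_mean."""
    return u_mean * (1 + exponent) * (z / H) ** exponent

def log_law(z, u_star, z0):
    """Logarithmic law: u(z) = (u_star / kappa) * ln(z / z0)."""
    kappa = 0.41  # von Karman constant
    return (u_star / kappa) * np.log(z / z0)

z = np.linspace(0.05, 0.95, 5)          # heights above the bed (m)
print(power_law(z, H=1.0, u_mean=1.0))  # surface value approaches (7/6)*u_mean
print(log_law(z, u_star=0.05, z0=0.001))
```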

Quality of Life in Chungcheong area University Students according to their Sensory Processing Intervention (충청권 대학생의 감각처리 중재 후 삶의 질)

  • Lee, Ji-Hyun;Lee, Tae-Yong;Kim, Young-Ran
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.17 no.6
    • /
    • pp.81-88
    • /
    • 2016
  • This research investigated sensory processing ability, selected subjects with sensory processing problems, and divided them into an experimental group and a control group. The experimental group received a sensory processing intervention in activities of daily living to determine its influence on quality of life. The study was based on 230 university students with similar majors at three universities in the Chungcheong area in early May 2013; they were surveyed on quality of life, individual characteristics, and sensory processing ability. From these, 32 subjects with sensory processing problems were selected and divided into an experimental group and a control group. The experimental group received the sensory processing intervention, whereas the control group did not. After the 6-week intervention, quality of life was re-evaluated. The total quality-of-life score after intervention was 98.69 in the experimental group and 84.81 in the control group (p=0.001). The physical score was 16.43 in the experimental group versus 14.64 in the control group (p=0.008), the psychological score 14.71 versus 11.75 (p<0.001), the social score 14.67 versus 13.17 (p=0.032), and the environment score 14.66 versus 12.34 (p=0.006). The experimental group showed a significant increase in all areas of quality of life, whereas the control group did not. These results show that a sensory processing intervention in daily life can increase the quality of life of subjects with sensory processing problems.
Overall, it will be necessary to apply various sensory intervention programs as treatment for adults and so promote a better quality of life.

A Polarization-based Frequency Scanning Interferometer and the Measurement Processing Acceleration based on Parallel Programing (편광 기반 주파수 스캐닝 간섭 시스템 및 병렬 프로그래밍 기반 측정 고속화)

  • Lee, Seung Hyun;Kim, Min Young
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.8
    • /
    • pp.253-263
    • /
    • 2013
  • Frequency Scanning Interferometry (FSI), one of the most promising optical surface measurement techniques, generally achieves superior optical performance compared with other three-dimensional measuring methods because its hardware remains fixed in operation: only the light frequency is scanned over a specific spectral band, with no vertical scanning of the target surface or the objective lens. An FSI system collects a set of interference-fringe images while changing the frequency of the light source, transforms the intensity data of the acquired images into frequency information, and calculates the height profile of the target through frequency analysis based on the Fast Fourier Transform (FFT). However, FSI still suffers from optical noise on target surfaces and a relatively long processing time due to the number of images acquired in the frequency-scanning phase. To address these problems: 1) a Polarization-based Frequency Scanning Interferometry (PFSI) system is proposed for robustness to optical noise. It consists of a tunable laser as the light source, a λ/4 plate in front of the reference mirror, a λ/4 plate in front of the target object, a polarizing beam splitter (PBS), a polarizer in front of the image sensor, a polarizer in front of the fiber-coupled light source, and a λ/2 plate between the PBS and the light-source polarizer. The proposed system resolves the problem of low-contrast fringe images through the polarization technique and allows control of the light distribution between the object and reference beams. 2) A signal-processing acceleration method is proposed for PFSI based on a parallel processing architecture, consisting of parallel processing hardware and software such as a Graphics Processing Unit (GPU) and the Compute Unified Device Architecture (CUDA). As a result, the processing time reaches the tact-time level of real-time processing.
Finally, the proposed system is evaluated in terms of accuracy and processing speed through a series of experiments, and the results show the effectiveness of the proposed system and method.
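The FFT-based height recovery that FSI relies on can be illustrated with synthetic data (a sketch of the principle only, not the authors' system): at each pixel the interference intensity oscillates with optical frequency at a rate set by the optical path difference 2d, so an FFT along the frequency axis locates d.

```python
import numpy as np

c = 3.0e8                                   # speed of light (m/s)
nu = np.linspace(2.0e14, 2.1e14, 512)       # scanned optical frequencies (Hz)
d = 150e-6                                  # target surface height / half OPD (m)
intensity = 1.0 + np.cos(2 * np.pi * (2 * d / c) * nu)  # fringe signal at one pixel

# FFT over the frequency axis; the peak bin gives the delay tau = 2d/c
spectrum = np.abs(np.fft.rfft(intensity - intensity.mean()))
freqs = np.fft.rfftfreq(nu.size, d=nu[1] - nu[0])  # cycles per Hz
tau = freqs[np.argmax(spectrum)]
height = tau * c / 2                        # recovered height, ~150e-6 m
print(height)
```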

Decrease of the Activation and Carbamylation of Rubisco by High CO2 in Kidney Bean (KidneyBean에서의 고 CO2 농도에 의한 Rubisco의 Activation과 Carbamylation의 감소)

  • 노광수;김재기
    • KSBB Journal
    • /
    • v.11 no.3
    • /
    • pp.295-302
    • /
    • 1996
  • The measurement of rubisco parameters is important in photosynthetic studies. In this experiment, we used a photometric assay to measure the major parameters: activity, carbamylation, and amount of rubisco. The main advantages of this method are that it is very simple and as sensitive as conventional methods, which usually produce radioactive waste. Using kidney bean (Phaseolus vulgaris L.) leaves grown at normal CO₂ (350 ppm) and high CO₂ (650 ppm), we investigated the effect of CO₂ concentration on the activation and carbamylation of rubisco by measuring rubisco activity, carbamylation rate, and amount of rubisco with a dual-beam (334 nm and 405 nm) spectrophotometer, and analyzed the polypeptide profiles of rubisco by SDS-PAGE. When the CO₂ concentration was raised from 350 ppm to 650 ppm, all rubisco parameters decreased: from 41.2 and 52.2 μM/m²/s to 27.4 and 46.1 μM/m²/s for initial and total rubisco activity, respectively; from 79% to 58.9% for the carbamylation rate; and from 1.94 μM/m² to 1.58 μM/m² for the amount of rubisco. These results suggest that the decrease in rubisco activity at high CO₂ was caused by reduced carbamylation. SDS-PAGE analysis of the preparation showed two major polypeptides at 50 and 14.5 kD, identified as the large and small subunits of rubisco; band intensities did not differ between high and normal CO₂ for either band. We also found that these inhibitory effects of CO₂ were reversible: when high CO₂ was switched back to normal CO₂, the rubisco parameters returned almost to their normal values. These data provide evidence that rubisco activity was recovered at a CO₂ concentration of 350 ppm.


Evaluation of Image Quality Change by Truncated Region in Brain PET/CT (Brain PET에서 Truncated Region에 의한 영상의 질 평가)

  • Lee, Hong-Jae;Do, Yong-Ho;Kim, Jin-Eui
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.19 no.2
    • /
    • pp.68-73
    • /
    • 2015
  • Purpose The purpose of this study was to evaluate the change in image quality caused by the truncated region in the field of view (FOV) of the attenuation-correction CT (AC-CT) in brain PET/CT. Materials and Methods A Biograph Truepoint 40 with TrueV (Siemens) was used as the scanner. A ⁶⁸Ge phantom scan was performed with and without the brain holder using the brain PET/CT protocol. The PET attenuation correction factor (ACF) was evaluated according to whether the pallet was within the FOV of the AC-CT. FBP, OSEM-3D, and PSF methods were applied for PET reconstruction, with 4 iterations, 21 subsets, and a Gaussian 2 mm filter for the iterative methods. Window level 2900 with width 6000 and level 4200 with width 1000 were set for visual evaluation of the PET AC images. Vertical profiles of 5-slice and 20-slice summation images with a Gaussian 5 mm filter applied were produced to evaluate integral uniformity. Results Without the brain holder, the patient pallet was not covered in the FOV of the AC-CT because of the small FOV, which produced a defect in the ACF sinogram due to the truncated region. Without the brain holder, a defect appeared in the lower part of the transverse image at window level 4200 with width 1000 in the PET AC image evaluation. With and without the brain holder, the integral uniformities of the 5-slice and 20-slice summation images were 7.2% and 6.7% versus 11.7% and 6.7%, respectively. Conclusion A truncated region caused by the small FOV results in a count defect in the occipital lobe of the brain in clinical or research studies. It is necessary to understand the effect of the truncated region and to apply an appropriate accessory for brain PET/CT.
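The integral uniformity metric evaluated above can be computed as follows (a sketch with toy numbers; the abstract does not state the formula, so the common definition IU = (max − min)/(max + min) × 100% is assumed here):

```python
import numpy as np

# Assumed definition of integral uniformity over a profile of counts:
# IU = (max - min) / (max + min) * 100%

def integral_uniformity(profile):
    p = np.asarray(profile, dtype=float)
    return (p.max() - p.min()) / (p.max() + p.min()) * 100.0

# toy vertical profile of counts across a uniform phantom slice
profile = [95.0, 100.0, 102.0, 101.0, 98.0]
print(round(integral_uniformity(profile), 1))  # → 3.6
```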


Solitary Juvenile Polyps and Colonoscopic Polypectomy in Children (연소성 대장 용종의 내시경적 용종 절제술)

  • Cheon, Kyoung Whoon;Kim, Jae Young;Kim, Sung Won
    • Clinical and Experimental Pediatrics
    • /
    • v.46 no.3
    • /
    • pp.236-241
    • /
    • 2003
  • Purpose : This study was performed to characterize the clinical profile and the effectiveness of colonoscopic polypectomy in patients with solitary juvenile polyps. Methods : The study included 19 children, aged 1.8 to 11.4 years, who underwent colonoscopic polypectomy and had histologically proven solitary juvenile polyps between March 1998 and August 2002. We analyzed their detailed history, clinical manifestations, colonoscopic examination, method of anesthesia, and results of colonoscopic polypectomy. Results : The mean age of the 19 cases was 4.7±2.8 years. The male-to-female ratio was 1:1.1. Hematochezia, the main indication for colonoscopy, was present in all cases. Combined symptoms were mucoid stool or diarrhea (42%), abdominal pain (26%), constipation (11%), and anal fissure (11%). Anemia (Hb <10 g/dL) in four cases recovered spontaneously after polypectomy. No complications associated with premedication, sedation, or colonoscopy itself occurred. Bleeding developed in two cases (11%) after polypectomy; one was controlled with hemoclipping. The main site of the polyps was the rectosigmoid colon, in 15 cases (79%). The polyps ranged from 0.5 to 3.5 cm in size. The interval between symptom onset and polypectomy was 0.1 to 42 months. Conclusion : Juvenile polyps are a common cause of benign, chronic, recurrent rectal bleeding. Colonoscopic polypectomy is a simple, safe, and effective therapeutic method, so earlier colonoscopy might avoid ineffective treatment and prevent untoward problems such as parental anxiety and anemia.