• Title/Summary/Keyword: System Parameter

Search Results: 6,819

Analysis of the major factors of influence on the conditions of the Intensity Modulated Radiation Therapy planning optimization in Head and Neck (두경부 세기견조방사선치료계획 최적화 조건에서 주요 인자들의 영향 분석)

  • Kim, Dae Sup;Lee, Woo Seok;Yoon, In Ha;Back, Geum Mun
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.26 no.1
    • /
    • pp.11-19
    • /
    • 2014
  • Purpose: To derive the most appropriate factors by examining the effects of the major factors applied to the optimization algorithm, thereby aiding the effective design of an ideal treatment plan. Materials and Methods: The Eclipse treatment planning system (Eclipse 10.0, Varian, USA) was used in this study. The PBC (Pencil Beam Convolution) algorithm was used for dose calculation, and the DVO (Dose Volume Optimizer 10.0.28) algorithm for intensity-modulated optimization. The experimental group consisted of patients receiving intensity-modulated radiation therapy for head and neck cancer; doses of 2.2 Gy and 2.0 Gy were prescribed simultaneously to two planning target volumes. Treatment planning used inverse dose calculation with a 6 MV beam and 7 fields. The optimization parameters of the established plan were the volume dose-priority (constraint) and the dose fluence smoothing value, and the impact on the treatment plan was analyzed as each factor was varied. For the volume dose-priority, reference conditions were determined and the optimization was carried out under conditions using the same ratio but different absolute values; the normal organs surrounding the treatment volume were evaluated as the absolute values of the volume dose-priority changed. The dose fluence smoothing value was applied both by simply changing the reference condition (absolute value) and by changing it together with the related volume dose-priority. Treatment plans were evaluated using the Conformity Index (CI), Paddick's Conformity Index (PCI), Homogeneity Index (HI), and the average dose of each organ. Results: When the absolute values of the volume dose-priority were changed in direct proportion, the CI values differed, but PCI was 1.299±0.006, HI was 1.095±0.004, and D5%/D95% was 1.090±1.011; the impact on the prescribed dose was similar. The average dose of the parotid gland decreased to 67.4, 50.3, 51.2, and 47.1 Gy as the absolute value of the volume dose-priority increased to 40, 60, 70, and 90. When the dose smoothing strength of each treatment plan was increased, the PCI value increased to 1.338±0.006. Conclusion: The optimization algorithm was influenced more by the ratio between conditions than by the absolute values of the volume dose-priority. If the same ratio was maintained, a similar treatment plan resulted even when the absolute values differed. The volume dose-priority of the treatment volume should be more than 50% of the normal-organ volume dose-priority to achieve a successful treatment plan, and the dose fluence smoothing value should increase or decrease in proportion to the volume dose-priority. The volume dose-priority alone cannot satisfy the conditions when only absolute values are applied.
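
The plan evaluation above rests on three scalar indices. A minimal sketch of how CI, Paddick's CI, and the D5%/D95% homogeneity ratio can be computed from a dose grid and a PTV mask; array names and the exact index definitions used by the authors are assumptions:

    import numpy as np

    def plan_indices(dose, target_mask, rx_dose):
        """dose: 3-D dose grid [Gy]; target_mask: boolean PTV mask; rx_dose: prescription [Gy]."""
        piv = dose >= rx_dose                      # prescription isodose volume (PIV)
        tv = target_mask.sum()                     # target volume, in voxels
        tv_piv = (piv & target_mask).sum()         # target covered by the prescription isodose
        ci = piv.sum() / tv                        # conformity index, PIV / TV
        pci = tv_piv**2 / (tv * piv.sum())         # Paddick's conformity index
        d = dose[target_mask]
        d5, d95 = np.percentile(d, [95, 5])        # D5% = near-max dose, D95% = near-min dose
        return ci, pci, d5 / d95                   # d5/d95 is the homogeneity ratio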

Negative Support Reactions of the Single Span Twin-Steel Box Girder Curved Bridges with Skew Angles (단경간 2련 강박스 거더 곡선교의 사각에 따른 부반력 특성)

  • Park, Chang Min;Lee, Hyung Joon
    • Journal of the Korea Institute for Structural Maintenance and Inspection
    • /
    • v.16 no.4
    • /
    • pp.34-43
    • /
    • 2012
  • The behavior of curved bridges constructed on ramps or at interchanges is far more complicated than that of straight (orthogonal) bridges, varying with the radius of curvature, skew angle, and spacing of shoes (bearings); on occasion, girder uplift and negative reactions can occur due to bending and torsional moments. In this study, the effects of design variables such as radius of curvature, skew angle, and spacing of shoes on negative reactions in curved bridges were investigated. A single-span twin-steel box girder curved bridge applicable to ramp bridges, with a span length (L) of 50.0 m and a width of 9.0 m, was chosen, and the structural analysis to calculate the reactions was conducted using a 3-dimensional equivalent grillage system. Because the negative reaction in a curved bridge depends on the plan geometry of the bridge, the structural system, and the boundary conditions of the bearings, the radius of curvature, skew angle, and spacing of shoes were chosen as parameters, and load combinations according to the design standard were considered. According to the numerical results, the negative reaction in the curved bridge increased as the radius of curvature, skew angle, and spacing of shoes each decreased. For a skew angle of 60°, a negative reaction always occurred regardless of θ/B; for 75°, no negative reaction occurred for θ/B below 0.27 at a radius of curvature of 180 m or below 0.32 at 250 m; and for 90°, no negative reaction occurred for radii of curvature over 180 m, or for θ/B below 0.38 at 130 m. These results indicate that the occurrence of negative reactions is governed by design variables such as the radius of curvature, skew angle, and spacing of shoes, and that stability problems, including negative reactions, can be expected to be resolved by considering proper combinations of these design variables in curved bridge design.
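
The reported thresholds can be collected into a simple screening function. This is only a restatement of the results above for the studied cases, not a general design check; behavior outside the reported ranges is an assumption:

    def negative_reaction_expected(skew_deg, radius_m, theta_over_b):
        """Screen for uplift (negative reaction) using only the thresholds reported above."""
        if skew_deg == 60:
            return True                            # occurred regardless of theta/B
        if skew_deg == 75:
            if radius_m >= 250:
                return theta_over_b >= 0.32        # none below 0.32 at R = 250 m
            if radius_m >= 180:
                return theta_over_b >= 0.27        # none below 0.27 at R = 180 m
            return True                            # tighter curves: assume uplift (not reported)
        if skew_deg == 90:
            if radius_m > 180:
                return False                       # none for R over 180 m
            if radius_m >= 130:
                return theta_over_b >= 0.38        # none below 0.38 at R = 130 m
            return True                            # assumption: not reported
        raise ValueError("only skew angles 60, 75, 90 deg were studied")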

Optimization of Microbial Production of Ethanol from Carbon Monoxide (미생물을 이용한 일산화탄소로부터 에탄올 생산공정 최적화)

  • 강환구;이충렬
    • KSBB Journal
    • /
    • v.17 no.1
    • /
    • pp.73-79
    • /
    • 2002
  • A method to optimize the microbial production of ethanol from CO using Clostridium ljungdahlii was developed. A kinetic parameter study on CO conversion by Clostridium ljungdahlii yielded a maximum CO conversion rate of 37.14 mmol/L-hr-O.D. and a K_m of 0.9516 atm. A two-stage fermentation, consisting of a cell-growth stage followed by an ethanol-production stage, was found to be effective for producing ethanol. When the pH was shifted from 5.5 to 4.5 and ammonium solution was supplied to the culture medium as the nitrogen source in the ethanol-production stage, the ethanol concentration was 20 times higher than without the shift. Ethanol production from CO by Clostridium ljungdahlii was optimized in a fermenter, giving a final ethanol concentration of 45 g/L and a maximum productivity of 0.75 g ethanol/L-hr.
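
The reported kinetic parameters imply a Michaelis-Menten (Monod-type) rate law in the CO partial pressure. A minimal sketch, assuming the volumetric rate scales linearly with optical density as the units suggest:

    V_MAX = 37.14   # mmol CO / (L * hr * O.D.), reported maximum conversion rate
    K_M = 0.9516    # atm, reported half-saturation CO partial pressure

    def co_uptake_rate(p_co_atm, od):
        """Volumetric CO conversion rate [mmol/L-hr] at CO partial pressure p_co_atm."""
        return V_MAX * p_co_atm / (K_M + p_co_atm) * od

    print(co_uptake_rate(1.0, 2.0))   # ~38.1 mmol/L-hr at 1 atm CO and O.D. 2.0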

Evaluation of Setup Uncertainty on the CTV Dose and Setup Margin Using Monte Carlo Simulation (몬테칼로 전산모사를 이용한 셋업오차가 임상표적체적에 전달되는 선량과 셋업마진에 대하여 미치는 영향 평가)

  • Cho, Il-Sung;Kwark, Jung-Won;Cho, Byung-Chul;Kim, Jong-Hoon;Ahn, Seung-Do;Park, Sung-Ho
    • Progress in Medical Physics
    • /
    • v.23 no.2
    • /
    • pp.81-90
    • /
    • 2012
  • The effect of setup uncertainties on the CTV dose, and the correlation between setup uncertainties and setup margin, were evaluated by Monte Carlo simulation. Patient-specific information from an IMRT treatment plan for rectal cancer designed on the Varian Eclipse planning system, including the planned dose distribution and tumor volume, was used as input to the simulation program. The program was developed for this study on Linux using open-source packages, GNU C++ and the ROOT data analysis framework. All patient setup misalignments were assumed to be Gaussian; systematic and random errors were therefore generated from Gaussian statistics with a given standard deviation as the simulation input parameter. After the setup-error simulations, the change of dose in the CTV was analyzed. To verify the conventional margin recipe, the correlation between setup error and setup margin was compared with the margin formula developed for three-dimensional conformal radiation therapy. The simulation was performed 2,000 times for each input value of systematic and random error independently, with the standard deviation used to generate setup errors varied from 1 mm to 10 mm in 1 mm steps. For systematic errors, the minimum CTV dose $D_{min}^{syst}$ decreased from 100.4% to 72.50% and the mean dose $\bar{D}_{syst}$ decreased from 100.45% to 97.88%, while the standard deviation of the dose distribution in the CTV increased from 0.02% to 3.33%. Random errors likewise reduced the mean and minimum CTV dose: the minimum dose $D_{min}^{rand}$ fell from 100.45% to 94.80% and the mean dose $\bar{D}_{rand}$ from 100.46% to 97.87%, while the standard deviation of the CTV dose $\Delta D_{rand}$ increased from 0.01% to 0.63%. After calculating a margin size for each systematic and random error, a "population ratio" was introduced and applied to verify the margin recipe; the conventional margin formula was found to satisfy the margin objective for IMRT treatment of rectal cancer. The developed Monte Carlo simulation program should be useful for studying patient setup error and CTV dose coverage under variations of margin size and setup error.
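
A minimal Python restatement of the simulation loop (the original program was GNU C++/ROOT): Gaussian systematic or random shifts are applied to a planned dose grid and CTV dose statistics are accumulated. The grid, mask, and fraction count are assumptions:

    import numpy as np
    from scipy.ndimage import shift as grid_shift

    def simulate_setup_error(dose, ctv_mask, sigma_mm, voxel_mm,
                             n_trials=2000, n_fractions=30, systematic=True):
        """Mean/min CTV dose statistics under Gaussian setup errors of size sigma_mm."""
        rng = np.random.default_rng(0)
        d_min, d_mean = [], []
        for _ in range(n_trials):
            if systematic:                         # one shift for the whole course
                shifted = grid_shift(dose, rng.normal(0, sigma_mm, 3) / voxel_mm, order=1)
            else:                                  # random error: a new shift each fraction
                shifted = np.mean([grid_shift(dose, rng.normal(0, sigma_mm, 3) / voxel_mm,
                                              order=1) for _ in range(n_fractions)], axis=0)
            ctv = shifted[ctv_mask]
            d_min.append(ctv.min())
            d_mean.append(ctv.mean())
        return np.mean(d_min), np.mean(d_mean), np.std(d_mean)

    # The conventional 3D-CRT margin recipe being verified is commonly van Herk's:
    # margin = 2.5 * Sigma(systematic) + 0.7 * sigma(random).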

Ecoclimatic Map over North-East Asia Using SPOT/VEGETATION 10-day Synthesis Data (SPOT/VEGETATION NDVI 자료를 이용한 동북아시아의 생태기후지도)

  • Park Youn-Young;Han Kyung-Soo
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.8 no.2
    • /
    • pp.86-96
    • /
    • 2006
  • Ecoclimap-1, a new complete global database of surface parameters at 1-km resolution, was previously presented. It is intended to initialize the soil-vegetation-atmosphere transfer schemes in meteorological and climate models. Surface parameters in the Ecoclimap-1 database are provided as per-class values over an ecoclimatic base map obtained by simply merging land cover and climate maps. The principal objective of this ecoclimatic map is to capture the intra-class variability of the vegetation life cycle that a usual land cover map cannot describe. However, even with the ecoclimatic map combining land cover and climate, the intra-class variability remained too high inside some classes. In this study a new strategy is defined: the information contained in S10 NDVI SPOT/VEGETATION profiles is used to split a land cover class into more homogeneous sub-classes, through an intra-class unsupervised sub-clustering methodology instead of simple merging. This work provides a new ecoclimatic map over Northeast Asia within the framework of the Ecoclimap-2 global database construction for surface parameters. The University of Maryland 1-km Global Land Cover Database (UMD) and a climate map were used to determine the initial number of clusters for intra-class sub-clustering. An unsupervised classification process using six years of NDVI profiles allows different behaviors to be discriminated within each land cover class. We checked the spatial coherence of the classes and, where necessary, aggregated clusters having similar NDVI time-series profiles. The mapping system yielded 29 ecosystems for the study area. For climate-related studies, this new ecosystem map may serve as a base map for constructing the Ecoclimap-2 database and for improving the quality of surface climatology in climate models.
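
The sub-clustering step lends itself to a short sketch: clustering per-pixel NDVI time profiles within one land-cover class. Array shapes, the cluster count, and the use of scikit-learn K-means are assumptions; the paper's own clustering algorithm is not specified here:

    import numpy as np
    from sklearn.cluster import KMeans

    def subcluster_class(ndvi, labels, target_class, n_sub=4):
        """Split one land-cover class into NDVI-homogeneous sub-classes.

        ndvi: (n_pixels, n_dekads) S10 NDVI profiles; labels: UMD class id per pixel.
        """
        rows = labels == target_class
        km = KMeans(n_clusters=n_sub, n_init=10, random_state=0).fit(ndvi[rows])
        sub = np.full(labels.shape, -1)
        sub[rows] = km.labels_                # sub-class id inside the target class
        return sub, km.cluster_centers_       # centers are mean NDVI time profiles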

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Corporate defaults have a ripple effect on the local and national economy, beyond the stakeholders of the bankrupt companies themselves: managers, employees, creditors, and investors. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing a variety of corporate default models; as a result, even large 'chaebol' corporations went bankrupt. Even afterwards, analysis of corporate defaults remained focused on specific variables, and when the government restructured immediately after the global financial crisis, it concentrated only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to serve diverse interests and to avoid a sudden total collapse like the Lehman Brothers case of the global financial crisis. The key variables driving corporate defaults vary over time: Deakin's (1972) study, revisiting the analyses of Beaver (1967, 1968) and Altman (1968), shows that the major factors affecting corporate failure have changed, and Grice (2001) likewise found shifts in the importance of the predictive variables of the Zmijewski (1984) and Ohlson (1980) models. However, past studies use static models and mostly ignore changes that occur over time. To construct consistent prediction models, the time-dependent bias must therefore be compensated by a time-series algorithm that reflects dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009, divided into training, validation, and test sets of 7, 2, and 1 years respectively. To construct a bankruptcy model that stays consistent over time, we first train a deep learning time-series model on the pre-crisis data (2000~2006). Parameter tuning of the existing models and of the deep learning time-series algorithm is conducted on validation data that includes the financial-crisis period (2007~2008); the resulting model shows a pattern similar to the training results along with excellent predictive power. Each bankruptcy prediction model is then retrained on the combined training and validation data (2000~2008) with the optimal parameters found in validation. Finally, the models trained over those nine years are evaluated and compared on the test data (2009), demonstrating the usefulness of a corporate default prediction model based on a deep learning time-series algorithm. In addition, by adding Lasso regression to the existing variable-selection methods (multivariate discriminant analysis, the logit model), we show that the deep learning time-series model is useful for robust default prediction across all three variable bundles. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies; multivariate discriminant analysis, the logit model, and Lasso regression are used to select the optimal variable groups. The performance of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time-series algorithms is compared. Corporate data suffer from nonlinear variables, multicollinearity, and a lack of data; the logit model handles nonlinearity, the Lasso regression model solves the multicollinearity problem, and the deep learning time-series algorithm, using variable data generation, compensates for the lack of data. Big data technology is moving from simple human analysis to automated AI analysis, and ultimately toward intertwined AI applications. Although research on corporate default prediction using time-series algorithms is still at an early stage, the deep learning algorithm is much faster than regression analysis for default prediction modeling and yields better predictive power. While governments at home and abroad are working hard, through the Fourth Industrial Revolution, to integrate such systems into the everyday life of their nations and societies, deep learning time-series research for the financial industry remains scarce. As an initial study applying deep learning time-series analysis to corporate defaults, this work is intended to serve as comparative material for non-specialists beginning to combine financial data with deep learning time-series algorithms.
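
A minimal sketch of the kind of deep learning time-series classifier described above. The paper does not publish its architecture; the layer sizes, window length, and the Keras framework are assumptions:

    import tensorflow as tf

    n_years, n_ratios = 5, 30           # e.g. 5-year windows of 30 financial ratios
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_years, n_ratios)),
        tf.keras.layers.LSTM(32),                          # time-series encoder
        tf.keras.layers.Dense(1, activation="sigmoid"),    # P(default)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])
    # Chronological split as above: fit on 2000-2006, tune on 2007-2008 (validation),
    # refit on 2000-2008 with the tuned parameters, report on 2009 (test), e.g.:
    # model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=50, batch_size=64)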

Feasibility of Mixed-Energy Partial Arc VMAT Plan with Avoidance Sector for Prostate Cancer (전립선암 방사선치료 시 회피 영역을 적용한 혼합 에너지 VMAT 치료 계획의 평가)

  • Hwang, Se Ha;Na, Kyoung Su;Lee, Je Hee
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.32
    • /
    • pp.17-29
    • /
    • 2020
  • Purpose: To investigate the dosimetric impact of a mixed-energy partial-arc technique on prostate cancer VMAT. Materials and Methods: This study involved prostate-only patients planned to 70 Gy in 30 fractions to the planning target volume (PTV), with the femoral heads, bladder, and rectum considered organs at risk (OARs). Mixed-energy partial arcs (MEPA) were generated with gantry angles of 180°~230° and 310°~50° for the 6 MV arc, and 130°~50° and 310°~230° for the 15 MV arc. Avoidance sectors were set at gantry angles 230°~310° and 50°~130° for the first arc and 50°~310° for the second arc. The two plans were then summed, and dosimetric parameters were analyzed for each structure: maximum dose, mean dose, D2%, homogeneity index (HI), and conformity index (CI) for the PTV, and maximum dose, mean dose, V70Gy, V50Gy, V30Gy, and V20Gy for the OARs, together with monitor units (MU), in comparison with 6 MV 1-arc and 6 MV, 10 MV, and 15 MV 2-arc plans. Results: With MEPA, the maximum dose, mean dose, and D2% were lower than in the 6 MV 1-arc plan (p<0.0005). However, the average maximum dose was 0.24%, 0.39%, and 0.60% higher (p<0.450, 0.321, 0.139) than in the 6 MV, 10 MV, and 15 MV 2-arc plans, respectively, and D2% was 0.42%, 0.49%, and 0.59% higher (p<0.073, 0.087, 0.033) than in the compared plans. The average mean dose was 0.09% lower than the 10 MV 2-arc plan but 0.27% and 0.12% higher (p<0.184, 0.521) than the 6 MV and 15 MV 2-arc plans, respectively. HI was 0.064±0.006, the lowest value among all plans (p<0.005, 0.357, 0.273, 0.801). CI showed no significant difference: 1.12±0.038 for MEPA versus 1.12±0.036, 1.11±0.024, 1.11±0.030, and 1.12±0.027 for the 6 MV 1-arc and 6, 10, 15 MV 2-arc plans, respectively. MEPA produced a significantly lower rectal dose; in particular, V70Gy, V50Gy, V30Gy, and V20Gy were 3.40, 16.79, 37.86, and 48.09, respectively, lower than in the other plans. For the bladder, V30Gy and V20Gy were lower than in the other plans. However, the mean doses to the two femoral heads were 9.69±2.93 and 9.88±2.5 Gy, which were 2.8 Gy~3.28 Gy higher than in the other plans. The mean MU of MEPA was 19.53% lower than the 6 MV 1-arc plan and 5.7% lower than the 10 MV 2-arc plan. Conclusion: For prostate radiotherapy, MEPA VMAT has the potential to minimize OAR doses and improve PTV homogeneity at the expense of a moderate increase in the maximum and mean dose to the femoral heads.
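
Partial-arc sectors such as 310°~50° wrap around 0°, which is the only subtlety in scripting them. A small hedged helper, not tied to any planning-system API, for checking whether a gantry angle lies inside a clockwise sector:

    def in_sector(angle, start, stop):
        """True if gantry angle (deg) lies in the CW sector from start to stop."""
        angle, start, stop = angle % 360, start % 360, stop % 360
        if start <= stop:
            return start <= angle <= stop
        return angle >= start or angle <= stop   # sector crosses 0 degrees

    # 6 MV delivery sectors: 180-230 and 310-50; avoidance sectors: 230-310 and 50-130.
    assert in_sector(350, 310, 50) and not in_sector(270, 310, 50)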

Customer Behavior Prediction of Binary Classification Model Using Unstructured Information and Convolution Neural Network: The Case of Online Storefront (비정형 정보와 CNN 기법을 활용한 이진 분류 모델의 고객 행태 예측: 전자상거래 사례를 중심으로)

  • Kim, Seungsoo;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.221-241
    • /
    • 2018
  • Deep learning has been getting attention recently. The deep learning technique applied in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) and in AlphaGo is the Convolutional Neural Network (CNN). A CNN is characterized by dividing the input image into small sections, recognizing partial features, and combining them into a recognition of the whole. Deep learning technologies are expected to bring many changes to our lives, but until now their applications have largely been limited to image recognition and natural language processing; the use of deep learning for business problems is still at an early research stage. If their performance is proven, these techniques can be applied to traditional business problems such as marketing response prediction, fraud detection, bankruptcy prediction, and so on. It is therefore a meaningful experiment to test whether deep learning can solve business problems, using the case of online shopping companies, which have big data, relatively easily identified customer behavior, and high utilization value. In online shopping especially, the competitive environment is changing rapidly and becoming more intense, so analyzing customer behavior to maximize profit is increasingly important. In this study, we propose a 'CNN model of heterogeneous information integration' to improve the prediction of customer behavior in online shopping enterprises. The proposed model combines structured and unstructured information in a network that couples a convolutional neural network with a multi-layer perceptron; we examine three architectural stages, 'heterogeneous information integration', 'unstructured information vector conversion', and 'multi-layer perceptron design', evaluate the performance of each, and confirm the final model based on the results. The target variables for predicting customer behavior are defined as six binary classification problems: re-purchaser, churner, frequent shopper, frequent refund shopper, high-amount shopper, and high-discount shopper. To verify the usefulness of the proposed model, experiments were conducted using actual transaction, customer, and VOC data of a specific online shopping company in Korea. Data were extracted for 47,947 customers who registered at least one VOC in January 2011 (one month); their customer profiles, a total of 19 months of transaction data from September 2010 to March 2012, and the VOCs posted during that month were used. The experiment has two stages: first, the three architectural choices that affect the performance of the proposed model are evaluated and optimal parameters selected; the performance of the resulting model is then evaluated. Experimental results show that the proposed model, which combines structured and unstructured information, is superior to NBC (Naïve Bayes classification), SVM (support vector machine), and ANN (artificial neural network). It is therefore significant both that unstructured information contributes to predicting customer behavior and that CNNs can be applied to business problems as well as to image recognition and natural language processing. The experiments confirm that the CNN is effective in understanding and interpreting the meaning of context in textual VOC data, and this empirical research on actual e-commerce data shows that very meaningful information for predicting customer behavior can be extracted from VOCs written directly by customers in text form. Finally, the various experiments provide useful information for future research on parameter selection and model performance.
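
A minimal sketch of the heterogeneous-integration idea: a text-CNN branch for VOC text concatenated with a dense branch for structured transaction features, feeding one sigmoid output per binary task. All sizes, the vocabulary, and the Keras framework are assumptions, not the authors' published configuration:

    import tensorflow as tf

    text_in = tf.keras.layers.Input(shape=(200,))                  # tokenized VOC, length 200
    emb = tf.keras.layers.Embedding(20000, 64)(text_in)           # 'vector conversion' stage
    conv = tf.keras.layers.Conv1D(64, 5, activation="relu")(emb)  # CNN over word windows
    text_vec = tf.keras.layers.GlobalMaxPooling1D()(conv)

    struct_in = tf.keras.layers.Input(shape=(40,))                # transaction/profile features
    struct_vec = tf.keras.layers.Dense(32, activation="relu")(struct_in)

    merged = tf.keras.layers.Concatenate()([text_vec, struct_vec])  # heterogeneous integration
    hidden = tf.keras.layers.Dense(64, activation="relu")(merged)   # MLP design stage
    out = tf.keras.layers.Dense(1, activation="sigmoid")(hidden)    # e.g. P(re-purchase)

    model = tf.keras.Model([text_in, struct_in], out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])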

Implementation Strategy for the Elderly Care Solution Based on Usage Log Analysis: Focusing on the Case of Hyodol Product (사용자 로그 분석에 기반한 노인 돌봄 솔루션 구축 전략: 효돌 제품의 사례를 중심으로)

  • Lee, Junsik;Yoo, In-Jin;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.117-140
    • /
    • 2019
  • As aging accelerates and social problems related to vulnerable elderly people mount, the need for effective elderly care solutions that protect the health and safety of the elderly generation is growing. Recently, more and more people are using smart toys equipped with ICT technology for elderly care. Log data collected through smart toys are particularly valuable as a quantitative, objective indicator in areas such as policy-making and service planning. However, research related to smart toys has been limited to areas such as device development and validation of effectiveness; there is a dearth of research that derives insights from the log data smart toys collect and uses them for decision making. This study analyzes log data collected from a smart toy to derive insights for improving the quality of life of elderly users. Specifically, a user-profiling analysis and a derivation of the behavior-based mechanism of change in quality of life were performed. First, in the user-profiling analysis, two important dimensions for classifying elderly groups were derived from five factors of the elderly users' living management: 'Routine Activities' and 'Work-out Activities'. Based on these dimensions, hierarchical cluster analysis and K-means clustering were performed to classify all elderly users into three groups, and the profiling analysis identified the demographic characteristics and smart-toy usage behavior of each group. Second, stepwise regression was performed to elicit the mechanism of change in quality of life. Effects of interaction, content usage, and indoor activity on improvements in the elderly users' depression and lifestyle were identified, along with the mediating role of the users' performance evaluation of, and satisfaction with, the smart toy between usage behavior and quality-of-life change. The specific mechanisms are as follows. First, interaction between the smart toy and the elderly user improved depression, mediated by attitude toward the smart toy: 'satisfaction toward the smart toy', the variable that affects improvement of depression, changes how users evaluate the smart toy's performance, and it is interaction with the smart toy that positively affects that evaluation. These results can be interpreted to mean that elderly users seeking emotional stability interact actively with the smart toy and assess it positively, greatly appreciating its effectiveness. Second, content usage had a direct effect on improving lifestyle without passing through other variables: elderly users who make heavy use of the content provided by the smart toy improved their lifestyle, regardless of their attitude toward the toy. Third, the log data show that a high degree of indoor activity improves both lifestyle and depression: the more indoor activity, the better the elderly user's lifestyle, regardless of attitude toward the smart toy, and highly active users are also satisfied with the smart toy, which in turn improves depression. Conversely, elderly users who prefer outdoor to indoor activities, or who are less active due to health problems, are unlikely to be satisfied with the smart toy and do not obtain the depression-improvement effect. In summary, three groups of elderly users were identified based on their activities, and the important characteristics of each type were described. The study thus identifies the mechanism by which elderly users' behavior with the smart toy affects their actual lives, and derives user needs and insights.
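
A minimal sketch of the two-step profiling described above, assuming a hypothetical per-user table hyodol_user_factors.csv whose columns hold the two derived factor scores:

    import pandas as pd
    from scipy.cluster.hierarchy import linkage, fcluster
    from sklearn.cluster import KMeans

    df = pd.read_csv("hyodol_user_factors.csv")    # hypothetical per-user factor scores
    X = df[["routine_activities", "workout_activities"]].to_numpy()

    tree = linkage(X, method="ward")               # hierarchical step suggests group count
    df["h_group"] = fcluster(tree, t=3, criterion="maxclust")
    df["group"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    # Profile each group on the two dimensions, as in the paper's profiling analysis.
    print(df.groupby("group")[["routine_activities", "workout_activities"]].mean())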