• Title/Summary/Keyword: optimization method


Kriging of Daily PM10 Concentration from the Air Korea Stations Nationwide and the Accuracy Assessment (베리오그램 최적화 기반의 정규크리깅을 이용한 전국 에어코리아 PM10 자료의 일평균 격자지도화 및 내삽정확도 검증)

  • Jeong, Yemin;Cho, Subin;Youn, Youjeong;Kim, Seoyeon;Kim, Geunah;Kang, Jonggu;Lee, Dalgeun;Chung, Euk;Lee, Yangwon
    • Korean Journal of Remote Sensing / v.37 no.3 / pp.379-394 / 2021
  • Air pollution data in South Korea have been provided on a real-time basis by Air Korea stations since 2005. Previous studies have shown the feasibility of gridding air pollution data, but they were confined to a few cities. This paper examines the creation of nationwide gridded maps of PM10 concentration from 333 Air Korea stations using variogram optimization and ordinary kriging. The accuracy of the spatial interpolation was evaluated with various sampling schemes to avoid a too dense or too sparse distribution of the validation points. Using the 114,745 matchups, a four-round blind test was conducted by extracting random validation points for each of the 365 days in 2019. The overall accuracy was stably high, with an MAE of 5.697 µg/m³ and a CC of 0.947. Approximately 1,500 cases of high PM10 concentration also yielded an MAE of about 12 µg/m³ and a CC over 0.87, which means the proposed method is effective and applicable to various situations. The gridded maps of daily PM10 concentration at a resolution of 0.05° also showed a reasonable spatial distribution, which can be used as an input variable for a gridded prediction of the next day's PM10 concentration.
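A minimal sketch of the variogram-plus-ordinary-kriging step described above, using the pykrige library; this is illustrative only, not the authors' exact pipeline, and the station coordinates, PM10 values, and grid extent are placeholders.

```python
# Illustrative ordinary kriging of station PM10 onto a 0.05-degree grid.
import numpy as np
from pykrige.ok import OrdinaryKriging

# Hypothetical daily observations: lon, lat, PM10 (ug/m3) per station
lons = np.array([126.97, 129.07, 126.70, 127.38])
lats = np.array([37.56, 35.18, 37.45, 36.35])
pm10 = np.array([45.0, 38.0, 52.0, 41.0])

# pykrige fits (optimizes) the variogram model parameters internally;
# candidate models (spherical, exponential, ...) can be compared.
grid_lon = np.arange(125.0, 130.0, 0.05)
grid_lat = np.arange(33.0, 39.0, 0.05)
ok = OrdinaryKriging(lons, lats, pm10, variogram_model="spherical")

# Returns the interpolated field and the kriging variance per cell
z, ss = ok.execute("grid", grid_lon, grid_lat)
print(z.shape)  # (len(grid_lat), len(grid_lon))
```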

Water Digital Twin for High-tech Electronics Industrial Wastewater Treatment System (II): e-ASM Calibration, Effluent Prediction, Process selection, and Design (첨단 전자산업 폐수처리시설의 Water Digital Twin(II): e-ASM 모델 보정, 수질 예측, 공정 선택과 설계)

  • Heo, SungKu;Jeong, Chanhyeok;Lee, Nahui;Shim, Yerim;Woo, TaeYong;Kim, JeongIn;Yoo, ChangKyoo
    • Clean Technology / v.28 no.1 / pp.79-93 / 2022
  • In this study, an electronics industrial wastewater activated sludge model (e-ASM) to be used as a Water Digital Twin was calibrated against measurements from lab-scale and pilot-scale reactors treating real high-tech electronics industrial wastewater, and was examined for its treatment performance, effluent quality prediction, and optimal process selection. For specialized modeling of a high-tech electronics industrial wastewater treatment system, the kinetic parameters of the e-ASM were identified by a sensitivity analysis and calibrated by the multiple response surface method (MRS). The calibrated e-ASM showed a compatibility of more than 90% with the experimental data from the lab-scale and pilot-scale processes. Four electronics industrial wastewater treatment processes (MLE, A2/O, 4-stage MLE-MBR, and Bardenpho-MBR) were implemented with the proposed Water Digital Twin to compare their removal efficiencies across various electronics industrial wastewater characteristics. Bardenpho-MBR stably removed more than 90% of the chemical oxygen demand (COD) and showed the highest nitrogen removal efficiency. Furthermore, a high-concentration influent of 1,800 mg L-1 TMAH could be 98% removed when the HRT of the Bardenpho-MBR process was more than 3 days. Hence, the e-ASM developed in this study is expected to serve as a Water Digital Twin platform with high compatibility in a variety of situations, including plant optimization, Water AI, and the selection of best available technology (BAT) for a sustainable high-tech electronics industry.
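The e-ASM itself is not public code, so the following is only a hedged sketch of the calibration step: fitting sensitivity-ranked kinetic parameters to measured effluent data by least squares. `simulate_easm`, the parameter names, and all numbers are hypothetical stand-ins.

```python
# Sketch: calibrate kinetic parameters of a simulator to effluent data.
import numpy as np
from scipy.optimize import least_squares

def simulate_easm(params, influent):
    """Placeholder for the e-ASM: predicted effluent given kinetic params."""
    mu_max, k_s = params
    return influent * k_s / (k_s + mu_max * influent)  # toy saturation curve

influent = np.array([400.0, 500.0, 600.0, 700.0])  # assumed lab/pilot inputs
measured = np.array([35.0, 42.0, 50.0, 58.0])      # assumed effluent data

def residuals(params):
    return simulate_easm(params, influent) - measured

# Bounds keep the sensitive parameters in a physically plausible range
fit = least_squares(residuals, x0=[4.0, 60.0],
                    bounds=([0.1, 1.0], [10.0, 500.0]))
print(fit.x)  # calibrated kinetic parameters
```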

Analysis of Determinants of Carbon Emissions Considering the Electricity Trade Situation of Connected Countries and the Introduction of the Carbon Emission Trading System in Europe (유럽 내 탄소배출권거래제 도입에 따른 연결계통국가들의 전력교역 상황을 고려한 탄소배출량 결정요인분석)

  • Yoon, Kyungsoo;Hong, Won Jun
    • Environmental and Resource Economics Review / v.31 no.2 / pp.165-204 / 2022
  • This study organized data from 2000 to 2014 for 20 grid-connected countries in Europe and analyzed the determinants of carbon emissions through a panel GLS method that accounts for heteroscedasticity and autocorrelation. The effect of introducing an emission trading system (ETS) was considered by splitting the sample period at 2005, when the European ETS was introduced. Carbon emissions of individual countries were used as the dependent variable, and the explanatory variables were the proportion of generation by each source, the power self-sufficiency ratio of neighboring countries, power production of resource-holding countries, concentration of power sources, total energy consumption per capita in the industrial sector, tax on electricity, net electricity export per capita, and size of national territory per capita. According to the estimation results, the proportion of nuclear and renewable generation, the concentration of power sources, and the size of national territory per capita had a negative (-) effect on carbon emissions both before and after 2005. On the other hand, the proportion of coal generation, the power self-sufficiency ratio of neighboring countries, the power production of resource-holding countries, and total energy consumption per capita in the industrial sector had a positive (+) effect on carbon emissions. In addition, the proportion of gas generation had a negative (-) effect and the tax on electricity a positive (+) effect, but both were significant only before 2005. Net electricity export per capita had a negative (-) effect on carbon emissions only after 2005. These results suggest macroscopic strategies for reducing carbon emissions toward green growth, pointing to mid- to long-term power-mix optimization measures that take the electricity trade market and its role into account.
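A feasible-GLS estimator in the style the paper describes is not part of the standard Python stack, so the sketch below is a rough stand-in: a panel regression with heteroscedasticity- and autocorrelation-robust (kernel) standard errors, split at 2005. The file name, column names, and formula terms are illustrative placeholders.

```python
# Panel regression stand-in for the paper's panel GLS analysis.
import pandas as pd
from linearmodels.panel import PanelOLS

df = pd.read_csv("emissions_panel.csv")  # hypothetical file
df = df.set_index(["country", "year"])   # 20 countries, 2000-2014

pre = df[df.index.get_level_values("year") < 2005]
post = df[df.index.get_level_values("year") >= 2005]

formula = ("co2 ~ 1 + nuclear_share + renewable_share + coal_share + gas_share"
           " + neighbor_self_sufficiency + resource_power_production"
           " + source_concentration + energy_per_capita + electricity_tax"
           " + net_export_per_capita + territory_per_capita")

for label, sample in [("pre-ETS", pre), ("post-ETS", post)]:
    # 'kernel' covariance is robust to heteroscedasticity and autocorrelation
    res = PanelOLS.from_formula(formula, data=sample).fit(cov_type="kernel")
    print(label, res.params.head())
```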

4D Printing Materials for Soft Robots (소프트 로봇용 4D 프린팅 소재)

  • Sunhee Lee
    • Fashion & Textile Research Journal / v.24 no.6 / pp.667-685 / 2022
  • This paper investigates 4D printing materials for soft robots. 4D printing is a targeted evolution of a 3D-printed structure in shape, property, and functionality. It is capable of self-assembly, multi-functionality, and self-repair, and it is time-dependent, printer-independent, and predictable. The shape-shifting behaviors considered in 4D printing include folding, bending, twisting, linear or nonlinear expansion/contraction, surface curling, and the generation of surface topographical features. Shapes can shift from 1D to 1D, 1D to 2D, 2D to 2D, 1D to 3D, 2D to 3D, and 3D to 3D. Among 4D-printed auxetic structures, kinetiX is a cellular-based material design composed of rigid plates and elastic hinges. In pneumatic auxetics based on the kirigami structure, an inverse optimization method is used to design and fabricate structures that morph into three-dimensional shapes from patterns laid out flat. When a 4D printing material is molded into a deformable 3D structure, it can be applied as exoskeleton material for soft robots such as upper and lower limbs, fingers, hands, toes, and feet. Research on 4D printing materials for soft robots is essential for developing smart clothing for healthcare in the textile and fashion industry.

A Study on the Development of Ultra-precision Small Angle Spindle for Curved Processing of Special Shape Pocket in the Fourth Industrial Revolution of Machine Tools (공작기계의 4차 산업혁명에서 특수한 형상 포켓 곡면가공을 위한 초정밀 소형 앵글 스핀들 개발에 관한 연구)

  • Lee Ji Woong
    • Journal of Practical Engineering Education / v.15 no.1 / pp.119-126 / 2023
  • Today, to improve the fuel efficiency and dynamic behavior of automobiles, automotive parts are being made lighter and simpler. To simplify the design and manufacture of products, various components are integrated; for example, combining three parts into one product forces machining into very narrow areas. Existing parts are made by precision die casting or casting for machining convenience, but the multi-piece approach requires many processes and reduces the precision and strength of the parts. Integral manufacturing is very advantageous for reducing machining steps and securing part strength, but a deep, narrow pocket cannot be machined with the machine's own spindle. To solve this problem, research on cutting processes is being actively conducted, and multi-axis composite machining technology not only solves it but offers many advantages, such as cutting composite shapes on a single machine tool that until now were difficult to cut flexibly across multiple processes. However, the reality is that the expensive equipment raises manufacturing costs, and engineers who can operate such machines are scarce. On a five-axis machine, producing products with deep and narrow sections increases cycle time because of tool interference, and many problems arise during machining. Therefore, dedicated machine tools and multi-axis composite machines should be used; alternatively, an angle spindle may be used as a special tool enabling multi-axis composite machining of five or more axes on a three-axis machining center. Continued study is needed in areas such as absorption of machining vibration, low heat generation, operational stability, excellent dimensional stability, and securing strength when using the angle spindle.

A Comparative Analysis of Social Commerce and Open Market Using User Reviews in Korean Mobile Commerce (사용자 리뷰를 통한 소셜커머스와 오픈마켓의 이용경험 비교분석)

  • Chae, Seung Hoon;Lim, Jay Ick;Kang, Juyoung
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.53-77 / 2015
  • Mobile commerce provides a convenient shopping experience in which users can buy products without the constraints of time and space. Mobile commerce has already set off a mega trend in Korea, with a market size estimated at approximately 15 trillion won (KRW) for 2015 so far. In the Korean market, social commerce and the open market are the key segments, and social commerce overwhelms the open market in terms of the number of users. From the industry's point of view, quick market entry and content curation are considered the major success factors behind social commerce's rapid growth, but empirical academic research analyzing the success of social commerce is still insufficient. Since social commerce and the open market in Korean mobile commerce are expected to compete intensively, it is important to empirically analyze the differences in user experience between them. This paper is an exploratory study that comparatively analyzes the user experience of social commerce and the open market based on mobile users' reviews. First, approximately 10,000 user reviews of social commerce and open market apps listed on Google Play were collected. The collected reviews were classified into topics such as perceived usefulness and perceived ease of use through LDA topic modeling. Then, sentiment analysis and co-occurrence analysis were conducted on the topics of perceived usefulness and perceived ease of use. The results demonstrate that social commerce users have a more positive experience than open market users in terms of service usefulness and convenience. Social commerce has provided positive user experiences in service areas such as 'delivery,' 'coupon,' and 'discount,' while the open market has faced user complaints about technical problems and inconveniences such as 'login error,' 'view details,' and 'stoppage.' This indicates that social commerce performs well in service experience, owing to aggressive marketing campaigns and investment in logistics infrastructure, whereas the open market still suffers from mobile optimization problems, having not yet resolved user complaints stemming from technical issues. This study presents an exploratory method for analyzing user experience through an empirical approach to user reviews. In contrast to previous studies that relied on surveys, it uses empirical analysis of user reviews that reflect users' vivid, actual experiences. Specifically, by combining an LDA topic model with TAM, it presents a methodology that divides user reviews into service and technical areas from a new perspective. The methodology not only demonstrates the differences in user experience between social commerce and the open market but also provides a deep understanding of user experience in Korean mobile commerce. In addition, the results have important implications for social commerce and the open market, showing that user insights can be utilized to establish competitive, groundbreaking market strategies. The limitations and directions for follow-up studies are as follows. A follow-up study will require a more elaborate text analysis technique: this study could not fully clean the user reviews, which inherently contain typos and mistakes. Nonetheless, this study has shown that user reviews are an invaluable source for analyzing user experience, and its methodology can be expected to extend comparative research of services using user reviews. Even at this moment, users around the world are posting reviews of their service experiences with mobile game, commerce, and messenger applications.
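A minimal LDA topic-modeling sketch in the spirit of the study's pipeline, using gensim. The review texts and topic count are placeholders, and real Korean reviews would additionally need a morphological tokenizer rather than whitespace splitting.

```python
# Toy LDA over app-store-style reviews: service vs. technical topics.
from gensim import corpora
from gensim.models import LdaModel

reviews = [
    "delivery was fast and the coupon discount was great",
    "login error again, view details page keeps stopping",
    "easy checkout and useful discount coupons",
]
docs = [r.lower().split() for r in reviews]

dictionary = corpora.Dictionary(docs)               # token -> id mapping
corpus = [dictionary.doc2bow(d) for d in docs]      # bag-of-words vectors

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               random_state=42, passes=10)
for topic_id, words in lda.print_topics():
    print(topic_id, words)  # inspect topic word distributions
```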

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.29-45 / 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings with statistical and machine learning techniques has been a popular research topic. Statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. However, one major drawback is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited their application to the real world. Machine learning techniques used in bond rating prediction include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). SVM in particular is recognized as a promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories; it is simple enough to be analyzed mathematically and achieves high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound on the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many training samples, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have shown SVM to be successful in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for binary-class classification. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance on multi-class problems as much as SVM achieves on binary-class problems. Second, approximation algorithms (e.g., decomposition methods, the sequential minimal optimization algorithm) can reduce computation time for multi-class problems, but they can deteriorate classification performance. Third, multi-class prediction suffers from the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another; such data sets often produce a default classifier with a skewed boundary and thus reduced classification accuracy. SVM ensemble learning is one machine learning approach to cope with these drawbacks. Ensemble learning improves the performance of classification and prediction algorithms; AdaBoost is one of the most widely used ensemble techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations, so that observations incorrectly predicted by previous classifiers are chosen more often than correctly predicted ones. Thus boosting attempts to produce new classifiers that better predict the examples on which the current ensemble performs poorly, which reinforces the training of misclassified observations of the minority class. This paper proposes a multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process can account for geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross-validation was performed three times with different random seeds to ensure that the comparison among the three classifiers did not happen by chance. In each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine; cross-validated folds are thus tested independently for each algorithm. Through these steps, results were obtained for each classifier on each of the 30 experiments. In terms of arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) outperformed both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also outperformed AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test was used to examine whether the performance of each classifier over the 30 folds differed significantly; the results indicate that MGM-Boost differs significantly from the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
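For illustration only: the sketch below shows plain AdaBoost with an SVM base learner plus a geometric-mean accuracy score computed afterward, not the authors' MGM-Boost, which folds the geometric mean into the boosting updates themselves. Data and hyperparameters are placeholders (sklearn >= 1.2 assumed for the `estimator` argument).

```python
# AdaBoost over SVMs on an imbalanced 3-class toy problem,
# scored by the geometric mean of per-class recalls.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_classes=3, n_informative=6,
                           weights=[0.6, 0.3, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = AdaBoostClassifier(estimator=SVC(kernel="rbf", probability=True),
                         n_estimators=20, random_state=0)
clf.fit(X_tr, y_tr)

# Geometric mean of per-class recalls punishes ignoring the minority class
per_class_recall = recall_score(y_te, clf.predict(X_te), average=None)
gmean = float(np.prod(per_class_recall) ** (1.0 / len(per_class_recall)))
print(f"geometric-mean accuracy: {gmean:.3f}")
```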

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.27-65 / 2020
  • Many information and communication technology companies have released their internally developed AI technologies to the public, for example, Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software, a company can strengthen its relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although various analyses of open source software have been made, studies that help industry develop or use deep learning open source software are lacking. This study therefore attempts to derive a strategy for adopting such a framework through case studies of deep learning open source frameworks. Based on the technology-organization-environment (TOE) framework and the literature on open source software adoption, we employed a case study framework comprising technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We analyzed three companies' adoption cases (two successes and one failure) and found that seven of the eight TOE factors, along with several factors regarding company, team, and resources, are significant for the adoption of a deep learning open source framework. From the case study analysis we derived five important success factors for adopting a deep learning framework: the knowledge and expertise of developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework work tool service. For an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, the use of deep learning frameworks by research developers must be supported by collecting and managing data inside and outside the company with a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures in the usage stage, companies will increase the number of deep learning research developers, their ability to use the deep learning framework, and the available GPU resources. In the proliferation stage, fourth, a company should build a deep learning framework platform that improves the research efficiency and effectiveness of developers, for example by automatically optimizing the hardware (GPU) environment. Fifth, a deep learning framework tool service team should complement the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by activating developer retraining and seminars. To implement these five success factors, a step-by-step enterprise procedure for adopting a deep learning framework is proposed: defining the project problem, confirming whether deep learning is the right methodology, confirming whether the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework across the enterprise. The first three steps are pre-considerations for adopting a deep learning open source framework; once they are clear, the next two steps can proceed. In the fourth step, the knowledge and expertise of the developers in the team are important in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, all five factors come into play for a successful adoption. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.1-21 / 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized for practical applications in various industries. In the field of business intelligence, they have been employed to discover new market and/or technology opportunities and to support rational decision making. Market information such as market size, market growth rate, and market share is essential for setting a company's business strategy, and there has been continuous demand in many fields for market information at the specific product level. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making specific and proper information difficult to obtain. We therefore propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows. First, data related to product information are collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data are embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales figures for the extracted products are summed to estimate the market size of each product group. As experimental data, text data of product names from Statistics Korea's microdata (345,103 cases) were mapped into a multidimensional vector space by Word2Vec training. After optimizing the training parameters, we used a vector dimension of 300 and a window size of 15 for further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently; product names similar to KSIC indexes were extracted based on cosine similarity, and the market size of each extracted product category was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items, yielding a Pearson correlation coefficient of 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional sampling-based methods or methods requiring multiple assumptions. In addition, the level of market category can be easily and efficiently adjusted to the purpose of the information by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application, since it can meet unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports published by private firms. A limitation of our study is that the presented model needs improvement in accuracy and reliability. The semantics-based word embedding module could be advanced by imposing a proper ordering on the preprocessed dataset or by combining Word2Vec with another measure such as Jaccard similarity, and the product group clustering could be replaced with other unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect them to further improve the performance of the basic model conceptually proposed here.
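A compact sketch of the embedding-and-grouping step with gensim's Word2Vec, using the hyperparameters reported above (vector dimension 300, window size 15); the product-name corpus, anchor word, similarity threshold, and sales table are placeholders.

```python
# Toy version of the bottom-up market size estimation.
import pandas as pd
from gensim.models import Word2Vec

# Each "sentence" is a tokenized product name from company microdata
product_names = [["stainless", "steel", "pipe"],
                 ["carbon", "steel", "pipe"],
                 ["led", "lighting", "module"]]

model = Word2Vec(sentences=product_names, vector_size=300, window=15,
                 min_count=1, workers=4, seed=1)

# Group products similar to a KSIC index word by cosine similarity
anchor = "pipe"
similar = [w for w, sim in model.wv.most_similar(anchor, topn=50) if sim > 0.6]

# Sum company-level sales over the extracted group to estimate market size
sales = pd.DataFrame({"product": ["pipe", "steel", "module"],
                      "sales":   [1200.0, 800.0, 300.0]})
group = sales[sales["product"].isin([anchor] + similar)]
print("estimated market size:", group["sales"].sum())
```

Lowering or raising the similarity threshold is what adjusts the granularity of the market category, as described above.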

A study of the plan dosimetric evaluation on rectal cancer treatment (직장암 치료 시 치료계획에 따른 선량평가 연구)

  • Jeong, Hyun Hak;An, Beom Seok;Kim, Dae Il;Lee, Yang Hoon;Lee, Je hee
    • The Journal of Korean Society for Radiation Therapy / v.28 no.2 / pp.171-178 / 2016
  • Purpose: To minimize the dose to the femoral heads in rectal cancer radiation therapy, we compared the usefulness of 3-field 3D conformal radiation therapy (3fCRT), the universal treatment method, 5-field 3D conformal radiation therapy (5fCRT), and volumetric modulated arc therapy (VMAT). Materials and Methods: Ten rectal cancer cases treated on a 21EX were enrolled. The cases were planned in Eclipse (Ver. 10.0.42, Varian, USA) with PRO3 (Progressive Resolution Optimizer 10.0.28) and AAA (Anisotropic Analytic Algorithm Ver. 10.0.28). The 3fCRT and 5fCRT plans used gantry angles of 0°, 270°, 90° and 0°, 95°, 45°, 315°, 265°, respectively. The VMAT plans consisted of one 15 MV coplanar 360° arc. The prescription delivered 54 Gy to the rectum in 30 fractions. To minimize the dose differences that appear randomly during optimization, the VMAT plans were optimized and calculated twice and normalized to the target V100% = 95%. The evaluation indexes were the mean dose to both femoral heads and acetabular fossae, total MU, and the homogeneity index (H.I.) and conformity index (C.I.) of the PTV. All VMAT plans were verified by a gamma test with portal dosimetry using EPID. Results: The mean dose to the Rt. femoral head was 53.08 Gy, 50.27 Gy, and 30.92 Gy in the order of the 3fCRT, 5fCRT, and VMAT plans; likewise, the Lt. femoral head averaged 53.68 Gy, 51.01 Gy, and 29.23 Gy in the same order. The mean dose to the Rt. acetabular fossa was 54.86 Gy, 52.40 Gy, and 30.37 Gy in the same order. The maximum dose to both femoral heads and acetabular fossae likewise decreased in the order of 3fCRT, 5fCRT, and VMAT. The C.I. was lowest for the VMAT plan, averaging 1.64, 1.48, and 0.99 in the order of 3fCRT, 5fCRT, and VMAT; there was no significant difference in the H.I. of the PTV among the three plans. The VMAT plan used 124.4 MU and 299 MU more than the 3fCRT and 5fCRT plans, respectively. The IMRT verification gamma test for the VMAT plans passed over 90.0% at 2 mm/2%. Conclusion: In rectal cancer treatment, the VMAT plan was advantageous in most evaluation indexes compared to the 3D plans and greatly reduced the dose to the femoral heads. However, practical limitations may make a VMAT plan difficult to select; 5fCRT reduces the femoral head dose compared to the existing 3fCRT without additional problems. Therefore, selecting the treatment plan best suited to each hospital's situation can improve radiation therapy efficiency, extending not only survival time but also quality of life.
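The abstract does not state which variants of H.I. and C.I. were used, so the following are commonly used definitions, given here only as an assumption to make the reported values interpretable:

```latex
% Assumed definitions (the paper may use variants, e.g. ICRU 83's H.I.):
% H.I.: maximum dose relative to the prescription dose (ideal = 1)
% C.I. (RTOG): prescription isodose volume relative to the PTV volume
\mathrm{H.I.} = \frac{D_{\max}}{D_{\mathrm{prescription}}},
\qquad
\mathrm{C.I.} = \frac{V_{\mathrm{RI}}}{V_{\mathrm{PTV}}}
```

Under the RTOG-style definition, VMAT's average C.I. of 0.99 would mean the prescription isodose volume almost exactly matches the PTV volume, consistent with the conclusion that VMAT conforms best.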
