• Title/Abstract/Keyword: Algorithms


Simultaneous Optimization of KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • Vol.22 No.1
    • /
    • pp.139-157
    • /
    • 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and the random subspace method. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained on a randomly chosen feature subspace from the original feature set, and the predictions of the ensemble members are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective for improving an individual KNN model. The k parameter of the KNN base classifiers and the feature subsets selected for the base classifiers play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameter and feature subsets of base classifiers in the ensemble. This study proposed a new ensemble method that improves upon the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve its prediction accuracy. The proposed model was applied to a bankruptcy prediction problem using a real dataset from Korean companies. The research data included 1,800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent-sample t-test of each financial ratio as an input variable and bankruptcy or non-bankruptcy as the output variable. Of these, 24 financial ratios were selected by a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other for avoiding overfitting. The prediction accuracy on the latter portion was used as the fitness value in order to avoid overfitting. The validation dataset was used to evaluate the effectiveness of the final model.
A 10-fold cross-validation was implemented to compare the performances of the proposed model and other models. To evaluate the effectiveness of the proposed model, the classification accuracy of the proposed model was compared with that of other models. The Q-statistic values and average classification accuracies of base classifiers were investigated. The experimental results showed that the proposed model outperformed other models, such as the single model and random subspace ensemble model.
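
As a hedged illustration of the ensemble described above, the sketch below builds a KNN random subspace ensemble with scikit-learn. The member count, subspace size, and k values are illustrative assumptions, and the paper's genetic algorithm (which searches over each member's k and feature subset using hold-out accuracy as the fitness value) is indicated only by the fitness stub, not implemented.

```python
# Minimal sketch of a KNN random subspace ensemble (not the paper's exact
# GA-optimized model). Assumes scikit-learn; parameters are illustrative.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def fit_random_subspace_knn(X, y, n_members=30, subspace_size=8, k=5):
    """Train each KNN base classifier on a randomly chosen feature subset."""
    members = []
    for _ in range(n_members):
        feats = rng.choice(X.shape[1], size=subspace_size, replace=False)
        clf = KNeighborsClassifier(n_neighbors=k).fit(X[:, feats], y)
        members.append((feats, clf))
    return members

def predict_majority(members, X):
    """Combine member predictions by majority vote (binary 0/1 labels)."""
    votes = np.stack([clf.predict(X[:, feats]) for feats, clf in members])
    return (votes.mean(axis=0) >= 0.5).astype(int)

def fitness(members, X_holdout, y_holdout):
    """Hold-out accuracy, the kind of fitness value a GA would maximize."""
    return (predict_majority(members, X_holdout) == y_holdout).mean()
```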

Multi-day Trip Planning System with Collaborative Recommendation (협업적 추천 기반의 여행 계획 시스템)

  • Aprilia, Priska;Oh, Kyeong-Jin;Hong, Myung-Duk;Ga, Myeong-Hyeon;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • Vol.22 No.1
    • /
    • pp.159-185
    • /
    • 2016
  • Planning a multi-day trip is a complex and time-consuming task. It usually starts with selecting a list of points of interest (POIs) worth visiting and then arranging them into an itinerary, taking into consideration various constraints and preferences. When choosing POIs to visit, one might ask friends to suggest them, search for information on the Web, or seek advice from travel agents; however, those options have their limitations. First, the knowledge of friends is limited to the places they have visited. Second, the tourism information on the internet is vast, and reading and filtering it can consume a lot of time. Lastly, travel agents might be biased towards providers of certain travel products when suggesting itineraries. In recent years, many researchers have tried to deal with the huge amount of tourism information available on the internet. They explored the wisdom of the crowd through the vast number of images shared by people on social media sites. Furthermore, trip planning problems are usually formulated as 'Tourist Trip Design Problems' and solved using various search algorithms with heuristics. Recommendation systems based on a variety of techniques have been built to cope with the overwhelming tourism information available on the internet. Prediction models of recommendation systems are typically built using a large dataset; however, such a dataset is not always available. For other models, especially those that require input from people, human computation has emerged as a powerful and inexpensive approach. This study proposes CYTRIP (Crowdsource Your TRIP), a multi-day trip itinerary planning system that draws on the collective intelligence of contributors in recommending POIs. In order to enable the crowd to collaboratively recommend POIs to users, CYTRIP provides a shared workspace. In the shared workspace, the crowd can recommend as many POIs to as many requesters as they can, and they can also vote on the POIs recommended by other people when they find them interesting. In CYTRIP, anyone can make a contribution by recommending POIs to requesters based on the requesters' specified preferences. CYTRIP takes the recommended POIs as input to build a multi-day trip itinerary, taking into account the user's preferences, the various time constraints, and the POI locations. The input then becomes a multi-day trip planning problem formulated in Planning Domain Definition Language 3 (PDDL3). A sequence of actions formulated in a domain file is used to achieve the goals in the planning problem, which are the recommended POIs to be visited. The multi-day trip planning problem is a highly constrained problem. Sometimes it is not feasible to visit all the recommended POIs with the limited resources available, such as the time the user can spend. In order to cope with an unachievable goal that could leave the other goals without a solution, CYTRIP selects a set of feasible POIs prior to the planning process (a minimal sketch of such a filter follows this abstract). The planning problem is created for the selected POIs and fed into the planner. The solution returned by the planner is then parsed into a multi-day trip itinerary and displayed to the user on a map. The proposed system is implemented as a web-based application built with PHP on the CodeIgniter web framework. In order to evaluate the proposed system, an online experiment was conducted.
Results of the online experiment show that, with the help of the contributors, CYTRIP can plan and generate a multi-day trip itinerary tailored to users' preferences and bound by their constraints, such as location or time. The contributors also found CYTRIP to be a useful tool for collecting POIs from the crowd and planning a multi-day trip.
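
The feasibility filtering mentioned above (selecting a set of achievable POIs before invoking the PDDL3 planner) can be pictured with the hedged sketch below. The greedy vote-then-time-budget rule and the field names `votes` and `visit_minutes` are illustrative assumptions, not CYTRIP's actual selection logic, and inter-POI travel time is ignored for brevity.

```python
# Hedged sketch of pre-planning POI feasibility filtering: keep the
# highest-voted POIs whose combined visit durations fit the time budget,
# so the planner never receives unachievable goals. Illustrative only.
def select_feasible_pois(pois, budget_minutes):
    chosen, used = [], 0
    for poi in sorted(pois, key=lambda p: p["votes"], reverse=True):
        if used + poi["visit_minutes"] <= budget_minutes:
            chosen.append(poi)
            used += poi["visit_minutes"]
    return chosen

pois = [
    {"name": "Gyeongbokgung", "votes": 12, "visit_minutes": 120},
    {"name": "N Seoul Tower", "votes": 9, "visit_minutes": 90},
    {"name": "Bukchon Village", "votes": 7, "visit_minutes": 60},
]
# With a 180-minute budget, Gyeongbokgung (120) and Bukchon Village (60) fit.
print(select_feasible_pois(pois, budget_minutes=180))
```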

The Availability of the Step Optimization in the Monaco Planning System (모나코 치료계획 시스템에서 단계적 최적화 조건 실현의 유용성)

  • Kim, Dae Sup
    • The Journal of Korean Society for Radiation Therapy
    • /
    • Vol.26 No.2
    • /
    • pp.207-216
    • /
    • 2014
  • Purpose : In the Monaco treatment planning system, a re-optimization performed under the same conditions as the initial treatment plan can produce a different plan. We present a method to reduce this gap and complete the treatment plan. Materials and Methods : In the Monaco treatment planning system, the optimization for inverse planning of volumetric modulated arc therapy or intensity modulated radiation therapy is carried out in two steps. The initial plan was created by running the complete two-step optimization without changing the optimization conditions from Step 1 to Step 2, i.e., a typical sequential optimization. In Step 2, the experiment was carried out with both the pencil beam and Monte Carlo algorithms. We compared the initial plan with a plan re-optimized under the same optimization conditions, and then evaluated the planned doses by measurement. When re-optimizing the initial treatment plan, the second plan applied step optimization, adjusting the optimization conditions at each step. Results : When the common optimization was carried out again under the same conditions as the completed initial treatment plan, the results were not the same. In the comparison within the treatment planning system, the dose-volume histograms showed similar trends, but the re-optimized plan exhibited different values that did not satisfy the optimized dose conditions, dose homogeneity, and dose limits. The dosimetric comparison also showed differences of more than 20%, and when different dose algorithms were used, the measurements likewise did not agree. Conclusion : The optimization process reaches the ultimate goal of treatment planning through repeated trial and error. If a re-optimization simply trusts the conditions of the completed initial treatment plan, a different treatment plan can result, and that seemingly similar plan may not satisfy the original optimization results. When performing a re-optimization, step-optimized conditions should be applied while verifying the dose distribution throughout the optimization process.

Dose Evaluation of TPS according to Treatment Sites in IMRT (세기조절방사선치료 시 치료 부위에 따른 치료계획 시스템 간 선량평가)

  • Kim, Jin Man;Kim, Jong Sik;Hong, Chae Seon;Park, Ju Young;Park, Su Yeon;Ju, Sang Gyu
    • The Journal of Korean Society for Radiation Therapy
    • /
    • Vol.25 No.2
    • /
    • pp.181-186
    • /
    • 2013
  • Purpose: This study generated treatment plans for prostate cancer (a homogeneous density region) and lung cancer (a non-homogeneous density region) using the radiation treatment planning systems Pinnacle³ (version 9.2, Philips Medical Systems, USA) and Eclipse (version 10.0, Varian Medical Systems, USA) in order to quantify the difference in dose calculation according to density in IMRT. Materials and Methods: The subjects were prostate cancer patients (n=5) and lung cancer patients (n=5) treated in our hospital. Identical constraints and the same optimization process according to the protocol were applied to all subjects. For the prostate cancer plans, 10 MV beams in a 7-beam arrangement were used, and 2.5 Gy per fraction was prescribed over 28 fractions for a total of 70 Gy. For the lung cancer plans, 6 MV beams in a 6-beam arrangement were used, and 2 Gy per fraction was prescribed over 33 fractions for a total of 66 Gy. With the two treatment planning systems, the maximum, average, and minimum doses of the CTV, the PTV, and the organs at risk (OARs) around the tumor were investigated. Results: For prostate cancer, both treatment planning systems showed less than 2% difference in the doses of the CTV and PTV, and the normal organs near the tumor (Bladder, Both femur, and Rectum out) satisfied the dose constraints. For lung cancer, the CTV and PTV showed less than 2% difference in dose, and the normal organs (Esophagus, Spinal cord, and Both lungs) satisfied the dose constraints. However, the minimum dose of the Eclipse plan was 1.9% higher in the CTV and 3.5% higher in the PTV, and for both lungs there was a 3.0% difference in V5 (the volume receiving 5 Gy). Conclusion: Each TPS satisfied our hospital's dose limits regardless of density, demonstrating its clinical accuracy. More accurate and precise treatment plans can be expected if further studies on treatment planning for diverse sites and on the application of each TPS are conducted.
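
The inter-TPS comparison above reduces to percent differences of dose statistics between two plans. The hedged sketch below shows that computation with purely illustrative voxel doses, not the study's Pinnacle³/Eclipse data.

```python
# Hedged sketch: percent difference of dose statistics (max/mean/min)
# between two treatment plans. Numbers are illustrative, not study data.
import numpy as np

def dose_stats(dose_voxels):
    d = np.asarray(dose_voxels, dtype=float)
    return {"max": d.max(), "mean": d.mean(), "min": d.min()}

def percent_diff(plan_a, plan_b):
    """Relative difference of plan_b vs. plan_a for each statistic, in %."""
    return {k: 100.0 * (plan_b[k] - plan_a[k]) / plan_a[k] for k in plan_a}

ptv_plan_a = dose_stats([68.1, 69.9, 71.2, 70.4])  # Gy, illustrative
ptv_plan_b = dose_stats([70.5, 70.1, 71.0, 70.8])
print(percent_diff(ptv_plan_a, ptv_plan_b))
```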


Change Analysis of Aboveground Forest Carbon Stocks According to the Land Cover Change Using Multi-Temporal Landsat TM Images and Machine Learning Algorithms (다시기 Landsat TM 영상과 기계학습을 이용한 토지피복변화에 따른 산림탄소저장량 변화 분석)

  • LEE, Jung-Hee;IM, Jung-Ho;KIM, Kyoung-Min;HEO, Joon
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • Vol.18 No.4
    • /
    • pp.81-99
    • /
    • 2015
  • The acceleration of global warming has required a better understanding of carbon cycles over local and regional areas such as the Korean peninsula. Since forests serve as a carbon sink that stores a large amount of terrestrial carbon, there has been demand to accurately estimate such forest carbon sequestration. In Korea, the National Forest Inventory (NFI) has been used to estimate forest carbon stocks based on the amount of growing stock per hectare measured at sampled locations. However, as such data are based on point (i.e., plot) measurements, it is difficult to identify the spatial distribution of forest carbon stocks. This study focuses on urban areas, which have a limited number of NFI samples and have shown rapid land cover change, to estimate grid-based forest carbon stocks based on UNFCCC Approach 3 and Tier 3. Land cover change and forest carbon stocks were estimated using Landsat 5 TM data acquired in 1991, 1992, 2010, and 2011, high resolution airborne images, and the 3rd and 5th~6th NFI data. Machine learning techniques (i.e., random forest and support vector machines/regression) were used for land cover change classification and forest carbon stock estimation. Forest carbon stocks were estimated using reflectance, band ratios, vegetation indices, and topographic indices. Results showed that 33.23 tonC/ha of carbon was sequestered in the unchanged forest areas between 1991 and 2010, while 36.83 tonC/ha of carbon was sequestered in the areas changed from other land-use types to forest. A total of 7.35 tonC/ha of carbon was released in the areas changed from forest to other land-use types. This study provides a quantitative understanding of forest carbon stock changes according to land cover change, and its results can contribute to effective forest management.
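
As a hedged sketch of the estimation step above, the snippet below fits a random forest regressor from remote-sensing predictors to plot-level carbon stocks with scikit-learn. The synthetic arrays stand in for the study's actual reflectance, band-ratio, vegetation-index, and topographic inputs.

```python
# Hedged sketch of grid-based carbon stock estimation: random forest
# regression from Landsat-derived predictors to NFI plot carbon stocks.
# The synthetic data are placeholders, not the study's dataset.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.random((300, 10))                  # e.g., bands, ratios, NDVI, slope
y = 40 * X[:, 0] + 10 * rng.random(300)    # tonC/ha, synthetic stand-in

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out plots:", rf.score(X_te, y_te))
```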

Enhancing Predictive Accuracy of Collaborative Filtering Algorithms using the Network Analysis of Trust Relationship among Users (사용자 간 신뢰관계 네트워크 분석을 활용한 협업 필터링 알고리즘의 예측 정확도 개선)

  • Choi, Seulbi;Kwahk, Kee-Young;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • Vol.22 No.3
    • /
    • pp.113-127
    • /
    • 2016
  • Among the techniques for recommendation, collaborative filtering (CF) is commonly recognized as the most effective for implementing recommender systems. Until now, CF has been widely studied and adopted in both academic and real-world applications. The basic idea of CF is to create recommendation results by finding correlations between the users of a recommendation system. A CF system compares users based on how similar they are and recommends products to users by using other like-minded people's evaluations of each product. Thus, computing evaluation similarities among users is very important in CF because the recommendation quality depends on it. Typical CF uses users' explicit numeric ratings of items (i.e., quantitative information) when computing the similarities among users. In other words, numeric ratings have been the sole source of user preference information in traditional CF. However, user ratings cannot always fully reflect users' actual preferences. According to several studies, users may more actively accommodate recommendations from people they trust when purchasing goods. Thus, trust relationships can be regarded as an informative source for identifying user preferences accurately. Against this background, we propose a new hybrid recommender system that fuses CF and social network analysis (SNA). The proposed system adopts a recommendation algorithm that additionally reflects the results of SNA. In detail, our proposed system is based on conventional memory-based CF, but it is designed to use both users' numeric ratings and the trust relationships between users when calculating user similarities. For this, our system creates and uses not only a user-item rating matrix but also a user-to-user trust network. As methods for calculating the similarity between users, we propose two alternatives. The first calculates the degree of similarity between users by utilizing in-degree and out-degree centrality, the indices representing a central location in a social network; we named its variants 'Trust CF - All' and 'Trust CF - Conditional'. The second weights a neighbor's score higher when the target user trusts the neighbor directly or indirectly, where the direct or indirect trust relationship is identified by searching the users' trust network; we call this approach 'Trust CF - Search'. To validate the applicability of the proposed system, we used experimental data provided by LibRec, crawled from the entire FilmTrust website. It consists of movie ratings and a trust network indicating which users trust which. The experimental system was implemented using Microsoft Visual Basic for Applications (VBA) and UCINET 6. To examine the effectiveness of the proposed system, we compared its performance with that of a conventional CF system. The performance of the recommender systems was evaluated using the average MAE (mean absolute error). The analysis results confirmed that when the in-degree centrality index of the users' trust network was applied without conditions (i.e., Trust CF - All), the accuracy (MAE = 0.565134) was lower than that of conventional CF (MAE = 0.564966). When the in-degree centrality index was applied only to users whose out-degree centrality was above a certain threshold (i.e., Trust CF - Conditional), the proposed system improved the accuracy slightly (MAE = 0.564909) compared to traditional CF. However, the algorithm based on searching the users' trust network (i.e., Trust CF - Search) showed the best performance (MAE = 0.564846), and a paired-samples t-test showed that Trust CF - Search outperformed conventional CF at the 10% statistical significance level. Our study sheds light on the application of users' trust network information for facilitating electronic commerce by recommending proper items to users.
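
A hedged sketch of the 'Trust CF - Search' idea follows: neighbors reachable in the target user's directed trust network, directly or within a few hops, receive a boosted weight in the similarity-weighted prediction. The data structures, hop limit, and boost factor are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of trust-aware CF prediction: boost the weight of
# neighbors reachable in the target user's trust network. Illustrative.
from collections import deque

def trust_reachable(trust, source, max_hops=3):
    """Users reachable from `source` in the directed trust network."""
    seen, frontier = {source}, deque([(source, 0)])
    while frontier:
        user, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for v in trust.get(user, ()):
            if v not in seen:
                seen.add(v)
                frontier.append((v, hops + 1))
    seen.discard(source)
    return seen

def predict(target, item, ratings, sims, trust, boost=1.5):
    """Similarity-weighted mean rating; trusted neighbors weigh more."""
    reach = trust_reachable(trust, target)
    num = den = 0.0
    for user, user_ratings in ratings.items():
        if user == target or item not in user_ratings:
            continue
        w = sims.get(target, {}).get(user, 0.0)
        w *= boost if user in reach else 1.0
        num += w * user_ratings[item]
        den += abs(w)
    return num / den if den else None
```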

Latent Topics-Based Product Reputation Mining (잠재 토픽 기반의 제품 평판 마이닝)

  • Park, Sang-Min;On, Byung-Won
    • Journal of Intelligence and Information Systems
    • /
    • Vol.23 No.2
    • /
    • pp.39-70
    • /
    • 2017
  • Data-driven analytics techniques have recently been applied to public surveys. Instead of simply gathering survey results or expert opinions to research the preference for a recently launched product, enterprises need a way to collect and analyze various types of online data and then accurately figure out customer preferences. In the main concept of existing data-based survey methods, the sentiment lexicon for a particular domain is first constructed by domain experts, who judge the positive, neutral, or negative meanings of the words frequently used in the collected text documents. In order to research the preference for a particular product, the existing approach (1) collects review posts related to the product from several product review web sites; (2) extracts sentences (or phrases) from the collection after pre-processing steps such as stemming and removal of stop words; (3) classifies the polarity (positive or negative) of each sentence (or phrase) based on the sentiment lexicon; and (4) estimates the positive and negative ratios of the product by dividing the numbers of positive and negative sentences (or phrases) by the total number of sentences (or phrases) in the collection. Furthermore, the existing approach automatically finds important sentences (or phrases) carrying positive or negative meaning toward the product. As a motivating example, given a product like the Sonata made by Hyundai Motors, customers often want to see a summary note of the positive points in the 'car design' aspect as well as the negative points in the same aspect. They also want to gain more useful information regarding other aspects such as 'car quality', 'car performance', and 'car service'. Such information will enable customers to make good choices when they attempt to purchase brand-new vehicles. In addition, automobile makers will be able to figure out the preference and positive/negative points for new models on the market, and the weak points of the models can then be improved based on the sentiment analysis. For this, the existing approach computes the sentiment score of each sentence (or phrase) and then selects the top-k sentences (or phrases) with the highest positive and negative scores. However, the existing approach has several shortcomings and is of limited use in real applications. Its main disadvantages are as follows: (1) The main aspects (e.g., car design, quality, performance, and service) of a product (e.g., Hyundai Sonata) are not considered. As a result of sentiment analysis without considering aspects, only a summary note including the positive and negative ratios of the product and the top-k sentences (or phrases) with the highest sentiment scores in the entire corpus is reported to customers and car makers. This is not enough, and the main aspects of the target product need to be considered in the sentiment analysis. (2) In general, since the same word has different meanings across different domains, a sentiment lexicon proper to each domain needs to be constructed. An efficient way to construct the sentiment lexicon per domain is required because sentiment lexicon construction is labor intensive and time consuming.
To address the above problems, in this article we propose a novel product reputation mining algorithm that (1) extracts topics hidden in review documents written by customers; (2) mines main aspects based on the extracted topics; (3) measures the positive and negative ratios of the product using the aspects; and (4) presents a digest in which a few important sentences with positive and negative meanings are listed for each aspect. Unlike the existing approach, using hidden topics allows experts to construct the sentiment lexicon easily and quickly. Furthermore, by reinforcing topic semantics, we can improve the accuracy of product reputation mining algorithms much more than the existing approach. In the experiments, we collected a large set of review documents on domestic vehicles such as the K5, SM5, and Avante; measured the positive and negative ratios of the three cars; produced top-k positive and negative summaries per aspect; and conducted statistical analysis. Our experimental results clearly show the effectiveness of the proposed method compared with the existing method.
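
As a hedged illustration of steps (1)-(3), the sketch below extracts latent topics from toy reviews with scikit-learn's LatentDirichletAllocation, reads the top topic words as aspect cues, and computes a positive ratio with a tiny hand-made lexicon. The corpus and lexicon are illustrative, not the paper's K5/SM5/Avante data or its learned lexicon.

```python
# Hedged sketch: LDA topic extraction as aspect cues, plus a toy
# lexicon-based positive/negative ratio. Illustrative data only.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

reviews = ["the design looks great but the engine is noisy",
           "quiet engine and excellent performance",
           "ugly design and terrible service"]
lexicon = {"great": 1, "excellent": 1, "quiet": 1,
           "noisy": -1, "ugly": -1, "terrible": -1}

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(reviews)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for t, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-4:]]  # aspect cue words
    print(f"topic {t}: {top}")

pos = sum(1 for r in reviews for w in r.split() if lexicon.get(w) == 1)
neg = sum(1 for r in reviews for w in r.split() if lexicon.get(w) == -1)
print("positive ratio:", pos / (pos + neg))
```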

Development of Systematic Process for Estimating Commercialization Duration and Cost of R&D Performance (기술가치 평가를 위한 기술사업화 기간 및 비용 추정체계 개발)

  • Jun, Seoung-Pyo;Choi, Daeheon;Park, Hyun-Woo;Seo, Bong-Goon;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • Vol.23 No.2
    • /
    • pp.139-160
    • /
    • 2017
  • Technology commercialization creates effective economic value by linking a company's R&D processes and outputs to the market. Technology commercialization is important in that it allows a company to obtain and maintain a sustained competitive advantage. For a specific technology to be commercialized, it goes through the stages of technology planning, technology research and development, and commercialization, a process that involves a lot of time and money. Therefore, the duration and cost of technology commercialization are important decision information for determining a market entry strategy, and even more important information for a technology investor seeking to rationally evaluate the value of the technology. It is thus very important to scientifically estimate the duration and cost of technology commercialization. However, research on technology commercialization is insufficient and related methodologies are lacking. In this study, we propose an evaluation model that can estimate the duration and cost of commercializing R&D technology for small and medium-sized enterprises. To accomplish this, this study collected public data from the National Science & Technology Information Service (NTIS) and survey data provided by the Small and Medium Business Administration, and developed an estimation model of the commercialization duration and cost of R&D performance using these data, based on the market approach, one of the technology valuation methods. Specifically, this study defined the commercialization process as consisting of development planning, development progress, and commercialization. From the NTIS database and the Small and Medium Business Administration's survey of SME technology statistics, we derived key variables such as stage-wise R&D costs and durations, factors of the technology itself, factors of the technology development, and environmental factors. First, given the data, we estimated the costs and duration at each technology readiness level (basic research, applied research, development research, prototype production, commercialization) for each industry classification. Then, we developed and verified the research model for each industry classification. The results of this study can be summarized as follows. First, the model can be reflected in a technology valuation model and used to estimate the objective economic value of a technology. The duration and cost from the technology development stage to the commercialization stage are critical factors that greatly influence the discounting of future sales from the technology, so the results of this study can contribute to more reliable technology valuation because they estimate the commercialization duration and cost scientifically, based on past data. Second, we verified models in various fields, including statistical models and data mining models. The statistical models help to find the important factors for estimating the duration and cost of technology commercialization, and the data mining models give the rules or algorithms to be applied in an advanced technology valuation system. Finally, this study reaffirms the importance of commercialization costs and durations, which have not been actively studied in previous research. The results confirm the significant factors affecting commercialization costs and duration, and show that these factors differ depending on the industry classification.
Practically, the results of this study can be reflected in technology valuation systems, which national research institutes and R&D staff can use to provide sophisticated technology valuations. The relevant logic or algorithms of the research results can be implemented independently and directly reflected in such a system, so researchers can put them to practical use immediately. In conclusion, the results of this study make not only theoretical but also practical contributions.
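
A hedged sketch of such an estimation model appears below: a per-industry linear regression of commercialization duration on stage-wise R&D features. The feature names and synthetic data are illustrative assumptions, not the NTIS/SMBA schema or the study's fitted model.

```python
# Hedged sketch: regression estimating commercialization duration from
# stage-wise R&D features. Columns and data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([
    rng.uniform(0.5, 5.0, n),   # development R&D cost (100M KRW)
    rng.uniform(6, 36, n),      # development duration (months)
    rng.integers(1, 10, n),     # technology readiness level
])
y = 6 + 0.8 * X[:, 1] + 2.0 * X[:, 2] + rng.normal(0, 3, n)  # months

model = LinearRegression().fit(X, y)
print("estimated commercialization duration (months):",
      model.predict([[2.0, 18, 6]]).round(1))
```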

Analysis of Significance between SWMM Computer Simulation and Artificial Rainfall on Rainfall Runoff Delay Effects of Vegetation Unit-type LID System (식생유니트형 LID 시스템의 우수유출 지연효과에 대한 SWMM 전산모의와 인공강우 모니터링 간의 유의성 분석)

  • Kim, Tae-Han;Choi, Boo-Hun
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • Vol.48 No.3
    • /
    • pp.34-44
    • /
    • 2020
  • In order to suggest performance analysis directions for ecological components based on a vegetation-based LID system model, this study analyzes the statistical significance between SWMM computer simulations and monitoring results obtained using rainfall and runoff simulation devices, and provides the basic data required for a preliminary system design. The study also comprehensively reviews the soil and vegetation models and analysis plans of a vegetation-based LID system, which were less addressed in previous studies, and suggests a direction for quantifying performance so that the system could substitute for a device-type LID system. After 40 minutes of artificial rainfall monitoring, the test group zone and the control group zone recorded maximum rainfall intensities of 142.91 mm/hr (n=3, sd=0.34) and 142.24 mm/hr (n=3, sd=0.90), respectively. Compared to the hyetograph, lower rainfall intensity was reproduced in the 10-minute and 50-minute sections, and higher rainfall intensity was confirmed in the 20-minute, 30-minute, and 40-minute sections. As for the rainwater runoff delay effects, the runoff intensity in the test group zone was reduced by 79.8%, recording 0.46 mm/min at the 50-minute point, when the runoff intensity in the control group zone was at its highest. In the computer simulation, the runoff intensity in the test group zone was reduced by 99.1%, recording 0.05 mm/min at the same 50-minute peak point. The maximum rainfall runoff intensity in the test group zone (Dv=30.35, NSE=0.36) was 0.77 mm/min in the artificial rainfall monitoring and 1.06 mm/min in the SWMM computer simulation, in both cases at the 70-minute point. Likewise, the control group zone (Dv=17.27, NSE=0.78) recorded 2.26 mm/min and 2.38 mm/min, respectively, at the 50-minute point. By statistically assessing the significance between the rainfall and runoff simulating systems and the SWMM computer simulations, this study was able to suggest a preliminary design direction for the rainwater runoff reduction performance of an LID system planted with single vegetation. Also, by comprehensively examining the LID system's soil and vegetation models and analysis methods, the study was able to compile parameter quantification plans for the vegetation and soil sectors that can be aligned with a preliminary design. However, physical variability was caused by the use of a single-vegetation LID system, and follow-up studies are required on algorithms for calibrating the statistical significance between monitoring and computer simulation results.
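
The goodness-of-fit statistics cited above (Dv and NSE) follow standard definitions; the hedged sketch below computes both for a simulated-versus-observed runoff series, with illustrative numbers rather than the study's data.

```python
# Hedged sketch of the cited goodness-of-fit statistics for comparing
# observed (monitored) and simulated (SWMM) runoff. Illustrative data.
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 indicates a perfect fit."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def dv(obs, sim):
    """Deviation of runoff volume in percent (one common sign convention)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * (obs.sum() - sim.sum()) / obs.sum()

observed  = [0.1, 0.4, 0.9, 2.2, 1.4, 0.6]   # mm/min, illustrative
simulated = [0.0, 0.3, 1.1, 2.4, 1.2, 0.5]
print(f"NSE = {nse(observed, simulated):.2f}, Dv = {dv(observed, simulated):.1f}%")
```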

CT and MRI Image Fusion Reproducibility and Dose Assessment in a Treatment Planning System (치료계획시스템에서 전산화단층촬영과 자기공명영상의 영상융합 재현성 및 선량평가)

  • Ahn, Byeong Hyeok;Choi, Jae Hyeok;Hwang, Jae ung;Bak, Ji yeon;Lee, Du hyeon
    • The Journal of Korean Society for Radiation Therapy
    • /
    • Vol.29 No.2
    • /
    • pp.33-41
    • /
    • 2017
  • Objectives: The aim of this study is to evaluate the reproducibility and usefulness of image fusion of CT (computed tomography) and MRI (magnetic resonance imaging) using a self-manufactured phantom, and to compare and analyze the target dose obtained from the acquired images. Materials and Methods: Using a self-manufactured phantom, CT images and MRI images were acquired at two different magnetic field strengths, 1.5T and 3.0T. The reproducibility of the size and volume of the small holes present in the phantom was compared across the CT, 1.5T MRI, and 3.0T MRI images, and the dose changes for an arbitrary target were compared and analyzed. Results: In the CT scan, the diameters of the 13 small holes were a maximum of 31 mm and a minimum of 27.54 mm, with an average of 29.28 mm, within 1% of the actual size. The 1.5T MRI images showed a maximum of 31.65 mm and a minimum of 24.3 mm, with an average of 28.8 mm, within 1%. The 3.0T MRI images showed a maximum of 30.2 mm and a minimum of 27.92 mm, with an average of 29.41 mm, within 1.3%. The dose changes in the target were 95.9-102.1% in the CT images, 93.1-101.4% in the CT-1.5T MRI fusion images, and 96-102% in the CT-3.0T MRI fusion images. Conclusion: CT and MRI apply different algorithms for image acquisition, and since the organs of the human body have different densities, image distortion may occur during acquisition. Because such inaccurate image depiction affects the delineated volume and the dose of the target, accurate target volume and location can prevent unnecessary dose exposure and errors in treatment planning. Therefore, treatment planning should take advantage of the image characteristics and display algorithms of both CT and MRI.
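
The study evaluates the treatment planning system's own CT-MR fusion; as a hedged stand-alone illustration of rigid CT-MR registration, the sketch below uses SimpleITK with a mutual information metric. The file names and parameter values are illustrative assumptions, not the study's workflow.

```python
# Hedged illustration of rigid CT-MR image registration with SimpleITK,
# not the TPS's proprietary fusion. File names/parameters are assumptions.
import SimpleITK as sitk

fixed = sitk.ReadImage("ct_phantom.nii", sitk.sitkFloat32)   # CT volume
moving = sitk.ReadImage("mr_phantom.nii", sitk.sitkFloat32)  # MR volume

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=2.0, minStep=1e-4, numberOfIterations=200)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetInitialTransform(sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY))

transform = reg.Execute(fixed, moving)       # rigid (6-DOF) alignment
fused = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(fused, "mr_on_ct_grid.nii")  # MR resampled onto CT grid
```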
