• Title/Summary/Keyword: Measure Algorithm


Quantitative Study of Annular Single-Crystal Brain SPECT (원형단일결정을 이용한 SPECT의 정량화 연구)

  • 김희중;김한명;소수길;봉정균;이종두
    • Progress in Medical Physics
    • /
    • v.9 no.3
    • /
    • pp.163-173
    • /
    • 1998
  • Nuclear medicine emission computed tomography (ECT) can be very useful for diagnosing the early stages of neuronal disease and for measuring therapeutic results objectively, provided that energy metabolism, blood flow, biochemical processes, or dopamine receptors and transporters can be quantitated with ECT. However, physical factors including attenuation, scatter, the partial volume effect, noise, and the reconstruction algorithm make quantitation very difficult, regardless of the type of SPECT. In this study, we quantitated the effects of attenuation and scatter using brain SPECT and a three-dimensional brain phantom, with and without applying the corresponding correction methods. The dual energy window method was applied for scatter correction: the photopeak and scatter energy windows were set to 140 keV ± 10% and 119 keV ± 6%, and 100% of the scatter-window data were subtracted from the photopeak window prior to reconstruction. The projection data were reconstructed using a Butterworth filter with a cutoff frequency of 0.95 cycles/cm and an order of 10. Attenuation correction was done by Chang's method with attenuation coefficients of 0.12/cm and 0.15/cm for the data reconstructed without and with scatter correction, respectively. For quantitation, regions of interest (ROIs) were drawn on three slices selected at the level of the basal ganglia. Without scatter correction, the ratios of ROI average values between basal ganglia and background were 2.2 with attenuation correction and 2.1 without, i.e., very similar in the two cases. With scatter correction, the corresponding ratios were 2.69 with attenuation correction and 2.64 without. These results indicate that attenuation correction is necessary for quantitation. When the true ratios between basal ganglia and background were 6.58, 4.68, and 1.86, the measured ratios with scatter and attenuation correction were 76%, 80%, and 82% of the true values, respectively. The approximately 20% underestimation can be attributed partly to the partial volume effect and the reconstruction algorithm, which were not investigated in this study, and partly to the imperfect scatter and attenuation correction methods that we applied in consideration of clinical applicability.

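The dual-energy-window correction described in this abstract reduces, in essence, to a pixel-wise subtraction of the scatter-window projections from the photopeak-window projections. Below is a minimal sketch of that step, assuming NumPy projection arrays; the function name and the toy data are illustrative, not the authors' implementation.

```python
import numpy as np

# Minimal sketch (not the authors' code) of dual-energy-window scatter
# subtraction: counts acquired in a lower scatter window are subtracted,
# pixel by pixel, from the photopeak-window projections before
# reconstruction. k = 1.0 mirrors the 100% subtraction used in the study;
# the array sizes and count levels below are purely illustrative.
def dew_scatter_correct(photopeak_proj, scatter_proj, k=1.0):
    corrected = photopeak_proj - k * scatter_proj
    return np.clip(corrected, 0.0, None)  # negative counts are unphysical

photopeak = np.random.poisson(50, size=(64, 64)).astype(float)
scatter = np.random.poisson(10, size=(64, 64)).astype(float)
corrected = dew_scatter_correct(photopeak, scatter)
```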

Enhancement of Inter-Image Statistical Correlation for Accurate Multi-Sensor Image Registration (정밀한 다중센서 영상정합을 위한 통계적 상관성의 증대기법)

  • Kim, Kyoung-Soo;Lee, Jin-Hak;Ra, Jong-Beom
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.4 s.304
    • /
    • pp.1-12
    • /
    • 2005
  • Image registration is the process of establishing spatial correspondence between images of the same scene acquired at different viewpoints, at different times, or by different sensors. This paper presents a new algorithm for robust registration of images acquired by sensors of different modalities, in this case EO (electro-optic) and IR (infrared). Two approaches are generally possible for image registration: feature-based and intensity-based. In the former, the selection of accurate common features is crucial for high performance, but features in the EO image often differ from those in the IR image, so this approach is inadequate for registering EO/IR images. In the latter, normalized mutual information (NMI) has been widely used as a similarity measure because of its high accuracy and robustness, but NMI-based registration methods assume that the statistical correlation between the two images is global. Since EO and IR images often do not satisfy this assumption, registration accuracy is not high enough for some applications. In this paper, we propose a two-stage NMI-based registration method based on an analysis of the statistical correlation between EO/IR images. In the first stage, we propose two preprocessing schemes for robust registration: extraction of statistically correlated regions (ESCR) and enhancement of statistical correlation by filtering (ESCF). For each image, ESCR automatically extracts the regions that are highly correlated with the corresponding regions in the other image, and ESCF adaptively filters each image to enhance the statistical correlation between them. In the second stage, the two output images are registered using an NMI-based algorithm. The proposed method provides promising results for various EO/IR sensor image pairs in terms of accuracy, robustness, and speed.
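
The similarity measure underlying this registration method, normalized mutual information, can be estimated from a joint intensity histogram. The sketch below shows one common formulation, NMI(A, B) = (H(A) + H(B)) / H(A, B); the bin count and the random image pair are assumptions for illustration, not the paper's code.

```python
import numpy as np

# Minimal sketch (not the paper's implementation) of normalized mutual
# information (NMI) estimated from a joint histogram of two images.
def nmi(img_a, img_b, bins=64):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint probability estimate
    px = pxy.sum(axis=1)               # marginal of image A
    py = pxy.sum(axis=0)               # marginal of image B

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

# Hypothetical EO/IR image pair (random data, for illustration only)
eo = np.random.rand(128, 128)
ir = np.random.rand(128, 128)
print(nmi(eo, ir))
```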

A Study on the Development and usefulness of the x/y Plane and z Axis Resolution Phantom for MDCT Detector (MDCT 검출기의 x/y plane과 z축 분해능 팬텀 개발 및 유용성에 관한 연구)

  • Kim, Yung-Kyoon;Han, Dong-Kyoon
    • Journal of the Korean Society of Radiology
    • /
    • v.16 no.1
    • /
    • pp.67-75
    • /
    • 2022
  • The aim of this study is to establish a new QC method that can evaluate the resolution of the x/y plane and the z-axis simultaneously, by producing a phantom that reflects the exposure and reconstruction parameters of an MDCT system. An Aquilion ONE (Canon Medical Systems, Otawara, Japan) was used, and scans were acquired at 120 kV and 260 mA with a D-FOV of 300 mm. A new SSP phantom module was produced in which two aluminum plates are inclined at 45° to the vertical and transverse axes to evaluate the high-contrast resolution of the x/y plane and the z-axis, and factors such as the reconstruction algorithm and the distance from the gantry iso-center were varied. All images were reconstructed at five slice thicknesses from 0.6 mm to 10.0 mm to measure the resolution of the x/y plane and the z-axis. FWHM and FWTM were measured from the image data with the Profile tool of Aquarius iNtuition Edition ver. 4.4.13 P6 software (TeraRecon, California, USA), and the SPQI and signal intensity were analyzed with ImageJ (v1.53n, National Institutes of Health, USA). In the evaluation of high-contrast resolution of the x/y plane as a function of distance from the gantry iso-center, values decreased by 4.09~11.99%, 4.12~35.52%, and 4.70~37.64% at slice thicknesses of 2.5 mm, 5.0 mm, and 10.0 mm, respectively; that is, the high-contrast resolution of the x/y plane decreased as the distance from the iso-center or the slice thickness increased. With a high (sharp) algorithm, the values at slice thicknesses of 2.5 mm, 5.0 mm, and 10.0 mm increased by 74.83%, 15.18%, and 81.25%. In the SSP measurements used to evaluate the accuracy of slice thickness, which represents the resolution of the x/y plane and the z-axis, the FWHM was almost constant but was consistently higher than the nominal slice thickness set by the user. The FWHM and FWTM of the z-axis in axial scan mode tended to increase more markedly with distance from the gantry iso-center than in helical mode, and the thinner the slice, the larger the error relative to the nominal slice thickness. The SPQI increased with slice thickness and was closer to 90% for helical scans than for axial scans. In conclusion, a phantom suited to MDCT detectors and capable of quantitative resolution evaluation can serve as a concrete method for quality management and for managing aging equipment, and is expected to contribute to the discrimination of lesions in CT imaging.
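
FWHM and FWTM, the quantities measured from the slice sensitivity profiles here, are simply the widths of the profile at 50% and 10% of its peak. A minimal sketch of that measurement by linear interpolation is shown below; the Gaussian test profile and sampling grid are hypothetical, not data from the study.

```python
import numpy as np

# Minimal sketch (not the authors' software) of measuring FWHM and FWTM
# from a slice sensitivity profile (SSP): find where the profile crosses a
# fraction of its maximum and interpolate the crossing positions linearly.
def width_at_fraction(positions, profile, fraction):
    profile = np.asarray(profile, dtype=float)
    level = fraction * profile.max()
    above = np.where(profile >= level)[0]
    i0, i1 = above[0], above[-1]
    # interpolate on the rising and falling edges of the profile
    left = np.interp(level, [profile[i0 - 1], profile[i0]],
                     [positions[i0 - 1], positions[i0]])
    right = np.interp(level, [profile[i1 + 1], profile[i1]],
                      [positions[i1 + 1], positions[i1]])
    return right - left

x = np.linspace(-10, 10, 201)           # mm, hypothetical sampling grid
ssp = np.exp(-x**2 / (2 * 2.0**2))      # Gaussian-shaped test profile
print(width_at_fraction(x, ssp, 0.5))   # FWHM (~4.71 mm for sigma = 2 mm)
print(width_at_fraction(x, ssp, 0.1))   # FWTM
```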

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.63-83
    • /
    • 2019
  • Investors prefer to look for trading points based on chart shapes rather than complex analyses such as corporate intrinsic-value analysis or technical indicator analysis. Pattern analysis, however, is difficult and has been computerized less than users need. In recent years there have been many studies of stock price patterns using machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, advances in IT have made it easier to analyze huge amounts of chart data in search of patterns that can predict stock prices. Although short-term forecasting power has improved, long-term forecasting power remains limited, so such methods are used for short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier technology could not recognize, but they can be vulnerable in practice because whether the patterns found are suitable for trading is a separate question. When a meaningful pattern is found, these studies locate a point that matches the pattern and measure performance after n days, assuming a purchase at that point in time. Since this approach computes virtual returns, it can diverge considerably from reality. Whereas existing research tries to discover patterns with predictive power, this study proposes to define the patterns first and to trade when a pattern with a high probability of success appears. The M & W wave patterns published by Merrill (1980) are simple because they can be distinguished by five turning points. Although some of these patterns have been reported to have price predictability, no performance in an actual market had been reported. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of improving pattern-recognition accuracy. In this study, 16 upward-reversal patterns and 16 downward-reversal patterns are reclassified into ten groups so that they can easily be implemented in a system, and only the one pattern with the highest success rate per group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The evaluation reflects a realistic situation because performance is measured assuming that both the buy and the sell were actually executed. We tested three ways of computing the turning points. The first, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and then computes the vertices. In the second, the high-low-line zig-zag method, a high price that meets the n-day high line is taken as a peak, and a low price that meets the n-day low line is taken as a valley. In the third, the swing-wave method, a central high price that is higher than the n high prices on its left and right is taken as a peak, and a central low price that is lower than the n low prices on its left and right is taken as a valley. The swing-wave method was superior to the other methods in our tests; we interpret this as meaning that trading after confirming the completion of a pattern is more effective than trading while the pattern is still incomplete.
Because the number of possible cases in this simulation is far too large to search exhaustively for patterns with high success rates, genetic algorithms (GA) were the most suitable solution. We also ran the simulation with walk-forward analysis (WFA), which tests the training section and the application section separately, so that the system could respond appropriately to market changes. We optimized at the level of the stock portfolio, because optimizing the variables for each individual stock carries a risk of over-optimization; we therefore set the number of constituent stocks to 20 to gain the benefit of diversification while avoiding over-fitting. We tested the KOSPI market divided into six categories. The small-cap portfolio was the most successful and the high-volatility portfolio was second best, which suggests that patterns need some price volatility to take shape, but that more volatility is not always better.
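
The swing-wave rule that performed best here can be stated compactly: a bar is a peak if its high exceeds the highs of the n bars on each side, and a valley if its low is below the lows of the n bars on each side. A minimal sketch follows, assuming plain high/low price lists; the series and the value of n are toy inputs, not the paper's trading system.

```python
# Minimal sketch (not the paper's system) of the swing-wave turning-point
# rule: a peak is a high above the n highs on both sides, a valley is a low
# below the n lows on both sides.
def swing_points(highs, lows, n=3):
    peaks, valleys = [], []
    for i in range(n, len(highs) - n):
        if highs[i] > max(highs[i - n:i]) and highs[i] > max(highs[i + 1:i + n + 1]):
            peaks.append(i)
        if lows[i] < min(lows[i - n:i]) and lows[i] < min(lows[i + 1:i + n + 1]):
            valleys.append(i)
    return peaks, valleys

# Hypothetical daily high/low series
highs = [10, 11, 13, 12, 11, 10, 12, 14, 13, 12]
lows  = [ 9, 10, 12, 11, 10,  9, 11, 13, 12, 11]
print(swing_points(highs, lows, n=2))   # ([2, 7], [5]) for this toy series
```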

Performance Characteristics of 3D GSO PET/CT Scanner (Philips GEMINI PET/CT) (3차원 GSO PET/CT 스캐너(Philips GEMINI PET/CT)의 특성 평가)

  • Kim, Jin-Su;Lee, Jae-Sung;Lee, Byeong-Il;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.38 no.4
    • /
    • pp.318-324
    • /
    • 2004
  • Purpose: The Philips GEMINI is a newly introduced whole-body GSO PET/CT scanner. In this study, the performance of the scanner, including spatial resolution, sensitivity, scatter fraction, and noise equivalent count rate (NECR), was measured using the NEMA NU 2-2001 standard protocol and compared with the performance of LSO- and BGO-crystal scanners. Methods: The GEMINI combines the Philips ALLEGRO PET scanner and the MX8000 D multi-slice CT scanner. The PET scanner has 28 detector segments, each an array of 29 by 22 GSO crystals (4 × 6 × 20 mm), covering an axial FOV of 18 cm. PET data for measuring spatial resolution, sensitivity, scatter fraction, and NECR were acquired in 3D mode according to the NEMA NU 2 protocols (coincidence window: 8 ns, energy window: 409~664 keV). For the spatial resolution measurement, images were reconstructed with FBP using a ramp filter and with an iterative reconstruction algorithm, 3D RAMLA. Data for the sensitivity measurement were acquired using the NEMA sensitivity phantom filled with F-18 solution and surrounded by 1~5 aluminum sleeves, after confirming that the dead-time loss did not exceed 1%. To measure NECR and scatter fraction, 1110 MBq of F-18 solution was injected into a NEMA scatter phantom 70 cm in length, and a dynamic scan with 20-min frame duration was acquired for 7 half-lives. Oblique sinograms were collapsed into transaxial slices using the single-slice rebinning method, and the true-to-background (scatter + random) ratio for each slice and frame was estimated. The scatter fraction was determined by averaging the true-to-background ratios of the last 3 frames, in which the dead-time loss was below 1%. Results: Transverse and axial resolutions at 1 cm radius were (1) 5.3 and 6.5 mm (FBP) and (2) 5.1 and 5.9 mm (3D RAMLA). Transverse radial, transverse tangential, and axial resolutions at 10 cm were (1) 5.7, 5.7, and 7.0 mm (FBP) and (2) 5.4, 5.4, and 6.4 mm (3D RAMLA). Attenuation-free sensitivity was 3,620 counts/sec/MBq at the center of the transaxial FOV and 4,324 counts/sec/MBq at 10 cm offset from the center. The scatter fraction was 40.6%, and the peak true count rate and peak NECR were 88.9 kcps at 12.9 kBq/mL and 34.3 kcps at 8.84 kBq/mL, respectively. These characteristics are better than those of the ECAT EXACT PET scanner with BGO crystals. Conclusion: The results of this field test demonstrate the high resolution, sensitivity, and count-rate performance of this 3D PET/CT scanner with GSO crystals. The data provided here will be useful for comparative studies with other 3D PET/CT scanners using BGO or LSO crystals.
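
For reference, the scatter fraction and noise equivalent count rate reported above are, in their commonly quoted forms, SF = S / (S + T) and NECR = T² / (T + S + R); NEMA variants add a randoms-fraction factor. A minimal sketch with made-up count rates is shown below.

```python
# Minimal sketch (not the published analysis): scatter fraction and noise
# equivalent count rate in their simplest commonly quoted forms. NEMA
# variants include a randoms-fraction factor, which is omitted here.
def scatter_fraction(scattered, trues):
    return scattered / (scattered + trues)           # SF = S / (S + T)

def necr(trues, scattered, randoms):
    return trues**2 / (trues + scattered + randoms)  # NECR = T^2 / (T + S + R)

# Hypothetical count rates in kcps (not the values measured in the paper)
print(scatter_fraction(scattered=20.0, trues=30.0))
print(necr(trues=30.0, scattered=20.0, randoms=15.0))
```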

Measuring the Public Service Quality Using Process Mining: Focusing on N City's Building Licensing Complaint Service (프로세스 마이닝을 이용한 공공서비스의 품질 측정: N시의 건축 인허가 민원 서비스를 중심으로)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.35-52
    • /
    • 2019
  • As public services are provided in various forms, including e-government, public demand for service quality is increasing. Continuous measurement and improvement of quality are needed, but traditional surveys are costly, time-consuming, and therefore limited. An analytical technique is needed that can measure the quality of public services quickly and accurately at any time, based on the data the services themselves generate. In this study, we analyzed the quality of public services from data, using process mining techniques, for the building licensing complaint service of N city. This service was chosen because it provides the data necessary for the analysis and because the approach can be spread to other institutions for public service quality management. We performed process mining on a total of 3,678 building licensing complaints filed in N city over the two years from January 2014, and identified the process maps and the departments with high frequency and long processing times. The analysis showed that certain departments were crowded at some points in time and relatively idle at others, and there was reasonable suspicion that an increase in the number of complaints increased the time required to complete them. The time to complete a complaint ranged from the same day to one year and 146 days. The cumulative frequency of the top four departments (Sewage Treatment, Waterworks, Urban Design, and Green Growth) exceeded 50%, and that of the top nine departments exceeded 70%; the departments involved were thus limited in number, and the load between departments was highly unbalanced. Most complaints follow a variety of different process patterns. The analysis shows that the number of 'supplement' decisions has the greatest impact on the length of a complaint; this is interpreted to mean that a 'supplement' decision requires a physical period in which the complainant supplements and resubmits the documents, so completion of the whole complaint is delayed. Preparing documents thoroughly before filing, or learning from the 'supplement' decisions of other complaints, can therefore drastically reduce the overall processing time. By clarifying and disclosing the causes of and remedies for supplementation, one of the important data items in the system, the complainant can prepare in advance and be reasonably confident that documents prepared from the disclosed information will pass, making the handling of complaints predictable. Documents prepared from pre-disclosed information are likely to be processed without problems, which not only shortens the processing period but also improves work efficiency from the processor's point of view by removing the need for renegotiation or repeated work. The results of this study can be used to find the departments with heavy complaint loads at particular points in time and to manage workforce allocation between departments flexibly. In addition, by analyzing the patterns of the departments participating in consultation according to complaint characteristics, the results can be used for automation or for recommending which departments to consult.
Furthermore, by using the various data generated during the complaint process together with machine learning techniques, the patterns of the complaint process can be discovered; turning these patterns into algorithms and applying them to the system can support the automation and intelligence of civil complaint processing. This study is expected to suggest future public service quality improvements through process mining analysis of civil services.
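
Two of the measurements discussed above, completion time per complaint and the load concentration across departments, reduce to simple aggregations over an event log. The sketch below illustrates them with pandas; the column names and the tiny log are assumptions for illustration, not N city's data.

```python
import pandas as pd

# Minimal sketch (not the study's tooling) of two basic process-mining
# measurements on an event log: completion time per case and the
# (cumulative) frequency of each department in the log.
log = pd.DataFrame({
    "case_id":    ["C1", "C1", "C1", "C2", "C2"],
    "department": ["Architecture", "Sewage Treatment", "Waterworks",
                   "Architecture", "Green Growth"],
    "timestamp":  pd.to_datetime(["2014-01-02", "2014-01-10", "2014-01-20",
                                  "2014-02-01", "2014-02-15"]),
})

# Completion time per complaint: first event to last event of each case
durations = log.groupby("case_id")["timestamp"].agg(lambda t: t.max() - t.min())
print(durations)

# How the workload concentrates: cumulative share of events per department
print(log["department"].value_counts(normalize=True).cumsum())
```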

A Study on the Effect of Network Centralities on Recommendation Performance (네트워크 중심성 척도가 추천 성능에 미치는 영향에 대한 연구)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.23-46
    • /
    • 2021
  • Collaborative filtering, often used in personalized recommendation, is recognized as a very useful technique for finding similar customers and recommending products to them based on their purchase histories. However, because traditional collaborative filtering computes similarities from direct connections and common features among customers, it has difficulty computing similarities for new customers or products. For this reason, hybrid techniques that also use content-based filtering have been designed. In parallel, efforts have been made to solve these problems by applying the structural characteristics of social networks, that is, by computing similarities indirectly through the similar customers placed between the two customers of interest: a customer network is built from purchase data, and the similarity between two customers is computed from features of the network that indirectly connect them. Such a similarity can be used as a measure to predict whether a target customer will accept a recommendation, and the centrality metrics of the network can be used to compute it. Different centrality metrics matter in that they may affect recommendation performance differently; moreover, the effect of a given centrality metric on performance may vary depending on the recommender algorithm. In addition, recommendation techniques based on network analysis can be expected to increase recommendation performance not only for new customers or products but for all customers and products. By treating a customer's purchase of an item as a link created between the customer and the item in the network, predicting whether the user will accept a recommendation is recast as predicting whether a new link will be created between them. Because classification models fit this binary link-prediction problem, decision tree, k-nearest neighbors (KNN), logistic regression, artificial neural network, and support vector machine (SVM) models were selected for the study. The data for the performance evaluation were order records collected from an online shopping mall over four years and two months. The first three years and eight months were used to construct the social network, and the records of the following four months were used to train and evaluate the recommender models. Experiments applying the centrality metrics to each model show that the recommendation acceptance rates of the centrality metrics differ across algorithms at a meaningful level. In this work we analyzed only four commonly used centrality metrics: degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. Eigenvector centrality recorded the lowest performance in all models except the support vector machine. Closeness centrality and betweenness centrality showed similar performance across all models. Degree centrality ranked in the middle across models, while betweenness centrality always ranked higher than degree centrality. Finally, closeness centrality was characterized by distinct performance differences according to the model.
It ranks first, with numerically high performance, for the logistic regression, artificial neural network, and decision tree models, but records very low rankings and low performance for the support vector machine and KNN models. As the experimental results reveal, network centrality metrics over the subnetwork that connects two nodes can effectively predict the connectivity between those nodes in a social network, and each metric performs differently depending on the classification model. This implies that choosing appropriate metrics for each algorithm can lead to higher recommendation performance. In general, betweenness centrality can guarantee a high level of performance in any model, and closeness centrality is worth considering to obtain higher performance with certain models.
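
The core idea, using centrality-based features of the network between two nodes to predict whether a link will form, can be illustrated in a few lines with networkx and scikit-learn. The toy graph, candidate pairs, and two-feature construction below are assumptions for illustration and are far simpler than the study's design.

```python
import networkx as nx
from sklearn.linear_model import LogisticRegression

# Minimal sketch (not the paper's experiment): compute centrality metrics on
# a small network and use them as features for predicting whether a link
# exists between two nodes.
G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 4), (1, 3), (4, 5)])
bet = nx.betweenness_centrality(G)
deg = nx.degree_centrality(G)

pairs = [(0, 1), (1, 3), (0, 4), (2, 5), (3, 4), (0, 3)]
X = [[bet[u] + bet[v], deg[u] + deg[v]] for u, v in pairs]   # toy features
y = [1 if G.has_edge(u, v) else 0 for u, v in pairs]         # 1: link present

clf = LogisticRegression().fit(X, y)
print(clf.predict(X))
```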

Fast Join Mechanism that considers the switching of the tree in Overlay Multicast (오버레이 멀티캐스팅에서 트리의 스위칭을 고려한 빠른 멤버 가입 방안에 관한 연구)

  • Cho, Sung-Yean;Rho, Kyung-Taeg;Park, Myong-Soon
    • The KIPS Transactions:PartC
    • /
    • v.10C no.5
    • /
    • pp.625-634
    • /
    • 2003
  • More than a decade after its initial proposal, the deployment of IP multicast has remained limited because of problems with traffic control in multicast routing, multicast address allocation in the global Internet, reliable multicast transport techniques, and so on. Recently, with the growth of multicast application services such as Internet broadcasting and real-time security information services, overlay multicast has been developed as a new Internet multicast technology. In this paper, we describe an overlay multicast protocol and propose a fast join mechanism that takes switching of the tree into account. To find a potential parent, the existing search algorithm descends the tree from the root one level at a time, which causes long joining latency. It also tries to select the nearest node as the potential parent, but it may fail to do so because of the node's degree limit, so the resulting tree has low efficiency. To reduce the joining latency and improve the efficiency of the tree, we propose searching two levels of the tree at a time. In this method a node forwards the joining request message to its own children, so in the steady state there is no overhead to maintain the tree; when a joining request arrives, the increased number of search messages reduces the joining latency, and searching more nodes helps construct a more efficient tree. To evaluate the performance of our fast join mechanism, we measure metrics such as the search latency, the number of searched nodes, and the number of switches as functions of the number of members and the degree limit. The simulation results show that the performance of our mechanism is superior to that of the existing mechanism.
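
The proposed idea, descending two levels of the tree per search step instead of one while respecting each node's degree limit, can be sketched as follows. The data structure, distance callback, and greedy selection here are simplified assumptions; the actual protocol works through join-request messages exchanged between members.

```python
class Node:
    """A member of the overlay multicast tree."""
    def __init__(self, name):
        self.name = name
        self.children = []

def find_parent(root, distance_to, degree_limit):
    """Greedy descent that inspects two levels per step: the current node,
    its children, and its grandchildren. Moves to the nearest node that
    still has a free slot; stops when the current node itself is best."""
    current = root
    while True:
        two_levels = current.children + [g for c in current.children
                                         for g in c.children]
        eligible = [n for n in [current] + two_levels
                    if len(n.children) < degree_limit]
        if not eligible:
            return None                      # no free slot found here
        best = min(eligible, key=distance_to)
        if best is current or not two_levels:
            return best
        current = best

# Hypothetical tree and round-trip delays
root, a, b, c = Node("root"), Node("a"), Node("b"), Node("c")
root.children = [a, b]
a.children = [c]
delay = {"root": 40, "a": 25, "b": 30, "c": 10}
parent = find_parent(root, lambda n: delay[n.name], degree_limit=2)
print(parent.name)   # "c" for these toy delays
```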

Usability of Multiple Confocal SPECT System in the Myocardial Perfusion SPECT Using 99mTc (99mTc을 이용한 심근 관류 SPECT에서 Multiple Confocal SPECT System의 유용성)

  • Shin, Chae-Ho;Pyo, Sung-Jai;Kim, Bong-Su;Cho, Yong-Gyi;Jo, Jin-Woo;Kim, Chang-Ho
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.15 no.2
    • /
    • pp.65-71
    • /
    • 2011
  • Purpose: The recently adopted multiple confocal SPECT system (hereinafter called IQ SPECT™) differs greatly from conventional myocardial perfusion SPECT in collimator design, acquisition method, and image reconstruction method. This study was conducted to compare the new equipment with the conventional one, to design a protocol suited to IQ SPECT, and to determine the characteristics and usefulness of IQ SPECT. Materials and Methods: 1. For the LEHR (low-energy high-resolution) collimator and the multiple confocal collimator, 37 MBq of 99mTc was placed in an acrylic dish and the sensitivity (cpm/µCi) was measured at distances of 5 cm, 10 cm, 20 cm, 30 cm, and 40 cm. 2. Based on the sensitivity results, an IQ SPECT protocol was designed with reference to the conventional myocardial SPECT protocol; 278 kBq/mL, 7.4 kBq/mL, and 48 kBq/mL of 99mTc were injected into the myocardium, soft tissue, and liver sites of an anthropomorphic torso phantom, and myocardial perfusion SPECT was performed. 3. To compare the FWHMs (full width at half maximum) resulting from image reconstruction with the LEHR collimator, FWHM (mm) was measured with a 99mTc line source while changing only the algorithm: the FBP (filtered back-projection) method used for reconstruction in conventional myocardial perfusion SPECT versus the 3D OSEM (ordered-subsets expectation maximization) method of IQ SPECT. Results: 1. The sensitivity (cpm/µCi) of the IQ SPECT collimator was 302, 382, 655, 816, and 1,178, and that of the LEHR collimator was 204, 204, 202, 201, and 198, at distances of 5 cm, 10 cm, 20 cm, 30 cm, and 40 cm, respectively; the sensitivity difference increased to about 4 times at a distance of 30 cm. 2. The myocardial perfusion SPECT protocol was designed according to the geometric characteristics of IQ SPECT based on the sensitivity results, and the phantom test of this protocol showed that the examination time could be reduced to about a quarter of the previous time. 3. In the comparison of FWHMs for the FBP and 3D OSEM reconstruction algorithms after SPECT acquisition with the LEHR collimator, the FWHM was about twice as large with the 3D OSEM method. Conclusion: IQ SPECT uses the multiple confocal collimator for myocardial perfusion SPECT to enhance sensitivity, and its myocardium-centered geometric acquisition and image reconstruction reduce examination time and improve visual image quality. Owing to these benefits, patients are expected to receive more comfortable and more accurate examinations, and further study with additional clinical material is warranted.

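The sensitivity values quoted above are in cpm/µCi, i.e., the acquired count rate divided by the source activity expressed in microcuries. A minimal sketch of that conversion is given below; the counts, acquisition time, and the omission of decay correction are illustrative assumptions, not the authors' worksheet.

```python
MBQ_PER_UCI = 0.037   # 1 uCi = 0.037 MBq

def sensitivity_cpm_per_uci(counts, acq_minutes, activity_mbq):
    """Counts per minute divided by the source activity in microcuries."""
    cpm = counts / acq_minutes
    activity_uci = activity_mbq / MBQ_PER_UCI
    return cpm / activity_uci

# Hypothetical 1-minute acquisition of a 37 MBq dish source
print(sensitivity_cpm_per_uci(counts=400_000, acq_minutes=1.0,
                              activity_mbq=37.0))   # -> 400.0 cpm/uCi
```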

The Availability of the step optimization in Monaco Planning system (모나코 치료계획 시스템에서 단계적 최적화 조건 실현의 유용성)

  • Kim, Dae Sup
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.26 no.2
    • /
    • pp.207-216
    • /
    • 2014
  • Purpose: In the Monaco treatment planning system, a re-optimization performed under the same conditions as the initial treatment plan can produce a different plan; we present a method to reduce this gap and complete the treatment plan. Materials and Methods: In the Monaco treatment planning system, the inverse calculation for volumetric modulated radiation therapy or intensity modulated radiation therapy is carried out in two optimization stages. In this study the initial plan was completed by running the full optimization through both stages without changing the optimization conditions from Stage 1 to Stage 2, i.e., a typical sequential optimization; a pencil beam algorithm was used in Stage 1 and the Monte Carlo algorithm was applied in Stage 2. We then compared the initial plan with a plan re-optimized under the same optimization conditions and evaluated the planned dose by measurement. When re-optimizing the initial treatment plan, the second plan applied the step optimization. Results: When the usual optimization was carried out again under the same conditions as the completed initial plan, the result was not the same. In the comparison within the treatment planning system, the dose-volume histograms showed similar trends but different values that did not satisfy the optimization goals for dose, dose homogeneity, and dose limits, and the dosimetric comparison differed by more than 20%. When different dose algorithms were used, the measurements also did not agree. Conclusion: Treatment plan optimization reaches its final goal through repeated trial and error. If a re-optimization simply trusts the conditions of the completed initial plan, a different treatment plan can result, and such a similar-looking plan may not satisfy the optimization results. When performing a re-optimization, the step-optimized conditions should be applied while the dose distribution is checked throughout the optimization process.
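
The dosimetric comparison mentioned above ("more than 20% difference") is, at its simplest, a percent difference between the planned and the measured dose at comparison points. A minimal sketch with hypothetical values follows; it is not the clinic's QA procedure.

```python
import numpy as np

def percent_difference(planned, measured):
    """Per-point dose deviation relative to the planned dose, in percent."""
    return 100.0 * (measured - planned) / planned

planned  = np.array([2.00, 1.80, 1.50])   # Gy, hypothetical comparison points
measured = np.array([2.10, 1.35, 1.62])   # Gy, hypothetical measurements
print(percent_difference(planned, measured))   # one point deviates by -25%
```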