• Title/Summary/Keyword: Error reduction


Age-related Changes of the Finger Photoplethysmogram in Frequency Domain Analysis (연령증가에 따른 지첨용적맥파의 주파수 영역에서의 변화)

  • Nam, Tong-Hyun;Park, Young-Bae;Park, Young-Jae;Shin, Sang-Hoon
    • The Journal of the Society of Korean Medicine Diagnostics, v.12 no.1, pp.42-62, 2008
  • Objectives: It is well known that some parameters of the photoplethysmogram (PPG) acquired by time-domain contour analysis can be used as markers of vascular aging, but the previous studies on frequency-domain analysis of the PPG have provided only restrictive and fragmentary information. The aim of the present investigation was to determine whether the harmonics extracted from the PPG using a fast Fourier transform could be used as an index of vascular aging. Methods: The PPG was measured in 600 recruited subjects for 30-second durations. To grasp the gross age-related change of the PPG waveform, we grouped subjects by gender and age and averaged the PPG signal over one pulse cycle. To calculate the conventional indices of vascular aging, we selected 5-6 pulse cycles in which the baseline was relatively stable and then acquired the coordinates of the inflection points. For the frequency-domain analysis, we performed a power spectral analysis on the 30-second PPG signals using a fast Fourier transform and dissociated the harmonic components from the PPG signals. Results: A final number of 390 subjects (174 males and 216 females) were included in the statistical analysis. The normalized power of the harmonics decreased with age; on a logarithmic scale, the reduction of the normalized power in the third (r=-0.492, P<0.0001), fourth (r=-0.621, P<0.0001), and fifth harmonics (r=-0.487, P<0.0001) was prominent. In a multiple linear regression analysis, the stiffness index, reflection index, and corrected upstroke time influenced the normalized power of the harmonics on a logarithmic scale. Conclusions: The normalized harmonic power decreased with age in healthy subjects and may be less error-prone owing to the essential attributes of frequency-domain analysis. We therefore expect that the normalized harmonic power density can be useful as a vascular aging marker.
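
As a rough illustration of the frequency-domain approach described above, the sketch below estimates normalized harmonic power from a PPG segment with NumPy's FFT. The sampling rate, the peak-picking of the fundamental, and the absence of windowing are simplifying assumptions, not the paper's exact procedure.

```python
import numpy as np

def normalized_harmonic_power(ppg, fs, n_harmonics=5):
    """Log-scaled normalized power of the first PPG harmonics (sketch)."""
    ppg = np.asarray(ppg, dtype=float)
    ppg -= ppg.mean()                               # remove DC offset
    power = np.abs(np.fft.rfft(ppg)) ** 2           # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(ppg), d=1.0 / fs)
    f0 = freqs[1 + np.argmax(power[1:])]            # fundamental ~ heart rate
    total = power.sum()
    norm = [power[np.argmin(np.abs(freqs - k * f0))] / total
            for k in range(1, n_harmonics + 1)]     # nearest bin to k*f0
    return np.log10(norm)                           # log scale, as analyzed in the study
```

For a 30-second segment sampled at, say, 250 Hz, `normalized_harmonic_power(ppg, 250)` returns the log-scaled normalized powers of the first five harmonics.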


The PRISM-based Rainfall Mapping at an Enhanced Grid Cell Resolution in Complex Terrain (복잡지형 고해상도 격자망에서의 PRISM 기반 강수추정법)

  • Chung, U-Ran;Yun, Kyung-Dahm;Cho, Kyung-Sook;Yi, Jae-Hyun;Yun, Jin-I.
    • Korean Journal of Agricultural and Forest Meteorology, v.11 no.2, pp.72-78, 2009
  • The demand for rainfall data in gridded digital formats has increased in recent years due to the close linkage between hydrological models and decision support systems using geographic information systems. One of the most widely used tools for digital rainfall mapping is PRISM (parameter-elevation regressions on independent slopes model), which uses point data (rain gauge stations), a digital elevation model (DEM), and other spatial datasets to generate repeatable estimates of monthly and annual precipitation. In PRISM, rain gauge stations are assigned weights that account for climatically important factors besides elevation, and aspect and topographic exposure are simulated by dividing the terrain into topographic facets. The size of a facet, or the grid cell resolution, is determined by the density of rain gauge stations, and a 5 × 5 km grid cell is considered the lowest limit under the situation in Korea. The PRISM algorithms using a 270m DEM for South Korea were implemented in a script language environment (Python), and relevant weights for each 270m grid cell were derived from the monthly data of 432 official rain gauge stations. Weighted monthly precipitation data from at least 5 nearby stations for each grid cell were regressed against elevation, and the selected linear regression equations with the 270m DEM were used to generate a digital precipitation map of South Korea at 270m resolution. Among the 1.25 million grid cells, precipitation estimates at 166 cells, where measurements were made by the Korea Water Corporation rain gauge network, were extracted and the monthly estimation errors were evaluated. An average 10% reduction in the root mean square error (RMSE) was found for months with more than 100mm of monthly precipitation, compared to the RMSE of the original 5km PRISM estimates. This modified PRISM may be used for rainfall mapping in the rainy season (May to September) at a much higher spatial resolution than the original PRISM without losing accuracy.
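
The core PRISM step described above, regressing weighted station precipitation against elevation for each grid cell, can be sketched as a weighted least-squares fit. The stand-in `stn_weight` values below lump together PRISM's distance, facet, and exposure weighting, which the model derives separately, so this is an illustration rather than the full algorithm.

```python
import numpy as np

def cell_precipitation(cell_elev, stn_elev, stn_precip, stn_weight):
    """One grid cell's monthly precipitation from a weighted
    precipitation-elevation regression over nearby stations (sketch)."""
    W = np.diag(stn_weight)
    X = np.column_stack([np.ones_like(stn_elev), stn_elev])  # intercept + elevation
    # weighted least squares: beta = (X'WX)^-1 X'Wy
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ stn_precip)
    return beta[0] + beta[1] * cell_elev

# e.g. five nearby stations for one 270m cell (hypothetical numbers)
est = cell_precipitation(
    cell_elev=812.0,
    stn_elev=np.array([310.0, 455.0, 620.0, 760.0, 905.0]),
    stn_precip=np.array([182.0, 196.0, 220.0, 241.0, 259.0]),
    stn_weight=np.array([0.9, 1.0, 0.8, 0.7, 0.6]),
)
```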

A Comparative Study on the Effect of Enterprise SNS on Job Performance - Focused on the Mediation Effect of Communication Level and Moderating Effect of Nationality - (기업용 SNS 이용이 업무성과에 미치는 영향의 국가 간 비교연구 - 커뮤니케이션 수준의 매개효과와 국적의 조절효과를 중심으로 -)

  • Chen, Jing-Yuan;Kwon, Sun-Dong
    • Management & Information Systems Review, v.38 no.4, pp.137-157, 2019
  • Companies are trying to use enterprise SNS for collaboration and speedy decision-making. This study verified the mediating effect of communication between enterprise SNS use and job performance, and the moderating effect of nationality between enterprise SNS use and communication. Survey data were collected from 81 Korean and 81 Chinese employees who have used enterprise SNS in Korea and China. As results of the data analysis, first, enterprise SNS improved job performance through speedy information sharing and error reduction. Second, communication mediated the effect of enterprise SNS on job performance. Third, enterprise SNS increased the level of organizational communication by decreasing the burden of offline face-to-face communication. Compared with Chinese corporate organizations, Korean corporate organizations have high power distance, centralized control, and strong superior authority. Therefore, in offline communication situations, subordinates feel social pressure to follow the commands of superiors, and communication is one-way and closed. In this organizational situation, enterprise SNS can be used as a means to bypass rigid offline communication. In the non-face-to-face online environment of enterprise SNS, the anxiety and stress of face-to-face communication are reduced, so communication between upper and lower ranks can flow more smoothly. The contribution of this paper is that it proved that enterprise SNS promotes communication and improves job performance by reducing the anxiety and stress of offline communication, whereas prior research holds that successful adoption of many types of information systems requires a fit between the system and the organization's culture.
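
For readers unfamiliar with the mediation test reported here, the sketch below bootstraps the indirect effect of SNS use on job performance via communication. It is a generic product-of-coefficients test on hypothetical standardized scores, not the paper's actual survey model or data.

```python
import numpy as np

def indirect_effect(x, m, y, n_boot=2000, seed=0):
    """Bootstrap percentile CI for the indirect effect (a*b) of x -> m -> y."""
    rng = np.random.default_rng(seed)
    n = len(x)
    effects = np.empty(n_boot)
    for t in range(n_boot):
        i = rng.integers(0, n, n)                       # resample with replacement
        a = np.polyfit(x[i], m[i], 1)[0]                # path a: x -> mediator
        X = np.column_stack([np.ones(n), x[i], m[i]])   # y ~ x + m
        b = np.linalg.lstsq(X, y[i], rcond=None)[0][2]  # path b: m -> y given x
        effects[t] = a * b
    lo, hi = np.percentile(effects, [2.5, 97.5])        # 95% percentile CI
    return effects.mean(), (lo, hi)

# hypothetical standardized scores for 162 respondents (81 + 81)
rng = np.random.default_rng(1)
sns_use = rng.normal(size=162)
communication = 0.5 * sns_use + rng.normal(size=162)
performance = 0.4 * communication + 0.2 * sns_use + rng.normal(size=162)
print(indirect_effect(sns_use, communication, performance))
```

A confidence interval that excludes zero supports mediation; moderation by nationality would be tested separately, for example with an interaction term.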

Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems, v.16 no.4, pp.99-112, 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms: it finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention from the machine learning and artificial intelligence fields because of its remarkable performance improvement and flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVM. In those studies, DT ensembles have consistently demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensembles have not shown performance as remarkable as DT ensembles. Recently, several works have reported that the performance of an ensemble can be degraded when the classifiers in the ensemble are highly correlated with one another, resulting in a multicollinearity problem that degrades the ensemble; they have also proposed differentiated learning strategies to cope with this problem. Hansen and Salamon (1990) insisted that it is necessary and sufficient for the performance enhancement of an ensemble that the ensemble contain diverse classifiers. Breiman (1996) showed that ensemble learning can increase the performance of unstable learning algorithms, but does not show remarkable improvement on stable learning algorithms. Unstable learning algorithms such as decision tree learners are sensitive to changes in the training data, so small changes in the training data can yield large changes in the generated classifiers; therefore, an ensemble of unstable learning algorithms can guarantee some diversity among the classifiers. On the contrary, stable learning algorithms such as NN and SVM generate similar classifiers in spite of small changes in the training data, so the correlation among the resulting classifiers is very high. This high correlation results in a multicollinearity problem, which leads to performance degradation of the ensemble. Kim's work (2009) compared performance in bankruptcy prediction on Korean firms using traditional prediction algorithms such as NN, DT, and SVM. It reported that stable learning algorithms such as NN and SVM have higher predictability than the unstable DT, while, with respect to ensemble learning, the DT ensemble showed more improved performance than the NN and SVM ensembles. Further analysis with the variance inflation factor (VIF) empirically proved that the performance degradation of the ensemble was due to multicollinearity, and proposed that optimization of the ensemble is needed to cope with the problem. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) in order to improve the performance of NN ensembles. Coverage optimization is a technique of choosing a sub-ensemble from an original ensemble to guarantee the diversity of classifiers in the coverage optimization process. CO-NN uses a GA, which has been widely used for various optimization problems, to deal with the coverage optimization problem. The GA chromosomes for coverage optimization are encoded as binary strings, each bit of which indicates an individual classifier.
The fitness function is defined as maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the generally used measures of multicollinearity, is added to ensure the diversity of classifiers by removing high correlation among them. We use Microsoft Excel and the GA software package called Evolver. Experiments on company failure prediction have shown that CO-NN is effective for the stable performance enhancement of NN ensembles through a choice of classifiers that considers the correlations within the ensemble. The classifiers that have a potential multicollinearity problem are removed by the coverage optimization process of CO-NN, and thereby CO-NN has shown higher performance than a single NN classifier and an NN ensemble at the 1% significance level, and than a DT ensemble at the 5% significance level. However, further research issues remain. First, a decision optimization process to find the optimal combination function should be considered in future research. Second, various learning strategies to deal with data noise should be introduced in more advanced future research.
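
A minimal sketch of this fitness evaluation follows: a GA bit mask selects a sub-ensemble, the fitness rewards majority-vote accuracy, and a VIF screen penalizes multicollinear member sets. The paper drove the search with the Evolver GA package; the VIF limit of 10 and the penalty form below are assumptions for illustration.

```python
import numpy as np

def max_vif(preds):
    """Largest variance inflation factor among members.
    preds: (n_samples, n_members) matrix of member outputs."""
    vifs = []
    for j in range(preds.shape[1]):
        others = np.delete(preds, j, axis=1)
        X = np.column_stack([np.ones(len(preds)), others])
        coef, *_ = np.linalg.lstsq(X, preds[:, j], rcond=None)
        resid = preds[:, j] - X @ coef
        r2 = 1.0 - resid.var() / max(preds[:, j].var(), 1e-12)
        vifs.append(1.0 / max(1.0 - r2, 1e-9))
    return max(vifs)

def fitness(mask, preds, y, vif_limit=10.0):
    """GA fitness: accuracy of the selected sub-ensemble's majority
    vote, penalized when max VIF exceeds the (assumed) limit."""
    mask = np.asarray(mask, dtype=bool)
    if mask.sum() < 2:                                # need at least two members
        return 0.0
    sub = preds[:, mask]
    vote = (sub.mean(axis=1) > 0.5).astype(int)       # majority vote (binary labels)
    acc = float((vote == y).mean())
    return acc if max_vif(sub) <= vif_limit else acc - 1.0
```

Any binary-string GA (or even simple bit-flip hill climbing) can then search over `mask` to maximize this fitness.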

Downscaling of Sunshine Duration for a Complex Terrain Based on the Shaded Relief Image and the Sky Condition (하늘상태와 음영기복도에 근거한 복잡지형의 일조시간 분포 상세화)

  • Kim, Seung-Ho;Yun, Jin I.
    • Korean Journal of Agricultural and Forest Meteorology, v.18 no.4, pp.233-241, 2016
  • Experiments were carried out to quantify the topographic effects on the attenuation of sunshine in complex terrain, and the results are expected to help convert the coarse-resolution sunshine duration information provided by the Korea Meteorological Administration (KMA) into a detailed map reflecting the terrain characteristics of a mountainous watershed. Hourly shaded relief images for one year, each pixel holding a brightness value of 0 to 255, were constructed by applying shadow modeling and skyline analysis techniques to the 3m resolution digital elevation model for an experimental watershed on the southern slope of Mt. Jiri in Korea. Using a bimetal sunshine recorder, sunshine duration was measured at three points with different terrain conditions in the watershed from May 15, 2015 to May 14, 2016. The brightness values of the 3 corresponding pixels on the shaded relief map were extracted and regressed against the measured sunshine duration, resulting in a brightness-sunshine duration response curve for a clear day. We devised a method to calibrate this curve equation according to the sky condition categorized by cloud amount, and used it to derive an empirical model for estimating sunshine duration over complex terrain. When the performance of this model was compared with a conventional scheme for estimating sunshine duration over a horizontal plane, the estimation bias improved remarkably, and the root mean square error for daily sunshine hours was 1.7 hours, a 37% reduction from the conventional method. To apply this model to a given area, the clear-sky sunshine duration of each pixel is first produced at hourly intervals by driving the curve equation with the hourly shaded relief image of the area. Next, the cloud effect is corrected using the 3-hourly 'sky condition' of the KMA digital forecast products. Finally, the daily sunshine hours are obtained by accumulating the hourly sunshine duration. A detailed sunshine duration distribution at 3m horizontal resolution was obtained by applying this procedure to the experimental watershed.
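
The estimation procedure in the last few sentences condenses into a short routine: drive the fitted brightness-sunshine response curve with hourly shaded-relief values, attenuate by the 3-hourly sky condition, and accumulate over the day. The response curve, the cloud factors, and the sky-condition codes below are illustrative stand-ins, not the paper's fitted values.

```python
def daily_sunshine_hours(brightness, sky_code, response, cloud_factor):
    """Accumulate one pixel's daily sunshine duration (sketch).
    brightness: 24 hourly shaded-relief values (0-255)
    sky_code:   8 three-hourly KMA 'sky condition' codes"""
    hours = 0.0
    for h in range(24):
        clear = response(brightness[h])                  # terrain-limited clear-sky fraction
        hours += clear * cloud_factor[sky_code[h // 3]]  # cloud correction
    return hours

# illustrative stand-ins for the fitted relationships
response = lambda b: max(0.0, (b - 40) / 215.0)          # brightness response curve
cloud_factor = {1: 1.0, 3: 0.6, 4: 0.2}                  # assumed clear/cloudy/overcast codes

# e.g. a pixel shaded in the early morning under clear skies
print(daily_sunshine_hours([0] * 7 + [200] * 11 + [0] * 6, [1] * 8,
                           response, cloud_factor))
```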

Analysis of Respiratory Motion Artifacts in PET Imaging Using Respiratory Gated PET Combined with 4D-CT (4D-CT와 결합한 호흡게이트 PET을 이용한 PET영상의 호흡 인공산물 분석)

  • Cho, Byung-Chul;Park, Sung-Ho;Park, Hee-Chul;Bae, Hoon-Sik;Hwang, Hee-Sung;Shin, Hee-Soon
    • The Korean Journal of Nuclear Medicine, v.39 no.3, pp.174-181, 2005
  • Purpose: Reduction of respiratory motion artifacts in PET images was studied using respiratory-gated PET (RGPET) with a moving phantom. In particular, a method of generating simulated helical CT images from 4D-CT datasets was developed and applied to respiratory-phase-specific RGPET images for more accurate attenuation correction. Materials and Methods: Using a motion phantom with a periodicity of 6 seconds and a linear motion amplitude of 26 mm, PET/CT (Discovery ST; GEMS) scans with and without respiratory gating were obtained for one syringe and two vials with volumes of 3, 10, and 30 ml, respectively. RPM (Real-Time Position Management, Varian) was used for tracking motion during PET/CT scanning. Ten datasets of RGPET and 4D-CT corresponding to 10% phase intervals were acquired. From the positions, sizes, and uptake values of each subject on the resultant phase-specific PET and CT datasets, the correlations between motion artifacts in PET and CT images and the size of motion relative to the size of the subject were analyzed. Results: The center positions of the three vials in RGPET and 4D-CT agreed well with the actual positions within the estimated error. However, the volumes of subjects in non-gated PET images increased in proportion to the relative motion size and were overestimated by as much as 250% when the motion amplitude was twice the size of the subject; correspondingly, the maximal uptake value was reduced to about 50%. Conclusion: RGPET was demonstrated to remove respiratory motion artifacts in PET imaging; moreover, more precise image fusion and more accurate attenuation correction are possible by combining it with 4D-CT.
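
A toy one-dimensional model conveys why non-gated volumes inflate and peak uptake drops: the time-averaged position of a sinusoidally moving object follows an arcsine distribution, which smears the object profile. This illustrates the mechanism only and is not the paper's phantom analysis.

```python
import numpy as np

def motion_blur_1d(object_size_mm, motion_mm, dx=0.5):
    """Apparent size ratio and peak ratio of a uniform 1-D object
    smeared by sinusoidal motion (arcsine position density)."""
    half = motion_mm / 2.0
    x = np.arange(-80.0, 80.0, dx)
    obj = (np.abs(x) <= object_size_mm / 2.0).astype(float)
    s = np.arange(-half + dx, half, dx)
    pdf = 1.0 / np.sqrt(half ** 2 - s ** 2)               # arcsine density
    pdf /= pdf.sum()
    blurred = np.convolve(obj, pdf, mode="same")
    extent = dx * (blurred > 0.05 * blurred.max()).sum()  # thresholded apparent size
    return extent / object_size_mm, blurred.max()         # size ratio, peak ratio (true peak = 1)

# e.g. a 13 mm object with a 26 mm excursion, as in the phantom study
print(motion_blur_1d(13.0, 26.0))
```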

Building the Process for Reducing Whole Body Bone Scan Errors and its Effect (전신 뼈 스캔의 오류 감소를 위한 프로세스 구축과 적용 효과)

  • Kim, Dong Seok;Park, Jang Won;Choi, Jae Min;Shim, Dong Oh;Kim, Ho Seong;Lee, Yeong Hee
    • The Korean Journal of Nuclear Medicine Technology, v.21 no.1, pp.76-82, 2017
  • Purpose: Whole-body bone scan is one of the most frequently performed examinations in nuclear medicine. Basically, both anterior and posterior views are acquired simultaneously. Occasionally, it is difficult to distinguish a lesion from the anterior and posterior views alone; in such cases, accurate localization of the lesion through SPECT/CT or additional static scan images is important. Therefore, various improvement activities have been carried out in order to enhance the work capacity of technologists. In this study, we investigate the effect of technologist training and a standardized work process on bone scan error reduction. Materials and Methods: Several systems were introduced in sequence for the application of the new process: first, education and testing with physicians; second, classification of patients who are expected to undergo further scanning, introducing a pre-filtration system that allows technologists to check in advance; and finally, a communication system called NMQA. From January 2014 to December 2016, we examined the whole-body bone scan patients who visited the Department of Nuclear Medicine, Asan Medical Center, Seoul, Korea. Results: We investigated errors based on the bone scan NMQA reports sent from January 2014 to December 2016. The number of examinations for which an NMQA report was transmitted was calculated as a percentage of all bone scans during the survey period. The annual counts were 141 cases in 2014, 88 cases in 2015, and 86 cases in 2016, and the NMQA rate decreased from 0.88% in 2014 to 0.53% in 2015 and 0.45% in 2016. Conclusion: The incidence of NMQA reports has decreased since the new process was applied in 2014. However, data will need to be accumulated continuously before its usefulness can be confirmed statistically. This study confirmed the necessity of standardized work and education to improve the quality of bone scan images, and continuous research and attention will be needed to keep the process up to date.


On the Vibration Influence on Running Power Plant Facilities When the Foundation Is Excavated by Cautious Blasting Works (노천굴착에서 발파진동의 크기를 감량 시키기 위한 정밀파실험식)

  • Huh Ginn
    • Explosives and Blasting, v.9 no.1, pp.3-13, 1991
  • The cautious blasting works were carried out with emulsion explosives and electric MS delay caps. Drill depth ranged from 3 m to 6 m with a φ70 mm crawler drill in calcareous sandstone (soft to moderate to semi-hard rock). The total number of test blasts was 88, and the scaled distances ranged from 15.52 to 60.32. The propagation law for blasting vibration was applied as follows: $V = K(D/W^b)^{-n}$, where V is the peak particle velocity (cm/sec), D the distance between the explosion and recording sites (m), W the maximum charge per delay period of eight milliseconds or more (kg), K the ground transmission constant determined empirically from the rock, the explosive, the drilling pattern, etc., b the charge exponent, and n the reduction exponent; the quantity $D/W^b$ is known as the scaled distance. This equation is used by the U.S. Bureau of Mines to determine peak particle velocity. The propagation law can be categorized into three groups: cube-root scaling of charge per delay, square-root scaling of charge per delay, and site-specific scaling of charge per delay. Plots of peak particle velocity versus distance were made on log-log coordinates, with the data grouped by test and peak particle velocity. The linear grouping of the data permits their representation by an equation of the form $V = K(D/W^{1/3})^{-n}$. The values of K (41 or 124) and n (1.41 or 1.66) were determined for each set of data by the method of least squares. Statistical tests showed that a common slope n could be used for all data of a given component. The charge and reduction exponents were obtained by multiple regression analysis. The data were divided into under-100 m and over-100 m ranges because the dominant frequency varies with the distance from the blast site. The empirical equations for cautious blasting vibration are as follows: over 30 m and under 100 m, $V = 41(D/\sqrt{W})^{-1.41}$ (A); over 100 m, $V = 124(D/\sqrt[3]{W})^{-1.66}$ (B), where V is the peak particle velocity in cm/sec, D the distance in m, and W the maximum charge weight per delay in kg. The K values in the above equations need further specification; to better understand the effects of explosives, rock strength, and drilling pattern on vibration levels, it is necessary to carry out more tests.
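
The least-squares fitting on log-log coordinates described above is straightforward to reproduce. The sketch below fits K and n for a chosen charge-scaling root and evaluates the resulting propagation law; the example call uses the near-field constants quoted in the abstract.

```python
import numpy as np

def fit_propagation_law(D, W, V, root=2):
    """Fit V = K * (D / W**(1/root))**(-n) by least squares on
    log-log axes (root=2: square-root scaling; root=3: cube-root)."""
    scaled_distance = D / W ** (1.0 / root)
    slope, intercept = np.polyfit(np.log10(scaled_distance), np.log10(V), 1)
    return 10.0 ** intercept, -slope                  # K, n

def peak_particle_velocity(D, W, K, n, root=2):
    """Predicted PPV (cm/sec) at distance D (m) for charge W (kg/delay)."""
    return K * (D / W ** (1.0 / root)) ** (-n)

# e.g. the near-field fit V = 41 * (D / W**0.5)**-1.41
print(peak_particle_velocity(D=50.0, W=10.0, K=41.0, n=1.41, root=2))
```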


Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems, v.18 no.2, pp.29-45, 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. One major drawback, however, is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories. SVM is simple enough to be analyzed mathematically and leads to high performance in practical applications. SVM implements the structural risk minimization principle and searches to minimize an upper bound on the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many training samples, since it builds prediction models using only representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can potentially degrade SVM's performance. First, SVM was originally proposed for solving binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class classification as much as SVM does in binary-class classification. Second, approximation algorithms (e.g., decomposition methods or the sequential minimal optimization algorithm) can be used to reduce computation time in multi-class settings, but they can deteriorate classification performance. Third, a difficulty in multi-class prediction problems is the data imbalance that occurs when the number of instances in one class greatly outnumbers the number of instances in another class. Such data sets often cause a default classifier to be built due to a skewed boundary, and thus reduce classification accuracy. SVM ensemble learning is one machine learning method for coping with the above drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the widely used ensemble learning techniques: it constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations. Observations incorrectly predicted by previous classifiers are chosen more often than examples that are correctly predicted.
Boosting thus attempts to produce new classifiers that are better able to predict the examples for which the current ensemble's performance is poor, and in this way it can reinforce the training of misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process can consider the geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross-validation was performed three times with different random seeds to ensure that the comparison among the three classifiers does not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine sets; that is, the cross-validated folds were tested independently for each algorithm. Through these steps, we obtained results for the classifiers on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy between individual classifiers, MGM-Boost (52.95%) shows higher prediction accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher prediction accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test was used to examine whether the performance of each classifier over the 30 folds differs significantly; the results indicate that the performance of MGM-Boost is significantly different from the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
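
The geometric mean-based accuracy that distinguishes MGM-Boost from plain AdaBoost can be stated compactly: it is the geometric mean of per-class recalls, so a collapsed minority class zeroes the score instead of being averaged away. The sketch below shows the metric only, on made-up labels, not the full boosting loop.

```python
import numpy as np

def geometric_mean_accuracy(y_true, y_pred):
    """Geometric mean of per-class recalls (class-imbalance aware)."""
    classes = np.unique(y_true)
    recalls = [(y_pred[y_true == c] == c).mean() for c in classes]
    return float(np.prod(recalls) ** (1.0 / len(classes)))

# made-up labels: one majority class and two small classes
y_true = np.array([0, 0, 0, 0, 0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 0, 0, 0, 0, 0, 1, 0, 2, 2])
print(geometric_mean_accuracy(y_true, y_pred))  # (1.0 * 0.5 * 1.0) ** (1/3) ~ 0.79
```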

Usefulness of an Abdominal Compressor in Stereotactic Body Radiotherapy Using Tomotherapy for Hepatocellular Carcinoma Patients (토모테라피를 이용한 간암환자의 정위적 방사선치료시 복부압박장치의 유용성 평가)

  • Woo, Joong-Yeol;Kim, Joo-Ho;Kim, Joon-Won;Baek, Jong-Geal;Park, Kwang-Soon;Lee, Jong-Min;Son, Dong-Min;Lee, Sang-Kyoo;Jeon, Byeong-Chul;Cho, Jeong-Hee
    • The Journal of Korean Society for Radiation Therapy, v.24 no.2, pp.157-165, 2012
  • Purpose: We evaluated the usefulness of an abdominal compressor for stereotactic body radiotherapy (SBRT) in patients with unresectable hepatocellular carcinoma (HCC), hepato-biliary cancer, and metastatic liver cancer. Materials and Methods: From November 2011 to March 2012, we selected HCC patients whose diaphragm movement was reduced by more than 1 cm with an abdominal compressor (Diaphragm Control, Elekta, Sweden) for helical tomotherapy (Hi-Art Tomotherapy, USA). We obtained planning computed tomography (CT) images and four-dimensional (4D) images with a 4D CT scanner (Somatom Sensation, Siemens, Germany). The gross tumor volume (GTV) included the gross tumor and margins accounting for tumor movement. The planning target volume (PTV) included a 5 to 7 mm safety margin around the GTV. We classified patients into two groups according to the distance between the tumor and the organs at risk (OAR: stomach, duodenum, bowel). Patients with a distance of more than 1 cm were classified as the first group and received SBRT in 4 or 5 fractions; patients with a distance of less than 1 cm were classified as the second group and received tomotherapy in 20 fractions. Megavoltage computed tomography (MVCT) was performed for 4 or 10 fractions, and the MVCT fusion was verified giving priority to the liver rather than to bone matching. We sent the MVCT images to MIM-Vista (MIM Software, ver. 5.4, USA), re-delineated the stomach, duodenum, and bowel as bowel_organ, and delineated the liver. First, we analyzed the MVCT images to check the setup variation. Second, we compared the dose difference between tumor and OAR based on the adaptive dose through the adaptive planning station and MIM-Vista. Results: The average setup variation from MVCT was -0.66 ± 1.53 mm (left-right), 0.39 ± 4.17 mm (superior-inferior), 0.71 ± 1.74 mm (anterior-posterior), and -0.18 ± 0.30 degrees (roll). The first group (d ≥ 1 cm) and the second group (d < 1 cm) showed similar setup variation. For the first group, V_diff3% (the volume with a 3% dose difference) of the GTV through the adaptive planning station was 0.78 ± 0.05% and that of the PTV was 9.97 ± 3.62%; V_diff5% of the GTV was 0.0% and that of the PTV was 2.9 ± 0.95%; the maximum dose difference rate of bowel_organ was -6.85 ± 1.11%. For the second group, V_diff3% of the GTV was 1.62 ± 0.55% and that of the PTV was 8.61 ± 2.01%; V_diff5% of the GTV was 0.0% and that of the PTV was 5.33 ± 2.32%; the maximum dose difference rate of bowel_organ was 28.33 ± 24.41%. Conclusion: Although fluoroscopy showed diaphragm movement of more than 5 mm after use of the abdominal compressor, the average setup variation from MVCT was less than 5 mm, so we could estimate the range of setup error to be within 5 mm. The dose difference rates of the targets were similar in the two groups, while the difference between the groups in the bowel_organ maximum dose difference rate was more than 35%, being smaller in the first group. When applying SBRT to HCC, an abdominal compressor is useful for controlling diaphragm movement in selected patients with a tumor-to-bowel_organ distance of more than 1 cm.
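
The V_diff quantities reported above can be sketched as a simple voxel-wise comparison between the planned dose and the dose recomputed on MVCT; the relative-difference definition below is an assumption, since the abstract does not spell out the exact formula.

```python
import numpy as np

def v_diff(planned, adaptive, threshold=0.03):
    """Percent of a structure's voxels whose adaptive dose differs from
    the planned dose by more than `threshold` (0.03 -> V_diff3%,
    0.05 -> V_diff5%). Relative-difference definition is assumed."""
    planned = np.asarray(planned, dtype=float)
    adaptive = np.asarray(adaptive, dtype=float)
    rel = np.abs(adaptive - planned) / np.maximum(planned, 1e-9)
    return 100.0 * (rel > threshold).mean()
```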
