• Title/Summary/Keyword: Accuracy Rate

Search Results: 3,386 (processing time: 0.029 seconds)

The Influence Evaluation of $^{201}Tl$ Myocardial Perfusion SPECT Image According to the Elapsed Time Difference after the Whole Body Bone Scan (전신 뼈 스캔 후 경과 시간 차이에 따른 $^{201}Tl$ 심근관류 SPECT 영상의 영향 평가)

  • Kim, Dong-Seok;Yoo, Hee-Jae;Ryu, Jae-Kwang;Yoo, Jae-Sook
    • The Korean Journal of Nuclear Medicine Technology / v.14 no.1 / pp.67-72 / 2010
  • Purpose: At Asan Medical Center, myocardial perfusion SPECT is performed to evaluate the cardiac event risk of patients scheduled for non-cardiac surgery. In cancer patients, tumor metastasis is first checked with a whole body bone scan and a whole body PET scan, and myocardial perfusion SPECT is then performed, to avoid unnecessary examinations. To shorten the hospitalization period of inpatients, $^{201}Tl$ myocardial perfusion SPECT is performed a minimum of 16 hours after the whole body bone scan, but the effect of crosstalk contamination between the two different administered isotopes has not been properly evaluated. In this study, we evaluated the influence of crosstalk contamination on $^{201}Tl$ myocardial perfusion SPECT using an anthropomorphic torso phantom and patient data. Materials and Methods: From August to September 2009, we analyzed 87 patients who underwent $^{201}Tl$ myocardial perfusion SPECT. Patients were classified according to whether a whole body bone scan had been performed on the previous day. Image data were acquired with a dual energy window during $^{201}Tl$ myocardial perfusion SPECT, and the ratio of $^{201}Tl$ to $^{99m}Tc$ counts was analyzed for each patient group. For the phantom experiment, we administered $^{201}Tl$ 14.8 MBq (0.4 mCi) to the myocardium and $^{99m}Tc$ 44.4 MBq (1.2 mCi) to the extracardiac region of an anthropomorphic torso phantom, acquired non-gated $^{201}Tl$ myocardial perfusion SPECT images, and analyzed spatial resolution using Xeleris ver. 2.0551. Results: In patients who had undergone a whole body bone scan on the previous day, the ratio of counts in the $^{99m}Tc$ window to those in the $^{201}Tl$ window decreased exponentially with the time elapsed after bone tracer injection: from 1:0.411 on Ventri (GE Healthcare, Wisconsin, USA) and 1:0.79 on Infinia (GE Healthcare, Wisconsin, USA) at 12 hours to 1:0.114 and 1:0.249, respectively, at 24 hours (Ventri p=0.001, Infinia p=0.001). In patients without a whole body bone scan, the average ratio was 1:0.067±0.6 on Ventri and 1:0.063±0.7 on Infinia. In the phantom experiment, spatial resolution measurements showed no significant change in FWHM with respect to $^{99m}Tc$ administration or elapsed time (p=0.134). Conclusion: The experiments with the anthropomorphic torso phantom and patient data confirmed that $^{201}Tl$ myocardial perfusion SPECT performed 16 hours or more after bone tracer injection suffers no notable degradation of spatial resolution due to $^{99m}Tc$. However, this investigation addressed image quality only, so further study of patient radiation dose and of examination accuracy and precision is needed. An exact guideline for the examination interval should be established through an exact, standardized validation of the crosstalk contamination caused by the use of different isotopes.
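
As an illustration of the exponential relationship reported above, here is a minimal sketch of fitting a decay curve to the $^{99m}Tc$/$^{201}Tl$ window count ratio; the two sample points are the 12 h and 24 h Ventri ratios quoted in the abstract, and the model form and starting values are assumptions, not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Exponential decay model for the 99mTc/201Tl window count ratio
# as a function of time elapsed since bone tracer injection.
def decay(t, r0, lam):
    return r0 * np.exp(-lam * t)

# Illustrative points only: the 12 h and 24 h ratios reported for Ventri.
t_hours = np.array([12.0, 24.0])
ratio = np.array([0.411, 0.114])

(r0, lam), _ = curve_fit(decay, t_hours, ratio, p0=(1.0, 0.1))
print(f"fitted ratio at the 16 h minimum interval: {decay(16.0, r0, lam):.3f}")
```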


Stock Price Prediction by Utilizing Category Neutral Terms: Text Mining Approach (카테고리 중립 단어 활용을 통한 주가 예측 방안: 텍스트 마이닝 활용)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.123-138 / 2017
  • Since the stock market is driven by the expectations of traders, studies have been conducted to predict stock price movements by analyzing various sources of text data. To predict stock price movements, research has examined not only the relationship between text data and stock price fluctuations, but also trading strategies based on news articles and social media responses. Studies that predict stock price movements have applied classification algorithms to a term-document matrix constructed in the same way as in other text mining approaches. Because a document contains many words, it is better to select the words that contribute most when building the term-document matrix. Based on word frequency, words with too low a frequency or importance are removed; words can also be selected according to their contribution, by measuring the degree to which a word helps classify a document correctly. The conventional approach to constructing a term-document matrix is to collect all the documents to be analyzed and select the words that influence classification. In this study, we instead analyze the documents for each individual stock and select, as neutral words, the words that are irrelevant for all categories. We then extract the words surrounding each selected neutral word and use them to generate the term-document matrix. The idea is that the presence of a neutral word itself is only weakly related to stock movements, while the words surrounding a neutral word are more likely to affect stock price movements; the resulting term-document matrix is then fed to an algorithm that classifies stock price fluctuations. We first removed stop words and selected neutral words for each stock, excluding words that also appeared in news articles about other stocks. Through an online news portal, we collected four months of news articles on the top 10 stocks by market capitalization. We used three months of news data as training data and applied the remaining one month of news articles to the model to predict the next day's stock price movements. We used SVM, boosting, and random forest to build models and predict the movements of stock prices. The stock market was open for a total of 80 days during the four months (2016/02/01 ~ 2016/05/31); the first 60 days were used as the training set and the remaining 20 days as the test set. The neutral-word-based algorithm proposed in this study showed better classification performance than word selection based on sparsity. This study predicted stock price volatility by collecting and analyzing news articles on the top 10 stocks by market capitalization. We used a term-document-matrix-based classification model to estimate stock price fluctuations and compared the performance of the existing sparsity-based word extraction method against the suggested method of removing words from the term-document matrix. The suggested method differs from the existing word extraction method in that it uses not only the news articles for the corresponding stock but also news items for other stocks to determine which words to extract. In other words, it removes not only the words that appear in both rises and falls but also the words that appear commonly in the news for other stocks. When prediction accuracy was compared, the suggested method showed higher accuracy.
The limitation of this study is that stock price prediction was set up as classification of rises and falls, and the experiment was conducted only on the top ten stocks, which do not represent the entire stock market. In addition, it is difficult to show investment performance, because stock price fluctuations and profit rates may differ. Therefore, further research is needed using more stocks and predicting yields through trading simulation.
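
The following is a minimal sketch of the neutral-word idea described above, under assumed details: a whitespace tokenizer, a hypothetical two-word neutral list, a ±2-word context window, and toy documents and labels. It only illustrates building a term-document matrix from the context words around neutral words and training one of the mentioned classifiers (SVM).

```python
# Minimal sketch (assumed tokenization and toy data): build a term-document
# matrix from the words surrounding pre-selected "neutral" words, then train
# a classifier on up/down labels, as the abstract describes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC

neutral_words = {"today", "company"}          # hypothetical neutral terms
window = 2                                    # context words on each side

def context_features(doc, neutral, k):
    tokens = doc.lower().split()
    keep = []
    for i, tok in enumerate(tokens):
        if tok in neutral:
            keep.extend(tokens[max(0, i - k):i] + tokens[i + 1:i + 1 + k])
    return " ".join(keep)

docs = ["Shares of the company rose sharply today on strong earnings",
        "The company warned today that quarterly profit would fall"]
labels = [1, 0]                               # 1 = price up, 0 = price down

X = CountVectorizer().fit_transform(context_features(d, neutral_words, window)
                                    for d in docs)
clf = SVC(kernel="linear").fit(X, labels)
```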

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.167-181 / 2018
  • Over the past decade, deep learning has been in the spotlight among machine learning algorithms. In particular, CNN (Convolutional Neural Network), known as an effective solution for recognizing and classifying images and voices, has been applied widely to classification and prediction problems. In this study, we investigate how to apply CNN to business problem solving. Specifically, this study proposes applying CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNN's strength is interpreting images. Thus, the model proposed in this study adopts CNN as a binary classifier that predicts the stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements and predict future price movements. Our proposed model, named CNN-FG (Convolutional Neural Network using Fluctuation Graph), consists of five steps. In the first step, it divides the dataset into intervals of 5 days. In step 2, it creates time series graphs for the divided dataset. Each graph is drawn as a 40×40 pixel image, and the graph of each independent variable is drawn in a different color. In step 3, the model converts the images into matrices: each image becomes a combination of three matrices expressing the color value on the R (red), G (green), and B (blue) scales. In the next step, it splits the graph image dataset into training and validation datasets; we used 80% of the total dataset for training and the remaining 20% for validation. In the final step, CNN classifiers are trained on the images of the training dataset. Regarding the parameters of CNN-FG, we adopted two convolution filters (5×5×6 and 5×5×9) in the convolution layers, and a 2×2 max pooling filter in the pooling layer. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend, and the other for a downward trend). The activation functions for the convolution layers and the hidden layers were set to ReLU (Rectified Linear Unit), and that of the output layer to the softmax function. To validate our model, CNN-FG, we applied it to the prediction of the KOSPI200 over 2,026 days in eight years (from 2009 to 2016). To match the proportions of the two groups of the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset from 80% of the total dataset (1,560 samples) and the validation dataset from the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators popularly used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models.
Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on these graphs can be effective in terms of prediction accuracy. Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
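
A minimal Keras sketch of the CNN-FG classifier as the abstract specifies it (40×40 RGB inputs, 5×5 convolutions with 6 and 9 filters, 2×2 max pooling, hidden layers of 900 and 32 nodes with ReLU, a 2-node softmax output); stacking the two convolutions sequentially, valid padding, and the Adam optimizer are assumptions not stated in the abstract.

```python
import tensorflow as tf

# Sketch of the CNN-FG classifier described in the abstract: input is a
# 40x40 RGB image of a 5-day time series graph; output is up vs. down.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(6, (5, 5), activation="relu",
                           input_shape=(40, 40, 3)),   # 5x5x6 filter bank
    tf.keras.layers.MaxPooling2D((2, 2)),              # 2x2 max pooling
    tf.keras.layers.Conv2D(9, (5, 5), activation="relu"),  # 5x5x9 filter bank
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(900, activation="relu"),     # hidden layer 1
    tf.keras.layers.Dense(32, activation="relu"),      # hidden layer 2
    tf.keras.layers.Dense(2, activation="softmax"),    # up vs. down
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```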

Development of an Offline Based Internal Organ Motion Verification System during Treatment Using Sequential Cine EPID Images (연속촬영 전자조사 문 영상을 이용한 오프라인 기반 치료 중 내부 장기 움직임 확인 시스템의 개발)

  • Ju, Sang-Gyu;Hong, Chae-Seon;Huh, Woong;Kim, Min-Kyu;Han, Young-Yih;Shin, Eun-Hyuk;Shin, Jung-Suk;Kim, Jing-Sung;Park, Hee-Chul;Ahn, Sung-Hwan;Lim, Do-Hoon;Choi, Doo-Ho
    • Progress in Medical Physics / v.23 no.2 / pp.91-98 / 2012
  • Verification of internal organ motion during treatment, and its feedback, is essential for accurate dose delivery to a moving target. We developed an offline internal organ motion verification system (IMVS) using cine EPID images and evaluated its accuracy and availability through a phantom study. For verification of organ motion using live cine EPID images, a pattern matching algorithm was employed in self-developed analysis software, using an internal surrogate that is clearly distinguishable and represents organ motion in the treatment field, such as the diaphragm. For the system performance test, we developed a linear motion phantom consisting of a human-body-shaped phantom with a fake tumor in the lung, a linear motion cart, and control software. The phantom was operated with a motion of 2 cm at 4 sec per cycle, and cine EPID images were obtained at rates of 3.3 and 6.6 frames per second (2 MU/frame) with 1,024×768 pixels on a linear accelerator (10 MV X-ray). The motion of the target was tracked using the self-developed analysis software. To evaluate accuracy, the results were compared with the planned data of the motion phantom and with data from a video image based tracking system (RPM, Varian, USA) using an external surrogate. For quantitative analysis, we analyzed the correlation between the data sets in terms of the average cycle (peak to peak), amplitude, and pattern (RMS, root mean square) of motion. The average motion cycles from the IMVS and the RPM system were 3.98±0.11 s (IMVS, 3.3 fps), 4.005±0.001 s (IMVS, 6.6 fps), and 3.95±0.02 s (RPM), respectively, in good agreement with the real value (4 sec/cycle). The average amplitude of motion tracked by our system was 1.85±0.02 cm (3.3 fps) and 1.94±0.02 cm (6.6 fps), deviating slightly from the actual value (2 cm) by 0.15 cm (7.5% error) and 0.06 cm (3% error), respectively, due to the time resolution of image acquisition. In the analysis of the motion pattern, the RMS value from the cine EPID images at 3.3 fps (0.1044) was slightly larger than that at 6.6 fps (0.0480). The organ motion verification system using sequential cine EPID images with an internal surrogate represented the motion well, within 3% error, in this preliminary phantom study. The system can be implemented for clinical purposes, including verification of organ motion during treatment against 4D treatment planning data, and feedback for accurate dose delivery to the moving target.
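
A rough sketch of the kind of processing the IMVS performs, under assumptions (the paper's pattern matching algorithm is not specified): normalized cross-correlation template matching to locate the surrogate in each frame, followed by the cycle, amplitude, and RMS summaries used above.

```python
import numpy as np

# Sketch: track an internal surrogate (e.g., the diaphragm edge) across
# cine EPID frames by normalized cross-correlation with a template, then
# summarize the motion trace. Frames and template are assumed 2D arrays.
def track(frames, template):
    th, tw = template.shape
    t = (template - template.mean()) / template.std()
    positions = []
    for f in frames:
        best, best_y = -np.inf, 0
        for y in range(f.shape[0] - th + 1):
            for x in range(f.shape[1] - tw + 1):
                patch = f[y:y + th, x:x + tw]
                score = np.sum(t * (patch - patch.mean()) / (patch.std() + 1e-9))
                if score > best:
                    best, best_y = score, y
        positions.append(best_y)               # vertical surrogate position
    return np.asarray(positions, dtype=float)

def motion_metrics(trace, fps):
    peaks = [i for i in range(1, len(trace) - 1)
             if trace[i] > trace[i - 1] and trace[i] >= trace[i + 1]]
    cycle = np.diff(peaks).mean() / fps         # peak-to-peak period (s)
    amplitude = trace.max() - trace.min()       # motion amplitude (pixels)
    rms = np.sqrt(np.mean((trace - trace.mean()) ** 2))
    return cycle, amplitude, rms
```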

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Recently, increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration rate of smart devices are producing large amounts of data. Accordingly, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis have been increasing continuously. This means that big data analysis will become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each person requesting the analysis. However, growing interest in big data analysis has stimulated computer programming education and the development of many programs for data analysis. Accordingly, the entry barriers of big data analysis are gradually lowering and data analysis technology is spreading, so big data analysis is expected to be performed by the requesters themselves. Along with this, interest in various kinds of unstructured data is continually increasing, with much attention focused on text data in particular. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques used in this field for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large set of documents, identifies the documents corresponding to each issue, and provides the identified documents as a cluster; it is regarded as very useful in that it reflects the semantic elements of the documents. Traditional topic modeling is based on the distribution of key terms across the entire document set, so it is essential to analyze all documents at once to identify the topic of each document. This requirement causes long analysis times when topic modeling is applied to a large number of documents, and it creates a scalability problem: the processing time increases steeply with the number of analysis objects. This problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide and conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method allows topic modeling on a large number of documents with limited system resources, and can improve the processing speed of topic modeling. It can also significantly reduce analysis time and cost, since documents can be analyzed in each location or place without first combining them. However, despite these advantages, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire document set is unclear: in each unit, local topics can be identified, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology must be established; that is, assuming the global topics are the ideal answer, the difference of each local topic from its global topic needs to be measured.
Because of these difficulties, this approach has not been studied sufficiently compared with other work on topic modeling. In this paper, we propose a topic modeling approach that solves the two problems above. First, we divide the entire document cluster (global set) into sub-clusters (local sets), and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by detecting whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conducted experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirmed that the proposed methodology can provide results similar to topic modeling over the entire set, and we proposed a reasonable method for comparing the results of both methods.
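
A toy sketch of the divide-and-conquer mapping idea, with assumed details (LDA as the topic model, cosine similarity on topic-word vectors as the mapping criterion, and a hand-picked RGS): each local topic is mapped to its most similar topic in the reduced global set.

```python
# Sketch of the divide-and-conquer idea (assumed details): run LDA on each
# local sub-cluster and on a reduced global set, then map each local topic
# to its most similar global topic by cosine similarity of topic-word rows.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

docs = ["stocks fell on rate fears", "the central bank raised rates",
        "the team won the final match", "players signed new contracts"]
local_sets = [docs[:2], docs[2:]]              # sub-clusters (local sets)
reduced_global = [docs[0], docs[2]]            # delegate documents (RGS)

vec = CountVectorizer().fit(docs)              # shared vocabulary

def topics(texts, k):
    lda = LatentDirichletAllocation(n_components=k, random_state=0)
    lda.fit(vec.transform(texts))
    return lda.components_                     # k x vocab topic-word weights

global_topics = topics(reduced_global, 2)
for i, local in enumerate(local_sets):
    sim = cosine_similarity(topics(local, 2), global_topics)
    print(f"local set {i}: local->global topic map {sim.argmax(axis=1)}")
```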

Analysis of the Time-dependent Relation between TV Ratings and the Content of Microblogs (TV 시청률과 마이크로블로그 내용어와의 시간대별 관계 분석)

  • Choeh, Joon Yeon;Baek, Haedeuk;Choi, Jinho
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.163-176 / 2014
  • Social media is becoming the platform on which users communicate their activities, status, emotions, and experiences to other people. In recent years, microblogs such as Twitter have gained popularity because of their ease of use, speed, and reach. Compared to a conventional web blog, a microblog lowers users' effort and investment in content generation by encouraging shorter posts. There has been much research into capturing social phenomena by analyzing the chatter of microblogs; however, measuring television ratings has received little attention so far. Currently, the most common method of measuring TV ratings uses an electronic metering device installed in a small number of sampled households. Microblogs allow users to post short messages, share daily updates, and conveniently keep in touch; in a similar way, microblog users interact with each other while watching television or movies, or visiting a new place. For measuring TV ratings, some features are significant during certain hours of the day, or days of the week, whereas the same features are meaningless during other time periods. Thus, the importance of features can change during the day, and a model capturing this time sensitive relevance is required to estimate TV ratings. Therefore, modeling the time-related characteristics of features is key when measuring TV ratings through microblogs. We show that capturing the time dependency of features is vital for improving the accuracy of TV ratings measurement. To explore the relationship between the content of microblogs and TV ratings, we collected Twitter data using the Get Search component of the Twitter REST API from January 2013 to October 2013. Our data set contains about 300 thousand posts; after excluding data such as advertising or promoted tweets, we selected 149 thousand tweets for analysis. The number of tweets reaches its maximum on the broadcasting day and increases rapidly around the broadcasting time. This result stems from the characteristics of a public channel, which broadcasts the program at a predetermined time. From our analysis, we find that count-based features, such as the number of tweets or retweets, have a low correlation with TV ratings, which implies that a simple tweet rate does not reflect satisfaction with, or response to, the TV programs. Content-based features extracted from the text of tweets have a relatively high correlation with TV ratings; further, some emoticons and newly coined words that are not tagged in the morpheme extraction process have a strong relationship with TV ratings. We also find a time dependency in the correlation of features between the periods before and after the broadcasting time. Since the TV program is broadcast regularly at a predetermined time, users post tweets expressing their expectations for the program or their disappointment at not being able to watch it, and the features highly correlated before the broadcast differ from the features after broadcasting. This shows that the relevance of words to TV programs can change according to the time of the tweets. Among the 336 words that fulfilled the minimum requirements for candidate features, 145 words had their highest correlation before the broadcasting time, whereas 68 words reached their highest correlation after broadcasting. Interestingly, some words expressing the impossibility of watching the program showed high relevance despite their negative meaning.
Understanding the time dependency of features can help improve the accuracy of TV ratings measurement. This research contributes a basis for estimating the response to, or satisfaction with, broadcast programs using the time dependency of words in Twitter chatter. More research is needed to refine the methodology for predicting or measuring TV ratings.
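
A toy sketch of the time-dependency analysis described above, with entirely illustrative data: the correlation between one word's tweet frequency and ratings is computed separately for the before-broadcast and after-broadcast windows.

```python
import numpy as np
import pandas as pd

# Sketch (toy data): compute the correlation between a word's tweet frequency
# and TV ratings separately for the windows before and after broadcast time,
# to expose the time dependency the abstract describes.
tweets = pd.DataFrame({
    "episode": [1, 1, 2, 2, 3, 3],
    "phase": ["before", "after"] * 3,
    "count_word": [12, 3, 25, 8, 40, 15],        # occurrences of one word
})
ratings = pd.Series({1: 5.2, 2: 7.9, 3: 11.3})   # per-episode ratings

for phase, grp in tweets.groupby("phase"):
    freq = grp.set_index("episode")["count_word"]
    r = np.corrcoef(freq, ratings[freq.index])[0, 1]
    print(f"{phase:6s} correlation with ratings: {r:.2f}")
```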

Evaluation of the Usefulness of MapPHAN for the Verification of Volumetric Modulated Arc Therapy Planning (용적세기조절회전치료 치료계획 확인에 사용되는 MapPHAN의 유용성 평가)

  • Woo, Heon;Park, Jang Pil;Min, Jae Soon;Lee, Jae Hee;Yoo, Suk Hyun
    • The Journal of Korean Society for Radiation Therapy / v.25 no.2 / pp.115-121 / 2013
  • Purpose: With the introduction of the latest linear accelerator and new measurement equipment at our institution, we analyzed the process of verifying this equipment, confirmed its usefulness, and identified problems in the preparation process, to help institutions that adopt the equipment apply it clinically in the future. Materials and Methods: All measurements used a TrueBEAM STX (Varian, USA), and the dose distribution for each energy and irradiation condition was calculated with a computerized treatment planning system (Eclipse ver. 10.0.39, Varian, USA). The performance of MapCHECK 2 and possible causes of measurement error were analyzed against the measurements. To verify the performance of MapCHECK 2, each energy (6X, 6X-FFF, 10X, 10X-FFF, 15X) was measured with a 10×10 cm field at gantry angles of 0° and 180°. To confirm the effect of the CT values of the IGRT couch on the measurements, CT number pairs of -800 (carbon) & -950 (air inside the couch) and -100 & -950 were assigned, and the 6X-FFF and 15X energies were measured with a 10×10 cm field at gantry angles of 180°, 135°, and 275°; the HU values allocated to the MapPHAN were compared using the treatment planning computer. The measurement error caused by the sharp edges of the MapPHAN, and the gantry-direction dependence of the MapPHAN, were measured in three ways: first, with the detector set up vertically and the gantry at 90° and 270°, for 6X-FFF and 15X; second, with the detector set up horizontally, a 10×10 cm field, and the gantry at 90°, 45°, 315°, and 270°, for 6X-FFF and 15X; and third, with an open arc without intensity modulation. Results: The basic performance of MapCHECK 2 was confirmed by the attenuation measured through the couch and by the measured HU values assigned to the MapPHAN; the calculation accuracy for the angled edges of the MapPHAN was within the valid range in all cases, so the measurement errors were not affected. Of the three gantry-direction dependence measurements, the first, with the built-in detector at gantry 270° (relative 0°) and 90° (relative 180°), gave -1.51% and 0.83% for 6X-FFF and -0.63% and -0.22% for 15X, showing no effect in the AP/PA direction. With the detector set horizontally and the gantry at 90° and 270° through the couch, the differences were 4.37% and 2.84% for 6X-FFF and -9.63% and -13.32% for 15X. Lateral-direction measurements with the MapPHAN were not within the valid range, as the gamma values exceeded the 3% criterion. For the open arc (6X-FFF and 15X, 10×10 cm field, 360° rotation), the dose distribution showed a pass rate of nearly 90%. Conclusion: Based on the above results, because of the gantry-direction dependence of the MapPHAN, laterally directed beams are suitable for measuring the relative dose distribution with gamma values, but accurate measurement of the absolute dose cannot be expected. To verify treatment plans more accurately and to reduce the tolerance for VMAT, which involves lateral rotational irradiation, measuring accurate absolute isodoses using MapCHECK 2 in combination with the IMF (Isocentric Mounting Fixture) will minimize the impact of the angular dependence.
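
Since the results above are stated as gamma pass rates, here is a simplified sketch of a global 2D gamma evaluation with dose-difference and distance-to-agreement criteria (such as the 3% criterion mentioned); normalization to the maximum calculated dose and the brute-force search are assumptions, and commercial QA systems implement this differently.

```python
import numpy as np

# Sketch of a simplified, global gamma analysis between a measured and a
# calculated 2D dose grid, with dose-difference (dd, fractional) and
# distance-to-agreement (dta_mm) criteria; spacing_mm is the grid spacing.
def gamma_pass_rate(measured, calculated, spacing_mm, dd=0.03, dta_mm=3.0):
    ny, nx = measured.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    dmax = calculated.max()                    # global normalization dose
    passed = 0
    for j in range(ny):
        for i in range(nx):
            dist2 = ((yy - j) ** 2 + (xx - i) ** 2) * spacing_mm ** 2
            ddose2 = ((calculated - measured[j, i]) / (dd * dmax)) ** 2
            gamma = np.sqrt(dist2 / dta_mm ** 2 + ddose2).min()
            passed += gamma <= 1.0
    return passed / (nx * ny)                  # fraction of points passing
```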


Comparison and evaluation between 3D-bolus and step-bolus, the assistive radiotherapy devices for the patients who had undergone modified radical mastectomy surgery (변형 근치적 유방절제술 시행 환자의 방사선 치료 시 3D-bolus와 step-bolus의 비교 평가)

  • Jang, Wonseok;Park, Kwangwoo;Shin, Dongbong;Kim, Jongdae;Kim, Seijoon;Ha, Jinsook;Jeon, Mijin;Cho, Yoonjin;Jung, Inho
    • The Journal of Korean Society for Radiation Therapy / v.28 no.1 / pp.7-16 / 2016
  • Purpose : This study compared and evaluated the efficiency of two devices, a 3D-bolus and a step-bolus, used for electron beam therapy of the chest wall in patients who had undergone modified radical mastectomy (MRM). Materials and Methods : Six breast cancer patients planned with the reverse hockey stick technique, using photon and electron beams, were selected as the subjects of this study. The prescribed electron beam dose to the anterior chest wall was 180 cGy per fraction. A 3D-bolus produced with a 3D printer (CubeX, 3D Systems, USA) and a self-made conventional step-bolus were used in turn. The surface dose under the 3D-bolus and the step-bolus was measured at five points (isocenter, lateral, medial, superior, and inferior) with GAFCHROMIC EBT3 film (International Specialty Products, USA), and the measured values at the five points were compared and analyzed. Treatment plans incorporating the 3D-bolus and the step-bolus were also devised, and the results were compared. Results : The average surface dose was 179.17 cGy with the 3D-bolus and 172.02 cGy with the step-bolus, corresponding to average error rates against the prescribed 180 cGy of -0.47% and -4.43%, respectively. The maximum error rate at the isocenter was 2.69% with the 3D-bolus and 5.54% with the step-bolus. The maximum discrepancy in treatment accuracy was about 6% with the step-bolus and about 3% with the 3D-bolus. The difference in average target dose on the chest wall between the 3D-bolus and step-bolus treatment plans was insignificant, at only 0.3%. However, the average dose to the lung and heart was lower with the 3D-bolus, by 11% for the lung and 8% for the heart, compared with the step-bolus. Conclusion : This study confirmed that dose uniformity is better with the 3D-bolus than with the step-bolus, because the 3D-bolus, produced to match the skin surface contour of the chest wall, attaches to the patient's skin more closely and guarantees the chest wall thickness more accurately. The 3D-bolus device is considered clinically valuable because it reduces the dose to adjacent organs and protects normal tissue without reducing the dose to the chest wall.
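
A small sketch of the error-rate arithmetic used above (measured surface dose versus the 180 cGy prescription, averaged over the five film points); the point doses below are hypothetical, not the study's measurements.

```python
# Sketch: error rate of measured surface dose against the 180 cGy
# prescription, as in the abstract; the five point doses are illustrative.
prescribed = 180.0                                        # cGy per fraction
film_doses = {"iso": 181.2, "lat": 178.0, "med": 179.5,
              "sup": 177.9, "inf": 179.2}                 # cGy, hypothetical

mean_dose = sum(film_doses.values()) / len(film_doses)
error_pct = (mean_dose - prescribed) / prescribed * 100
print(f"mean surface dose {mean_dose:.2f} cGy, error {error_pct:+.2f}%")
```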


One-stop Evaluation Protocol of Ischemic Heart Disease: Myocardial Fusion PET Study (허혈성 심장 질환의 One-stop Evaluation Protocol: Myocardial Fusion PET Study)

  • Kim, Kyong-Mok;Lee, Byung-Wook;Lee, Dong-Wook;Kim, Jeong-Su;Jang, Yeong-Do;Bang, Chan-Seok;Baek, Jong-Hun;Lee, In-Su
    • The Korean Journal of Nuclear Medicine Technology / v.14 no.2 / pp.33-37 / 2010
  • Purpose: In the early days of PET/CT, the CT was used mainly for attenuation correction, but recently scanners equipped with MDCT are commonly used and perform well for anatomical diagnosis. Our hospital improves the accuracy and convenience of diagnosing and evaluating coronary heart disease by running the myocardial perfusion SPECT examination, the FDG myocardial PET examination, and coronary CT angiography (coronary CTA) in one session on a 64-slice PET/CT. This report presents the protocol and images based on the results of about 400 coronary heart disease examinations performed since the 64-slice PET/CT was installed in July 2007. Materials and Methods: The equipment is a Discovery VCT (DVCT; GE Healthcare), which consists of a 64-slice CT and a PET with BGO ($Bi_4Ge_3O_{12}$) scintillation crystals. To reduce patient waiting time and allow quick diagnosis and evaluation, myocardial perfusion SPECT with a pharmacologic stress test is performed first, followed immediately by the myocardial FDG PET examination and coronary CTA, without a break. The one-stop evaluation protocol of ischemic heart disease is as follows. 1) Myocardial perfusion SPECT with pharmacologic stress: the patient is injected with $^{99m}Tc$-MIBI 10 mCi, avoids fatty food in preparation for the myocardial PET examination, drinks water with ursodeoxycholic acid 100 mg, and the SPECT image is acquired one hour later. 2) Myocardial FDG PET: to reduce the blood fat content and increase the uptake of FDG, a modified oral glucose loading protocol using insulin and Acipimox is applied according to the blood fatty acid level. The patient is injected with $^{18}F$-FDG 5 mCi to limit radiation exposure; a gated image is acquired one hour later, and a delayed image is acquired when needed. 3) Coronary CTA: the most important points are controlling the heart rate and obtaining the patient's cooperation with breath-holding. To reduce the heart rate below 65 beats per minute, the patient takes a beta blocker 50 mg ~ 200 mg after consultation with a doctor, practices breath-holding, and then undergoes the examination. Immediately before the examination, isosorbide dinitrate is sprayed 3 to 5 times to lower the tension of the vessel wall and dilate the vessels, which improves the anatomical depiction. During scanning, the CT contrast medium is injected at high pressure, so the patient practices sufficiently before the examination to avoid problems. To reduce radiation exposure, ECG-triggered X-ray tube modulation is used. Results: We evaluate coronary artery stenosis with coronary CTA, study the correlation between stenosis and perfusion decline (culprit vessel check) from the stress myocardial perfusion SPECT and coronary CTA, and check the viability of infarcted or hibernating myocardium by FDG PET. Conclusion: This examination lets us set the direction of treatment (drug treatment, PCI, CABG), because the lesion site, severity, and expected treatment effect can be estimated. In addition, the entire process runs in succession, one-stop, and takes only 3 hours. Therefore, this method is useful for the one-stop evaluation of ischemic heart disease.


Comparison and evaluation of volumetric modulated arc therapy and intensity modulated radiation therapy plans for postoperative radiation therapy of prostate cancer patient using a rectal balloon (직장풍선을 삽입한 전립선암 환자의 수술 후 방사선 치료 시 용적변조와 세기변조방사선치료계획 비교 평가)

  • Jung, Hae Youn;Seok, Jin Yong;Hong, Joo Wan;Chang, Nam Jun;Choi, Byeong Don;Park, Jin Hong
    • The Journal of Korean Society for Radiation Therapy / v.27 no.1 / pp.45-52 / 2015
  • Purpose : In postoperative radiation therapy for prostate cancer, the dose distribution to organs at risk (OAR) and normal tissue is affected by the treatment technique. The aim of this study was to compare dose distribution characteristics and evaluate treatment efficiency by devising VMAT plans with different numbers of arcs, and an IMRT plan, for postoperative prostate cancer patients treated with a rectal balloon. Materials and Methods : Ten patients who received postoperative prostate radiation therapy at our hospital were compared. CT images of the patients, with the rectal balloon inserted, were acquired at 3 mm slice thickness, and the 10 MV energy of an HD120 MLC-equipped Truebeam STx (Varian, Palo Alto, USA) was applied using Eclipse (version 11.0, Varian, Palo Alto, USA). A 1-arc VMAT plan, a 2-arc VMAT plan, and a 7-field IMRT plan were devised for each patient, with the same dose-volume constraints and plan normalization. To evaluate the plans, PTV coverage, conformity index (CI), and homogeneity index (HI) were compared, and $R_{50\%}$ was calculated to assess low-dose spillage for each treatment plan. For the OARs, rectum $D_{25\%}$ and bladder mean dose ($D_{mean}$) were compared. To evaluate treatment efficiency, the total monitor units (MU) and the delivery time were considered. Each result was analyzed as the average value over the 10 patients. Additionally, portal dosimetry was carried out to verify the accuracy of beam delivery. Results : There was no significant difference in PTV coverage or HI among the three plans, but CI and $R_{50\%}$ were highest for 7F-IMRT, at 1.230 and 3.991, respectively (p=0.00). Rectum $D_{25\%}$ was similar between 1A-VMAT and 2A-VMAT, but an approximately 7% higher value was observed for 7F-IMRT compared with the others (p=0.02), while bladder $D_{mean}$ was similar among all plans (p>0.05). Total MU were 494.7 (1A-VMAT), 479.7 (2A-VMAT), and 757.9 (7F-IMRT) (p=0.00), highest for 7F-IMRT. The delivery times were 65.2 sec, 133.1 sec, and 145.5 sec, respectively (p=0.00); 1A-VMAT was clearly the shortest. All plans showed gamma pass rates (2 mm, 2%) over 99.5% (p=0.00) in portal dosimetry quality assurance. Conclusion : For postoperative prostate cancer radiation therapy with a rectal balloon, there was no significant difference in PTV coverage, but 1A-VMAT and 2A-VMAT were more efficient in reducing the dose to normal tissue and the OARs. Between the VMAT plans, $R_{50\%}$ and MU were slightly lower for 2A-VMAT, but 1A-VMAT had the shortest delivery time, so it is regarded as an effective plan that can also reduce the intra-fractional motion of the patient.
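
A minimal sketch of the plan-quality indices compared above, computed from a dose grid and a PTV mask; index definitions vary between studies, so the RTOG-style CI, the (D2%-D98%)/D50% HI, and the R50% ratio used here are assumptions rather than the authors' exact formulas.

```python
import numpy as np

# Sketch: conformity index (CI), homogeneity index (HI), and R50% from a
# 3D dose array, a boolean PTV mask, the prescription dose, and voxel volume.
def plan_indices(dose, ptv_mask, rx_dose, voxel_cc):
    ptv_dose = dose[ptv_mask]
    v_rx = (dose >= rx_dose).sum() * voxel_cc        # prescription isodose volume
    v_50 = (dose >= 0.5 * rx_dose).sum() * voxel_cc  # 50% isodose volume
    v_ptv = ptv_mask.sum() * voxel_cc                # PTV volume
    ci = v_rx / v_ptv                                # RTOG-style conformity index
    d2, d50, d98 = np.percentile(ptv_dose, [98, 50, 2])  # D2%, D50%, D98%
    hi = (d2 - d98) / d50                            # homogeneity index
    r50 = v_50 / v_ptv                               # low-dose spillage (R50%)
    return ci, hi, r50
```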
