• Title/Summary/Keyword: system accuracy

Search Results: 11,512

Usefulness Evaluation of Gated RapidArc Radiation Therapy and Patient Application Using the Amplitude Mode (호흡 동조 체적 세기조절 회전 방사선치료의 유용성 평가와 진폭모드를 이용한 환자적용)

  • Kim, Sung Ki;Lim, Hyun Sil;Kim, Wan Sun
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.26 no.1
    • /
    • pp.29-35
    • /
    • 2014
  • Purpose : Commercial automation equipment for Gated RapidArc, which allows gated radiation therapy to be delivered simultaneously with VMAT, has recently become available. This study evaluates the accuracy and usefulness of Gated RapidArc radiation therapy and applies the Amplitude mode to patients. Materials and Methods : The dose distribution was analyzed using a tissue-equivalent solid water phantom and GafChromic film with the Film QA film analysis program, using a gamma criterion of 3%, 3 mm. To verify the three-dimensional dose distribution, Matrixx dosimetry equipment and the Compass dose analysis program were used. Periodic respiratory signals were generated with a solid phantom on a 4D motion phantom and the Varian RPM respiratory gating system, and dose distributions were analyzed under free breathing and breath-hold. Four liver cancer patients were enrolled from February 2013 to August 2013. 4DCT images covering the full respiratory cycle were acquired while each patient, wearing video goggles displaying the respiratory trace, practiced following his or her own breathing pattern exactly. For Gated RapidArc treatment in Amplitude mode, the patient breathed three times and then held the breath for 5-6 seconds in the 40-60% phase interval; the Beam On button was pressed during each breath-hold in the 40-60% interval, so that treatment proceeded in a semi-automatic fashion. Results : The absolute doses calculated by the treatment planning system for non-gated and gated volumetric modulated arc therapy differed by less than 1%, and the difference between treatment techniques was also less than 1%. Gamma analysis (3%, 3 mm) showed 99% agreement, and organ-specific dose differences generally showed better than 95% agreement. The respiratory cycle created in Amplitude mode for gated volumetric modulated arc therapy agreed well with the patient's actual breathing cycle. Conclusion : The absolute dose and dose distribution of non-gated and gated volumetric modulated arc therapy showed very good agreement. This gated technique can therefore be applied to the treatment of thoracic or abdominal tumors that move with respiration. On equipment that does not gate automatically, creating the Amplitude-mode respiratory cycle through the patient goggles and holding the breath for about 5-6 seconds proved a practical way to deliver respiratory-gated volumetric modulated arc therapy.
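The gamma criterion (3%, 3 mm) used in the film analysis combines a dose-difference tolerance with a distance-to-agreement tolerance. As a rough sketch only (a simplified one-dimensional global gamma, not the Film QA implementation), the pass rate could be computed like this:

```python
import numpy as np

def gamma_pass_rate(ref, evl, positions, dose_tol=0.03, dist_tol=3.0):
    """Simplified 1D global gamma analysis (3%, 3 mm).

    ref, evl: dose arrays sampled on the same grid; positions: coordinates
    in mm. Returns the percentage of reference points with gamma <= 1.
    """
    norm = dose_tol * ref.max()  # global dose-difference criterion
    gammas = []
    for r, x in zip(ref, positions):
        dd = (evl - r) / norm              # dose-difference term
        dx = (positions - x) / dist_tol    # distance-to-agreement term
        gammas.append(np.sqrt(dd ** 2 + dx ** 2).min())
    return 100.0 * float(np.mean(np.array(gammas) <= 1.0))
```

A pass rate of 99%, as reported above, would mean 99% of reference points have a gamma index of at most 1.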

Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

  • Ha, Eu Tteum;Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.123-132
    • /
    • 2013
  • As smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyroscope, ambient light sensor, proximity sensor, and so on, there have been many research works on making use of these sensors to create valuable applications. Human activity recognition is one such application, motivated by various welfare applications such as support for the elderly, measurement of calorie consumption, analysis of lifestyles, analysis of exercise patterns, and so on. One of the challenges faced when using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors used is restricted, it is difficult to realize a highly accurate activity recognizer or classifier, because it is hard to distinguish between subtly different activities relying only on limited information. The difficulty becomes especially severe when the number of different activity classes to be distinguished is very large. In this paper, we show that a fairly accurate classifier can be built that distinguishes ten different activities using data from only a single sensor, the smartphone accelerometer. The approach that we take to deal with this ten-class problem is to use the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the binary tree, the set of all the classes is split into two subsets of classes by a binary classifier. At a child node of the tree, a subset of classes is again split into two smaller subsets by another binary classifier. Continuing in this way, we obtain a binary tree in which each leaf node contains a single class. This binary tree can be viewed as a nested dichotomy that can make multi-class predictions.
Depending on how a set of classes is split into two subsets at each node, the final tree that we obtain can be different. Since some classes may be correlated, a particular tree may perform better than the others. However, we can hardly identify the best tree without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning, and then combining the predictions made by each tree during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of the dichotomy, we have used another ensemble classifier called the random forest. A random forest is built by repeatedly generating a decision tree, each time with a different random subset of features, using a bootstrap sample. By combining bagging with random feature subset selection, a random forest enjoys the advantage of having more diverse ensemble members than simple bagging. As an overall result, our ensemble of nested dichotomies can be seen as a committee of committees of decision trees that can deal with a multi-class problem with high accuracy. The ten classes of activities that we distinguish in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'. The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, the minimum, and the standard deviation of the vector magnitude within a time window of the last 2 seconds. For experiments comparing the performance of END with that of other methods, the accelerometer data was collected every 0.1 seconds for 2 minutes for each activity from 5 volunteers.
Among these 5,900 ($=5{\times}(60{\times}2-2)/0.1$) data points collected for each activity (the data for the first 2 seconds are discarded because they do not have time-window data), 4,700 have been used for training and the rest for testing. Although 'Walking Uphill' is often confused with some other similar activities, END has been found to classify all ten activities with a fairly high accuracy of 98.4%. On the other hand, the accuracies achieved by a decision tree, a k-nearest neighbor classifier, and a one-versus-rest support vector machine have been observed as 97.6%, 96.5%, and 97.6%, respectively.
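The random nested-dichotomy construction that END relies on can be sketched as follows; this sketch builds only the class-split tree structure (in the paper, each internal node additionally carries a random-forest binary classifier, which is omitted here):

```python
import random

def build_dichotomy(classes, rng):
    """Recursively split a class set into a random nested dichotomy tree.

    Each internal node is a pair (left subtree, right subtree); each leaf
    is a single class label. At an internal node, the class set is split
    into a random non-empty proper subset and its complement.
    """
    if len(classes) == 1:
        return classes[0]                    # leaf: a single class
    k = rng.randint(1, len(classes) - 1)     # size of the left subset
    left = rng.sample(classes, k)
    right = [c for c in classes if c not in left]
    return (build_dichotomy(left, rng), build_dichotomy(right, rng))

def leaves(node):
    """Collect the class labels at the leaves of a dichotomy tree."""
    if not isinstance(node, tuple):
        return [node]
    return leaves(node[0]) + leaves(node[1])
```

An ensemble would repeat `build_dichotomy` with different random seeds and average the trees' predictions, as END does during classification.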

Analysis of the Time-dependent Relation between TV Ratings and the Content of Microblogs (TV 시청률과 마이크로블로그 내용어와의 시간대별 관계 분석)

  • Choeh, Joon Yeon;Baek, Haedeuk;Choi, Jinho
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.163-176
    • /
    • 2014
  • Social media is becoming the platform for users to communicate their activities, status, emotions, and experiences to other people. In recent years, microblogs such as Twitter have gained in popularity because of their ease of use, speed, and reach. Compared to a conventional web blog, a microblog lowers users' effort and investment in content generation by encouraging shorter posts. There has been a lot of research into capturing social phenomena by analyzing the chatter of microblogs. However, measuring television ratings has been given little attention so far. Currently, the most common method of measuring TV ratings uses an electronic metering device installed in a small number of sampled households. Microblogs allow users to post short messages, share daily updates, and conveniently keep in touch, and microblog users interact with each other while watching television or movies or visiting a new place. When measuring TV ratings, some features are significant during certain hours of the day or days of the week, whereas the same features are meaningless during other time periods. Thus, the importance of features can change during the day, and a model capturing this time-sensitive relevance is required to estimate TV ratings. Therefore, modeling the time-related characteristics of features is key when measuring TV ratings through microblogs. We show that capturing the time-dependency of features is vitally necessary for improving the accuracy of TV ratings measurement. To explore the relationship between the content of microblogs and TV ratings, we collected Twitter data using the Get Search component of the Twitter REST API from January 2013 to October 2013. There are about 300 thousand posts in our data set. After excluding data such as advertising or promoted tweets, we selected 149 thousand tweets for analysis.
The number of tweets reaches its maximum level on the broadcasting day and increases rapidly around the broadcasting time. This stems from the characteristics of the public channel, which broadcasts the program at a predetermined time. From our analysis, we find that count-based features such as the number of tweets or retweets have a low correlation with TV ratings. This implies that a simple tweet rate does not reflect the satisfaction with, or response to, the TV programs. Content-based features extracted from the content of tweets have a relatively high correlation with TV ratings. Further, some emoticons or newly coined words that are not tagged in the morpheme extraction process have a strong relationship with TV ratings. We find that there is a time-dependency in the correlation of features between the periods before and after the broadcasting time. Since the TV program is broadcast regularly at a predetermined time, users post tweets expressing their expectation for the program or disappointment over not being able to watch it. The features most highly correlated before the broadcast differ from those after the broadcast, which shows that the relevance of words to TV programs can change according to the time of the tweets. Among the 336 words that fulfill the minimum requirements for candidate features, 145 words have their highest correlation before the broadcasting time, whereas 68 words reach their highest correlation after broadcasting. Interestingly, some words that express the impossibility of watching the program show high relevance despite carrying a negative meaning. Understanding the time-dependency of features can be helpful in improving the accuracy of TV ratings measurement. This research provides a basis for estimating the response to, or satisfaction with, broadcast programs using the time-dependency of words in Twitter chatter.
More research is needed to refine the methodology for predicting or measuring TV ratings.
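The before/after time-dependency analysis above amounts to correlating a word's occurrence counts with ratings separately for the two periods. A minimal sketch (the counts and ratings below are invented for illustration, not the paper's data):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between a feature series and ratings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical per-episode counts of one word, before vs. after broadcast,
# against episode ratings: the same word can correlate in one period only.
ratings = [10.1, 12.3, 11.0, 14.2]
before_counts = [51, 62, 55, 70]   # tracks ratings closely
after_counts = [30, 12, 44, 25]    # essentially noise
```

Computing `pearson(before_counts, ratings)` and `pearson(after_counts, ratings)` separately is what lets a word be ranked as "most correlated before" versus "most correlated after" the broadcast.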

The Audience Behavior-based Emotion Prediction Model for Personalized Service (고객 맞춤형 서비스를 위한 관객 행동 기반 감정예측모형)

  • Ryoo, Eun Chung;Ahn, Hyunchul;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.73-85
    • /
    • 2013
  • In today's information society, knowledge services that use information to create value are becoming ever more important, and advances in IT have made information easy to collect and use. Many companies across a variety of industries actively use customer information for marketing. Since the start of the 21st century, companies have also actively used culture and the arts, closely linked to their commercial interests, to manage their corporate image and marketing. It is difficult for companies to attract or maintain consumers' interest through technology alone, so cultural activities have become a tool of differentiation among firms, and many firms have turned the customer's experience into a new marketing strategy in order to respond effectively to a competitive market. Accordingly, the need is rapidly emerging for personalized services that provide people with new experiences based on personal profile information capturing the characteristics of the individual. Personalized service using a customer's individual profile information, such as language, symbols, behavior, and emotions, is very important today; through it we can judge the interaction between people and content and maximize the customer's experience and satisfaction. Among the various related works on customer-centered service, emotion recognition research has been emerging recently. Existing research has mostly performed emotion recognition using bio-signals, chiefly voice and face studies where emotional changes are large. However, limitations of equipment and service environments make it difficult to predict people's emotions with these approaches. In this paper, we therefore develop an emotion prediction model based on a vision-based interface to overcome the existing limitations; emotion recognition based on people's gestures and posture has been studied by several researchers. 
This paper developed a model that recognizes people's emotional states from body gesture and posture using the difference image method, and found the best-validated model for predicting four kinds of emotions. The proposed model automatically determines and predicts four human emotions (Sadness, Surprise, Joy, and Disgust). To build the model, an event booth was installed in KOCCA's lobby, and suitable stimulus movies were shown to collect participants' body gestures and postures as their emotions changed. Body movements were then extracted using the difference image method, and the data were processed to build the proposed model with a neural network. The proposed model for emotion prediction used three time-frame sets (20 frames, 30 frames, and 40 frames), and the model with the best performance was adopted. Before building the three models, the entire set of 97 samples was divided into learning, test, and validation sets. The model was constructed as an artificial neural network trained with the back-propagation algorithm, with the learning rate and momentum both set to 10% and the sigmoid function as the transfer function; it is a three-layer perceptron with one hidden layer and four output nodes. Based on the test data set, learning was stopped at 50,000 iterations after the minimum error was reached. We then computed each model's accuracy and identified the best model for predicting each emotion. The results showed prediction accuracies of 100% for sadness and 96% for joy in the 20-frame model, and 88% for surprise and 98% for disgust in the 30-frame model. The findings of our research are expected to provide an effective algorithm for personalized service in various industries such as advertising, exhibitions, and performances.
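The difference image method mentioned above subtracts consecutive frames and keeps the pixels that changed. A minimal sketch (the threshold and frame format are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def motion_energy(frames, threshold=30):
    """Difference-image motion measure.

    frames: a sequence of grayscale frames (2D uint8 arrays). For each
    consecutive frame pair, count pixels whose absolute change exceeds
    `threshold`; the resulting series reflects body movement over time.
    """
    energies = []
    for prev, cur in zip(frames, frames[1:]):
        diff = np.abs(cur.astype(int) - prev.astype(int))
        energies.append(int((diff > threshold).sum()))
    return energies
```

A window of 20, 30, or 40 such per-frame values would correspond to the paper's three time-frame sets fed to the neural network.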

A Hybrid SVM Classifier for Imbalanced Data Sets (불균형 데이터 집합의 분류를 위한 하이브리드 SVM 모델)

  • Lee, Jae Sik;Kwon, Jong Gu
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.125-140
    • /
    • 2013
  • We call a data set in which the records of one class far outnumber those of the other class an 'imbalanced data set'. Most classification techniques perform poorly on imbalanced data sets. When we evaluate the performance of a classification technique, we need to measure not only 'accuracy' but also 'sensitivity' and 'specificity'. In a customer churn prediction problem, 'retention' records constitute the majority class, and 'churn' records the minority class. Sensitivity measures the proportion of actual retentions which are correctly identified as such; specificity measures the proportion of churns which are correctly identified as such. The poor performance of classification techniques on imbalanced data sets is due to the low value of specificity. Many previous studies on imbalanced data sets employed the 'oversampling' technique, in which members of the minority class are sampled more heavily than those of the majority class in order to make a relatively balanced data set. When a classification model is constructed using this oversampled balanced data set, specificity can be improved but sensitivity is decreased. In this research, we developed a hybrid model of a support vector machine (SVM), an artificial neural network (ANN), and a decision tree that improves specificity while maintaining sensitivity. We named this hybrid model the 'hybrid SVM model.' The process of construction and prediction of our hybrid SVM model is as follows. By oversampling from the original imbalanced data set, a balanced data set is prepared. SVM_I and ANN_I models are constructed using the imbalanced data set, and an SVM_B model is constructed using the balanced data set. The SVM_I model is superior in sensitivity and the SVM_B model is superior in specificity. For a record on which both the SVM_I model and the SVM_B model make the same prediction, that prediction becomes the final solution.
If they make different predictions, the final solution is determined by discrimination rules obtained from the ANN and a decision tree. For records on which the SVM_I and SVM_B models make different predictions, a decision tree model is constructed using the ANN_I output value as input and actual retention or churn as the target. We obtained the following two discrimination rules: 'IF ANN_I output value <0.285, THEN Final Solution = Retention' and 'IF ANN_I output value ${\geq}0.285$, THEN Final Solution = Churn.' The threshold 0.285 is the value optimized for the data used in this research. The result we present in this research is the structure or framework of our hybrid SVM model, not a specific threshold value such as 0.285; the threshold in the above discrimination rules can therefore be changed to any value depending on the data. In order to evaluate the performance of our hybrid SVM model, we used the 'churn data set' in the UCI Machine Learning Repository, which consists of 85% retention customers and 15% churn customers. The accuracy of the hybrid SVM model is 91.08%, better than that of the SVM_I or SVM_B model. The points worth noticing here are its sensitivity, 95.02%, and specificity, 69.24%. The sensitivity of the SVM_I model is 94.65%, and the specificity of the SVM_B model is 67.00%. Therefore, the hybrid SVM model developed in this research improves the specificity of the SVM_B model while maintaining the sensitivity of the SVM_I model.
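The decision procedure of the hybrid SVM model, including the fallback discrimination rule, can be sketched directly from the description above:

```python
def hybrid_predict(svm_i_pred, svm_b_pred, ann_i_output, threshold=0.285):
    """Hybrid SVM decision rule.

    If the imbalanced-data SVM (SVM_I) and the balanced-data SVM (SVM_B)
    agree, that prediction is the final solution. Otherwise, fall back to
    the discrimination rule derived from the ANN_I output value.
    """
    if svm_i_pred == svm_b_pred:
        return svm_i_pred
    return "Retention" if ann_i_output < threshold else "Churn"
```

The 0.285 threshold is the value optimized for the paper's data set and would be re-derived for any other data.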

Multi-Dimensional Analysis Method of Product Reviews for Market Insight (마켓 인사이트를 위한 상품 리뷰의 다차원 분석 방안)

  • Park, Jeong Hyun;Lee, Seo Ho;Lim, Gyu Jin;Yeo, Un Yeong;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.57-78
    • /
    • 2020
  • With the development of the Internet, consumers can easily check product information through E-Commerce. Product reviews used in the process of purchasing goods are based on user experience, allowing consumers to act as producers of information as well as consumers of it. This can increase the efficiency of purchasing decisions from the consumer's perspective, and from the seller's point of view it can help develop products and strengthen competitiveness. However, it takes a lot of time and effort for consumers to read the vast number of product reviews offered by E-Commerce and grasp the overall assessment, along the dimensions they consider important, of the products they want to compare. This is because product reviews are unstructured information, and their sentiment and assessment dimensions are difficult to read off immediately. For example, consumers who want to purchase a laptop would like to check the assessment of comparable products along each dimension, such as performance, weight, delivery, speed, and design. Therefore, in this paper, we propose a method to automatically generate multi-dimensional product assessment scores from the reviews of products to be compared. The method presented in this study consists of two phases: a pre-preparation phase and an individual product scoring phase. In the pre-preparation phase, a dimension classification model and a sentiment analysis model are created from reviews of the large-category product group. By combining word embedding with association analysis, the dimension classification model compensates for the limitation that word embedding methods used in existing studies to relate dimensions and words consider only the distance between words in sentences.
The sentiment analysis model is a convolutional neural network (CNN) trained on learning data tagged as positive or negative at the phrase level, for accurate polarity detection. In the individual product scoring phase, these pre-built models are applied to phrase-level reviews, and multi-dimensional assessment scores are obtained by grouping the phrases judged to describe each assessment dimension and aggregating their sentiment by dimension according to the proportion of reviews. In the experiment, approximately 260,000 reviews of the large-category product group were collected to build the dimension classification model and the sentiment analysis model. In addition, reviews of the laptops of the S and L companies sold through E-Commerce were collected and used as experimental data. The dimension classification model classified individual product reviews, broken down into phrases, into six assessment dimensions, combining the existing word embedding method with an association analysis indicating the frequency between words and dimensions. Combining word embedding with association analysis increased the accuracy of the model by 13.7%. The sentiment analysis model analyzed assessments more closely when trained on phrase units rather than sentences; its accuracy was confirmed to be 29.4% higher than that of the sentence-based model. Through this study, both sellers and consumers can expect more efficient decision-making in purchasing and product development, given that they can make multi-dimensional comparisons of products. In addition, text reviews, which are unstructured data, were transformed into objective values such as frequencies and morphemes and analyzed using word embedding together with association analysis, improving the objectivity and precision of multi-dimensional analysis. 
This makes the approach an attractive analysis model that not only enables more effective service deployment in the evolving and fiercely competitive E-Commerce market, but also satisfies both customers and sellers.
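The phrase-level aggregation in the scoring phase can be sketched as follows (a simplified positive-share score; the paper's exact aggregation weights are not specified here):

```python
def dimension_scores(phrases):
    """Aggregate phrase-level (dimension, sentiment) pairs into scores.

    phrases: iterable of (dimension label, "pos"/"neg") pairs, one per
    review phrase. Returns, per dimension, the share of positive phrases,
    an assessment score in [0, 1].
    """
    totals, positives = {}, {}
    for dim, sentiment in phrases:
        totals[dim] = totals.get(dim, 0) + 1
        positives[dim] = positives.get(dim, 0) + (sentiment == "pos")
    return {d: positives[d] / totals[d] for d in totals}
```

In the full pipeline, the dimension label and polarity of each phrase would come from the dimension classification model and the CNN sentiment model, respectively.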

A Study on the Prediction of Korean NPL Market Return (한국 NPL시장 수익률 예측에 관한 연구)

  • Lee, Hyeon Su;Jeong, Seung Hwan;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.123-139
    • /
    • 2019
  • The Korean NPL market was formed by the government and foreign capital shortly after the 1997 IMF crisis. The market's history is short, however, and bad debt began to increase again after the global financial crisis of 2009, owing to the recession in the real economy. NPLs have become a major investment in recent years as domestic capital market funds began to enter the NPL market in earnest. Although the domestic NPL market has received considerable attention due to its recent overheating, research on the NPL market has been scarce, since the history of capital market investment in the domestic NPL market is short. In addition, decision-making based on more scientific and systematic analysis is required, owing to declining profitability and price fluctuations driven by the real estate market. In this study, we propose a prediction model that determines whether the benchmark yield will be achieved, using NPL market data in accordance with market demand. To build the model, we used about four years of Korean NPL data, from December 2013 to December 2017, comprising 2,291 assets. From 11 variables describing the characteristics of the real estate, only those related to the dependent variable were selected as independent variables; one-to-one t-tests, stepwise logistic regression, and decision trees were used for variable selection. Seven independent variables were selected: purchase year, SPC (Special Purpose Company), municipality, appraisal value, purchase cost, OPB (Outstanding Principal Balance), and HP (Holding Period). The dependent variable is a binary variable that indicates whether the benchmark rate is reached.
This is because the accuracy of a model predicting a binary variable is higher than that of a model predicting a continuous variable, and this accuracy is directly related to the effectiveness of the model. In addition, for a special purpose company the main concern is whether or not to purchase the asset, so knowing whether a certain level of return will be achieved is enough to make a decision. For the dependent variable, we constructed and compared predictive models with the dependent variable computed at adjusted threshold values, to ascertain whether 12%, the standard rate of return used in the industry, is a meaningful reference value. As a result, the average hit ratio of the predictive model constructed using the dependent variable calculated at the 12% standard rate of return was the best, at 64.60%. To propose an optimal prediction model based on the chosen dependent variable and the 7 independent variables, we built prediction models using five methodologies (discriminant analysis, logistic regression, decision tree, artificial neural network, and a genetic algorithm linear model) and compared them. To do this, 10 sets of training and testing data were extracted using the 10-fold validation method. After building the models on these data, the hit ratio of each set was averaged and the performance compared. The average hit ratios of the prediction models constructed using discriminant analysis, logistic regression, decision tree, artificial neural network, and the genetic algorithm linear model were 64.40%, 65.12%, 63.54%, 67.40%, and 60.51%, respectively, confirming that the artificial neural network model is the best. This study shows that the 7 independent variables combined with an artificial neural network prediction model are effective for the NPL market. 
The proposed model predicts in advance whether new assets will achieve the 12% return, which will help special purpose companies make investment decisions. Furthermore, we anticipate that the NPL market will gain liquidity as transactions proceed at appropriate prices.
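The per-fold hit-ratio averaging used to compare the five methodologies can be sketched as:

```python
def average_hit_ratio(fold_results):
    """Average hit ratio (%) over validation folds.

    fold_results: list of (predictions, actuals) pairs, one pair per fold,
    where each element is a list of binary benchmark-achievement labels.
    """
    ratios = [100.0 * sum(p == a for p, a in zip(ps, ys)) / len(ys)
              for ps, ys in fold_results]
    return sum(ratios) / len(ratios)
```

With 10-fold validation, `fold_results` would hold the 10 test-set prediction/label pairs for one methodology, yielding figures like the 67.40% reported for the neural network.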

Evaluation of the Interfraction Setup Errors Using the On-Board Imager (OBI) (On board imager를 이용한 치료간 환자 셋업오차 평가)

  • Jang, Eun-Sung;Baek, Seong-Min;Ko, Seung-Jin;Kang, Se-Sik
    • Journal of the Korean Society of Radiology
    • /
    • v.3 no.3
    • /
    • pp.5-11
    • /
    • 2009
  • In image-guided radiation therapy, the patient is first positioned using skin markers; after the anatomical location is confirmed with the OBI, the couch is moved to correct the setup. This study evaluated the errors found at that step. DRR images in the $0^{\circ}$ and $270^{\circ}$ directions from the treatment plan were compared with OBI images using 2D-2D matching, and the difference between the planned setup and the actual treatment setup was recorded as the error. For critical sites such as the head, neck, and spinal cord, the setup was verified with the OBI at every treatment, and for the chest, abdomen, and pelvis, two to three times a week. All corrections were recorded in the OIS so that accuracy could be evaluated for 160 patients, grouped by skin index into head and neck, chest, and abdomen-pelvis. The average setup errors for head and neck patients in the AP, SI, and RL directions were $0.2{\pm}0.2cm$, $-0.1{\pm}0.1cm$, and $-0.2{\pm}0.0cm$; for chest patients, $-0.5{\pm}0.1cm$, $0.3{\pm}0.3cm$, and $0.4{\pm}0.2cm$; and for the abdomen, $0.4{\pm}0.4cm$, $-0.5{\pm}0.1cm$, and $-0.4{\pm}0.1cm$. For the pelvis, they were $0.5{\pm}0.3cm$, $0.8{\pm}0.4cm$, and $-0.3{\pm}0.2cm$. Rigid regions such as the head and neck showed smaller setup errors than the chest and abdomen. Errors were larger for the chest along the horizontal axis and for the abdomen-pelvis in the AP direction. The chest error along the horizontal axis is attributed to the curvature of the patient's body at setup, and the abdominal error in the AP direction to anterior-posterior motion caused by the patient's breathing. No systematic error was found in the patient setup system. Since the OBI confirms the anatomical location, setup using skin markers is more precise when the target lies on the skin. Compared with 3D-3D matching, 2D-2D matching cannot detect rotational (rolling) errors, but it involves less radiation exposure and a shorter verification time. 
Therefore, in actual clinical practice, 2D-2D matching is more appropriate.
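The per-axis figures reported above are simple means and standard deviations of the recorded couch-shift corrections. A minimal sketch:

```python
import math

def setup_error_stats(shifts):
    """Mean and (population) standard deviation, in cm, of the couch-shift
    corrections recorded along one axis (AP, SI, or RL), rounded to one
    decimal as in the reported mean +/- std values."""
    n = len(shifts)
    mean = sum(shifts) / n
    std = math.sqrt(sum((s - mean) ** 2 for s in shifts) / n)
    return round(mean, 1), round(std, 1)
```

Running this per patient group and per axis would reproduce figures of the form $0.2{\pm}0.2cm$ from the logged OIS corrections.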


A Simple Method for Evaluation of Pepper Powder Color Using Vis/NIR Hyperspectral System (Vis/NIR 초분광 분석을 이용한 고춧가루 색도 간이 측정법 개발)

  • Han, Koeun;Lee, Hoonsoo;Kang, Jin-Ho;Choi, Eunah;Oh, Se-Jeong;Lee, Yong-Jik;Cho, Byoung-Kwan;Kang, Byoung-Cheorl
    • Horticultural Science & Technology
    • /
    • v.33 no.3
    • /
    • pp.403-408
    • /
    • 2015
  • Color is one of the quality-determining factors for pepper powder. To measure the color of pepper powder, several methods have been used, including high-performance liquid chromatography (HPLC), thin layer chromatography (TLC), and ASTA-20. Among these, the ASTA-20 method is the most widely used for color measurement of large numbers of samples because of its simplicity and accuracy. However, it requires time-consuming preprocessing steps and generates chemical waste containing acetone. As an alternative, we developed a fast and simple method based on visible/near-infrared (Vis/NIR) hyperspectral imaging to measure the color of pepper powder. To evaluate the correlation between the ASTA-20 and Vis/NIR hyperspectral methods, we first measured the color of a total of 488 pepper powder samples using both methods. Then, a partial least squares (PLS) model was built using the color values of 366 randomly selected samples to predict the ASTA values of unknown samples. When the ASTA values predicted by the PLS model were compared with those of the ASTA-20 method for the 122 samples not used for model development, the two methods showed a very high correlation ($R^2=0.88$), demonstrating the reliability of the Vis/NIR hyperspectral method. We believe that this simple and fast method is suitable for high-throughput screening of large numbers of samples, because it does not require the preprocessing steps of the ASTA-20 method and takes less than 30 minutes to measure the color of pepper powder.
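The $R^2=0.88$ agreement between the two methods is the coefficient of determination between PLS-predicted and ASTA-20 reference values on the held-out samples. A minimal sketch of that validation statistic:

```python
def r_squared(predicted, actual):
    """Coefficient of determination (R^2) between model-predicted and
    reference ASTA color values."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot
```

In the paper's workflow, `predicted` would come from the PLS model applied to the 122 held-out spectra and `actual` from the ASTA-20 measurements.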

Evaluation of Planning Dose Accuracy in Case of Radiation Treatment on Inhomogeneous Organ Structure (불균질부 방사선치료 시 계획 선량의 정확성 평가)

  • Kim, Chan Yong;Lee, Jae Hee;Kwak, Yong Kook;Ha, Min Yong
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.25 no.2
    • /
    • pp.137-143
    • /
    • 2013
  • Purpose: To find the difference between the dose calculated by the treatment planning system (TPS) and the measured dose in inhomogeneous organ structures. Materials and Methods: An inhomogeneous phantom was made from solid water phantom slabs and cork plates, and a CT image of the phantom was acquired. A treatment plan was made with the TPS (Pinnacle3 9.2, Royal Philips Electronics, Netherlands) and the calculated dose at each point of interest was obtained. The plan was delivered to the inhomogeneous phantom with an ARTISTE (Siemens AG, Germany), and the measured dose at each point of interest was obtained with Gafchromic EBT2 film (International Specialty Products, US) placed in the gaps between the solid water phantom slabs and cork plates. To simulate lung cancer radiation treatment, an artificial paraffin tumor target was inserted into the cork volume of the inhomogeneous phantom, and calculated and measured doses were acquired as above. Results: In the inhomogeneous phantom experiment, the measured dose was about 8.5% lower than the calculated dose at the solid water-cork interface and about 7% lower at the cork-solid water interface. In the experiment with the paraffin target inserted, the measured dose was about 5% lower at the cork-paraffin interface. There was no significant difference at interfaces between identical materials in either experiment. Conclusion: The radiation dose at the interface between two organs with different electron densities is significantly lower than the dose calculated by the TPS. Therefore, this dose calculation error in the TPS must be kept in mind, and great care is suggested when planning radiation treatment of inhomogeneous organ structures.
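The reported interface discrepancies are percent differences of the film-measured dose relative to the TPS-calculated dose at each point of interest. A minimal sketch:

```python
def dose_difference_pct(measured, calculated):
    """Percent difference of the measured dose relative to the
    TPS-calculated dose at a point of interest; negative values mean
    the measured dose is lower than calculated."""
    return 100.0 * (measured - calculated) / calculated
```

For example, a measured dose 8.5% below the calculated dose at a solid water-cork interface corresponds to a value of -8.5.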
