• Title/Summary/Keyword: Accuracy test


The Role of Air-Vacuum Cushion Device in Patients with Rectal Cancer in Radiation Therapy (직장암 환자에서 방사선치료시 Air-vacuum Cushion의 유용성)

  • Kim Ki-Hwan;Cho Moon-June;Kang No-Hyun;Kim Dong-Wuk;Kim Jun-Sang;Jang Ji-Young;Kim Jae-Sung
    • Radiation Oncology Journal / v.19 no.3 / pp.287-292 / 2001
  • Purpose: We analyzed the setup errors induced by using an air-vacuum cushion as an immobilization device in patients with rectal cancer. Materials and Methods: Twenty patients with rectal cancer were treated with 6 MV and 10 MV X-rays from Aug. 1998 to Aug. 1999 at Chungnam National University Hospital. All patients were treated in the prone position. They were separated into two groups: a control group of 10 patients immobilized with styrofoam only, and a test group of 10 patients immobilized with styrofoam and an air-vacuum cushion. Setup errors of the posterior field (x and y axes) and the lateral field (z and y axes) were measured by matching simulation films with EPID images. Results: In the control group, the mean displacements of the pelvic bone landmarks along the x and y axes were 0.02 mm and 0.78 mm, the standard deviations of the systematic error were 2.13 mm and 2.40 mm, and the standard deviations of the random error were 1.46 mm and 1.51 mm, respectively. In the test group, the mean displacements along the x and y axes were -0.33 mm and 0.81 mm, the standard deviations of the systematic error were 1.71 mm and 3.08 mm, and the standard deviations of the random error were 1.40 mm and 1.88 mm, respectively; the mean displacements along the z and y axes were 2.98 mm and 0.74 mm, the standard deviations of the systematic error were 4.75 mm and 2.65 mm, and the standard deviations of the random error were 2.69 mm and 1.86 mm, respectively. No statistically significant difference between the two groups was found in either the posterior or the lateral direction. Conclusion: The use of an air-vacuum cushion may not offer an advantage for improving setup accuracy in patients with rectal cancer.
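The abstract reports, per axis, a mean displacement together with systematic-error and random-error standard deviations. As background, the sketch below shows one common way such repeated setup measurements are decomposed: the group mean, the SD of per-patient mean displacements as the systematic component, and the RMS of per-patient SDs as the random component. The patient data are hypothetical and the paper's exact estimator is not stated in the abstract, so this is an illustration only.

```python
import numpy as np

# Hypothetical per-fraction displacements (mm) along one axis, one entry per
# patient, as would be obtained by matching EPID images to the simulation film.
displacements = {
    "patient_01": [0.5, -1.2, 0.8, 0.1],
    "patient_02": [2.1, 1.7, 2.4, 1.9],
    "patient_03": [-0.6, 0.3, -0.9, 0.2],
}

patient_means = np.array([np.mean(v) for v in displacements.values()])
patient_sds = np.array([np.std(v, ddof=1) for v in displacements.values()])

group_mean = patient_means.mean()               # overall mean displacement
systematic_sd = patient_means.std(ddof=1)       # SD of per-patient means (systematic error)
random_sd = np.sqrt(np.mean(patient_sds ** 2))  # RMS of per-patient SDs (random error)

print(f"mean = {group_mean:.2f} mm, systematic SD = {systematic_sd:.2f} mm, "
      f"random SD = {random_sd:.2f} mm")
```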


A Comparison of Discriminating Powers between 13 Microsatellite Markers and 37 Single Nucleotide Polymorphism Markers for the Use of Pork Traceability and Parentage Test of Pigs (돼지 개체식별 및 친자감별을 위한 13 microsatellite marker와 37 single nucleotide polymorphism marker 간의 효율성 비교)

  • Lee, Jae-Bong;Yoo, Chae-Kyoung;Jung, Eun-Ji;Lee, Jung-Gyu;Lim, Hyun-Tae
    • Journal of agriculture & life science / v.46 no.5 / pp.73-82 / 2012
  • Allele information from the analysis of the 13 microsatellite (MS) markers was classified into the F0, F1, and F2 generations, and the probability of the same individual genotype emerging in each generation was calculated. As a result, the 13 MS markers showed an estimate of 3.84 × 10⁻²³ under the assumption of a randomly mated F2 group, which implies that identical genotypes may emerge when the 37 SNP markers are used. In this study, the experimental pigs were an intercross between only two breeds (Korean native pig and Landrace). In addition, the success rate of parentage testing over the whole group was analyzed using the 13 MS markers and the 37 SNP markers. For the exclusion probability with one parent unknown (PEpu), the MS markers and SNP markers showed 0.97897 and 0.99149, respectively. For the exclusion probability with both parents known (PE), the MS markers and SNP markers showed 0.99916 and 0.99949, respectively. For the probability of identifying the most likely parental candidates (PNEpp), both marker sets showed 1.00000. The Korean pig industry tends to mass-produce hogs from a limited number of parents carrying a limited number of alleles. Accordingly, a marker panel needs to be organized, and it is essential to find markers with high efficiency and economic feasibility in terms of DNA marker characteristics, sample size, genotyping accuracy and cost, data manageability, and compatibility among analysis systems.
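For context on the 3.84 × 10⁻²³ figure above: the probability that two randomly drawn individuals share the same multilocus genotype is usually taken as the product of per-locus probabilities of identity. A standard random-mating form is given below as background; it is not reproduced from the paper, whose exact estimator (e.g. corrections for relatedness or inbreeding) is not stated in the abstract.

```latex
PI_{\mathrm{locus}} = \sum_{i} p_i^{4} \;+\; \sum_{i<j} \left( 2\, p_i\, p_j \right)^{2},
\qquad
PI_{\mathrm{total}} = \prod_{l=1}^{L} PI_{l}
```

Here \(p_i\) is the frequency of the \(i\)-th allele at a locus and \(L\) is the number of loci in the panel.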

A Study on the Optimal Image Acquisition Time of 18F-Flutemetamol using List Mode (LIST mode를 이용한 18F-Flutemetamol 의 최적 영상획득 시간에 관한 연구)

  • Ryu, Chan-Ju
    • Journal of the Korean Society of Radiology / v.15 no.6 / pp.891-897 / 2021
  • With the development of amyloid PET tracers, the accuracy of Alzheimer's diagnosis can be improved by identifying beta-amyloid neuritic plaques. However, the long image acquisition time of 20 minutes can be difficult for the patient, and because PET/CT scans are sensitive to patient movement, motion may partially affect the test results. In this study, we investigated an appropriate image acquisition time that does not affect the quantitative evaluation of the image, using list-mode acquisition according to the in-vivo distribution time of the radiopharmaceutical. Unlike the conventional frame mode, list mode retains timing information, so images can be reconstructed for any time window the researcher wants, which makes data analysis straightforward. Images were reconstructed from the list-mode data at 5 min/bed, 10 min/bed, 15 min/bed, and 20 min/bed, and the signal-to-noise ratio (SNR), the lesion-to-pons uptake ratio (LPR), and the reading results were compared across acquisition times to determine an appropriate image acquisition time. In the quantitative analysis, SUVmean values in the six regions of interest decreased as the acquisition time increased; the difference was largest for the 5 min/bed images, followed by 10 min/bed and 15 min/bed, so the difference in SUVmean values shrank with longer acquisitions. At 15 min/bed the SUVmean values did not differ enough to affect image evaluation, and there was no difference in LPR values. In the qualitative analysis, the reading findings did not change with the PET image acquisition time, and there was no significant difference in the qualitative scores of the reconstructions over time. In conclusion, since there is no significant difference between the 15 min/bed and 20 min/bed images in 18F-flutemetamol PET/CT, selectively reducing the image acquisition time to 15 min/bed via list mode, depending on the patient's condition, can be considered clinically useful.
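As background for the SUVmean and lesion-to-pons figures quoted above, the usual body-weight-normalized SUV and a pons-referenced uptake ratio can be written as below. These are generic definitions assumed for illustration, not formulas quoted from the paper.

```latex
\mathrm{SUV} = \frac{C_{\mathrm{tissue}}\ [\mathrm{kBq/mL}]}
                    {\mathrm{injected\ activity}\ [\mathrm{kBq}] \;/\; \mathrm{body\ weight}\ [\mathrm{g}]},
\qquad
\mathrm{LPR} = \frac{\mathrm{SUV}_{\mathrm{mean,\,lesion}}}{\mathrm{SUV}_{\mathrm{mean,\,pons}}}
```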

Development of Analytical Method for Ergot Alkaloids in Foods Using Liquid Chromatography-Tandem Mass Spectrometry (LC-MS/MS를 이용한 식품 중 맥각 알칼로이드 시험법 개발)

  • Chun, So Young;Chong, Euna;Lee, Bomnae;Kwon, Jin-Wook;Park, Hye Young;Kim, Sheenhee;Gang, Giljin
    • Journal of Food Hygiene and Safety / v.34 no.2 / pp.158-169 / 2019
  • Ergot alkaloids are mycotoxins produced by fungi of the genus Claviceps, mainly Claviceps purpurea in the EU. Recent occurrence data on ergot alkaloids from EU countries indicate the need to manage and control these toxins in imported grains such as rye, wheat, and oat. The aim of this study was to optimize a liquid chromatography-tandem mass spectrometry method for the determination of ergot alkaloids (ergometrine, ergosine, ergotamine, ergocornine, ergocryptine, ergocristine, and their epimers (-inines)) in grain and grain-based foods. The test method was optimized by extracting the sample with acetonitrile containing 2 mM ammonium carbonate, purifying the extract with a Mycosep cartridge, and analyzing it by LC-MS/MS on a Syncronis C18 column. The standard calibration curves showed good linearity, with correlation coefficients R² > 0.99. Mean recoveries ranged from 72.0 to 111.3% at three fortification levels (20, 50, and 100 μg/kg). The precision, expressed as the coefficient of variation, was within the range of 1.9-12.9%. The limits of quantification (LOQ) ranged from 0.012 to 0.058 μg/kg. The developed analytical method met the AOAC International and CAC validation criteria for parameters such as accuracy and sensitivity. As a result, it was confirmed that the test method developed in this study is suitable for the simultaneous analysis of the six ergot alkaloids and their epimers in grains and grain products.
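The validation figures quoted above (recovery, precision, LOQ) follow the usual method-validation definitions; they are summarized below in generic form for reference. These are assumed standard formulas, not reproduced from the paper, and LOQ in particular is sometimes defined instead from a 10:1 signal-to-noise ratio.

```latex
\mathrm{Recovery}\,(\%) = \frac{C_{\mathrm{measured}}}{C_{\mathrm{spiked}}} \times 100,
\qquad
\mathrm{CV}\,(\%) = \frac{s}{\bar{x}} \times 100,
\qquad
\mathrm{LOQ} = \frac{10\,\sigma_{\mathrm{blank}}}{S}
```

where \(s\) and \(\bar{x}\) are the standard deviation and mean of replicate measurements, \(\sigma_{\mathrm{blank}}\) is the standard deviation of blank responses, and \(S\) is the slope of the calibration curve.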

Work Environment Measurement Results for Research Workers and Directions for System Improvement (연구활동종사자 작업환경측정 결과 및 제도개선 방향)

  • Hwang, Je-Gyu;Byun, Hun-Soo
    • Journal of Korean Society of Occupational and Environmental Hygiene / v.30 no.4 / pp.342-352 / 2020
  • Objectives: The characteristics of research workers differ from those of workers in the manufacturing industry. The reagents used change from one research project to another owing to the nature of laboratory work, and the amounts used vary. In addition, since working hours change almost daily, it is difficult to match exposure times to the exposure standards, and because laboratory environments and the types of experiments performed all differ, it is also difficult to set standards as is done in the manufacturing industry. For these reasons, work environment measurement for research workers is not realistically carried out within the legal framework, there is concern that the accuracy of measurement results may be degraded, and it is difficult to secure data. The exposure evaluation based on an eight-hour time-weighted average used for the work environment measurement in this study may not be fully appropriate for research workers, but it was judged to be the most suitable among the recognized test methods and was therefore applied. Methods: The study was conducted in the following order: investigation of chemical substance use in the target research laboratories, work environment measurement, sample analysis, and analysis of the results. For the investigation of chemical substance use, the substances subject to work environment measurement were organized first, and the research workers were then asked to record the status, frequency, and period of use. Work environment measurement and sample analysis were conducted by recognized test methods, and the results were compared with the exposure standards (TWA: time-weighted average) for chemical substances and physical factors. Results: For the substances subject to work environment measurement, the department of chemical engineering showed the highest exposure, followed by the department of chemistry. Laboratories that primarily handle chemicals can thus be exposed to a variety of substances, including acetone, hydrogen peroxide, nitric acid, sodium hydroxide, and normal hexane. Hydrogen chloride was measured at a level higher than the average of domestic work environment measurements, which suggests that research workers should also be managed within the work environment measurement system. In a comparison with the professional science and technology service industry and the education service industry, the business types most similar to university research laboratories among the domestic work environment measurement data provided by the Korea Safety and Health Agency, acetone, dichloromethane, hydrogen peroxide, sodium hydroxide, nitric acid, normal hexane, and hydrogen chloride appeared at levels higher than the average. This also supports managing research workers within the work environment measurement system. Conclusions: For the work environment measurement and management of research workers, the following specifics can be suggested. Work environment measurement should be carried out whenever projects or research topics change, with measurement targets and methods determined by the measurement and analysis methods specified by the Ministry of Employment and Labor, and the results evaluated against the Ministry's exposure standards for chemical substances and physical factors. Implementation costs can be covered by safety management expenses, and improvement plans should be submitted when exposure standards are exceeded. The results of this study address only work environment measurement, one of the minimum health management measures for research workers, and a system to further improve the level of safety and health still needs to be prepared.
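Since the evaluation above is based on eight-hour time-weighted averages, the standard TWA formula used in occupational exposure assessment is shown below for reference (a generic definition, not a formula taken from the paper):

```latex
\mathrm{TWA} = \frac{C_1 T_1 + C_2 T_2 + \cdots + C_n T_n}{8}
```

where \(C_i\) is the measured concentration during the \(i\)-th period and \(T_i\) is its duration in hours, with the denominator fixed at 8 hours.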

A COVID-19 Diagnosis Model based on Various Transformations of Cough Sounds (기침 소리의 다양한 변환을 통한 코로나19 진단 모델)

  • Minkyung Kim;Gunwoo Kim;Keunho Choi
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.57-78 / 2023
  • COVID-19, which started in Wuhan, China in November 2019, spread beyond China in early 2020 and worldwide by March 2020. It is important to prevent a highly contagious virus like COVID-19 in advance and to treat it actively once confirmed, but because the virus spreads quickly, it is even more important to identify confirmed cases rapidly and prevent further transmission. However, PCR testing is costly and time-consuming, and although self-test kits are easy to access, paying for a kit every time is a burden. Therefore, if COVID-19 positivity could be determined from the sound of a cough in a way anyone can use easily, anyone could check their status at any time and place, which would also bring large economic advantages. In this study, we experimented with a method for identifying COVID-19 status from cough sounds. Cough sound features were extracted with MFCC, Mel-Spectrogram, and spectral contrast. To ensure sound quality, noisy recordings were removed based on SNR, and only the cough segments were extracted from each audio file by chunking. Since the objective is classification into COVID-19 positive and negative, training was performed with XGBoost, LightGBM, and FCNN, algorithms commonly used for classification, and the results were compared. Additionally, we conducted a comparative experiment on model performance using multidimensional vectors obtained by converting the cough sounds into both images and vectors. The experimental results showed that the LightGBM model using features obtained by converting basic health-status information and cough sounds into multidimensional vectors through MFCC, Mel-Spectrogram, spectral contrast, and Spectrogram achieved the highest accuracy of 0.74.
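To illustrate the kind of feature pipeline the abstract describes (MFCC, Mel-Spectrogram, and spectral contrast summarized into a fixed-length vector and classified with LightGBM), a minimal sketch is shown below. The file names, labels, pooling choice, and hyperparameters are hypothetical and are not taken from the paper.

```python
import numpy as np
import librosa
from lightgbm import LGBMClassifier

def cough_features(path, sr=22050):
    """Summarize one cough recording as a fixed-length feature vector."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    mel = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr))
    contrast = librosa.feature.spectral_contrast(y=y, sr=sr)
    # Mean-pool each feature over time so clips of different lengths
    # map to vectors of the same dimension.
    return np.concatenate([mfcc.mean(axis=1), mel.mean(axis=1), contrast.mean(axis=1)])

# Hypothetical file list and labels (1 = COVID-19 positive, 0 = negative).
train_files = ["cough_001.wav", "cough_002.wav", "cough_003.wav", "cough_004.wav"]
train_labels = [1, 0, 1, 0]

X = np.vstack([cough_features(f) for f in train_files])
model = LGBMClassifier(n_estimators=200, learning_rate=0.05)
model.fit(X, train_labels)
```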

Verification of Multi-point Displacement Response Measurement Algorithm Using Image Processing Technique (영상처리기법을 이용한 다중 변위응답 측정 알고리즘의 검증)

  • Kim, Sung-Wan;Kim, Nam-Sik
    • KSCE Journal of Civil and Environmental Engineering Research / v.30 no.3A / pp.297-307 / 2010
  • Recently, maintenance engineering and technology for civil and building structures have begun to draw significant attention, and the number of structures whose structural safety must be evaluated because of deterioration and performance degradation is rapidly increasing. When stiffness decreases owing to deterioration and member cracks, the dynamic characteristics of a structure change, and it is important to correctly evaluate the damaged areas and the extent of damage by analyzing the dynamic characteristics obtained from the actual behavior of the structure. In general, the typical measurement instruments used for structural monitoring are dynamic instruments. With existing dynamic instruments, it is not easy to obtain reliable data when the cable connecting the sensors to the device is long, and the one-to-one connection between each sensor and the instrument is uneconomical. Therefore, a method of measuring vibration at long range without attaching sensors is required. Representative non-contact methods for measuring structural vibration are the laser Doppler method, GPS-based methods, and image processing techniques. The laser Doppler method shows relatively high accuracy but is uneconomical, while the GPS-based method requires expensive equipment, has its own signal error, and is limited in sampling rate. In contrast, the image-based method is simple and economical, and is well suited to obtaining the vibration and dynamic characteristics of inaccessible structures. Camera image signals, rather than sensors, have recently been used by many researchers. However, the existing approach, which records a single target point attached to a structure and then measures vibration by image processing, is relatively limited in what it can measure. Therefore, this study conducted a shaking table test and a field load test to verify the validity of a method that measures multi-point displacement responses of structures using an image processing technique.
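The abstract does not spell out the image processing algorithm itself, so the sketch below shows one common way to obtain displacement from camera images: tracking a marked target between frames with OpenCV template matching and converting pixels to millimetres with a known calibration. The crop coordinates, file names, and scale factor are hypothetical.

```python
import cv2
import numpy as np

# Template: the target region cropped from the reference frame (hypothetical coordinates).
reference = cv2.imread("frame_0000.png", cv2.IMREAD_GRAYSCALE)
template = reference[200:260, 300:360]
mm_per_pixel = 0.5  # assumed calibration from a target of known physical size

def target_position(frame):
    """Return the (x, y) pixel location of the best template match."""
    score = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(score)
    return np.array(max_loc, dtype=float)

ref_pos = target_position(reference)
frame = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
displacement_mm = (target_position(frame) - ref_pos) * mm_per_pixel
print(displacement_mm)  # in-plane displacement of the target, in mm
```

Repeating the match for several templates in the same frame gives the multi-point displacement responses the study aims to measure.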

Development of the Accident Prediction Model for Enlisted Men through an Integrated Approach to Datamining and Textmining (데이터 마이닝과 텍스트 마이닝의 통합적 접근을 통한 병사 사고예측 모델 개발)

  • Yoon, Seungjin;Kim, Suhwan;Shin, Kyungshik
    • Journal of Intelligence and Information Systems / v.21 no.3 / pp.1-17 / 2015
  • In this paper, we report what we have observed with regard to a prediction model for the military based on enlisted men's internal data (cumulative records) and external data (SNS data). This work is significant for the military's efforts to supervise enlisted men: despite their efforts, many commanders have failed to prevent accidents caused by their subordinates. One of the important duties of officers is to take care of their subordinates and prevent unexpected accidents, but accidents are hard to prevent, so a proper method must be found. Our motivation for this paper is to make it possible to predict accidents using enlisted men's internal and external data. The biggest issue facing the military is the occurrence of accidents by enlisted men related to maladjustment and the relaxation of military discipline. The core of preventing accidents by soldiers is to identify problems and manage them quickly. Commanders predict accidents by interviewing their soldiers and observing their surroundings; this requires considerable time and effort, and the results differ significantly depending on the commanders' capabilities. In this paper, we seek to predict accidents with objective data that can easily be obtained. Recently, records of enlisted men, as well as SNS communication between commanders and soldiers, have made it possible to predict and prevent accidents. This paper concerns the application of data mining to identify soldiers' interests and predict accidents, making use of internal and external (SNS) data. We propose both a topic analysis and a decision tree method. The study is conducted in two steps. First, topic analysis is conducted on the enlisted men's SNS. Second, the decision tree method is used to analyze the internal data together with the results of the first analysis. The dependent variable for these analyses is the presence of any accident. To analyze the SNS data, tools such as text mining and topic analysis are required; we used SAS Enterprise Miner 12.1, which provides a text miner module. Our approach for finding soldiers' interests is composed of three main phases: data collection, topic analysis, and conversion of the topic analysis results into points used as independent variables. In the first phase, we collect the enlisted men's SNS data by commander's ID. After gathering the unstructured SNS data, the topic analysis phase extracts issues from it; for simplicity, 5 topics (vacation, friends, stress, training, and sports) are extracted from 20,000 articles. In the third phase, these 5 topics are quantified as personal points and included among the independent variables, which comprise 15 internal data sets. Then we build two decision trees. The first tree uses the internal data only; the second uses the external (SNS) data as well as the internal data. We then compare the misclassification results from SAS Enterprise Miner. The first model's misclassification rate is 12.1%, while the second model's is 7.8%; that is, the method predicts accidents with an accuracy of approximately 92%, and the gap between the two models is 4.3 percentage points. Finally, we test whether the difference between them is meaningful using the McNemar test; the result is significant (p-value: 0.0003). This study has two limitations. First, the results of the experiments cannot be generalized, mainly because the experiment is limited to a small amount of enlisted men's data. Second, various independent variables in the decision tree model are used as categorical rather than continuous variables, so some information is lost. In spite of extensive efforts to provide prediction models for the military, commanders' predictions are accurate only when they have sufficient data about their subordinates. Our proposed methodology can support decision-making in the military, and this study is expected to contribute to the prevention of accidents based on scientific analysis of enlisted men and proper management of them.
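The abstract compares the two decision-tree models with McNemar's test. As a generic illustration of how that paired comparison is set up (the outcome vectors below are hypothetical, not the study's data), one can tabulate agreement between the two models case by case:

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical per-case outcomes: 1 = correctly classified, 0 = misclassified,
# for model A (internal data only) and model B (internal + SNS data).
correct_a = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])
correct_b = np.array([1, 1, 1, 1, 0, 1, 1, 1, 1, 1])

# 2x2 table of agreement/disagreement between the two models.
table = [
    [np.sum((correct_a == 1) & (correct_b == 1)), np.sum((correct_a == 1) & (correct_b == 0))],
    [np.sum((correct_a == 0) & (correct_b == 1)), np.sum((correct_a == 0) & (correct_b == 0))],
]
print(mcnemar(table, exact=True).pvalue)  # small p-value => the two models differ
```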

Stock Price Prediction by Utilizing Category Neutral Terms: Text Mining Approach (카테고리 중립 단어 활용을 통한 주가 예측 방안: 텍스트 마이닝 활용)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.123-138 / 2017
  • Since the stock market is driven by traders' expectations, studies have been conducted to predict stock price movements by analyzing various sources of text data. To predict stock price movements, research has examined not only the relationship between text data and stock price fluctuations but also trading stocks based on news articles and social media responses. Studies that predict stock price movements have also applied classification algorithms to a constructed term-document matrix, in the same way as other text mining approaches. Because documents contain many words, it is better to select the words that contribute most when building the term-document matrix. Based on word frequency, words with too little frequency or importance are removed, and words are also selected by measuring the degree to which they contribute to correctly classifying a document. The basic idea of constructing a term-document matrix is to collect all the documents to be analyzed and to select and use the words that influence the classification. In this study, we analyze the documents for each individual stock and select, as neutral words, the words that are irrelevant to all categories. We then extract the words around each selected neutral word and use them to generate the term-document matrix. The idea is that stock movement is weakly related to the presence of the neutral words themselves, while the words surrounding a neutral word are more likely to affect stock price movements. The generated term-document matrix is then fed into the algorithm that classifies the stock price fluctuations. We first removed stop words and selected neutral words for each stock, and excluded, from the selected words, those that also appear in news articles about other stocks. Through an online news portal, we collected four months of news articles on the top 10 stocks by market capitalization. Three months of news data were used as training data, and the remaining one month of articles was applied to the model to predict the next day's stock price movements. We used SVM, Boosting, and Random Forest for building models and predicting stock price movements. The stock market was open for a total of 80 days during the four months (2016/02/01 ~ 2016/05/31); the initial 60 days were used as the training set and the remaining 20 days as the test set. The proposed word-based algorithm showed better classification performance than the sparsity-based word selection method. This study predicted stock price volatility by collecting and analyzing news articles on the top 10 stocks by market capitalization. We used a term-document-matrix-based classification model to estimate stock price fluctuations and compared the performance of the existing sparsity-based word extraction method with the suggested method of removing words from the term-document matrix. The suggested method differs from the word extraction method in that it determines the words to extract using not only the news articles for the corresponding stock but also other news items; in other words, it removed not only the words that appeared in both rising and falling cases but also the words common to news about other stocks. When the prediction accuracy was compared, the suggested method showed higher accuracy. A limitation of this study is that stock price prediction was framed as classifying rises and falls, and the experiment was conducted only on the top ten stocks, which do not represent the entire stock market. In addition, investment performance is difficult to demonstrate because stock price fluctuations and profit rates may differ. Further research is therefore needed using more stocks and predicting yields through trading simulation.
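To make the core idea of the suggested method concrete (keep only the words that appear near preselected neutral words and build the term-document matrix from those context windows), a minimal sketch is given below. The neutral-word list, window size, tokenization, and example sentences are hypothetical choices for illustration, not the paper's actual settings.

```python
from sklearn.feature_extraction.text import CountVectorizer

neutral_words = {"company", "market"}   # hypothetical neutral words for one stock
window = 3                              # tokens kept on each side of a neutral word

def context_only(article):
    """Keep only the tokens within `window` positions of a neutral word."""
    tokens = article.split()
    keep = set()
    for i, tok in enumerate(tokens):
        if tok in neutral_words:
            keep.update(range(max(0, i - window), min(len(tokens), i + window + 1)))
    return " ".join(tokens[i] for i in sorted(keep))

articles = [
    "the company posted record earnings this quarter",
    "analysts expect the market to stay volatile next week",
]
vectorizer = CountVectorizer()
term_document = vectorizer.fit_transform(context_only(a) for a in articles)
print(vectorizer.get_feature_names_out())
```

A classifier such as SVM, Boosting, or Random Forest can then be trained on `term_document` with next-day rise/fall labels, as described in the abstract.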

The Evaluation of Attenuation Difference and SUV According to Arm Position in Whole Body PET/CT (전신 PET/CT 검사에서 팔의 위치에 따른 감약 정도와 SUV 변화 평가)

  • Kwak, In-Suk;Lee, Hyuk;Choi, Sung-Wook;Suk, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology / v.14 no.2 / pp.21-25 / 2010
  • Purpose: For accurate PET imaging, transmission scanning is required for attenuation correction. Attenuation is affected by the acquisition conditions and the patient's position, and consequently the quantitative accuracy of the emission scan may be decreased. This study measured the attenuation that varies with the position of the patient's arms in whole-body PET/CT and comparatively analyzed the resulting SUV changes. Materials and Methods: A NEMA 1994 PET phantom was filled with 18F-FDG at a 4:1 concentration ratio between the insert cylinder and the background water. Phantom images were acquired with a 4 min emission scan after a CT transmission scan. To simulate the state in which the patient's arms are lowered alongside the body, two Teflon inserts were additionally fixed at both sides of the phantom. The acquired data were reconstructed with an iterative reconstruction method (iterations: 2, subsets: 28) and CT-based attenuation correction, and a VOI was drawn on each image plane to measure the CT number and SUV and to compare the axial uniformity (A.U. = standard deviation / average SUV) of the PET images. Results: In the phantom test, comparing the cases with the Teflon inserts fixed and removed, the CT number of the cylinder increased from -5.76 HU to 0 HU, while the SUV decreased from 24.64 to 24.29 and the A.U. from 0.064 to 0.052. The CT number of the background water increased from -6.14 HU to -0.43 HU, whereas the SUV decreased from 6.3 to 5.6 and the A.U. also decreased from 0.12 to 0.10. In addition, for the patient images, the CT number increased from 53.09 HU to 58.31 HU and the SUV decreased from 24.96 to 21.81 when the patient's arms were positioned over the head rather than lowered. Conclusion: When the arms-up protocol was applied, the SUV of the phantom and patient images decreased by 1.4% and 9.2%, respectively. It was concluded that the position of the patient's arms is not highly significant in whole-body PET/CT scanning. In particular, scanning with the arms raised over the head makes patient motion more likely because of the long scanning time, and is accompanied by increased 18F-FDG uptake in brown fat around the shoulders as well as increased shoulder pain and patient discomfort. Considering all of these factors, it is reasonable to perform whole-body PET/CT scanning with the patient's arms lowered.
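The axial uniformity quoted in the abstract is defined there as the standard deviation divided by the average SUV; written out over the per-plane SUV values within the VOI, it is simply:

```latex
\mathrm{A.U.} = \frac{\sigma_{\mathrm{SUV}}}{\overline{\mathrm{SUV}}}
```

where \(\sigma_{\mathrm{SUV}}\) and \(\overline{\mathrm{SUV}}\) are the standard deviation and mean of the SUVs measured on the individual image planes.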
