• Title/Summary/Keyword: Accuracy


The study of heavy rain warning in Gangwon State using threshold rainfall (침수유발 강우량을 이용한 강원특별자치도 호우특보 기준에 관한 연구)

  • Lee, Hyeonji;Kang, Dongho;Lee, Iksang;Kim, Byungsik
    • Journal of Korea Water Resources Association / v.56 no.11 / pp.751-764 / 2023
  • Gangwon State is centered on the Taebaek Mountains, with climate characteristics that differ greatly by region, and localized heavy rainfall occurs frequently. Heavy rain disasters have a short duration and high spatial and temporal variability, causing many casualties and much property damage. In the last 10 years (2012~2021), there were 28 heavy rain disasters in Gangwon State, with an average damage cost of 45.6 billion won. To reduce heavy rain disasters, it is necessary to establish a disaster management plan at the local level. In particular, the current criteria for heavy rain warnings are uniform and do not consider local characteristics. Therefore, this study proposes heavy rain warning criteria that consider the threshold rainfall for the advisory areas located in Gangwon State. In the analysis of representative values of threshold rainfall by advisory area, the mean value was similar to the current criterion for issuing a heavy rain warning, so it was selected as the warning criterion in this study. The rainfall events of Typhoon Mitag in 2019, Typhoons Maysak and Haishen in 2020, and Typhoon Khanun in 2023 were used to review the proposed criteria; Hit Rate verification showed that the criteria reflect the actual warnings well, with 72% in the Gangneung Plain and 98% in Wonju. The heavy rain warning criteria in this study follow the same crisis warning stages (Attention, Caution, Alert, and Danger), which is considered to enable a preemptive heavy rain disaster response. The results of this study are expected to complement the currently uniform decision-making system for responding to heavy rain disasters and can be used as a basis for heavy rain warnings that consider disaster risk by region.
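
For reference, the Hit Rate quoted above (72% in the Gangneung Plain, 98% in Wonju) is conventionally computed from a contingency table of issued warnings versus observed flood-inducing rainfall events; the standard categorical-verification definition, which we assume is the one used here, is

\[ \mathrm{Hit\ Rate} = \frac{H}{H+M}, \]

where $H$ is the number of hits (warning criterion exceeded and the event observed) and $M$ the number of misses.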

Analytical Method for Determination of Laccaic Acids in Foods with HPLC-PDA and Monitoring (식품 중 락카인산 성분 분리정제를 통한 분석법 확립 및 실태조사)

  • Jae Wook Shin;Hyun Ju Lee;Eunjoo Lim;Jung Bok Kim
    • Journal of Food Hygiene and Safety / v.38 no.5 / pp.390-401 / 2023
  • The major components of lac coloring include laccaic acids A, B, C, and E. The Korean Food Additive Code regulates the use of lac coloring and prohibits its use in ten types of food products, including natural food products. Since no commercial standards are available for laccaic acids A, B, C, and E, a standard for lac pigment itself was used to separate the laccaic acids from the lac pigment, and a standard for each laccaic acid was then obtained by fractionation. For the analysis of lac pigment in food by high-performance liquid chromatography with photodiode array detection (HPLC-PDA), a C8 column yielded the best resolution among the various columns and mobile phases tested. A qualitative analytical method using liquid chromatography-tandem mass spectrometry (LC-MS/MS) was also developed. The conditions for fast and precise sample preparation begin with extraction using methanol and 0.3% ammonium phosphate, followed by concentration. The precision observed for the analyses of ham, tomato juice, and red pepper paste was 0.3-13.1% (relative standard deviation, RSD%), the accuracy was 90.3-122.2% with r² = 0.999 or above, and the recovery rate was 91.6-114.9%. The limit of detection was 0.01-0.15 ㎍/mL, and the limits of quantitation ranged from 0.02 to 0.47 ㎍/mL. Lac pigment was not detected in 117 food products in the 10 food categories for which the use of lac pigment is banned. Multiple laccaic acids were detected in 105 food products in 6 food categories that are allowed to use lac color, with lac pigment concentrations ranging from 0.08 to 16.67 ㎍/mL.
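
As a point of reference, the precision and recovery figures quoted above follow the usual method-validation definitions (these formulas are standard and are stated here only for clarity, not taken from the paper):

\[ \mathrm{RSD}\,(\%) = \frac{s}{\bar{x}} \times 100, \qquad \mathrm{Recovery}\,(\%) = \frac{C_{\mathrm{found}}}{C_{\mathrm{spiked}}} \times 100, \]

where $s$ and $\bar{x}$ are the standard deviation and mean of replicate measurements, and $C_{\mathrm{spiked}}$, $C_{\mathrm{found}}$ are the added and measured concentrations.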

A Study on Medical Waste Generation Analysis during Outbreak of Massive Infectious Diseases (대규모 감염병 발병에 따른 의료폐기물 발생량 예측에 관한 연구)

  • Sang-Min Kim;Jin-Kyu Park;In-Beom Ko;Byung-Sun Lee;Sang-Ryong Shin;Nam-Hoon Lee
    • Journal of the Korea Organic Resources Recycling Association / v.31 no.4 / pp.29-39 / 2023
  • In this study, an analysis of medical waste generation characteristics was conducted, differentiating between ordinary situations and outbreaks of massive infectious diseases. For ordinary situations, prediction models for medical waste quantities by type (general medical waste, G-MW; hazardous medical waste, H-MW; infectious medical waste, I-MW) were established through regression analysis, with all significance values (p) below 0.0001, indicating statistical significance. The coefficients of determination (R²) of the prediction models were as follows: I-MW (R² = 0.9943) > G-MW (R² = 0.9817) > H-MW (R² = 0.9310). The influencing factors used, GDP (G-MW), the number of medical institutions (H-MW), and the elderly population ratio (I-MW), are consistent with previous literature and showed high correlations. The total MW generation, evaluated by combining the models, had an MAE of 2,615 and an RMSE of 3,353, an accuracy level similar to those of the H-MW (2,491, 2,890) and I-MW (2,291, 3,267) models. Because it is difficult to accurately estimate the quantity of medical waste during rapid, massive infectious disease outbreaks, the per-capita generation rate of I-MW was derived to analyze its characteristics. During the early, unstable stage of an outbreak the generation rate was 8.74 kg/capita·day, during the stable stage it was 2.69 kg/capita·day, and during the reduction stage it averaged 0.08 kg/capita·day. Correlation analysis between the I-MW generation rate and lethality rates showed +0.99 in the unstable stage, +0.52 in the stable stage, and +0.96 in the reduction period, with a very high positive correlation of +0.95 or higher over the entire outbreak period. The results of this study are expected to play a useful role in establishing an effective medical waste management system in the health care field.
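
The MAE and RMSE figures above follow the usual definitions over $n$ observations, with $y_i$ the observed and $\hat{y}_i$ the predicted waste quantity; the formulas are given here only to make the error values concrete:

\[ \mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|, \qquad \mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^{2}}. \]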

Development of deep learning network based low-quality image enhancement techniques for improving foreign object detection performance (이물 객체 탐지 성능 개선을 위한 딥러닝 네트워크 기반 저품질 영상 개선 기법 개발)

  • Ki-Yeol Eom;Byeong-Seok Min
    • Journal of Internet Computing and Services / v.25 no.1 / pp.99-107 / 2024
  • Along with economic growth and industrial development, there is increasing demand for the production of various electronic components and devices, including semiconductors, SMT components, and electric battery products. However, these products may contain foreign substances introduced during the manufacturing process, such as iron, aluminum, or plastic, which can lead to serious problems or product malfunctions, and even fires in electric vehicles. To solve these problems, it is necessary to determine whether foreign materials are present inside the product, and many tests have been performed using non-destructive testing methods such as ultrasound or X-ray inspection. Nevertheless, there are technical challenges and limitations in acquiring X-ray images and determining the presence of foreign materials. In particular, small or low-density foreign materials may not be visible even when X-ray equipment is used, and noise can also make foreign objects difficult to detect. Moreover, to meet manufacturing speed requirements, the X-ray acquisition time must be reduced, which can result in a very low signal-to-noise ratio (SNR) and lower foreign material detection accuracy. Therefore, in this paper, we propose a five-step approach to overcome the limitations of low-quality images that make it challenging to detect foreign substances. First, the global contrast of the X-ray images is increased through histogram stretching. Second, a local contrast enhancement technique is applied to strengthen high-frequency signals and local contrast. Third, unsharp masking is applied to enhance edges and make objects more visible. Fourth, the Residual Dense Block (RDB) super-resolution method is used for noise reduction and image enhancement. Last, the YOLOv5 algorithm is trained and employed to detect foreign objects. Experimental results show that the proposed method improves performance metrics such as precision by more than 10% compared to the original low-quality images.
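
A minimal sketch of the first three enhancement steps described above (global histogram stretching, local contrast enhancement, unsharp masking), assuming 8-bit grayscale X-ray input and using OpenCV; CLAHE stands in here for whichever local contrast technique the authors actually used, and the RDB super-resolution network and YOLOv5 detector are separate trained models that are not reproduced.

```python
import cv2
import numpy as np

def enhance_xray(img_u8: np.ndarray) -> np.ndarray:
    """Steps 1-3 of the described pipeline on an 8-bit grayscale image."""
    # 1) Global contrast: histogram stretching to the full 0-255 range.
    lo, hi = np.percentile(img_u8, (1, 99))
    stretched = np.clip((img_u8.astype(np.float32) - lo) * 255.0 / max(hi - lo, 1e-6), 0, 255)
    stretched = stretched.astype(np.uint8)

    # 2) Local contrast enhancement (CLAHE used here as one possible technique).
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    local = clahe.apply(stretched)

    # 3) Unsharp masking: sharpen edges by subtracting a blurred copy.
    blurred = cv2.GaussianBlur(local, (0, 0), sigmaX=2.0)
    sharpened = cv2.addWeighted(local, 1.5, blurred, -0.5, 0)
    return sharpened

# The enhanced image would then be passed to an RDB super-resolution model
# and finally to a trained YOLOv5 detector for foreign object detection.
```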

A Store Recommendation Procedure in Ubiquitous Market for User Privacy (U-마켓에서의 사용자 정보보호를 위한 매장 추천방법)

  • Kim, Jae-Kyeong;Chae, Kyung-Hee;Gu, Ja-Chul
    • Asia Pacific Journal of Information Systems / v.18 no.3 / pp.123-145 / 2008
  • Recently, as information and communication technology has developed, discussion of the ubiquitous environment has arisen from diverse perspectives. A ubiquitous environment is one in which data can be transferred through networks regardless of physical space, virtual space, time, or location. To realize the ubiquitous environment, pervasive sensing technology, which enables recognition of users' data without a border between physical and virtual space, is required. In addition, other up-to-date technologies are necessary, such as context-awareness technology, which constructs the context around the user by sharing the data accessed through pervasive sensing, and linkage technology, which prevents information loss across wired and wireless networks and databases. In particular, pervasive sensing is regarded as an essential technology that enables user-oriented services by recognizing users' needs even before the users inquire. Through the technologies mentioned above, the ubiquitous environment has many characteristics, such as ubiquity, abundance of data, mutuality, high information density, individualization, and customization. Among them, information density refers to the accessible amount and quality of information, which is stored in bulk with assured quality through pervasive sensing technology. Using this, companies can provide personalized content (or information) to a target customer. Above all, there is a growing body of research on recommender systems, which provide what customers need even when the customers do not explicitly express their needs. Recommender systems are well known for their positive effect of enlarging selling opportunities and reducing customers' search costs, since they find and provide information according to customers' traits and preferences in advance in a commerce environment. Recommender systems have proved their usefulness through several methodologies and experiments conducted in many different fields since the mid-1990s. Most research on recommender systems to date has taken products or information in internet or mobile contexts as its object, but there is not enough research on recommending a suitable store to customers in a ubiquitous environment. In a ubiquitous environment it is possible to track customers' behavior, in the same way as in an online market space, even when customers are purchasing in an offline marketplace. Unlike in the existing internet space, interest is increasing in the ubiquitous environment in stores that provide information according to the traffic lines of customers. In other words, the same product can be purchased in several different stores, and the preferred store can differ between customers according to personal preferences such as the traffic line between stores, location, atmosphere, quality, and price. Krulwich (1997) developed Lifestyle Finder, which recommends a product and a store by using the demographic and purchasing information generated in internet commerce. Fano (1998) created Shopper's Eye, an information-providing system.
Shopper's Eye shows information about the store closest to the customer's present location when the customer sends a to-buy list. Sadeh (2003) developed MyCampus, which recommends appropriate information and stores in accordance with the schedule saved on a customer's mobile device. Moreover, Keegan and O'Hare (2004) came up with EasiShop, which provides suitable store information, including price, after-sales service, and accessibility, after analyzing the to-buy list and the current location of customers. However, Krulwich (1997) does not consider the characteristics of physical space, being based on the online commerce context, and Keegan and O'Hare (2004) only provide information about stores related to a product, while Fano (1998) does not fully consider the relationship between the preference toward stores and the stores themselves. The most recent research, by Sadeh (2003), experimented on a campus with a recommender system that reflects situation and preference information as well as the characteristics of the physical space. Yet there is a potential problem, since these studies are based on customers' location and preference information, which is connected to the invasion of privacy. The primary point of controversy is the invasion of privacy and personal information in a ubiquitous environment, as discussed by Al-Muhtadi (2002), Beresford and Stajano (2003), and Ren (2006). Additionally, individuals want to remain anonymous to protect their own personal information, as mentioned in Srivastava (2000). Therefore, in this paper, we suggest a methodology to recommend stores in a U-market, on the basis of the ubiquitous environment, without using personal information, in order to protect individual information and privacy. The main idea behind our suggested methodology is based on the Feature Matrices model (FM model; Shahabi and Banaei-Kashani, 2003), which uses clusters of customers' similar transaction data and is similar to collaborative filtering. Unlike collaborative filtering, however, this methodology overcomes the problems of personal information and privacy since it does not know exactly who the customer is. The methodology is compared with a single-trait model (vector model) based on visitor logs, to examine the actual improvement of the recommendation when context information is used. It is not easy to find real U-market data, so we experimented with factual data from a real department store with context information. The recommendation procedure for the U-market proposed in this paper is divided into four major phases. The first phase is collecting and preprocessing data for analysis of customers' shopping patterns; the traits of shopping patterns are expressed as feature matrices of N dimensions. In the second phase, similar shopping patterns are grouped into clusters and the representative pattern of each cluster is derived; the distance between shopping patterns is calculated by the Projected Pure Euclidean Distance (Shahabi and Banaei-Kashani, 2003). The third phase finds a representative pattern that is similar to the target customer, while the shopping information of the customer is traced and saved dynamically. Fourth, the next store is recommended based on the physical distance between the stores of the representative pattern and the present location of the target customer. In this research, we evaluated the accuracy of the recommendation method based on factual data derived from a department store.
Because of the technological difficulty of real-time tracking, we extracted purchasing-related information and added context information to each transaction. As a result, recommendation based on the FM model, which applies purchasing and context information, is more stable and accurate than that of the vector model. Additionally, the recommendation results became more precise as more shopping information was accumulated. Realistically, because a full ubiquitous environment has not yet been realized, we were not able to reflect all kinds of context, but more explicit analysis is expected to become attainable once a practical system is implemented.
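
A toy sketch of the four-phase procedure, under heavy simplifying assumptions: the shopping-pattern feature matrices are reduced to plain visit-count vectors, the Projected Pure Euclidean Distance is replaced by ordinary Euclidean distance, and the data (patterns, store coordinates) are synthetic. It is meant only to illustrate the flow of clustering, matching an anonymous target customer to a representative pattern, and recommending the physically nearest store.

```python
import numpy as np
from sklearn.cluster import KMeans

# Phases 1-2: cluster customers' shopping-pattern vectors (rows = customers,
# columns = visit frequency per store) and take cluster centroids as the
# representative patterns.  All data below is purely illustrative.
patterns = np.random.default_rng(0).poisson(1.0, size=(200, 6)).astype(float)
store_xy = np.array([[0, 0], [1, 0], [2, 1], [0, 2], [3, 3], [1, 3]], dtype=float)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(patterns)
representatives = km.cluster_centers_

def recommend_next_store(partial_pattern, current_xy, visited):
    # Phase 3: match the (anonymous) target customer's partial pattern to the
    # closest representative pattern by Euclidean distance.
    d = np.linalg.norm(representatives - partial_pattern, axis=1)
    rep = representatives[np.argmin(d)]
    # Phase 4: among stores that figure prominently in that representative
    # pattern, recommend the one physically closest to the customer.
    favoured = [s for s in range(len(rep)) if rep[s] >= rep.mean() and s not in visited]
    favoured = favoured or [s for s in range(len(rep)) if s not in visited]
    dists = np.linalg.norm(store_xy[favoured] - current_xy, axis=1)
    return favoured[int(np.argmin(dists))]

print(recommend_next_store(np.array([2, 0, 1, 0, 0, 0], float), np.array([0.5, 0.5]), {0}))
```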

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently, AlphaGo, the Baduk (Go) artificial intelligence program developed by Google DeepMind, won a landmark victory against Lee Sedol. Many people had thought that machines would not be able to beat a human at Go because, unlike chess, the number of possible paths of play is greater than the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in the image recognition field. It also performs well on high-dimensional data such as voice, images, and natural language, where it was difficult to obtain good performance using existing machine learning techniques. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we tried to find out whether the deep learning techniques studied so far can be used not only for the recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of the deep learning techniques with that of traditional artificial neural network models. The experimental data in the paper are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable that records whether the customer intends to open an account or not. To evaluate the applicability of deep learning algorithms and techniques to binary classification problems, we compared the performance of various models using the CNN and LSTM algorithms and dropout, which are widely used algorithms and techniques in deep learning, with that of MLP models, a traditional artificial neural network model. However, since all network design alternatives cannot be tested, given the nature of artificial neural networks, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons in the hidden layers, the number of output filters, and the application conditions of the dropout technique. The F1 score was used instead of overall accuracy to evaluate how well the models classify the class of interest. The detailed methods for applying each deep learning technique in the experiment are as follows. The CNN algorithm reads adjacent values around a given value and recognizes features, but the distance between business data fields is not meaningful because each field is usually independent. In this experiment, we therefore set the filter size of the CNN to the number of fields, so that the whole characteristics of a record are learned at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer is reversed relative to the first layer in order to reduce the influence of the position of each field.
For the dropout technique, we set neurons to be dropped with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using the dropout technique, and the next best model was the MLP model with two hidden layers using the dropout technique. From the experiment we obtained several findings. First, models using dropout make slightly more conservative predictions than those without it, and they generally show better classification performance. Second, CNN models show better classification performance than MLP models. This is interesting because CNN showed good performance in binary classification problems, to which it has rarely been applied, as well as in the fields where its effectiveness has already been proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because the training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
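
A minimal sketch, assuming the telemarketing records have already been encoded into fixed-length numeric vectors, of the CNN-plus-dropout configuration the abstract describes: a Conv1D filter whose kernel spans all fields at once, one additional dense hidden layer, dropout with probability 0.5, and F1-based evaluation. Layer widths, the optimizer, and the synthetic data are placeholders, not the authors' settings.

```python
import numpy as np
import tensorflow as tf

n_fields = 16  # placeholder: number of encoded input variables

# A Conv1D kernel spanning all fields reads the whole record at once, matching
# the abstract's choice of filter size; a dense hidden layer and dropout
# (p = 0.5) follow before the sigmoid output.
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(filters=32, kernel_size=n_fields, activation="relu",
                           input_shape=(n_fields, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Toy data standing in for the encoded telemarketing records.
X = np.random.rand(512, n_fields, 1).astype("float32")
y = np.random.randint(0, 2, size=(512, 1))
model.fit(X, y, epochs=2, batch_size=64, verbose=0)

# F1 score, thresholding the predicted probabilities at 0.5.
pred = (model.predict(X, verbose=0) > 0.5).astype(int)
tp = int(np.sum((pred == 1) & (y == 1)))
precision = tp / max(int(pred.sum()), 1)
recall = tp / max(int(y.sum()), 1)
f1 = 2 * precision * recall / max(precision + recall, 1e-9)
print(f"F1 = {f1:.3f}")
```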

Comparison and evaluation between 3D-bolus and step-bolus, the assistive radiotherapy devices for the patients who had undergone modified radical mastectomy surgery (변형 근치적 유방절제술 시행 환자의 방사선 치료 시 3D-bolus와 step-bolus의 비교 평가)

  • Jang, Wonseok;Park, Kwangwoo;Shin, Dongbong;Kim, Jongdae;Kim, Seijoon;Ha, Jinsook;Jeon, Mijin;Cho, Yoonjin;Jung, Inho
    • The Journal of Korean Society for Radiation Therapy / v.28 no.1 / pp.7-16 / 2016
  • Purpose : This study aimed to compare and evaluate the efficiency of two devices, a 3D-bolus and a step-bolus, when used for electron beam treatment of the chest wall in patients who had undergone modified radical mastectomy (MRM). Materials and Methods : A reverse hockey stick treatment plan using photon and electron beams was set up for six breast cancer patients, who were selected as the subjects of this study. The prescribed electron beam dose to the anterior chest wall was 180 cGy per treatment, and both the 3D-bolus, produced using a 3D printer (CubeX, 3D Systems, USA), and a self-made conventional step-bolus were used. The surface dose under the 3D-bolus and the step-bolus was measured at five spots (iso-center, lateral, medial, superior, and inferior) using GAFCHROMIC EBT3 film (International Specialty Products, USA), and the measured doses at the five spots were compared and analyzed. Treatment plans assuming the 3D-bolus and the step-bolus were also devised, and the planning results were compared. Results : The average surface dose was 179.17 cGy with the 3D-bolus and 172.02 cGy with the step-bolus. The average error rate against the prescribed dose of 180 cGy was -0.47% with the 3D-bolus and -4.43% with the step-bolus. The maximum error rate at the iso-center was 2.69% with the 3D-bolus and 5.54% with the step-bolus. The maximum discrepancy in treatment accuracy was about 6% with the step-bolus and about 3% with the 3D-bolus. The difference in the average target dose on the chest wall between the 3D-bolus and step-bolus treatment plans was insignificant, at only 0.3%. However, regarding the average dose to the lung and heart, the 3D-bolus plan decreased the dose by 11% for the lung and by 8% for the heart compared to the step-bolus plan. Conclusion : This study confirmed that dose uniformity can be improved more with the 3D-bolus than with the step-bolus, because the 3D-bolus, produced to match the skin surface of the chest wall, attaches to the patient's skin more closely and guarantees the chest wall thickness more accurately. The 3D-bolus device is considered clinically valuable because it reduces the dose to adjacent organs and protects normal tissues without reducing the dose to the chest wall.
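
For clarity, the reported average error rates follow directly from the measured mean surface doses and the 180 cGy prescription; using the rounded values quoted above,

\[ \text{3D-bolus: } \frac{179.17-180}{180}\times 100 \approx -0.46\%, \qquad \text{step-bolus: } \frac{172.02-180}{180}\times 100 \approx -4.43\%, \]

with the small difference from the reported -0.47% for the 3D-bolus presumably reflecting rounding of the mean dose.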


Studies on the analysis of phytin by the Chelatometric method (Chelate 법(法)에 의(依)한 Phytin 분석(分析)에 관(關)한 연구(硏究))

  • Shin, Jai-Doo
    • Applied Biological Chemistry / v.10 / pp.1-13 / 1968
  • Phytin is a salt (mainly of calcium and magnesium) of phytic acid, and its purity and molecular formula can be determined by assaying the contents of phosphorus, calcium, and magnesium in phytin. In order to devise a new method for the quantitative analysis of the three elements in phytin, the following chelatometric method was developed. 1) As the pretreatment for phytin analysis, the sample was ashed at $550{\sim}600^{\circ}C$ in the presence of concentrated nitric acid. This dry process is more accurate than the wet process. 2) Phosphorus, calcium, and magnesium were analyzed by the conventional method and by the new method described here for the phytin sample decomposed by the dry process. The ashed phytin solution in hydrochloric acid was partitioned into cation and anion fractions by means of a cation exchange resin. A portion of the cation fraction was adjusted to pH 7.0, then readjusted to pH 10 and titrated with standard EDTA solution using the BT (Eriochrome Black T) indicator to obtain the combined value of calcium and magnesium. Another portion of the cation fraction was brought to pH 7.0, a small volume of standard EDTA solution was added, the pH was adjusted to $12{\sim}13$ with 8 N KOH, and the solution was titrated with standard EDTA solution in the presence of the N-N [2-hydroxy-1-(2-hydroxy-4-sulfo-1-naphthylazo)-3-naphthoic acid] diluted powder indicator to obtain the calcium content. The magnesium content was calculated from the difference between the two values. From the anion fraction the magnesium ammonium phosphate precipitate was obtained. The precipitate was dissolved in hydrochloric acid, standard EDTA solution was added, and the solution was adjusted to pH 7.0, readjusted to pH 10.0 with a buffer solution, and titrated with a standard magnesium sulfate solution in the presence of the BT indicator to obtain the phosphorus content. The analytical values for phosphorus, calcium, and magnesium were 98.9%, 97.1%, and 99.1%, respectively, relative to the theoretical values for the formula $C_6H_6O_{24}P_6Mg_4CaNa_2{\cdot}5H_2O$. Statistical analysis indicated good agreement between the theoretical and experimental values. By contrast, the values observed for the three elements by the conventional method were 92.4%, 86.8%, and 93.8%, respectively, revealing a remarkable difference from the theoretical values. 3) When sodium phytate was admixed with starch and subjected to analysis of phosphorus, calcium, and magnesium by the chelatometric method, their recovery was almost 100%. 4) In order to confirm the accuracy of this method, phytic acid was reacted with calcium chloride and magnesium chloride in the molar ratio of phytic acid : calcium chloride : magnesium chloride = 1 : 5 : 20 to obtain sodium phytate containing one calcium atom and four magnesium atoms per molecule of sodium phytate. The analytical data for phosphorus, calcium, and magnesium were coincident with those determined by the aforementioned method. The new method, employing the dry process, ion exchange resin, and chelatometric assay of phosphorus, calcium, and magnesium, is considered accurate and rapid for the determination of phytin.
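
A sketch of the back-titration arithmetic behind the phosphorus determination described above, assuming the usual 1:1 Mg:P stoichiometry of magnesium ammonium phosphate and 1:1 complexation of Mg by EDTA (the symbols and the worked relation are illustrative, not taken from the paper):

\[ n_{\mathrm{P}} = n_{\mathrm{Mg,ppt}} = c_{\mathrm{EDTA}}V_{\mathrm{EDTA}} - c_{\mathrm{MgSO_4}}V_{\mathrm{MgSO_4}}, \]

and, in the cation fraction, the magnesium content follows by difference, $n_{\mathrm{Mg}} = n_{\mathrm{Ca+Mg}} - n_{\mathrm{Ca}}$.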


The Implementation of a HACCP System through u-HACCP Application and the Verification of Microbial Quality Improvement in a Small Size Restaurant (소규모 외식업체용 IP-USN을 활용한 HACCP 시스템 적용 및 유효성 검증)

  • Lim, Tae-Hyeon;Choi, Jung-Hwa;Kang, Young-Jae;Kwak, Tong-Kyung
    • Journal of the Korean Society of Food Science and Nutrition / v.42 no.3 / pp.464-477 / 2013
  • There is a great need to develop training programs proven to change behavior and improve knowledge. The purpose of this study was to evaluate employee hygiene knowledge, hygiene practice, and cleanliness before and after HACCP system implementation at one small-size restaurant. The efficiency of the system was analyzed using time-temperature control after implementation of u-HACCP®. Employee hygiene knowledge and practices showed a significant improvement (p<0.05) after HACCP system implementation. In non-heating processes, such as seasoned lettuce, controlling the sanitation of the cooking facility and the chlorination of raw ingredients were identified as the significant CCPs. Sanitizing was an important CCP because total bacteria were reduced by 2~4 log CFU/g after implementation of HACCP. In bean sprouts, microbial levels decreased from 4.20 log CFU/g to 3.26 log CFU/g. There were significant correlations between hygiene knowledge, practice, and microbiological contamination. First, personnel hygiene had a significant correlation with 'total food hygiene knowledge' scores (p<0.05). Second, total food hygiene practice scores had a significant correlation (p<0.05) with improved microbiological quality of the lettuce salad. Third, in the assessment of microbiological quality after 1 month, there were significant (p<0.05) improvements in heating times and in the washing and division processes; after 2 months, microbiological quality was maintained, although only two categories (the division process and the kitchen floor) were further improved. This study also investigated time-temperature control using a ubiquitous sensor network (USN) consisting of a ubi reader (CCP thermometer), a ubi manager (tablet PC), and application software (HACCP monitoring system). Temperature control before and after USN adoption showed better thermal management (accuracy, efficiency, and consistency of time control) with the USN. Based on these results, strict time-temperature control could be an effective method to prevent foodborne illness.
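
The bean sprout result quoted above corresponds to a log reduction of roughly one order of magnitude; restating the abstract's figures in the usual log-reduction form,

\[ \Delta = \log_{10} N_{\mathrm{before}} - \log_{10} N_{\mathrm{after}} = 4.20 - 3.26 = 0.94\ \log\,\mathrm{CFU/g}. \]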

A Comparative Study of Subset Construction Methods in OSEM Algorithms using Simulated Projection Data of Compton Camera (모사된 컴프턴 카메라 투사데이터의 재구성을 위한 OSEM 알고리즘의 부분집합 구성법 비교 연구)

  • Kim, Soo-Mee;Lee, Jae-Sung;Lee, Mi-No;Lee, Ju-Hahn;Kim, Joong-Hyun;Kim, Chan-Hyeong;Lee, Chun-Sik;Lee, Dong-Soo;Lee, Soo-Jin
    • Nuclear Medicine and Molecular Imaging / v.41 no.3 / pp.234-240 / 2007
  • Purpose: In this study we propose a block-iterative method for reconstructing Compton scattered data. This study shows that the well-known expectation maximization (EM) approach, along with its accelerated version based on the ordered subsets principle, can be applied to the problem of image reconstruction for a Compton camera. This study also compares several methods of constructing subsets for optimal performance of our algorithms. Materials and Methods: Three reconstruction algorithms were implemented: simple backprojection (SBP), EM, and ordered subset EM (OSEM). For OSEM, the projection data were grouped into subsets in a predefined order. Three different schemes for choosing nonoverlapping subsets were considered: scatter angle-based subsets, detector position-based subsets, and subsets based on both scatter angle and detector position. EM and OSEM with 16 subsets were performed with 64 and 4 iterations, respectively. The performance of each algorithm was evaluated in terms of computation time and normalized mean-squared error. Results: Both EM and OSEM clearly outperformed SBP in all aspects of accuracy. OSEM with 16 subsets and 4 iterations, which is equivalent to the standard EM with 64 iterations, was approximately 14 times faster in computation time than the standard EM. In OSEM, all three schemes for choosing subsets yielded similar results in computation time as well as normalized mean-squared error. Conclusion: Our results show that the OSEM algorithm, which has proven useful in emission tomography, can also be applied to the problem of image reconstruction for a Compton camera. With properly chosen subset construction methods and a moderate number of subsets, our OSEM algorithm significantly improves computational efficiency while keeping the original quality of the standard EM reconstruction. The OSEM algorithm with subsets based on both scatter angle and detector position appears to be the most suitable.
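
A minimal sketch of the ordered-subsets EM update the paper applies, assuming a generic precomputed system matrix rather than an actual Compton-camera conic projection model; the even index split below stands in for the scatter-angle or detector-position based subset schemes that the paper compares.

```python
import numpy as np

def osem(A, y, n_subsets=16, n_iter=4, eps=1e-12):
    """Ordered-subsets EM for y ~ Poisson(A @ x).

    A: (n_meas, n_vox) system matrix, y: (n_meas,) measured counts.
    Rows of A are grouped into `n_subsets` ordered subsets; one pass over all
    subsets counts as one iteration, so 16 subsets x 4 iterations is roughly
    equivalent in work to standard EM with 64 iterations.
    """
    n_meas, n_vox = A.shape
    x = np.ones(n_vox)                        # uniform initial image
    subsets = np.array_split(np.arange(n_meas), n_subsets)
    for _ in range(n_iter):
        for idx in subsets:
            As, ys = A[idx], y[idx]
            sens = As.sum(axis=0)             # subset sensitivity image
            ratio = ys / np.maximum(As @ x, eps)
            x *= (As.T @ ratio) / np.maximum(sens, eps)
    return x

# Toy example: random nonnegative system matrix and phantom.
rng = np.random.default_rng(0)
A = rng.random((1024, 64))
x_true = rng.random(64)
y = rng.poisson(A @ x_true)
x_rec = osem(A, y)
```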