• Title/Summary/Keyword: risk process


A Study on the Significance of Patient Discharge Dose and Treatment-Ward Contamination According to Renal Function and Various Factors in I-131 Therapy (I-131 치료시 환자의 신장기능과 다양한 요인으로 의한 퇴원선량 및 치료병실 오염도의 유의성에 관한 연구)

  • Im, Kwang Seok;Choi, Hak Gi;Lee, Gi Hyun
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.17 no.1
    • /
    • pp.62-66
    • /
    • 2013
  • Purpose: I-131 is a radioisotope widely used in the treatment of thyroid disease. It has a physical half-life of 8.01 days and emits both beta and gamma rays, which are used in clinical practice for treatment and imaging, respectively. To reduce the recurrence rate after surgery in high-risk thyroid cancer patients, I-131 is used either to ablate the remaining thyroid tissue or to treat relapse. When a high dose of radioactive iodine requiring hospitalization is used, the patient receives the dose in a hospital isolation ward and stays there for a certain period to prevent exposing others to I-131. The radiation emitted from each patient is measured before discharge, and patients are discharged only after confirming that they meet the legal standard (50 uSv/h). After a patient is discharged, the contamination level at many points in the ward is checked before the next patient is admitted, and decontamination is performed when necessary. Health workers are exposed to radiation while measuring ward contamination and the dose emitted from patients at discharge, and this process is a main source of their occupational exposure. This study analyzed the correlation of patients' discharge dose and ward contamination level with a variety of factors (renal function, gender, age, dosage, etc.). Materials and Methods: The study was conducted on 151 patients who received high-dose radioactive iodine treatment at Soon Chun Hyang University Hospital between 8/1/2011 and 5/31/2012 (male:female = 31:120, age $47.5{\pm}11.9$ years, average dosage $138{\pm}22.4$ mCi).
The correlations of the patient discharge dose and of the ward contamination level (beds, floors, bathroom floors, and washbasins) with renal function (GFR), age, gender, dosage, and with Tg and Tg-Tb, which are expected to reflect the remaining thyroid tissue, were analyzed. Results: Discharge dose was negatively correlated with GFR; the higher the GFR, the lower the discharge dose (p < 0.0001). When the group dosed with over 150 mCi was compared with the lower-dosage group, the lower-dosage group showed a significantly lower discharge dose ($24{\pm}10.4uSv/h$ vs $28.7{\pm}11.8uSv/h$, p < 0.05). Age, gender, Tg, and Tg-Tb showed no significant relationship with discharge dose (p > 0.05). Contamination levels at each spot in the treatment ward showed no significant relationship with GFR, Tg, Tg-Tb, age, gender, or dosage (p > 0.05). Conclusion: This study shows that the discharge dose is lower in patients with higher GFR and in patients dosed under 150 mCi. Ward contamination showed no association with dosage or renal function, which suggests that it is affected by patients' habits in the ward or by a variety of other factors.
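The statistics reported above (a correlation between discharge dose and GFR, and a comparison of two group means) can be sketched in Python. The functions below are standard textbook formulas, and the sample data are illustrative assumptions, not the study's actual measurements or code.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def welch_t(x, y):
    """Welch's t statistic for comparing two independent group means."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((a - mx) ** 2 for a in x) / (nx - 1)
    vy = sum((b - my) ** 2 for b in y) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

# Illustrative (not the study's) data: discharge dose falls as GFR rises.
gfr = [55, 70, 85, 100, 115]   # mL/min/1.73 m^2
dose = [38, 33, 27, 22, 18]    # uSv/h at discharge

print(round(pearson_r(gfr, dose), 3))  # -0.998: strongly negative on this toy data
```

A negative `pearson_r` corresponds to the reported finding that higher GFR goes with lower discharge dose; `welch_t` would be used the same way for the over/under 150 mCi group comparison.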


The Roles of Service Failure and Recovery Satisfaction in Customer-Firm Relationship Restoration : Focusing on Carry-over effect and Dynamics among Customer Affection, Customer Trust and Loyalty Intention Before and After the Events (서비스실패의 심각성과 복구만족이 고객-기업 관계회복에 미치는 영향 : 실패이전과 복구이후 고객애정, 고객신뢰, 충성의도의 이월효과 및 역학관계 비교를 중심으로)

  • La, Sun-A
    • Journal of Distribution Research
    • /
    • v.17 no.1
    • /
    • pp.1-36
    • /
    • 2012
  • Service failure is one of the major reasons for customer defection. As the business environment gets tougher and more competitive, a single service failure can bring fatal consequences to a service provider or a firm. Sometimes a failure does not end with an unsatisfied customer's simple complaint but with widespread animosity against the service provider or the firm, threatening the firm's very survival. Therefore, a comprehensive understanding of complainants' attitudes and behaviors toward service failures and firms' recovery efforts is needed. Even when a failure itself cannot be fixed completely, marketers should repair the minds and hearts of unsatisfied customers, which can be regarded as a successful recovery strategy in the end. Among the outcomes of recovery efforts exerted by service providers or firms, recovery of the customer-firm relationship should be placed at the top of the recovery goal list. With these motivations, the study investigates how service failure and recovery change the dynamics of fundamental elements of the customer-firm relationship, such as customer affection, customer trust, and loyalty intention, by comparing two time points, before the service failure and after the recovery, focusing on the effects of recovery satisfaction and failure severity. We adopted La & Choi (2012)'s framework to develop the research model, which was based on the previous research stream of Yim et al. (2008) and Thomson et al. (2005). The pivotal background theories of the model come mainly from relationship marketing and the social psychology of social relationships; for example, theories of love, emotional attachment, intimacy, and equity in human relationships were reviewed. The results show that when recovery satisfaction is high, the customer affection and customer trust established before the service failure are carried over to the period after the recovery.
However, when recovery satisfaction is low, the customer-firm relationship established in the past is not carried over but breaks up. Regardless of the degree of recovery satisfaction, once a failure occurs, loyalty intention is not carried over to the future and the impact of customer trust on loyalty intention becomes stronger. Such changes imply that customers become more prudent and more risk-averse than before the service failure. The impact of failure severity on customer affection and customer trust matters only when recovery satisfaction is low; when recovery satisfaction is high, customer affection and customer trust become severity-proof. Interestingly, regardless of the degree of recovery satisfaction, failure severity has a significant negative influence on loyalty intention. Loyalty intention is the most fragile target when a service failure occurs, no matter how critical the failure is. Consequently, the ultimate goal of service recovery should be the restoration of the customer-firm relationship, and recovery of customer trust should be the primary objective of a successful recovery performance. Especially when failure severity is high, the recovery should be perceived as highly satisfactory by complainants, because severity matters more when recovery satisfaction is low. Marketers can implement recovery strategies that enhance emotional appeal as well as fair treatment, since both affection and trust significantly affect loyalty intention. In cases of high failure severity, recovery efforts should be exerted to exceed customer expectations, designed to repair customer trust directly, and carefully focused on customer-firm communications during the interactional recovery process so as to rebuild customer trust indirectly.
Because rebuilding the customer-firm relationship is a longer and harder process in high-severity cases, low recovery satisfaction cannot guarantee customer retention. To prevent customer defection after a service failure of high severity, unexpected rewards as part of the recovery are likely to be useful, since they can lead to customer delight or customer gratitude toward the service firm. Based on the results of the analyses, theoretical and managerial implications are presented. Limitations and future research ideas are also discussed.


Analysis of the Causes of Subfrontal Recurrence in Medulloblastoma and Its Salvage Treatment (수모세포종의 방사선치료 후 전두엽하방 재발된 환자에서 원인 분석 및 구제 치료)

  • Cho Jae Ho;Koom Woong Sub;Lee Chang Geol;Kim Kyoung Ju;Shim Su Jung;Bak Jino;Jeong Kyoungkeun;Kim Tae-Gon;Kim Dong Seok;Choi oong-Uhn;Suh Chang Ok
    • Radiation Oncology Journal
    • /
    • v.22 no.3
    • /
    • pp.165-176
    • /
    • 2004
  • Purpose: First, to analyze factors in the radiation treatment that might have caused subfrontal relapse in two patients treated with craniospinal irradiation (CSI) for medulloblastoma; second, to explore an effective salvage treatment for these relapses. Materials and Methods: Two patients with high-risk disease (T3bM1, T3bM3) were treated with combined chemoradiotherapy. CT-simulation-based radiation treatment planning (RTP) was performed. One patient, who relapsed 16 months after CSI, was treated with salvage surgery followed by 30.6 Gy of intensity-modulated radiotherapy (IMRT). The other patient, whose tumor relapsed 12 months after CSI, was treated by surgery alone for the recurrence. To investigate factors that might have caused the subfrontal relapse, we thoroughly reviewed the charts and the treatment planning process, including portal films, and sought a method to help place blocks appropriately between the subfrontal-cribriform plate region and both eyes. To salvage the subfrontal relapse in one patient, re-irradiation was planned after subtotal tumor removal. We decided to treat this patient with IMRT because of the proximity of critical normal tissues and the large burden of re-irradiation. With seven beam directions, the prescribed mean dose to the PTV was 30.6 Gy (1.8 Gy per fraction), and the doses to the optic nerves and eyes were limited to 25 Gy and 10 Gy, respectively. Results: Review of the radiotherapy portals clearly indicated that in both cases the subfrontal-cribriform plate region was excluded from the therapy beam by the eye blocks, resulting in a cold spot within the target volume. When the whole brain was rendered in 3-D after organ drawing on each slice, it was easier to judge the appropriateness of the blocks on the port films. IMRT planning showed excellent dose distributions (mean doses to the PTV, right and left optic nerves, and right and left eyes: 31.1 Gy, 14.7 Gy, 13.9 Gy, 6.9 Gy, and 5.5 Gy, respectively.
Maximum dose to PTV: 36 Gy). The patient who received IMRT is still alive with no evidence of recurrence or neurologic complications at 1 year. Conclusion: To prevent recurrence of medulloblastoma in the subfrontal-cribriform plate region, close attention must be paid to the placement of eye blocks during treatment. Once subfrontal recurrence has occurred, IMRT may be a good choice for re-irradiation as a salvage treatment, maximizing the difference in dose distribution between normal tissues and the target volume.
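A plan check of the kind described above (organ-at-risk mean doses compared against the 25 Gy optic-nerve and 10 Gy eye limits) can be sketched as a simple constraint test. The organ names, dictionary layout, and `violations` helper are illustrative assumptions, not part of the study or of any planning system's API.

```python
# Constraint limits quoted in the abstract (Gy); keys are substrings
# matched against organ names below.
constraints = {"optic_nerve": 25.0, "eye": 10.0}

# Mean doses from the reported IMRT plan (Gy).
planned = {
    "PTV": 31.1,
    "right_optic_nerve": 14.7,
    "left_optic_nerve": 13.9,
    "right_eye": 6.9,
    "left_eye": 5.5,
}

def violations(planned, constraints):
    """Return organs whose planned mean dose exceeds a matching limit."""
    out = []
    for organ, dose in planned.items():
        for key, limit in constraints.items():
            if key in organ and dose > limit:
                out.append(organ)
    return out

print(violations(planned, constraints))  # []: the reported plan meets both limits
```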

APPLICATION OF FUZZY SET THEORY IN SAFEGUARDS

  • Fattah, A.;Nishiwaki, Y.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1051-1054
    • /
    • 1993
  • The International Atomic Energy Agency's Statute, in Article III.A.5, allows it “to establish and administer safeguards designed to ensure that special fissionable and other materials, services, equipment, facilities and information made available by the Agency or at its request or under its supervision or control are not used in such a way as to further any military purpose; and to apply safeguards, at the request of the parties, to any bilateral or multilateral arrangement, or at the request of a State, to any of that State's activities in the field of atomic energy”. Safeguards are essentially a technical means of verifying the fulfilment of political obligations undertaken by States and given legal force in international agreements relating to the peaceful uses of nuclear energy. The main political objectives are: to assure the international community that States are complying with their non-proliferation and other peaceful undertakings; and to deter (a) the diversion of safeguarded nuclear materials to the production of nuclear explosives or for military purposes and (b) the misuse of safeguarded facilities with the aim of producing unsafeguarded nuclear material. It is clear that no international safeguards system can physically prevent diversion. The IAEA safeguards system is basically a verification measure designed to provide assurance in those cases in which diversion has not occurred. Verification is accomplished by two basic means: material accountancy, and containment and surveillance measures. Nuclear material accountancy is the fundamental IAEA safeguards mechanism, while containment and surveillance serve as important complementary measures. Material accountancy refers to a collection of measurements and other determinations which enable the State and the Agency to maintain a current picture of the location and movement of nuclear material into and out of material balance areas, i.e. areas where all material entering or leaving is measurable.
A containment measure is one designed to take advantage of structural characteristics, such as containers, tanks, or pipes, to establish the physical integrity of an area or item by preventing the undetected movement of nuclear material or equipment. Such measures involve the application of tamper-indicating or surveillance devices. Surveillance refers to both human and instrumental observation aimed at indicating the movement of nuclear material. The verification process consists of three overlapping elements: (a) provision by the State of information such as design information describing nuclear installations; accounting reports listing nuclear material inventories, receipts and shipments; documents amplifying and clarifying reports, as applicable; and notification of international transfers of nuclear material; (b) collection by the IAEA of information through inspection activities such as verification of design information, examination of records and reports, measurement of nuclear material, examination of containment and surveillance measures, and follow-up activities in case of unusual findings; and (c) evaluation of the information provided by the State and of that collected by inspectors, to determine the completeness, accuracy and validity of the information provided by the State and to resolve any anomalies and discrepancies. To design an effective verification system, one must identify possible ways and means by which nuclear material could be diverted from peaceful uses, including means to conceal such diversions. These theoretical ways and means, which have become known as diversion strategies, are used as one of the basic inputs for the development of safeguards procedures, equipment and instrumentation.
For the analysis of implementation strategy, it is assumed that non-compliance cannot be excluded a priori and that consequently there is a low but non-zero probability that a diversion could be attempted in all safeguards situations. An important element of diversion strategies is the identification of various possible diversion paths: the amount, type and location of nuclear material involved, the physical route and any conversion of the material that may take place, the rate of removal, and concealment methods, as appropriate. With regard to the physical route and conversion of nuclear material, the following main categories may be considered: unreported removal of nuclear material from an installation or during transit; unreported introduction of nuclear material into an installation; unreported transfer of nuclear material from one material balance area to another; unreported production of nuclear material, e.g. enrichment of uranium or production of plutonium; and undeclared uses of the material within the installation. With respect to the amount of nuclear material that might be diverted in a given time (the diversion rate), the continuum between the following two limiting cases is considered: one significant quantity or more in a short time, often known as abrupt diversion; and one significant quantity or more per year, for example by accumulating smaller amounts each time to add up to a significant quantity over a period of one year, often called protracted diversion. Concealment methods may include: restriction of inspectors' access; falsification of records, reports and other documents; replacement of nuclear material, e.g. use of dummy objects; falsification of measurements or of their evaluation; and interference with IAEA-installed equipment. As a result of diversion and its concealment or other actions, anomalies will occur.
All reasonable diversion routes, scenarios/strategies and concealment methods have to be taken into account in designing safeguards implementation strategies, so as to provide sufficient opportunities for the IAEA to observe such anomalies. The safeguards approach for each facility will make a different use of these procedures, equipment and instrumentation, according to the various diversion strategies which could be applicable to that facility and according to the detection and inspection goals which are applied. Postulated pathways, i.e. sets of scenarios, comprise those elements of diversion strategies which might be carried out at a facility or across a State's fuel cycle with declared or undeclared activities. All such factors, however, contain a degree of fuzziness that requires human judgment to reach the ultimate conclusion that all material is being used for peaceful purposes. Safeguards have traditionally been based on verification of declared material and facilities, using material accountancy as a fundamental measure. The strength of material accountancy lies in the fact that it allows the detection of any diversion independent of the diversion route taken. Material accountancy detects a diversion after it has actually happened; it is thus powerless to physically prevent a diversion and can only deter, through the risk of early detection, any contemplation by State authorities of carrying one out. Recently the IAEA has been faced with new challenges. To deal with these, various measures are being considered to strengthen the safeguards system, such as enhanced assessment of the completeness of the State's initial declaration of nuclear material and installations under its jurisdiction, and enhanced monitoring and analysis of open information that may indicate inconsistencies with the State's safeguards obligations.
Precise information vital for such enhanced assessments and analyses is normally not available or, if available, would be difficult and expensive to collect. Above all, a realistic appraisal of the truth requires sound human judgment.
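Material accountancy as described rests on closing a material balance over each material balance area. A minimal sketch, with hypothetical figures and a simplified 3-sigma anomaly test (real safeguards evaluation is considerably more elaborate):

```python
def muf(begin_inv, receipts, shipments, end_inv):
    """Material Unaccounted For over one balance period:
    (beginning inventory + receipts) - (shipments + ending inventory)."""
    return (begin_inv + receipts) - (shipments + end_inv)

def anomaly(muf_value, sigma, k=3.0):
    """Flag a balance whose MUF exceeds k standard deviations of the
    combined measurement uncertainty (a simplified significance test)."""
    return abs(muf_value) > k * sigma

# Hypothetical balance for one material balance area (values in kg of uranium):
m = muf(begin_inv=1200.0, receipts=300.0, shipments=250.0, end_inv=1248.5)
print(m)                      # 1.5 kg unaccounted for
print(anomaly(m, sigma=1.0))  # False: within 3 sigma of measurement uncertainty
```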


The Framework of Research Network and Performance Evaluation on Personal Information Security: Social Network Analysis Perspective (개인정보보호 분야의 연구자 네트워크와 성과 평가 프레임워크: 소셜 네트워크 분석을 중심으로)

  • Kim, Minsu;Choi, Jaewon;Kim, Hyun Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.177-193
    • /
    • 2014
  • Over the past decade, there has been a rapid diffusion of electronic commerce and a rising number of interconnected networks, resulting in an escalation of security threats and privacy concerns. Electronic commerce has a built-in trade-off between the necessity of providing at least some personal information to consummate an online transaction and the risk of negative consequences from providing such information. More recently, the frequent disclosure of private information has raised concerns about privacy and its impacts. This has motivated researchers in various fields to explore information privacy issues to address these concerns. Accordingly, the necessity for information privacy policies and technologies for collecting and storing data has increased, as has information privacy research in fields such as medicine, computer science, business, and statistics. The occurrence of various information security accidents has made finding experts in the information security field an important issue. Objective measures for finding such experts are required, as the current process is rather subjective. Based on social network analysis, this paper proposes a framework for evaluating the process of finding experts in the information security field. We collected data from the National Discovery for Science Leaders (NDSL) database, initially gathering about 2,000 papers covering the period between 2005 and 2013. Outliers and irrelevant papers were dropped, leaving 784 papers to test the suggested hypotheses. The co-authorship network data on co-author relationships, publishers, affiliations, and so on were analyzed using social network measures including centrality and structural holes. The results of our model estimation are as follows. With the exception of Hypothesis 3, which deals with the relationship between eigenvector centrality and performance, all of our hypotheses were supported.
In line with our hypotheses, degree centrality (H1) was supported, with a positive influence on researchers' publishing performance (p<0.001). This finding indicates that as the degree of cooperation increased, researchers' publishing performance increased. In addition, closeness centrality (H2) was also positively associated with publishing performance (p<0.001), suggesting that as the efficiency of information acquisition increased, publishing performance increased. This paper identified differences in publishing performance among researchers. The analysis can be used to identify core experts and evaluate their performance in the information privacy research field. The co-authorship network for information privacy can aid in understanding the deep relationships among researchers. In addition, by extracting characteristics of publishers and affiliations, this paper offers an understanding of social network measures and their potential for finding experts in the information privacy field. Social concerns about securing the objectivity of experts have increased, because experts in the information privacy field frequently participate in political consultation and in business education support and evaluation. In terms of practical implications, this research suggests an objective framework for identifying experts in the information privacy field and is useful for those in charge of managing research human resources. This study has some limitations, which provide opportunities and suggestions for future research. The small sample size makes it difficult to generalize the findings on differences in information diffusion according to media and proximity. Therefore, further studies could consider a larger sample and greater media diversity, and could explore in more detail the differences in information diffusion according to media type and information proximity.
Moreover, previous network research has commonly observed a causal relationship between independent and dependent variables (Kadushin, 2012). In this study, degree centrality as an independent variable might have a causal relationship with performance as a dependent variable. However, in network analysis research, network indices can only be computed after the network relationships have been created. An annual analysis could help mitigate this limitation.
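The centrality measures behind H1 and H2 can be sketched on a toy co-authorship graph. The node labels are hypothetical, and the formulas are the standard normalized definitions, not the paper's own code.

```python
from collections import deque

# Toy co-authorship graph: an edge means two researchers co-authored a paper.
graph = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B"},
    "D": {"A", "E"},
    "E": {"D"},
}

def degree_centrality(g):
    """Fraction of the other nodes each node is directly connected to."""
    n = len(g)
    return {v: len(nbrs) / (n - 1) for v, nbrs in g.items()}

def closeness_centrality(g, v):
    """(n - 1) / sum of shortest-path distances from v (connected graph)."""
    dist = {v: 0}
    q = deque([v])
    while q:                      # breadth-first search for distances
        u = q.popleft()
        for w in g[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return (len(g) - 1) / sum(dist.values())

print(degree_centrality(graph)["A"])              # 0.75: A co-authors with 3 of 4 others
print(round(closeness_centrality(graph, "A"), 3)) # 0.8
```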

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques, including multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis, have traditionally been used in bond rating. However, one major drawback is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited the application of traditional statistics to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising method for classification and regression analysis. SVM learns a separating hyperplane that maximizes the margin between two categories. SVM is simple enough to be analyzed mathematically and leads to high performance in practical applications. SVM implements the structural risk minimization principle and searches to minimize an upper bound of the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many data samples for training, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can potentially degrade SVM's performance.
First, SVM was originally proposed for solving binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not perform as well in multi-class classification as SVM does in binary-class classification. Second, approximation algorithms (e.g. decomposition methods, the sequential minimal optimization algorithm) can be used to reduce the computation time of multi-class problems, but they can deteriorate classification performance. Third, a difficulty in multi-class prediction is the data imbalance problem, which occurs when the instances of one class greatly outnumber those of another class. Such data sets often cause a default classifier to be built due to the skewed boundary, reducing the classification accuracy of the classifier. SVM ensemble learning is one machine learning approach to coping with the above drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the most widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations. Observations incorrectly predicted by previous classifiers are chosen more often than those correctly predicted. Boosting thus attempts to produce new classifiers that better predict the examples on which the current ensemble performs poorly, thereby reinforcing the training of misclassified observations in the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem.
Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, it can carry out the learning process considering the geometric mean-based accuracies and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross-validation was performed three times with different random seeds to ensure that the comparison among the three classifiers did not happen by chance. For each ten-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine sets; that is, the cross-validated folds are tested independently for each algorithm. Through these steps, we obtained results for the classifiers on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy between individual classifiers, MGM-Boost (52.95%) shows higher prediction accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher prediction accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test was used to examine whether the performance of each classifier over the 30 folds differed significantly. The results indicate that the performance of MGM-Boost differs significantly from the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
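The geometric mean-based accuracy that distinguishes MGM-Boost's evaluation from plain arithmetic accuracy can be sketched as follows. The data are illustrative, and this reproduces only the metric, not the boosting algorithm itself.

```python
import math
from collections import Counter

def per_class_recall(y_true, y_pred):
    """Accuracy computed separately for each class."""
    correct, total = Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    return {c: correct[c] / total[c] for c in total}

def geometric_mean_accuracy(y_true, y_pred):
    """Geometric mean of the per-class recalls; it collapses toward 0
    when a minority class is mostly misclassified, penalizing imbalance."""
    recalls = list(per_class_recall(y_true, y_pred).values())
    return math.prod(recalls) ** (1 / len(recalls))

# Imbalanced toy data: 8 instances of class 'A', 2 of class 'B'.
y_true = ["A"] * 8 + ["B"] * 2
y_pred = ["A"] * 9 + ["B"]          # one 'B' misclassified as 'A'

print(sum(t == p for t, p in zip(y_true, y_pred)) / 10)   # 0.9 arithmetic accuracy
print(round(geometric_mean_accuracy(y_true, y_pred), 3))  # 0.707 geometric mean
```

The gap between 0.9 and 0.707 on the same predictions is the point: arithmetic accuracy hides the minority-class errors that the geometric mean exposes, which is why the SVM figures above drop so much more under the geometric measure.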

Evaluation of Radiation Exposure to Ward Nurses from Nuclear Medicine Examinations Using Radioisotopes (방사성 동위원소를 이용한 핵의학과 검사에서 병동 간호사의 방사선 피폭선량 평가)

  • Jeong, Jae Hoon;Lee, Chung Wun;You, Yeon Wook;Seo, Yeong Deok;Choi, Ho Yong;Kim, Yun Cheol;Kim, Yong Geun;Won, Woo Jae
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.21 no.1
    • /
    • pp.44-49
    • /
    • 2017
  • Purpose: Radiation exposure management has been strictly regulated for radiation workers, but there are only a few studies on the potential radiation exposure of non-radiation workers, especially nurses in general wards. The present study aimed to estimate the total exposure of general-ward nurses through close contact with patients undergoing nuclear medicine examinations. Materials and Methods: Radiation exposure was determined using thermoluminescent dosimeters (TLD) and optically stimulated luminescence (OSL) dosimeters in 14 general-ward nurses from October 2015 to June 2016. The external radiation rate was measured immediately after injection and after examination at the skin surface and at 50 cm and 1 m distance from 50 patients (PET/CT, 20 patients; bone scan, 20 patients; myocardial SPECT, 10 patients). From these measurements, the effective half-life and the total radiation exposure expected in nurses were calculated. The expected total exposure was then compared with the total exposure actually measured in the nurses by TLD and OSL. Results: The mean and maximum radiation exposures of the 14 general-ward nurses were 0.01 and 0.02 mSv, respectively, in each measuring period. The external radiation rate after injection at the skin surface and at 0.5 m and 1 m from patients was $376.0{\pm}25.2$, $88.1{\pm}8.2$ and $29.0{\pm}5.8{\mu}Sv/hr$, respectively, in PET/CT; $206.7{\pm}56.6$, $23.1{\pm}4.4$ and $10.1{\pm}1.4{\mu}Sv/hr$, respectively, in bone scan; and $22.5{\pm}2.6$, $2.4{\pm}0.7$ and $0.9{\pm}0.2{\mu}Sv/hr$, respectively, in myocardial SPECT. After examination, the external radiation rate at the skin surface and at 0.5 m and 1 m from patients decreased to $165.3{\pm}22.1$, $38.7{\pm}5.9$ and $12.4{\pm}2.5{\mu}Sv/hr$, respectively, in PET/CT; $32.1{\pm}8.7$, $6.2{\pm}1.1$ and $2.8{\pm}0.6$, respectively, in bone scan; and $14.0{\pm}1.2$, $2.1{\pm}0.3$ and $0.8{\pm}0.2{\mu}Sv/hr$, respectively, in myocardial SPECT.
Based upon these results, an effective half-life was calculated, and the time to reach the normal dose limit in the 'Nuclear Safety Act' at 30 minutes after examination was calculated conservatively, without considering the half-life. In order of distance (skin surface, 0.5 m, and 1 m from the patient), it was 7.9, 34.1 and 106.8 hr, respectively, in PET/CT; 40.4, 199.5 and 451.1 hr, respectively, in bone scan; and 62.5, 519.3 and 1313.6 hr, respectively, in myocardial SPECT. Conclusion: Radiation exposure rates may differ slightly depending on the work process and the environment of a general ward. The exposure rate was measured at each step of the general examination procedure, which makes our results more reliable. Our results clearly showed that the total radiation exposure caused by residual radioactive isotope in the patient's body was negligible, even compared with natural radiation exposure. In conclusion, nurses in general wards were exposed to far less than the normal dose limit, and the effect of exposure from contact with patients undergoing nuclear medicine examinations was negligible.
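The two calculations described above (a conservative time-to-limit that ignores decay, and a decay-corrected cumulative dose using an effective half-life) can be sketched as below. The numeric inputs and the 1 mSv public limit used here are illustrative assumptions, not the study's measurements.

```python
import math

PUBLIC_DOSE_LIMIT_uSv = 1000.0  # assumed 1 mSv/year general public limit

def hours_to_limit(rate_uSv_per_h, limit=PUBLIC_DOSE_LIMIT_uSv):
    """Conservative contact time to reach the dose limit at a constant rate,
    ignoring radioactive decay (the study's conservative approach)."""
    return limit / rate_uSv_per_h

def cumulative_dose(rate0_uSv_per_h, t_half_h, hours):
    """Dose accumulated over `hours` when the rate decays with an effective
    half-life: the integral of rate0 * exp(-lambda * t) from 0 to hours."""
    lam = math.log(2) / t_half_h
    return rate0_uSv_per_h / lam * (1 - math.exp(-lam * hours))

# Illustrative figures: 30 uSv/h at 1 m from a patient, 2 h effective half-life.
print(round(hours_to_limit(30.0), 1))                         # 33.3 h at a constant rate
print(round(cumulative_dose(30.0, t_half_h=2.0, hours=8.0), 1))  # 81.2 uSv over 8 h
```

The decay-corrected figure is always below the constant-rate estimate, which is why the no-decay calculation is the conservative one.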


Management of Critical Control Points to Improve Microbiological Quality of Potentially Hazardous Foods Prepared by Restaurant Operations (외식업체에서 제공하는 잠재적 위험 식품의 미생물적 품질향상을 위한 중점관리점 관리방안)

  • Chun, Hae-Yeon;Choi, Jung-Hwa;Kwak, Tong-Kyung
    • Korean journal of food and cookery science
    • /
    • v.30 no.6
    • /
    • pp.774-784
    • /
    • 2014
  • The purpose of this study was to present management guidelines for critical control points by analyzing microbiological hazards in screened Potentially Hazardous Food (PHF) menus, in an effort to improve the microbiological quality of foods prepared by restaurant operations. Steamed spinach with seasoning left at room temperature passed through the range of danger-zone temperatures at which microorganisms can flourish, and it exceeded all microbiological safety limits in our study. On the other hand, steamed spinach with seasoning stored in a refrigerator had an aerobic plate count of $2.86{\pm}0.5{\log}\;CFU/g$, and all other microbiological tests showed levels below the limit. The standard plate counts of raw lettuce and tomato were $4.66{\pm}0.4{\log}\;CFU/g$ and $3.08{\pm}0.4{\log}\;CFU/g$, respectively. Upon washing, the standard plate counts were $3.12{\pm}0.6{\log}\;CFU/g$ and $2.10{\pm}0.3{\log}\;CFU/g$, respectively, while upon washing after chlorination they were $2.23{\pm}0.3{\log}\;CFU/g$ and $0.72{\pm}0.7{\log}\;CFU/g$, respectively. The standard plate counts of baby greens, radicchio, and leek were $6.02{\pm}0.5{\log}\;CFU/g$, $5.76{\pm}0.1{\log}\;CFU/g$ and $6.83{\pm}0.5{\log}\;CFU/g$, respectively. After 5 minutes of chlorination, the standard plate counts were $4.10{\pm}0.6{\log}\;CFU/g$, $5.14{\pm}0.1{\log}\;CFU/g$ and $5.30{\pm}0.3{\log}\;CFU/g$, respectively; after 10 minutes of chlorination treatment, they were $2.58{\pm}0.3{\log}\;CFU/g$, $4.27{\pm}0.6{\log}\;CFU/g$, and $4.18{\pm}0.5{\log}\;CFU/g$, respectively. Microbial levels decreased as chlorination time increased. This study showed that the microbiological quality of foods improved with the proper practice of time-temperature control, sanitization control, seasoning control, and personal and surface sanitization control. 
It also presents management guidelines for the control of potentially hazardous foods at the critical control points in the process of restaurant operations.
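The washing and chlorination results above are reported as log CFU/g counts, from which the corresponding percent reductions follow directly. A minimal sketch of that conversion, using the lettuce means quoted in the abstract (4.66 raw, 3.12 washed, 2.23 washed after chlorination):

```python
# Convert reported log10 CFU/g counts into percent reductions.
# Figures are the lettuce means quoted in the abstract.
def percent_reduction(log_before: float, log_after: float) -> float:
    """Percent of organisms removed, computed from log10 CFU/g counts."""
    return (1 - 10 ** (log_after - log_before)) * 100


raw, washed, chlorinated = 4.66, 3.12, 2.23  # log CFU/g (lettuce)

print(round(percent_reduction(raw, washed), 1))       # washing alone: ~97.1%
print(round(percent_reduction(raw, chlorinated), 1))  # with chlorination: ~99.6%
```

This makes the practical point of the abstract concrete: each additional unit of log reduction removes ten times more of the remaining organisms, so the step from washing to washing-plus-chlorination matters even though both numbers look "high" in percent terms.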

Animal Infectious Diseases Prevention through Big Data and Deep Learning (빅데이터와 딥러닝을 활용한 동물 감염병 확산 차단)

  • Kim, Sung Hyun;Choi, Joon Ki;Kim, Jae Seok;Jang, Ah Reum;Lee, Jae Ho;Cha, Kyung Jin;Lee, Sang Won
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.137-154
    • /
    • 2018
  • Animal infectious diseases, such as avian influenza and foot and mouth disease, occur almost every year and cause huge economic and social damage to the country. To prevent this, the quarantine authorities have made various human and material efforts, but the infectious diseases have continued to occur. Avian influenza was first identified in 1878 and has risen to a national issue due to its high lethality. Foot and mouth disease is considered the most critical animal infectious disease internationally. In nations free of the disease, foot and mouth disease is recognized as an economic or political disease because it restricts international trade by complicating the import of processed and non-processed livestock, and because quarantine is costly. In a society where the whole nation is connected as a single zone of daily life, there is no way to fully prevent the spread of infectious disease. Hence, there is a need to become aware of an outbreak and to take action before the disease spreads. For both human and animal infectious diseases, an epidemiological investigation of confirmed cases is carried out as soon as a definite diagnosis is made, and measures to prevent further spread are taken according to the investigation results. The foundation of an epidemiological investigation is figuring out where one has been and whom one has met. From a data perspective, this can be defined as predicting the cause of a disease outbreak, the outbreak location, and future infections by collecting and analyzing geographic and relational data. Recently, attempts have been made to develop infectious disease prediction models using big data and deep learning technology, but model-building studies and case reports remain scarce. 
KT and the Ministry of Science and ICT have been carrying out big data projects since 2014, as part of national R&D projects, to analyze and predict the routes of livestock-related vehicles. To prevent animal infectious diseases, the researchers first developed a prediction model based on regression analysis using vehicle movement data. After that, more accurate prediction models were constructed using machine learning algorithms such as Logistic Regression, Lasso, Support Vector Machine and Random Forest. In particular, the 2017 prediction model added the risk of diffusion to facilities, and its performance was improved by considering the hyper-parameters of the model in various ways. The confusion matrix and ROC curve show that the model constructed in 2017 is superior to the earlier machine learning models. The difference between the 2016 model and the 2017 model is that the later model also used visiting information on facilities such as feed factories and slaughterhouses, as well as information on bird livestock, which was expanded from chicken and duck to include goose and quail. In addition, in 2017 an explanation of the results was added to help the authorities make decisions and to establish a basis for persuading stakeholders. This study reports an animal infectious disease prevention system constructed on the basis of hazardous vehicle movement, farm, and environment big data. The significance of this study is that it describes the evolution of a prediction model using big data in the field; the model is expected to be more complete if the characteristics of the viruses are taken into consideration. This will contribute to data utilization and analysis-model development in related fields. In addition, we expect that the system constructed in this study will enable more preventive and effective control of outbreaks.
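The abstract describes comparing several classifiers by confusion matrix and ROC curve. The vehicle-movement and farm data are not public, so the following is only a sketch of that comparison workflow on synthetic data (scikit-learn; all dataset and parameter choices here are illustrative, not the study's actual configuration):

```python
# Sketch: fit several classifiers and compare them by ROC AUC and
# confusion matrix, as the abstract describes. Data is synthetic; the
# study's real features were vehicle-movement and farm/environment data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    # L1-penalized logistic regression stands in for the Lasso variant
    "lasso": LogisticRegression(penalty="l1", solver="liblinear"),
    "svm": SVC(probability=True, random_state=0),
    "random_forest": RandomForestClassifier(random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    cm = confusion_matrix(y_te, model.predict(X_te))
    print(f"{name}: AUC={auc:.3f}, confusion matrix={cm.tolist()}")
```

Ranking the models by AUC and inspecting the confusion matrices mirrors the comparison reported in the abstract, where this kind of evaluation showed the 2017 model outperforming its predecessors.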

Epidemiological Studies of Clonorchiasis. - I. Current Status and Natural Transition of the Endemicity of Clonorchis sinensis in Gimhae Gun and Delta, a High Endemic area in Korea (간흡충증(肝吸虫症)의 역학(疫學) - I. 고도유행지(高度流行地) 김해지방(金海地方)에 있어서의 간흡충감염(肝吸虫感染)의 현황(現況)과 자연추이(自然推移))

  • Kim, D.C.;Lee, O.Y.;Lee, J.S.;Ahn, J.S.;Chang, Y.M;Son, S.C.;Moon, I.S.
    • Journal of agricultural medicine and community health
    • /
    • v.8 no.1
    • /
    • pp.44-65
    • /
    • 1983
  • As a part of the epidemiological studies of clonorchiasis, this study was conducted to evaluate the current endemicity and the natural transition of Clonorchis infection in the Gimhae Gun and delta area, a highly endemic area in Korea, in recent years, prior to the introduction of praziquantel, which will eventually influence the status of the prevalence. The data obtained in this study in 1983 were evaluated for the natural transition of the infection in comparison with those obtained 16 years earlier, in 1967, by the author (Kim, 1974). The areas of investigation, the villages and schools surveyed, and the methods and techniques used in this study were the same as in 1967, except for the contents of the questionnaire on raw freshwater fish consumption by the local inhabitants. 1) The prevalence rate of clonorchiasis in the general population of the villages averaged 48.1% out of a total of 484 persons examined: 65.2% on average in the riverside-delta area and 43.0% in the inland area. Among schoolchildren, the prevalence rate averaged 8.2% out of a total of 1,423 examined: 10.8% in the riverside-delta area and 2.8% in the inland area. By sex, a difference in prevalence was seen only among the inhabitants of the inland area, with 52.4% in males and 33.5% in females. 2) In the natural transition of the infection, the prevalence rate among the inhabitants decreased from 68.8% in 1967 to 48.1% in 1983, and among schoolchildren from 56.4% in 1967 to 8.2% in 1983. The reduction was greater in the riverside-delta area than in the inland area. 3) In the prevalence rate by age, 11.9% was first seen in the 5-9 age group, and the rate gradually increased to 75.0% in the 50-59 age group. By sex, the rate was higher in males than in females in the 20-29 age group and over. 
4) In the natural transition of the prevalence rate by age, the reduction of the infection during the past 16 years was greater in the younger age groups up to the 40-49 age group, reached the same level in the 50-59 age group, and was seen again in the age group over 60. By sex, the reduction was greater in females than in males in the 20-29 age group and over. By area, the reduction was greater in the riverside-delta area than in the inland area, particularly in the young age groups. 5) In the intensity of infection among the cases, the mean egg output per mg of feces per infected case (EPmg) in the inhabitants was 6.3: 15.4 in the riverside-delta area and 2.8 in the inland area. Among the schoolchildren, on the other hand, EPmg was 3.2, with no difference between the riverside-delta and inland areas. 6) In the transition of the intensity of infection by area, EPmg among the inhabitants of the riverside-delta area inexplicably increased from 7.8 in 1967 to 15.4 in 1983, probably because of uneven specimen collection in sampling the population. EPmg of the inhabitants in the inland area and of the schoolchildren of both the riverside-delta and inland areas showed a similar decrease over the past 16 years. 7) The intensity of infection by age was relatively low in the 20-29 age group and below, while an EPmg of 5.1-9.5 was seen in the 30-39 age group and over. By sex, EPmg was 5.8 in males and 4.7 in females. 8) In the transition of the intensity of infection, EPmg decreased from 6.2 in 1967 to 5.4 in 1983. By age, in contrast to the figures of 1967, in which EPmg gradually increased with some fluctuation from 1.1 in the 0-4 age group to a peak of 10.5 in the 50-59 age group, in 1983 a lower intensity of infection was seen in the age groups from 10-14 to 20-29, with an EPmg range of 0.6-2.7. 
9) In the distribution of the clonorchiasis cases by range of EPmg value, 43.2% of the cases were in 0.1-0.9 and 34.6% in 1.0-4.9. By cumulative percentage, 44.6% were under 0.9 (light infection) and 86.1% under 9.9 (up to moderate infection). By sex, no difference was seen in EPmg. 10) In the transition of the distribution by range of EPmg, the cases were distributed up to the range 80.0-99.9 in 1967 and up to 60.0-79.9 in 1983. By cumulative percentage, 34.3% of the cases fell in the range 0.1-0.9 and below (light infection) in 1967 and 44.6% in 1983, an increase of about 10%. In the range 5.0-9.9 and below (up to moderate infection), 83.2% of the cases were seen in 1967 and 86.1% in 1983. 11) The practice of raw freshwater fish consumption among the inhabitants appears to have decreased in recent years. Among the infected inhabitants, 59.3% admitted to raw freshwater fish consumption in the last two years, although 86.8% professed to have had experience with it; 31.7% of those with such experience denied any further consumption in recent years. Of 543 schoolchildren interviewed, 24.1% admitted to an experience of raw freshwater fish consumption, but only 17.9% had practised it in the past two years; among the 131 interviewed who had such experience, 26.0% denied raw freshwater fish consumption in recent years. The rate of raw freshwater fish consumption among both inhabitants and schoolchildren was higher in males than in females. Conversely, the rate of those who had stopped the practice in recent years among those with experience of raw freshwater fish consumption was higher in females than in males. 12) The major reason given for the reduction of raw freshwater fish consumption among the local inhabitants was the risk of fluke infection. 
However, it has become apparent that this change of taste resulted from the water pollution that has affected the freshwater systems throughout this locality over the past several years. 13) In the animal survey, Clonorchis infection was seen in 14.8% of 88 dogs examined and 3.7% of 27 house rats examined. It was noted that the populations of dogs and cats had increased in the villages surveyed. Although the prevalence rates were lower in the present survey than in 1967, the significance of these animals as reservoir hosts has not changed. 14) The prevalence of cercarial Clonorchis infection in the first intermediate host, Parafossarulus manchouricus, was 0.6% out of 517 snails examined, lower than the 2.3% out of 2,124 examined in 1967. Moreover, sharp decreases in the number and distribution of the intermediate host snails in many watershed areas of the large freshwater systems of this locality appear to have reduced the transmission of Clonorchis at the intermediate-host stage of its life cycle. 15) Clonorchis infection in the second intermediate fish hosts was relatively low. The mean number of Clonorchis metacercariae per fish in Pseudorasbora parva was 517 in 1983, whereas it was 1,943 in 1968-1969. Environmental water pollution has also decreased the fish population density in these areas, which has in turn apparently affected the practice of raw freshwater fish consumption among the local inhabitants. 16) In conclusion, the endemicity of Clonorchis infection in the Gimhae Gun and delta area of the Nagdong River has sharply decreased during the past 16 years. The major cause of this regressive transition of the infection was the pollution of the inland water systems of this locality. 
The pollution has upset the ecosystems comprising the intermediate hosts of Clonorchis in many areas, and has also contributed to a significant extent to the local inhabitants' abandonment of raw freshwater fish consumption.
