• Title/Summary/Keyword: Evaluation and Validation Test


Evaluation of the Measurement Uncertainty from the Standard Operating Procedures (SOP) of the National Environmental Specimen Bank (국가환경시료은행 생태계 대표시료의 채취 및 분석 표준운영절차에 대한 단계별 측정불확도 평가 연구)

  • Lee, Jongchun; Lee, Jangho; Park, Jong-Hyouk; Lee, Eugene; Shim, Kyuyoung; Kim, Taekyu; Han, Areum; Kim, Myungjin
    • Journal of Environmental Impact Assessment / v.24 no.6 / pp.607-618 / 2015
  • Five years have passed since the first set of environmental samples was taken in 2011 to represent various ecosystems, so that future generations can trace the environment back to the past. Those samples have been preserved cryogenically in the National Environmental Specimen Bank (NESB) at the National Institute of Environmental Research. Although a strict standard operating procedure (SOP) governs the whole sampling process to ensure that each sample represents its sampling area, the procedure has never been put to a validation test. This question must be answered to clear any doubt about the representativeness and quality of the samples. To address it and to verify the sampling practice set out in the SOP, the steps leading to a measurement, from sampling in the field to chemical analysis in the lab, were broken down so that the uncertainty at each level could be evaluated. Of the 8 species currently collected for cryogenic preservation in the NESB, pine tree samples from two different sites were selected for this study. Duplicate samples were taken from each site according to the sampling protocol, followed by duplicate analyses of each discrete sample. The uncertainties were evaluated by robust ANOVA; the two levels of uncertainty, one from the sampling practice and the other from the analytical process, were then combined into the measurement uncertainty of the measured concentration of the measurand. The results confirmed that it is the sampling practice, not the analytical process, that accounts for most of the measurement uncertainty. Under this top-down approach to measurement uncertainty, the efficient way to ensure the representativeness of the sample was to increase the quantity of each discrete sample making up a composite sample, rather than to increase the number of discrete samples across the site. Furthermore, a cost-effective gain in confidence in the measurement can be expected from efforts to lower the sampling uncertainty, not the analytical uncertainty. For a composite sample to represent a sampling area, the variance across the site should be less than the variance from duplicate sampling; the criterion $s^2_{geochem}$ (across-site variance) $< s^2_{samp}$ (variance at the sampling location) was therefore proposed. By this criterion, the two representative samples for the two study areas passed the requirement. Conversely, whenever the variance among the sampling locations (i.e., across the site) is larger than the sampling variance, more sample increments need to be added within the sampling area until the requirement for representativeness is met.
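    The variance split behind this duplicate design can be sketched compactly. The following is a minimal illustration using classical (not robust) ANOVA as a stand-in for the robust ANOVA named in the abstract; all data are synthetic, and the array shapes simply mirror the balanced design of duplicate samples with duplicate analyses:

```python
import numpy as np

rng = np.random.default_rng(1)
# x[target, duplicate_sample, duplicate_analysis]: synthetic concentrations
n_targets = 8
true = 10 + rng.normal(0, 2.0, n_targets)     # between-location spread
samp = rng.normal(0, 1.0, (n_targets, 2))     # sampling effect
anal = rng.normal(0, 0.3, (n_targets, 2, 2))  # analytical effect
x = true[:, None, None] + samp[:, :, None] + anal

# Analytical variance: spread between duplicate analyses of the same sample.
s2_anal = x.var(axis=2, ddof=1).mean()

# Sampling variance: spread between duplicate samples of the same target,
# minus the analytical contribution carried into the sample means.
m = x.mean(axis=2)                            # per-sample means
s2_samp = m.var(axis=1, ddof=1).mean() - s2_anal / 2

s2_meas = s2_samp + s2_anal                   # measurement variance
print(f"s2_samp={s2_samp:.3f}  s2_anal={s2_anal:.3f}")
print(f"expanded uncertainty U = {2 * np.sqrt(s2_meas):.3f} (k=2)")
```

    With the synthetic settings above, the sampling component dominates, mirroring the paper's finding that sampling, not analysis, drives the measurement uncertainty.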

Optimization of Analytical Method for Annatto Pigment in Foods (식품 중 안나토색소 분석법 최적화 연구)

  • Lee, Jiyeon; Park, Juhee; Lee, Jihyun; Suh, Hee-Jae; Lee, Chan
    • Journal of Food Hygiene and Safety / v.36 no.4 / pp.298-309 / 2021
  • In this study we sought to develop a method for the simultaneous analysis of cis-bixin and cis-norbixin, the main components of annatto pigment, in food. To establish the optimal test method, the HPLC analysis methods of the European Food Safety Authority (EFSA), Japan's Ministry of Health, Labour and Welfare (MHLW), and Korea's National Institute of Food and Drug Safety Evaluation (NIFDS) were compared and reviewed. In addition, a new pretreatment method applicable to various foods was developed after selecting conditions for simultaneous high-performance liquid chromatography (HPLC) analysis in consideration of linearity, limit of detection (LOD), limit of quantification (LOQ), and analysis time. The HPLC method of NIFDS showed the best linearity (R² ≥ 0.999), with low limits of detection (0.03 and 0.05 μg/mL) and quantification (0.097 and 0.16 μg/mL) for cis-norbixin and cis-bixin, respectively. All previously reported pretreatment methods had limitations when applied to various foods, whereas the new pretreatment method showed a high recovery rate for all three main food groups tested: fish and meat products, processed cheese, and beverages. It achieved an excellent simultaneous recovery rate of 98% or more for cis-bixin and cis-norbixin. Combined with the new pretreatment, the HPLC method showed high linearity, with a coefficient of determination (R²) of 1 for both substances; accuracy (recovery) was 98% or more and precision (%RSD) ranged from 0.4 to 7.9. From these results, the optimized analytical method is considered very suitable for the simultaneous analysis of cis-bixin and cis-norbixin, the two main components of annatto pigment in food.
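    For readers unfamiliar with how LOD and LOQ figures like those above are typically derived, here is a minimal sketch using the common ICH-style formulas (LOD = 3.3σ/slope, LOQ = 10σ/slope) on a hypothetical calibration line; the concentrations and peak areas are illustrative, not taken from the study:

```python
import numpy as np

# Hypothetical calibration data: concentration (ug/mL) vs. peak area.
conc = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])
area = np.array([12.1, 60.4, 121.0, 240.9, 603.5, 1205.2])

slope, intercept = np.polyfit(conc, area, 1)   # linear calibration fit
resid = area - (slope * conc + intercept)
sigma = resid.std(ddof=2)                      # residual SD (n-2 dof)

r2 = np.corrcoef(conc, area)[0, 1] ** 2        # linearity check
lod = 3.3 * sigma / slope                      # ICH-style detection limit
loq = 10 * sigma / slope                       # ICH-style quantification limit
print(f"R2={r2:.4f}  LOD={lod:.3f} ug/mL  LOQ={loq:.3f} ug/mL")
```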

Bankruptcy Type Prediction Using A Hybrid Artificial Neural Networks Model (하이브리드 인공신경망 모형을 이용한 부도 유형 예측)

  • Jo, Nam-ok; Kim, Hyun-jung; Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.21 no.3 / pp.79-99 / 2015
  • Bankruptcy prediction has been extensively studied in the accounting and finance fields, as it can have an important impact on lending decisions and on the profitability of financial institutions in terms of risk management. Many researchers have therefore focused on constructing more robust bankruptcy prediction models. Early studies primarily used statistical techniques such as multiple discriminant analysis (MDA) and logit analysis. However, many studies have demonstrated that artificial intelligence (AI) approaches, such as artificial neural networks (ANN), decision trees, case-based reasoning (CBR), and support vector machines (SVM), have outperformed statistical techniques on business classification problems since the 1990s, because statistical methods impose rigid assumptions. Previous studies on corporate bankruptcy have focused on developing prediction models from financial ratios, but few address the specific types of bankruptcy: earlier models have generally been interested only in whether or not a firm will go bankrupt, and most studies of bankruptcy types have been limited to literature reviews or case studies. This study therefore develops a data mining model that predicts both the occurrence of bankruptcy and its specific type for Korean small- and medium-sized construction firms, in terms of profitability, stability, and activity indices, so that firms can take preventive action in advance. We propose a hybrid approach using two artificial neural networks (ANNs): a back-propagation neural network (BPN) trained with supervised learning to predict bankruptcy, and a self-organizing map (SOM) trained with unsupervised learning to classify the bankruptcy data into several types. Based on the constructed model, we predict the bankruptcy of companies by applying the BPN model to a validation set not used in model development, and then identify the specific bankruptcy types by passing the predicted bankruptcy cases to the SOM model. To interpret the characteristics of the derived SOM clusters, we calculated the averages of selected input variables for each cluster through statistical tests; each cluster represents a bankruptcy type derived from the data of bankrupt firms, with the financial-ratio input variables giving each cluster its meaning. The experimental results show that each of the five bankruptcy types has distinct financial-ratio characteristics. Type 1 (severe bankruptcy) has inferior financial statements on every index except EBITDA (earnings before interest, taxes, depreciation, and amortization) to sales. Type 2 (lack of stability) has a low quick ratio, low stockholders' equity to total assets, and high total borrowings to total assets. Type 3 (lack of activity) has slightly low total asset turnover and fixed asset turnover. Type 4 (lack of profitability) has low retained earnings to total assets and low EBITDA to sales, the profitability indices. Type 5 (recoverable bankruptcy) includes firms whose financial condition is relatively good compared to the other types even though they went bankrupt. Based on these findings, researchers and practitioners in credit evaluation can obtain more useful information about the types of corporate bankruptcy. In this paper we used firms' financial ratios to classify bankruptcy types; it is important to select input variables that both predict bankruptcy correctly and classify its type meaningfully. In a further study we will include non-financial factors such as the size, industry, and age of firms, so that the clustering of bankruptcy types becomes more realistic by combining qualitative factors and reflecting the domain knowledge of experts.
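    The two-stage pipeline described here (supervised BPN, then unsupervised SOM on the flagged firms) can be sketched in a few lines. This is an illustration on synthetic data, not the authors' implementation: the eight "financial ratios", the network sizes, and the 1-D 5-unit SOM grid are all assumptions made for brevity, and scikit-learn's MLPClassifier stands in for the BPN.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier  # stands in for the BPN

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))             # 8 hypothetical financial ratios
y = (X[:, 0] + X[:, 1] < 0).astype(int)   # 1 = bankrupt (synthetic label)

# Stage 1: supervised back-propagation network predicts bankruptcy.
bpn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
bpn.fit(X, y)
bankrupt = X[bpn.predict(X) == 1]         # firms flagged as bankrupt

# Stage 2: a tiny 1-D self-organizing map clusters the flagged firms into
# bankruptcy "types" (5 units, mirroring the five types in the abstract).
n_units, n_feat = 5, bankrupt.shape[1]
lr0, sigma0, epochs = 0.5, 2.0, 50
W = rng.normal(size=(n_units, n_feat))    # one weight vector per map unit
for t in range(epochs):
    lr = lr0 * (1 - t / epochs)
    sigma = max(sigma0 * (1 - t / epochs), 0.5)
    for x in rng.permutation(bankrupt):
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
        h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma ** 2))
        W += lr * h[:, None] * (x - W)                # pull units toward x

types = np.argmin(((bankrupt[:, None, :] - W) ** 2).sum(axis=2), axis=1)
print(np.bincount(types, minlength=n_units))  # firms per bankruptcy type
```

    Interpreting each cluster would then proceed as in the abstract: average the input financial ratios per cluster and read off which indices (stability, activity, profitability) distinguish it.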

Investigating Dynamic Mutation Process of Issues Using Unstructured Text Analysis (비정형 텍스트 분석을 활용한 이슈의 동적 변이과정 고찰)

  • Lim, Myungsu; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.1-18 / 2016
  • Owing to the extensive use of Web media and the development of the IT industry, a large amount of data is being generated, shared, and stored. Various types of unstructured data, such as images, sound, video, and text, are now distributed through Web media, and many recent attempts have been made to discover new value by analyzing them. Among these types, text is recognized as the most representative means for users to express and share their opinions on the Web, so the demand for new insights through text analysis is steadily increasing and text mining is being used for different purposes in various fields. In particular, issue tracking is widely studied not only in academia but also in industry, because it can extract issues from text such as news articles and SNS (Social Network Service) posts and analyze their trends. Conventionally, issue tracking identifies the major issues sustained over a long period through topic modeling and analyzes the detailed distribution of documents within each issue. However, because conventional issue tracking assumes that the content composing each issue does not change throughout the tracking period, it cannot represent the dynamic mutation process by which detailed issues are created, merged, divided, and deleted between periods. Moreover, because only keywords that appear consistently throughout the entire period can be derived as issue keywords, concrete issue keywords such as "nuclear test" and "separated families" may be concealed by more general keywords such as "North Korea" when a long period is analyzed. This implies that many meaningful but short-lived issues cannot be discovered by conventional issue tracking; note that detailed keywords are preferable to general ones because they can serve as clues for actionable strategies. To overcome these limitations, we performed an independent analysis on the documents of each detailed period, generated an issue flow diagram based on the similarity of issues between consecutive periods, and analyzed the issue transition patterns among categories using the category information of each document. We then applied the proposed methodology to a real case of 53,739 news articles and derived an issue flow diagram from them. The experiment section presents several useful application scenarios for this diagram. First, we can identify an issue that appears actively during a certain period and promptly disappears in the next. Second, the preceding and following issues of a particular issue can easily be read off the diagram, which means the methodology can discover associations between inter-period issues. Finally, an interesting pattern of one-way and two-way transitions emerged from the category analysis: a pair of mutually similar categories induces two-way transitions, whereas a one-way transition indicates that issues in one category tend to be influenced by issues in another. For practical application of the proposed methodology, high-quality word and stop-word dictionaries need to be constructed. In addition, not only the number of documents but also meta-information such as read counts, posting times, and comments should be analyzed, and a rigorous performance evaluation or validation of the methodology should be performed in future work.
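    The core mechanism, modeling each period independently and then linking issues across consecutive periods by similarity, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the toy documents, the two periods, LDA as the topic model, and the 0.3 linking threshold are all assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

docs_p1 = ["nuclear test sanctions talks", "separated families reunion visit",
           "nuclear missile test sanctions"]
docs_p2 = ["sanctions nuclear missile launch", "summit talks peace agreement",
           "separated families reunion north"]

# Shared vocabulary so issue-term vectors from both periods are comparable.
vec = CountVectorizer()
vec.fit(docs_p1 + docs_p2)

def issues(docs, k=2):
    """Model one period independently; return one term-weight row per issue."""
    lda = LatentDirichletAllocation(n_components=k, random_state=0)
    lda.fit(vec.transform(docs))
    return lda.components_

# Draw a flow edge between consecutive-period issues above a similarity cutoff.
sim = cosine_similarity(issues(docs_p1), issues(docs_p2))
for i, row in enumerate(sim):
    for j, s in enumerate(row):
        if s > 0.3:  # illustrative threshold for an issue-flow edge
            print(f"period1 issue {i} -> period2 issue {j} (sim={s:.2f})")
```

    Because each period is modeled independently, short-lived issues can surface with their own concrete keywords instead of being absorbed into one long-horizon topic.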

The Influence Evaluation of $^{201}Tl$ Myocardial Perfusion SPECT Image According to the Elapsed Time Difference after the Whole Body Bone Scan (전신 뼈 스캔 후 경과 시간 차이에 따른 $^{201}Tl$ 심근관류 SPECT 영상의 영향 평가)

  • Kim, Dong-Seok; Yoo, Hee-Jae; Ryu, Jae-Kwang; Yoo, Jae-Sook
    • The Korean Journal of Nuclear Medicine Technology / v.14 no.1 / pp.67-72 / 2010
  • Purpose: At Asan Medical Center, myocardial perfusion SPECT is performed to evaluate the cardiac event risk of patients undergoing non-cardiac surgery. For patients with cancer, metastasis is first checked with a whole body bone scan and a whole body PET scan, and myocardial perfusion SPECT is then performed, to avoid unnecessary examinations. For short-term inpatients, we perform $^{201}Tl$ myocardial perfusion SPECT a minimum of 16 hours after the whole body bone scan to shorten the hospitalization period, yet the effect of crosstalk contamination from administering two different isotopes has never been properly evaluated. In this study, we therefore evaluated the influence of crosstalk contamination on $^{201}Tl$ myocardial perfusion SPECT using an anthropomorphic torso phantom and patient data. Materials and Methods: From August to September 2009, we analyzed 87 patients who underwent $^{201}Tl$ myocardial perfusion SPECT. Patients were classified according to whether a whole body bone scan had been performed the day before the SPECT. Image data were acquired using dual energy windows during $^{201}Tl$ myocardial perfusion SPECT, and the ratio of $^{201}Tl$ to $^{99m}Tc$ counts was analyzed for each patient group. For the phantom experiment, we administered $^{201}Tl$ 14.8 MBq (0.4 mCi) to the myocardium and $^{99m}Tc$ 44.4 MBq (1.2 mCi) to the extracardiac region of an anthropomorphic torso phantom, acquired images by $^{201}Tl$ myocardial perfusion SPECT without gating, and analyzed spatial resolution using Xeleris ver. 2.0551. Results: In patients who had undergone a whole body bone scan the previous day, the ratio of counts in the $^{201}Tl$ window to counts in the $^{99m}Tc$ window decreased exponentially with elapsed time after bone tracer injection, from 1:0.411 (Ventri; GE Healthcare, Wisconsin, USA) and 1:0.79 (Infinia; GE Healthcare, Wisconsin, USA) at 12 hours to 1:0.114 and 1:0.249, respectively, at 24 hours (Ventri p=0.001, Infinia p=0.001). In patients without a whole body bone scan, the average ratios were 1:0.067±0.6 (Ventri) and 1:0.063±0.7 (Infinia). In the phantom experiment, spatial resolution measurements showed no significant change in FWHM, regardless of $^{99m}Tc$ administration or elapsed time (p=0.134). Conclusion: The experiments with the anthropomorphic torso phantom and patient data confirmed that $^{201}Tl$ myocardial perfusion SPECT images acquired 16 hours or more after bone tracer injection suffer no notable loss of spatial resolution from $^{99m}Tc$. However, this investigation addressed image quality only, so further studies of patient radiation dose and of exam accuracy and precision are needed. An exact guideline for the interval between the two exams should be established through a rigorous, standardized validation of the crosstalk contamination caused by using different isotopes.
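    The exponential decrease in the crosstalk ratio is consistent with simple physical decay of $^{99m}Tc$ (half-life about 6.01 hours). The following back-of-envelope check is our illustration, not from the paper, and it ignores biological washout:

```python
import math

TC99M_HALF_LIFE_H = 6.01  # physical half-life of Tc-99m in hours

def decayed_ratio(ratio_at_t0, hours_elapsed):
    """Crosstalk ratio after additional elapsed time, assuming the Tc-99m
    contribution falls off purely by physical decay."""
    return ratio_at_t0 * math.exp(-math.log(2) * hours_elapsed / TC99M_HALF_LIFE_H)

# Reported 12 h ratios (Ventri 1:0.411, Infinia 1:0.79) projected to 24 h:
for name, r12 in [("Ventri", 0.411), ("Infinia", 0.79)]:
    print(f"{name}: predicted 24 h ratio 1:{decayed_ratio(r12, 12):.3f}")
# Prediction ~1:0.103 and ~1:0.198, close to the reported 1:0.114 and 1:0.249.
```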
