• Title/Summary/Keyword: standard testing method


Evaluation of Diagnostic Performance of a Polymerase Chain Reaction for Detection of Canine Dirofilaria immitis (개 심장사상충을 진단하기 위한 중합연쇄반응검사 (PCR)의 진단적 특성 평가)

  • Pak, Son-Il;Kim, Doo
    • Journal of Veterinary Clinics / v.24 no.2 / pp.77-81 / 2007
  • Diagnostic performance of the polymerase chain reaction (PCR) for detecting Dirofilaria immitis in dogs was evaluated in the absence of a gold standard test. An enzyme-linked immunosorbent assay kit (SnapTM, IDEXX, USA) with unknown performance parameters was employed as the second test. The sensitivity and specificity of the PCR under a two-population model were estimated using both maximum likelihood via the expectation-maximization (EM) algorithm and a Bayesian method, assuming conditional independence between the two tests. A total of 266 samples, 133 in each trial, were randomly retrieved from the heartworm database records for 2002-2004 in a university animal hospital. These data originated from the test results of military dogs brought in for routine medical check-ups or heartworm testing. When the two trials were combined, the sensitivity and specificity of the PCR were 96.4-96.7% and 97.6-98.8% under EM, and 94.4-94.8% and 97.1-98% under the Bayesian method, with no statistically significant differences between the estimates. These findings indicate that the PCR assay could be a useful screening tool for detecting heartworm infection in dogs. The study also provides further evidence that the Bayesian approach is a sound alternative for drawing inference about the performance of a new diagnostic test when no gold standard is available.
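The two-test, two-population estimation described in the abstract (the Hui-Walter model) can be sketched as a simple EM loop. This is an illustrative reconstruction, not the authors' code: the synthetic counts, initial values, and iteration budget are assumptions, and the true parameters below are chosen arbitrarily for the demonstration.

```python
def em_hui_walter(counts, n_iter=2000):
    """EM estimation of sensitivity/specificity for two conditionally
    independent tests applied to two populations with different (unknown)
    prevalences and no gold standard (Hui-Walter model).
    counts[p][i][j] = subjects in population p with test-1 result i and
    test-2 result j (1 = positive, 0 = negative)."""
    prev = [0.2, 0.6]                        # initial prevalence guesses (must differ)
    se1, sp1, se2, sp2 = 0.9, 0.9, 0.9, 0.9  # initial test parameters
    for _ in range(n_iter):
        # E-step: posterior probability of disease for each test pattern.
        w = {}
        for p in (0, 1):
            for i in (0, 1):
                for j in (0, 1):
                    a = (se1 if i else 1 - se1) * (se2 if j else 1 - se2)
                    b = ((1 - sp1) if i else sp1) * ((1 - sp2) if j else sp2)
                    w[p, i, j] = prev[p] * a / (prev[p] * a + (1 - prev[p]) * b)
        # M-step: update parameters from expected diseased/non-diseased counts.
        dis = sum(counts[p][i][j] * w[p, i, j] for (p, i, j) in w)
        non = sum(counts[p][i][j] * (1 - w[p, i, j]) for (p, i, j) in w)
        se1 = sum(counts[p][1][j] * w[p, 1, j] for p in (0, 1) for j in (0, 1)) / dis
        se2 = sum(counts[p][i][1] * w[p, i, 1] for p in (0, 1) for i in (0, 1)) / dis
        sp1 = sum(counts[p][0][j] * (1 - w[p, 0, j]) for p in (0, 1) for j in (0, 1)) / non
        sp2 = sum(counts[p][i][0] * (1 - w[p, i, 0]) for p in (0, 1) for i in (0, 1)) / non
        for p in (0, 1):
            tot = sum(counts[p][i][j] for i in (0, 1) for j in (0, 1))
            prev[p] = sum(counts[p][i][j] * w[p, i, j] for i in (0, 1) for j in (0, 1)) / tot
    return se1, sp1, se2, sp2, prev

def expected_counts(prev, se1, sp1, se2, sp2, n=1000):
    """Expected cell counts for synthetic data with known true parameters."""
    c = {}
    for p in (0, 1):
        c[p] = {0: {0: 0.0, 1: 0.0}, 1: {0: 0.0, 1: 0.0}}
        for i in (0, 1):
            for j in (0, 1):
                a = (se1 if i else 1 - se1) * (se2 if j else 1 - se2)
                b = ((1 - sp1) if i else sp1) * ((1 - sp2) if j else sp2)
                c[p][i][j] = n * (prev[p] * a + (1 - prev[p]) * b)
    return c

# Hypothetical true values; EM should recover them from the cross-classified counts.
counts = expected_counts([0.3, 0.7], 0.96, 0.98, 0.92, 0.95)
se1, sp1, se2, sp2, prev = em_hui_walter(counts)
```

Two populations with different prevalences are what make the six parameters identifiable without a gold standard; with a single population the model is under-determined.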

Effects of Granular Silicate on Watermelon (Citrullus lanatus var. lanatus) Growth, Yield, and Characteristics of Soil Under Greenhouse

  • Kim, Young-Sang;Kang, Hyo-Jung;Kim, Tae-Il;Jeong, Taek-Gu;Han, Jong-Woo;Kim, Ik-Jei;Nam, Sang-Young;Kim, Ki-In
    • Korean Journal of Soil Science and Fertilizer / v.48 no.5 / pp.456-463 / 2015
  • The objective of this study was to determine the effects of granular silicate fertilizer on watermelon growth and yield and on soil characteristics in the greenhouse. Four levels of silicate fertilizer were applied: 0 (control), 600, 1,200, and 1,800 kg ha⁻¹. The silicate fertilizer was applied as a basal dressing before transplanting. Compost and basal fertilizers were applied at the standard recommendation rates based on soil testing: all of the recommended P₂O₅ and 50% of the N and K₂O were applied basally, and the remaining N and K₂O were split-applied twice by fertigation. The watermelon (Citrullus lanatus Thunb.) cultivar was 'Sam-Bok-KKuol', grafted onto the bottle gourd rootstock (Lagenaria leucantha Standl.) 'Bul-Ro-Jang-Sang', and transplanted on April 15. Soil chemical properties such as pH, available phosphate, exchangeable K and Mg, and available SiO₂ increased compared with the control, while EC was similar and soil organic matter decreased. Soil physical properties such as bulk density and porosity did not differ among treatments. Stem diameter and the fresh and dry weight of watermelon at harvest were greater for the silicate treatments than for the control, while the number of nodes was smaller. Merchantable watermelon yield increased by 3-5% compared with the control, and sugar content was 0.4 to 0.7 °Brix higher. These results suggest that silicate fertilizer application in the greenhouse can improve some soil chemical properties as well as watermelon stem diameter and dry weight, which contribute to watermelon quality and marketable production.

The Ways of Improving Technical Standards to Increase Effectiveness of Wetting Agent (침윤소화약제의 효과성 증대를 위한 기술기준 개선방안)

  • Jang, Kwan Su;Kim, Jung Min;Cho, Young Jae
    • Journal of the Society of Disaster Information / v.18 no.3 / pp.581-588 / 2022
  • Purpose: This study proposes ways of improving the existing technical standards in order to deal with deep-seated coal fires and to increase the effectiveness of wetting agents. Method: An infiltration experiment was conducted using eight tons of coal, three types of wetting agent, and plain fire water, and domestic and international technical standards and overseas experimental cases were analyzed. Result: Two findings were identified: fire water cannot infiltrate the coal because of its high surface tension, whereas the three wetting agents infiltrated the coal to depths of 5-25 cm. The domestic technical standards for wetting agents specify only a surface-tension measurement and use wood in the extinguishing capacity test, whereas abroad a deep-seated fire test using cotton and a Class B fire test using heptane are employed. In addition, the U.S. Bureau of Mines conducts capillary rise tests, sink tests, and contact-angle measurements to increase the effectiveness of wetting agents. Conclusion: Based on the standards and cases of the U.S. NFPA and the Bureau of Mines, this study suggests that the domestic technical standards should add a new test standard that measures infiltration directly.

Suggestion of Evaluation Elements Based on ODD for Automated Vehicles Safety Verification : Case of K-City (자율주행자동차 안전성 검증을 위한 ODD 기반 평가요소 제시 : K-City를 중심으로)

  • Kim, Inyoung;Ko, Hangeom;Yun, Jae-Woong;Lee, Yoseph;Yun, Ilsoo
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.21 no.5 / pp.197-217 / 2022
  • As automated vehicle (AV) accidents continue to occur, the importance of safety verification to ensure the safety and reliability of automated driving systems (ADS) is being emphasized. To ensure safety and reliability, it is necessary to define the operational design domain (ODD) of an ADS and to verify its safety while evaluating its ability to respond to situations outside that ODD. To this end, international bodies such as SAE, BSI, NHTSA, and ISO stipulate ODD standards. In Korea, however, there is no ODD standard, so ODD description and safety verification and evaluation of automated vehicles are not properly conducted. Therefore, this study analyzed overseas ODD standards, selected an ODD suitable for safety verification and evaluation, and presented evaluation elements for ADS safety verification. In particular, the evaluation elements were selected by analyzing the evaluation environment of the automated driving test city (K-City), which supports the development of ADS technology.

Predictive Clustering-based Collaborative Filtering Technique for Performance-Stability of Recommendation System (추천 시스템의 성능 안정성을 위한 예측적 군집화 기반 협업 필터링 기법)

  • Lee, O-Joun;You, Eun-Soon
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.119-142 / 2015
  • With the explosive growth in the volume of information, Internet users experience considerable difficulty in finding the information they need online. Against this backdrop, ever-greater importance is placed on recommender systems, which provide information catered to user preferences and tastes in an attempt to address information overload. To this end, a number of techniques have been proposed, including content-based filtering (CBF), demographic filtering (DF), and collaborative filtering (CF). Among them, CBF and DF require external information and thus cannot be applied to a variety of domains, whereas CF is widely used since it is relatively free from domain constraints. CF techniques are broadly classified into memory-based, model-based, and hybrid CF. Model-based CF addresses the drawbacks of CF by means of a Bayesian, clustering, or dependency-network model; it not only mitigates the sparsity and scalability issues but also boosts predictive performance. However, it involves expensive model-building and results in a tradeoff between performance and scalability. This tradeoff stems from reduced coverage, a form of the sparsity problem. In addition, expensive model-building may lead to performance instability, since changes in the domain environment cannot be immediately incorporated into the model; cumulative unreflected changes eventually undermine system performance. This study incorporates a Markov model of transition probabilities and the concept of fuzzy clustering into clustering-based CF (CBCF) to propose predictive clustering-based CF (PCCF), which addresses both reduced coverage and unstable performance. The method mitigates performance instability by tracking changes in user preferences, bridging the gap between the static model and dynamic users.
The issue of reduced coverage is likewise addressed by expanding coverage on the basis of transition probabilities and clustering probabilities. The proposed method consists of four processes. First, user preferences are normalized in preference clustering. Second, changes in user preferences are detected from review score entries during preference-transition detection. Third, user propensities are normalized using the patterns of change (propensities) in user preferences in propensity clustering. Lastly, a preference prediction model is developed to predict user preferences for items. The proposed method was validated by testing both performance instability and the scalability-performance tradeoff. The initial test compared the performance of recommender systems based on IBCF, CBCF, ICFEC, and PCCF in an environment where data sparsity had been minimized; the following test adjusted the optimal number of clusters in CBCF, ICFEC, and PCCF and compared the resulting changes in system performance. The results revealed that the proposed method yielded only an insignificant improvement in raw performance over the existing techniques, and no significant improvement in the standard deviation that indicates the degree of data fluctuation. Nevertheless, it produced marked improvement in terms of range, which indicates the level of performance fluctuation: the level of fluctuation before and after model generation improved by 51.31% in the initial test, and the fluctuation driven by changes in the number of clusters improved by 36.05% in the following test. This signifies that the proposed method, despite only a slight performance gain, clearly offers better performance stability than the existing techniques.
Future research will be directed toward enhancing the recommendation performance, which did not improve significantly over the existing techniques, by introducing a high-dimensional parameter-free clustering algorithm or a deep learning-based model.
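The preference-transition step described above rests on an estimated Markov transition matrix. The following is a minimal sketch of that single step, assuming user preferences have already been assigned to clusters; the function name and data layout are illustrative, not the paper's implementation.

```python
def transition_matrix(sequences, n_states):
    """Estimate a row-stochastic Markov transition matrix from observed
    state sequences (e.g., a user's preference cluster over time).
    P[i][j] approximates P(next state = j | current state = i)."""
    counts = [[0.0] * n_states for _ in range(n_states)]
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            counts[cur][nxt] += 1.0
    matrix = []
    for row in counts:
        total = sum(row)
        # A state never observed as a starting point falls back to a uniform row.
        matrix.append([c / total if total else 1.0 / n_states for c in row])
    return matrix

# Two hypothetical users drifting from cluster 0 toward cluster 1.
P = transition_matrix([[0, 0, 1], [0, 1, 1]], n_states=2)
```

Rows of the resulting matrix can then be used to spread predicted preferences over neighboring clusters, which is how transition probabilities can widen coverage.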

A Study of Six Sigma and Total Error Allowable in Chematology Laboratory (6 시그마와 총 오차 허용범위의 개발에 대한 연구)

  • Chang, Sang-Wu;Kim, Nam-Yong;Choi, Ho-Sung;Kim, Yong-Whan;Chu, Kyung-Bok;Jung, Hae-Jin;Park, Byong-Ok
    • Korean Journal of Clinical Laboratory Science / v.37 no.2 / pp.65-70 / 2005
  • The CLIA analytical tolerance limits are consistent with the performance goals of Six Sigma quality management. Six sigma analysis determines performance quality from bias and precision statistics and shows whether a method meets the criteria for six sigma performance. Performance standards calculate the allowable total error from several different criteria. Six sigma means six standard deviations from the target or mean value, corresponding to about 3.4 failures per million opportunities. The sigma quality level is an indicator of process centering and process variation relative to the allowable total error. The tolerance specification is replaced by a total error specification, a common form of quality specification for a laboratory test. The CLIA criteria for acceptable performance in proficiency testing are given in the form of an allowable total error, TEa, and a published list of TEa specifications exists for the regulated analytes. In terms of TEa, Six Sigma quality management sets a precision goal of TEa/6 and an accuracy goal of 1.5 (TEa/6). This concept is based on proficiency-testing specifications of target value ±3s, and TEa can also be derived from reference intervals, biological variation, or peer-group median surveys. We developed allowable total errors from peer-group survey results and the US CLIA '88 rules for 19 clinical chemistry analytes: TP, ALB, T.B, ALP, AST, ALT, CL, LD, K, Na, CRE, BUN, T.C, GLU, GGT, CA, phosphorus, UA, and TG.
Sigma levels versus TEa, assessed from the peer-group median CV of each analyte, were: TP (6.1σ/9.3%), ALB (6.9σ/11.3%), T.B (3.4σ/25.6%), ALP (6.8σ/31.5%), AST (4.5σ/16.8%), ALT (1.6σ/19.3%), CL (4.6σ/8.4%), LD (11.5σ/20.07%), K (2.5σ/0.39 mmol/L), Na (3.6σ/6.87 mmol/L), CRE (9.9σ/21.8%), BUN (4.3σ/13.3%), UA (5.9σ/11.5%), T.C (2.2σ/10.7%), GLU (4.8σ/10.2%), GGT (7.5σ/27.3%), CA (5.5σ/0.87 mmol/L), IP (8.5σ/13.17%), and TG (9.6σ/17.7%). Peer-group survey median CVs in the Korean external assessment that exceeded the CLIA criteria were CL (8.45%/5%), BUN (13.3%/9%), CRE (21.8%/15%), T.B (25.6%/20%), and Na (6.87 mmol/L/4 mmol/L); those below the criteria were TP (9.3%/10%), AST (16.8%/20%), ALT (19.3%/20%), K (0.39 mmol/L/0.5 mmol/L), UA (11.5%/17%), Ca (0.87 mg/dL/1 mg/dL), and TG (17.7%/25%). The derived TEa agreed with the CLIA TEa for 14 of 17 analytes (82.35%). We found that the sigma level rises as the allowable total error is increased, so the chosen allowable total error affects the sigma-metric evaluation of a process even when the process itself is unchanged.
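The per-analyte sigma levels above follow the standard sigma-metric formula, sigma = (TEa − |bias|) / CV, with all terms in the same units. A minimal sketch; the example numbers are illustrative, not taken from the study:

```python
def sigma_metric(tea, bias, cv):
    """Sigma level of an assay: (allowable total error - |bias|) / imprecision,
    all expressed in the same units (typically % at the decision level)."""
    return (tea - abs(bias)) / cv

# e.g., TEa = 10%, bias = 1.5%, CV = 1.4%  ->  about 6.1 sigma
level = sigma_metric(10.0, 1.5, 1.4)
```

A method at or above 6 sigma meets the six sigma performance goal; as the formula shows, loosening TEa raises the computed sigma level without any change in the assay itself, which is the caution the abstract ends on.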


The effect of bracket width on frictional force between bracket and arch wire during sliding tooth movement (치아의 활주 이동시 브라켓 폭이 브라켓과 호선 사이의 마찰력에 미치는 효과)

  • Choi, Won-Cheul;Kim, Tae-Woo;Park, Joo-Young;Kwak, Jae-Hyuk;Na, Hyo-Jeong;Park, Du-Nam
    • The Korean Journal of Orthodontics / v.34 no.3 s.104 / pp.253-260 / 2004
  • Frictional force between the orthodontic bracket and arch wire during sliding tooth movement is related to many factors, such as the size, shape, and material of both the bracket and the wire, the ligation method, and the angle formed between the bracket and the wire. Clear conclusions have been drawn for most of these factors, but studies on the effect of bracket width on frictional force remain conflicting. This study was designed to investigate the effect of bracket width on the frictional forces generated during clinically simulated tooth movement. Three widths of brackets (0.018×0.025″ standard) were used in tandem with 0.016×0.022″ stainless steel wire: narrow (2.40 mm), medium (3.00 mm), and wide (4.25 mm). Each bracket-arch wire combination was drawn for 4 minutes on a testing apparatus with a head speed of 0.5 mm/min and tested 7 times. To reproduce biological conditions, dentoalveolar models were fabricated with an indirect technique using a material with elastic properties similar to those of the periodontal ligament (PDL). In addition, to minimize the effect of ligation force, an elastomeric ligature was used with added resin attached to the bracket to compensate for the differences in bracket width. The results were as follows: 1. The maximum frictional force for each bracket-arch wire combination was: narrow (2.40 mm), 68.09±4.69 gmf; medium (3.00 mm), 72.75±4.98 gmf; wide (4.25 mm), 72.59±4.54 gmf. 2. Frictional force increased with greater displacement of the wire through the bracket slot. 3. The ANOVA post-hoc test showed that bracket width had no significant effect on frictional force under clinically simulated conditions (p>0.05).

The effects of microplastics on marine ecosystem and future research directions (미세플라스틱의 해양 생태계에 대한 영향과 향후 연구 방향)

  • Kim, Kanghee;Hwang, Junghye;Choi, Jin Soo;Heo, Yunwi;Park, June-Woo
    • Korean Journal of Environmental Biology / v.37 no.4 / pp.625-639 / 2019
  • Microplastics are among the substances threatening the marine ecosystem. Here, we summarize the status of research on the effects of microplastics on marine life and suggest future research directions. Microplastics are synthetic polymer particles smaller than 5 mm; once released into the environment they are not only physically small but also resistant to decomposition, so they accumulate extensively from land and coast to the open and deep sea. Microplastics can be ingested by and accumulate in marine life, and the leaching of chemical additives from plastics poses a further risk. Microplastics accumulated in the ocean affect the growth, development, behavior, reproduction, and mortality of marine organisms. However, microplastics vary widely in size, material, shape, and other properties, and toxicity tests conducted on a few of these variants cannot represent the hazards of all others. Risk must be evaluated by microplastic type, but because of this variety and the lack of uniformity in research results, it is difficult to compare and analyze previous studies; a standard test method for estimating the biological risk of different types of microplastics is therefore needed. In addition, most previous studies have used spheres for experimental convenience, which does not reflect the reality that fibers and fragments are the dominant forms of microplastics in the marine environment and in fish and shellfish. Studies have also examined additives and POPs (persistent organic pollutants) in plastics, but little is known about their toxic effects in vivo. Developing standard testing methods, testing microplastics in the form of fibers and fragments rather than spheres, and analyzing additives and POPs will allow the effects of microplastics on marine ecosystems and humans to be identified in more detail.

Accurate Quality Control Method of Bone Mineral Density Measurement -Focus on Dual Energy X-ray Absorptiometry- (골밀도 측정의 정확한 정도관리방법 -이중 에너지 방사선 흡수법을 중심으로-)

  • Kim, Ho-Sung;Dong, Kyung-Rae;Ryu, Young-Hwan
    • Journal of Radiological Science and Technology / v.32 no.4 / pp.361-370 / 2009
  • Quality management of bone mineral density (BMD) imaging is the responsibility of the radiologists who carry out the examinations, and inaccurate conclusions arising from a poor understanding of quality-management methodology can be a fatal error for the patient. The objective of this paper is therefore to explain proper quality management and enumerate methods for examiners and patients, thereby ensuring the reliability of BMD examinations. The accuracy and precision of BMD measurements must be kept at the highest level so that real biological changes can be detected even when the change in BMD is slight, and they must be preserved continuously to maintain image quality. For equipment quality control, the device is calibrated each morning; a manufacturer-recommended phantom is then measured ten to twenty-five times to establish a mean value, with a permissible range of ±1.5% set as the standard. The phantom should be measured daily, or at least three times a week, to confirm whether the measured BMD values have drifted, and the measurements should be evaluated and recorded following the rules of a Shewhart control chart. This management must be repeated whenever equipment is installed or moved. For examiner quality control, measurement precision is evaluated by testing the reproducibility of measurements in the absence of real biological change: patients are scanned either twice in a group of thirty or three times in a group of fifteen, and the patient must get off the table and be repositioned between repeat scans. At a 95% confidence level, the least significant change (LSC) is 2.77 times the precision error of the BMD measurements; a measured change larger than the LSC can be regarded as a genuine biological change. Quality control must be carried out and maintained from initial installation through any relocation of the equipment. Proper quality control by those performing BMD examinations extends the service life of the equipment, yields accurate results, and ensures reliable examinations.
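The 2.77 multiplier mentioned above comes from comparing two measurements at 95% confidence: the difference of two independent measurements has standard deviation √2·σ, and 1.96·√2 ≈ 2.77. A minimal sketch of that calculation:

```python
import math

def least_significant_change(precision_error, z=1.96):
    """LSC: the smallest BMD change distinguishable from measurement noise at
    the given confidence level. The difference of two measurements has
    variance 2*sigma^2, so LSC = z * sqrt(2) * precision_error
    (about 2.77 x the precision error at 95% confidence)."""
    return z * math.sqrt(2.0) * precision_error

# A precision error of 1.0% gives an LSC of about 2.77%.
lsc = least_significant_change(1.0)
```

The precision error fed into this formula is whatever the facility's reproducibility study yields (e.g., an RMS %CV from repeat scans); only changes exceeding the resulting LSC should be reported as real.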


A study on the prediction of korean NPL market return (한국 NPL시장 수익률 예측에 관한 연구)

  • Lee, Hyeon Su;Jeong, Seung Hwan;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.123-139 / 2019
  • The Korean NPL (non-performing loan) market was formed by the government and foreign capital shortly after the 1997 IMF crisis. The market's history is short, however, and bad debt began to increase again after the 2009 global financial crisis due to the recession in the real economy. NPLs have become a major investment in recent years as domestic capital began to enter the NPL market in earnest. Although the domestic NPL market has received considerable attention due to its recent overheating, research on it has been scarce because the history of capital-market investment in domestic NPLs is short. In addition, declining profitability and price fluctuations driven by the real estate market call for decision-making based on more scientific and systematic analysis. In this study, we propose a prediction model that determines whether a benchmark yield will be achieved, using NPL market data in line with market demand. To build the model, we used Korean NPL data covering about four years, from December 2013 to December 2017, comprising 2,291 items. From 11 candidate variables describing the characteristics of the underlying real estate, only those related to the dependent variable were selected, using one-to-one t-tests, stepwise logistic regression, and decision-tree analysis. Seven independent variables were chosen: purchase year, SPC (special purpose company), municipality, appraisal value, purchase cost, OPB (outstanding principal balance), and HP (holding period). The dependent variable is a binary variable indicating whether the benchmark rate of return is reached.
This is because a model predicting a binary variable is more accurate than one predicting a continuous variable, and this accuracy is directly related to the model's practical effectiveness: for a special purpose company, the main concern is whether or not to purchase a property, so knowing whether a certain level of return will be achieved is enough to make the decision. To ascertain whether 12%, the standard rate of return used in the industry, is a meaningful reference value, we constructed and compared predictive models with the dependent variable computed at adjusted thresholds; the model built with the 12% benchmark achieved the best average hit ratio, 64.60%. To propose an optimal prediction model based on the chosen dependent variable and the 7 independent variables, we built and compared models using five methodologies: discriminant analysis, logistic regression, decision tree, artificial neural network, and a genetic-algorithm linear model. Ten sets of training and testing data were extracted using 10-fold cross-validation; after building each model, the hit ratios across the sets were averaged and compared. The average hit ratios of the models built with discriminant analysis, logistic regression, decision tree, artificial neural network, and the genetic-algorithm linear model were 64.40%, 65.12%, 63.54%, 67.40%, and 60.51%, respectively, confirming that the artificial neural network model is the best.
This study thus shows that the 7 independent variables and an artificial neural network prediction model can be used effectively in the NPL market. The proposed model predicts in advance whether a new item will achieve the 12% return, which will help special purpose companies make investment decisions, and we anticipate that the NPL market will become more liquid as transactions proceed at appropriate prices.
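The 10-fold evaluation protocol described above — split the data into ten folds, train on nine, score the held-out fold, and average the hit ratios — can be sketched generically. The toy majority-class model below stands in for the paper's five methods; nothing here reproduces the study's data or results.

```python
def k_fold_hit_ratio(X, y, fit, predict, k=10):
    """Average hit ratio (accuracy) over k contiguous folds: hold each fold
    out as the test set, train on the rest, and average the fold accuracies."""
    n = len(X)
    fold = max(1, n // k)
    scores = []
    for i in range(k):
        lo = i * fold
        hi = (i + 1) * fold if i < k - 1 else n  # last fold takes the remainder
        X_te, y_te = X[lo:hi], y[lo:hi]
        if not X_te:
            continue
        model = fit(X[:lo] + X[hi:], y[:lo] + y[hi:])
        hits = sum(predict(model, x) == t for x, t in zip(X_te, y_te))
        scores.append(hits / len(X_te))
    return sum(scores) / len(scores)

# Toy stand-in model: always predict the majority class seen in training.
fit_majority = lambda X, y: max(set(y), key=y.count)
predict_majority = lambda model, x: model
ratio = k_fold_hit_ratio(list(range(20)), [1] * 20, fit_majority, predict_majority)
```

In the study's setting, `fit`/`predict` would wrap each of the five candidate methods in turn, and the averaged hit ratios would be compared exactly as in the abstract.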