• Title/Summary/Keyword: Accuracy test


Business Application of Convolutional Neural Networks for Apparel Classification Using Runway Image (합성곱 신경망의 비지니스 응용: 런웨이 이미지를 사용한 의류 분류를 중심으로)

  • Seo, Yian;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.1-19
    • /
    • 2018
  • A large amount of data is now available for the research and business sectors to extract knowledge from. This data can take the form of unstructured data such as audio, text, and images, and can be analyzed by deep learning methods. Deep learning is now widely used for various estimation, classification, and prediction problems. In particular, the fashion business adopts deep learning techniques for apparel recognition, apparel search and retrieval engines, and automatic product recommendation. The core model of these applications is image classification using Convolutional Neural Networks (CNN). A CNN is made up of neurons that learn parameters such as weights as inputs pass through the network to the outputs. A CNN has a layered structure that is well suited to image classification, comprising convolutional layers for generating feature maps, pooling layers for reducing the dimensionality of the feature maps, and fully-connected layers for classifying the extracted features. However, most classification models have been trained on online product images, which are taken under controlled conditions, such as images of the apparel itself or of professional models wearing it. Such images may not be effective for training the classification model when one wants to classify street fashion or walking images, which are taken in uncontrolled situations and involve people's movement and unexpected poses. Therefore, we propose training the model with a runway apparel image dataset that captures mobility. This allows the classification model to be trained on far more variable data and enhances its adaptation to diverse query images. To achieve both convergence and generalization of the model, we apply Transfer Learning to our training network. As Transfer Learning in CNN consists of pre-training and fine-tuning stages, we divide training into two steps. First, we pre-train our architecture on a large-scale dataset, the ImageNet dataset, which consists of 1.2 million images in 1,000 categories including animals, plants, activities, materials, instrumentation, scenes, and foods. We use GoogLeNet as our main architecture, as it has achieved high accuracy with efficiency in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Second, we fine-tune the network with our own runway image dataset. For the runway image dataset, we could not find any previously published public dataset, so we collected the data from Google Image Search, obtaining 2,426 images of 32 major fashion brands including Anna Molinari, Balenciaga, Balmain, Brioni, Burberry, Celine, Chanel, Chloe, Christian Dior, Cividini, Dolce and Gabbana, Emilio Pucci, Ermenegildo, Fendi, Giuliana Teso, Gucci, Issey Miyake, Kenzo, Leonard, Louis Vuitton, Marc Jacobs, Marni, Max Mara, Missoni, Moschino, Ralph Lauren, Roberto Cavalli, Sonia Rykiel, Stella McCartney, Valentino, Versace, and Yves Saint Laurent. We perform 10-fold experiments to account for the random generation of training data, and our proposed model achieves an accuracy of 67.2% on the final test. Our research offers several advantages over previous related studies: to the best of our knowledge, no previous study has trained a network for apparel image classification on a runway image dataset. We suggest training the model with images capturing all possible postures, denoted as mobility, by using our own runway apparel image dataset.
Moreover, by applying Transfer Learning and using the checkpoint and parameters provided by TensorFlow Slim, we could reduce the time spent training the classification model to about 6 minutes per experiment. This model can be used in many business applications where the query image may be a runway image, a product image, or a street fashion image. Specifically, a runway query image can be used in a mobile application service during fashion week to facilitate brand search, a street style query image can be classified and labeled with the brand or style during fashion editorial work, and a website query image can be processed by a multi-complex e-commerce service that provides item information or recommends similar items.
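
The pre-training and fine-tuning workflow described in this abstract can be sketched briefly. The paper fine-tunes a GoogLeNet checkpoint from TensorFlow Slim; the snippet below is only a minimal stand-in using tf.keras with InceptionV3, and the dataset directory, image size, and training settings are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch of pre-train + fine-tune transfer learning (InceptionV3 stands in for the
# paper's GoogLeNet/TF-Slim checkpoint; paths and hyperparameters are assumptions).
import tensorflow as tf

NUM_BRANDS = 32  # the runway dataset covers 32 fashion brands

# Pre-training stage: weights learned on ImageNet, classification head removed.
base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False, pooling="avg")
base.trainable = False  # freeze the convolutional feature extractor for the first pass

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_BRANDS, activation="softmax"),  # new fully-connected classifier
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Fine-tuning stage on the runway images (hypothetical layout: one folder per brand).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "runway_images/", image_size=(299, 299), batch_size=32)
model.fit(train_ds, epochs=5)
```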

Effect of the Changing the Lower Limits of Normal and the Interpretative Strategies for Lung Function Tests (폐기능검사 해석에 정상하한치 변화와 새 해석흐름도가 미치는 영향)

  • Ra, Seung Won;Oh, Ji Seon;Hong, Sang-Bum;Shim, Tae Sun;Lim, Chae Man;Koh, Youn Suck;Lee, Sang Do;Kim, Woo Sung;Kim, Dong-Soon;Kim, Won Dong;Oh, Yeon-Mok
    • Tuberculosis and Respiratory Diseases
    • /
    • v.61 no.2
    • /
    • pp.129-136
    • /
    • 2006
  • Background: To interpret lung function tests, it is necessary to determine the lower limits of normal (LLN) and to reach a consensus on the interpretative algorithm. A fixed value of 0.7 as the LLN for FEV1/FVC was suggested by the COPD International Guideline (GOLD) for defining obstructive disease, and a consensus on a new interpretative algorithm was achieved by ATS/ERS in 2005. We evaluated the accuracy of the fixed 0.7 LLN for FEV1/FVC for diagnosing obstructive diseases, and we also determined the effect of the new algorithm on diagnosing ventilatory defects. Methods: We obtained the age, gender, height, weight, FEV1, FVC, and FEV1/FVC of 7,362 subjects who underwent spirometry in 2005 at the Asan Medical Center, Korea. For diagnosing obstructive diseases, the accuracy of the fixed 0.7 LLN for FEV1/FVC was evaluated in reference to the 5th-percentile LLN. By applying the new algorithm, we determined how many more subjects should have lung volumes testing performed. Evaluation of 1,611 patients who had lung volumes testing performed as well as spirometry during the period showed how many more subjects were diagnosed with obstructive diseases according to the new algorithm. Results: 1) The sensitivity of the fixed 0.7 LLN for FEV1/FVC for diagnosing obstructive diseases increased with age, but the specificity decreased with age; the positive predictive value decreased, but the negative predictive value increased. 2) By applying the new algorithm, 34.5% (2,540/7,362) more subjects should have lung volumes testing performed. 3) By applying the new algorithm, 13% (205/1,611) more subjects were diagnosed with obstructive diseases; these subjects corresponded to 30% (205/681) of the subjects who had been diagnosed with restrictive diseases by the old interpretative algorithm. Conclusion: The sensitivity and specificity of the fixed 0.7 LLN for FEV1/FVC for diagnosing obstructive diseases change with age. By applying the new interpretative algorithm, more subjects should have lung volumes testing performed, and there is a higher probability of being diagnosed with obstructive diseases.
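
A small sketch of the comparison at the heart of this study: classifying airflow obstruction with the fixed GOLD cut-off of 0.70 for FEV1/FVC versus an age-dependent 5th-percentile LLN. The LLN equation below is a placeholder, not the reference equation used in the paper.

```python
# Hedged sketch: fixed 0.70 cut-off vs. an assumed, placeholder 5th-percentile LLN.
def lln_fev1_fvc(age_years: float, sex: str) -> float:
    """Placeholder 5th-percentile LLN for FEV1/FVC; the study uses reference-equation values."""
    base = 0.88 if sex == "F" else 0.87
    return base - 0.003 * age_years

def classify_obstruction(fev1_fvc: float, age_years: float, sex: str) -> dict:
    fixed = fev1_fvc < 0.70                        # GOLD fixed-ratio criterion
    lln = fev1_fvc < lln_fev1_fvc(age_years, sex)  # 5th-percentile LLN criterion
    return {"fixed_ratio": fixed, "lln": lln, "discordant": fixed != lln}

# An older subject just below 0.70: the fixed ratio calls obstruction, the LLN does not,
# illustrating the age-related loss of specificity noted in the results.
print(classify_obstruction(fev1_fvc=0.67, age_years=75, sex="M"))
```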

Development of an Offline Based Internal Organ Motion Verification System during Treatment Using Sequential Cine EPID Images (연속촬영 전자조사 문 영상을 이용한 오프라인 기반 치료 중 내부 장기 움직임 확인 시스템의 개발)

  • Ju, Sang-Gyu;Hong, Chae-Seon;Huh, Woong;Kim, Min-Kyu;Han, Young-Yih;Shin, Eun-Hyuk;Shin, Jung-Suk;Kim, Jing-Sung;Park, Hee-Chul;Ahn, Sung-Hwan;Lim, Do-Hoon;Choi, Doo-Ho
    • Progress in Medical Physics
    • /
    • v.23 no.2
    • /
    • pp.91-98
    • /
    • 2012
  • Verification of internal organ motion during treatment and its feedback is essential for accurate dose delivery to a moving target. We developed an offline internal organ motion verification system (IMVS) using cine EPID images and evaluated its accuracy and usability in a phantom study. For verification of organ motion using live cine EPID images, a pattern matching algorithm using an internal surrogate that is easily distinguishable and represents organ motion in the treatment field, such as the diaphragm, was employed in the self-developed analysis software. For the system performance test, we built a linear motion phantom consisting of a human-body-shaped phantom with a fake tumor in the lung, a linear motion cart, and control software. The phantom was operated with a motion of 2 cm at 4 sec per cycle, and cine EPID images were obtained at rates of 3.3 and 6.6 frames per sec (2 MU/frame) with 1,024×768 pixels on a linear accelerator (10 MV X-rays). Organ motion of the target was tracked using the self-developed analysis software. Results were compared with the planned data of the motion phantom and with data from a video-image-based tracking system (RPM, Varian, USA) using an external surrogate in order to evaluate accuracy. For quantitative analysis, we analyzed the correlation between the two data sets in terms of the average cycle (peak to peak), amplitude, and pattern (RMS, root mean square) of motion. The average cycle of motion from the IMVS and the RPM system was 3.98±0.11 (IMVS 3.3 fps), 4.005±0.001 (IMVS 6.6 fps), and 3.95±0.02 (RPM) sec, respectively, showing good agreement with the real value (4 sec/cycle). The average amplitude of motion tracked by our system was 1.85±0.02 cm (3.3 fps) and 1.94±0.02 cm (6.6 fps), differing slightly from the actual value (2 cm) by 0.15 cm (7.5% error) and 0.06 cm (3% error), respectively, due to the time resolution of image acquisition. In the analysis of the motion pattern, the RMS from the cine EPID images at 3.3 fps (0.1044) was slightly larger than that at 6.6 fps (0.0480). The organ motion verification system using sequential cine EPID images with an internal surrogate represented the motion well, within 3% error, in this preliminary phantom study. The system can be implemented for clinical purposes, which include organ motion verification during treatment, comparison with 4D treatment planning data, and feedback for accurate dose delivery to the moving target.
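
The quantitative comparison described above (average cycle, amplitude, and RMS agreement between the tracked and programmed motion) can be sketched as follows. The sinusoidal reference trace, noise level, and frame rate are assumptions for illustration, not the study's measured data.

```python
# Hedged sketch: cycle, amplitude, and RMS agreement of a tracked motion trace vs. reference.
import numpy as np

def motion_metrics(t, tracked, reference):
    """Mean cycle from upward zero crossings, peak-to-peak amplitude, and RMS error."""
    centred = tracked - tracked.mean()
    crossings = np.where((centred[:-1] < 0) & (centred[1:] >= 0))[0]
    cycle = float(np.mean(np.diff(t[crossings]))) if len(crossings) > 1 else float("nan")
    amplitude = float(tracked.max() - tracked.min())
    rms = float(np.sqrt(np.mean((tracked - reference) ** 2)))
    return cycle, amplitude, rms

fps = 6.6                                     # cine EPID acquisition rate (frames/s), assumed
t = np.arange(0, 20, 1 / fps)                 # 20 s of acquisition
reference = 1.0 * np.sin(2 * np.pi * t / 4)   # 2 cm peak-to-peak, 4 s per cycle, as in the phantom
tracked = reference + np.random.normal(0, 0.02, t.size)  # simulated tracking noise

print(motion_metrics(t, tracked, reference))
```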

Effects of Anti-thyroglobulin Antibody on the Measurement of Thyroglobulin: Differences Between Immunoradiometric Assay Kits Available (면역방사계수법을 이용한 Thyroglobulin 측정시 항 Thyroglobulin 항체의 존재가 미치는 영향: Thyroglobulin 측정 키트에 따른 차이)

  • Ahn, Byeong-Cheol;Seo, Ji-Hyeong;Bae, Jin-Ho;Jeong, Shin-Young;Yoo, Jeong-Soo;Jung, Jin-Hyang;Park, Ho-Yong;Kim, Jung-Guk;Ha, Sung-Woo;Sohn, Jin-Ho;Lee, In-Kyu;Lee, Jae-Tae;Kim, Bo-Wan
    • The Korean Journal of Nuclear Medicine
    • /
    • v.39 no.4
    • /
    • pp.252-256
    • /
    • 2005
  • Purpose: Thyroglobulin (Tg) is a valuable and sensitive marker for the diagnosis and follow-up of several thyroid disorders, especially in the follow-up of patients with differentiated thyroid cancer (DTC). Often, clinical decisions rely entirely on the serum Tg concentration. However, the Tg assay is one of the most challenging laboratory measurements to perform accurately, owing to anti-thyroglobulin antibody (Anti-Tg). In this study, we compared the degree of Anti-Tg effects on the measurement of Tg between available Tg measurement kits. Materials and Methods: Tg levels of a standard Tg solution were measured with two different commercially available kits (A and B) using an immunoradiometric assay technique, either in the absence or in the presence of three different concentrations of Anti-Tg. Measurement of Tg in patient serum was also performed with the same kits. Patient serum samples were prepared as mixtures of a serum containing a high Tg level and a serum containing a high Anti-Tg concentration. Results: In the measurements of the standard Tg solution, the presence of Anti-Tg resulted in falsely lower Tg levels with both the A and B kits. The degree of Tg underestimation by the A kit was more prominent than that by the B kit; the underestimation by the B kit was trivial and therefore clinically insignificant, although statistically significant. Addition of Anti-Tg to patient serum resulted in falsely lower Tg levels with the A kit only. Conclusion: Tg levels can be underestimated in the presence of Anti-Tg, and the effect of Anti-Tg on Tg measurement varies according to the assay kit used. Therefore, an accuracy test should be performed for each individual Tg assay kit.
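
A minimal sketch of how the kit comparison above can be quantified: percent recovery of the Tg signal in the presence of anti-Tg antibody relative to the antibody-free measurement. All numbers below are hypothetical, not the study's results.

```python
# Hedged sketch with made-up numbers: percent recovery of measured Tg when anti-Tg is added.
def percent_recovery(measured_with_ab: float, measured_without_ab: float) -> float:
    """Recovery (%) of the Tg signal in the presence of anti-Tg antibody."""
    return 100.0 * measured_with_ab / measured_without_ab

# Hypothetical measurements (ng/mL) of one standard Tg solution at rising anti-Tg levels.
kits = {
    "A": {"no_ab": 50.0, "low_ab": 38.0, "mid_ab": 27.0, "high_ab": 15.0},
    "B": {"no_ab": 50.0, "low_ab": 48.5, "mid_ab": 47.0, "high_ab": 45.0},
}
for kit, vals in kits.items():
    recoveries = {level: round(percent_recovery(v, vals["no_ab"]), 1)
                  for level, v in vals.items() if level != "no_ab"}
    print(kit, recoveries)  # kit A under-recovers sharply; kit B only slightly
```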

Development of Predictive Models for Rights Issues Using Financial Analysis Indices and Decision Tree Technique (경영분석지표와 의사결정나무기법을 이용한 유상증자 예측모형 개발)

  • Kim, Myeong-Kyun;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.59-77
    • /
    • 2012
  • This study focuses on predicting which firms will increase capital by issuing new stocks in the near future. Many stakeholders, including banks, credit rating agencies, and investors, perform a variety of analyses of firms' growth, profitability, stability, activity, productivity, etc., and regularly report the firms' financial analysis indices. In this paper, we develop predictive models for rights issues using these financial analysis indices and data mining techniques. This study approaches building the predictive models from two perspectives. The first is the analysis period. We divide the analysis period into before and after the IMF financial crisis and examine whether there is a difference between the two periods. The second is the prediction horizon. In order to predict when firms will increase capital by issuing new stocks, the prediction horizon is categorized as one year, two years, and three years later. In total, six prediction models are therefore developed and analyzed. In this paper, we employ the decision tree technique to build the prediction models for rights issues. The decision tree is the most widely used prediction method; it builds trees that label or categorize cases into a set of known classes. In contrast to neural networks, logistic regression, and SVM, decision tree techniques are well suited to high-dimensional applications and have strong explanation capabilities. There are well-known decision tree induction algorithms such as CHAID, CART, QUEST, and C5.0. Among them, we use the C5.0 algorithm, the most recently developed, which yields better performance than the other algorithms. We obtained data on rights issues and financial analysis from TS2000 of the Korea Listed Companies Association. A record of financial analysis data consists of 89 variables, which include 9 growth indices, 30 profitability indices, 23 stability indices, 6 activity indices, and 8 productivity indices. For model building and testing, we used 10,925 financial analysis records of 658 listed firms in total. PASW Modeler 13 was used to build C5.0 decision trees for the six prediction models. A total of 84 variables among the financial analysis data were selected as the input variables of each model, and the rights issue status (issued or not issued) was defined as the output variable. To develop the prediction models using the C5.0 node (Node Options: Output type = Rule set, Use boosting = false, Cross-validate = false, Mode = Simple, Favor = Generality), we used 60% of the data for model building and 40% of the data for model testing. The results of the experimental analysis show that the prediction accuracies for data after the IMF financial crisis (68.78% to 71.41%) are about 10 percentage points higher than those for data before the IMF financial crisis (59.04% to 60.43%). These results indicate that since the IMF financial crisis, the reliability of financial analysis indices has increased and firms' intentions regarding rights issues have become more evident. The experimental results also show that stability-related indices have a major impact on conducting a rights issue in the case of short-term prediction. On the other hand, the long-term prediction of conducting a rights issue is affected by financial analysis indices on profitability, stability, activity, and productivity. All the prediction models include the industry code as one of the significant variables, which means that companies in different types of industries show different patterns of rights issues.
We conclude that it is desirable for stakeholders to take into account stability-related indices for short-term prediction and a wider range of financial analysis indices for long-term prediction. The current study has several limitations. First, we need to compare differences in accuracy using other data mining techniques such as neural networks, logistic regression, and SVM. Second, we need to develop and evaluate new prediction models that include variables which research on capital structure theory has identified as relevant to rights issues.
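
A minimal sketch of the modelling step described above, assuming a table of financial analysis indices with a binary rights-issue label. The study builds C5.0 rule sets in PASW Modeler 13; C5.0 is not available in scikit-learn, so DecisionTreeClassifier stands in here, and the file name, column names, and tree settings are assumptions.

```python
# Hedged sketch: a generic decision tree on financial indices (C5.0 itself is not in scikit-learn).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("financial_indices.csv")            # assumed file: 84 indices + rights_issue label
X, y = df.drop(columns=["rights_issue"]), df["rights_issue"]

# 60% of the data for model building and 40% for testing, mirroring the study's split.
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.6, random_state=0)

tree = DecisionTreeClassifier(max_depth=6, min_samples_leaf=20, random_state=0)
tree.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, tree.predict(X_test)))
```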

A Study on the Utilization of a Two Furrow Combine (2조형(條型) Combine의 이용(利用)에 관(關)한 연구(硏究))

  • Lee, Sang Woo;Kim, Soung Rai
    • Korean Journal of Agricultural Science
    • /
    • v.3 no.1
    • /
    • pp.95-104
    • /
    • 1976
  • This study was conducted to test harvesting operations on two rice varieties, Milyang #15 and Tong-il, with an imported Japanese two-furrow combine, in order to determine its operational accuracy, its adaptability, and the feasibility of supplying this machine to rural areas in Korea. The results obtained in this study are summarized as follows. 1. The harvesting test of Milyang #15 was carried out 5 times from the optimum harvesting time, and the operation was good regardless of maturity. The field grain loss ratio and the rate of unthreshed paddy were both about 1 percent. 2. The field grain loss of harvested Tong-il increased from 5.13% to 10.34% with maturity, as shown in Fig. 1. Considering this, the combine mechanism needs mechanical improvement for harvesting the Tong-il rice variety. 3. The rate of unthreshed paddy for the short-stemmed Tong-il variety averaged 1.6 percent, because the sample combine used in this study was developed on the basis of the long-stemmed varieties grown in Japan; owing to the uneven stems of Tong-il rice, some ears could not reach the teeth of the threshing drum. 4. The cracking rates of brown rice, which depend mostly on the revolution speed of the threshing drum (240-350 rpm), were all below 1 percent when harvesting Tong-il and Milyang #15, and there was no significant difference between the two varieties. 5. Since the ears of the Tong-il variety are covered by its leaves, a lot of trash was produced, especially when threshed as raw material, and the cleaning and trash-out mechanisms were very often clogged with this trash; these two mechanisms therefore need to be improved. 6. The sample combine, whose track pressure was 0.19 kg/cm², could drive on soft ground with sinkage of up to 25 cm, as shown in Fig. 3. However, considering reaping height adjustment, about 5 cm of sinkage allows the combine to be driven on ground with irregular sinkage without readjusting the reaping height. 7. The harvesting expense per ha of the sample combine, whose annual coverage area is 4.7 ha under the conditions of 40 workable days per year, 60% of days suitable for harvesting, 56% field efficiency, 0.273 m/sec working speed, and 8 workable hours per day, is reasonable for spreading this combine to rural areas in Korea compared with conventional harvesting expenses, provided mechanical improvements are made so that it can harvest Tong-il rice. 8. In order to harvest Tong-il rice, the two-furrow combine needs several mechanical improvements: the divider should be controllable so as not to touch the ears of paddy, the space between the feeding chain and the threshing drum should be reduced, the trash treatment apparatus must be improved, the fore-and-aft adjustment interval should be enlarged, and the track width must be enlarged so that the combine can drive on soft ground.


Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.63-83
    • /
    • 2019
  • Investors prefer to look for trading points based on the shapes shown in a chart rather than on complex analysis such as corporate intrinsic value analysis or technical auxiliary index analysis. However, pattern analysis is difficult to computerize, and existing tools fall short of users' needs. In recent years, there have been many studies of stock price patterns using various machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, the development of IT technology has made it easier to analyze huge amounts of chart data to find patterns that can predict stock prices. Although the short-term forecasting power of prices has improved in terms of performance, long-term forecasting power remains limited, so these methods are used in short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that were not recognized by past technology, but such approaches can be vulnerable in practice because whether the patterns found are suitable for trading is a separate matter. When these studies find a meaningful pattern, they locate a point that matches the pattern and then measure performance after n days, assuming a purchase at that point in time. Since this approach calculates virtual revenues, there can be large disparities with reality. Existing research tries to find patterns with stock price prediction power; this study instead proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple because they can be distinguished by five turning points. Despite reports that some patterns have price predictability, there have been no performance reports from actual market use. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern recognition accuracy. In this study, 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented by the system. Only one pattern with a high success rate per group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. This reflects a realistic situation because performance is measured assuming that both the buy and the sell have been executed. We tested three ways to calculate the turning points. The first method, the minimum change rate zig-zag method, removes price movements below a certain percentage and calculates the vertices. In the second method, the high-low line zig-zag, the high price that meets the n-day high price line is taken as a peak, and the low price that meets the n-day low price line is taken as a valley. In the third method, the swing wave method, a central high price that is higher than the n high prices to its left and right is taken as a peak, and a central low price that is lower than the n low prices to its left and right is taken as a valley (a sketch of this method follows the abstract). The swing wave method was superior to the other methods in the test results. We interpret this as meaning that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished.
Because the number of pattern candidates was too large to search exhaustively in this simulation, genetic algorithms (GA) were the most suitable solution for finding patterns with high success rates. We also performed the simulation using the Walk-forward Analysis (WFA) method, which separates the test section from the application section, so we were able to respond appropriately to market changes. In this study, we optimize at the portfolio level because there is a risk of over-optimization if we optimize the variables for each individual stock. We therefore set the number of constituent stocks to 20 to increase the effect of diversified investment while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. In the results, the small-cap stock portfolio was the most successful and the high-volatility stock portfolio was the second best. This shows that some price volatility is needed for patterns to form, but higher volatility is not necessarily better.
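
A minimal sketch of the swing wave turning-point method referred to above: a bar is a peak if its high exceeds the highs of the n bars on each side, and a valley if its low is below the lows of the n bars on each side; five consecutive turning points then form one M/W-type pattern candidate. The price series and n are illustrative assumptions, not the authors' data.

```python
# Hedged sketch of the swing wave turning-point rule with toy prices.
from typing import List, Tuple

def swing_turning_points(highs: List[float], lows: List[float], n: int = 2) -> List[Tuple[int, str]]:
    """Return (index, 'peak'/'valley') for bars that dominate the n bars on each side."""
    points = []
    for i in range(n, len(highs) - n):
        if highs[i] > max(highs[i - n:i]) and highs[i] > max(highs[i + 1:i + n + 1]):
            points.append((i, "peak"))
        elif lows[i] < min(lows[i - n:i]) and lows[i] < min(lows[i + 1:i + n + 1]):
            points.append((i, "valley"))
    return points

# Toy daily highs/lows; prints alternating peaks and valleys that could seed an M/W pattern.
highs = [10, 11, 13, 12, 11, 12, 15, 14, 13, 14, 16, 15]
lows  = [ 9, 10, 12, 11, 10, 11, 14, 13, 12, 13, 15, 14]
print(swing_turning_points(highs, lows, n=2))
```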

A Study on the Establishment of Acceptable Range for Internal Quality Control of Radioimmunoassay (핵의학 검체검사 내부정도관리 허용범위 설정에 관한 고찰)

  • Young Ji, LEE;So Young, LEE;Sun Ho, LEE
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.26 no.2
    • /
    • pp.43-47
    • /
    • 2022
  • Purpose: Radioimmunoassay laboratories implement quality control by systematizing the internal quality control system to assure the quality of test results. This study aims to contribute to the quality assurance of radioimmunoassay results and to support systematic quality control by measuring the average CV of internal and external quality control across multiple institutions, for reference when a laboratory sets its own acceptable range. Materials and Methods: We measured the average CV of internal quality control and the rate of CV values exceeding 10.0% for a total of 42 items from October 2020 to December 2021. According to the CV results, we classified and compared an upper group (5.0% or less), a middle group (5.0-10.0%), and a lower group (10.0% or more). The rate of CV values of 10.0% or more was compared by classifying the items tested at five or more institutions into tumor markers, thyroid hormones, and other hormones. The average CV was also measured from the overall average and standard deviation of the external quality control results for 28 items from the first quarter to the fourth quarter of 2021, and from the overall average and standard deviation of the inter-institutional proficiency results for 13 items in the first and second halves of 2021. The average CV of internal and external quality control was compared item by item, so that items with good quality control performance and items requiring attention could be identified. Results: Measuring the average precision of internal quality control for 42 items at six institutions, the upper group (5.0% or less) comprised ferritin, HGH, SHBG, and 25-OH-VitD, while the lower group (10.0% or more) comprised cortisol, ATA, AMA, renin, and estradiol. Comparing the rate of CV values above 10.0% for tumor markers, CA-125 (6.7%) and CA-19-9 (9.8%) performed well, while SCC-Ag (24.3%) and CA-15-3 (26.7%) were among the items requiring attention. For thyroid hormone tests, free T4 (2.1%) and T3 (9.3%) showed excellent performance, while AMA (39.6%) and ATA (51.6%) required attention. For other hormones, IGF-1 (8.8%), FSH (9.1%), and prolactin (9.2%) showed excellent performance, whereas estradiol (37.3%), testosterone (37.7%), and cortisol (44.4%) required attention. Measuring the average CV of all institutions participating in external quality control for 28 items, HGH and SCC-Ag were in the top group (≤10.0%), whereas ATA, estradiol, TSI, and thyroglobulin were in the bottom group (≥30.0%). Conclusion: Evaluating 42 items at six institutions, the average CV was 3.7-12.2%, a 3.3-fold difference between the upper and lower groups. Cortisol, ATA, AMA, renin, and estradiol tests with high CVs will require continuous improvement activities to improve precision. In addition, we measured and compared the overall average CV of internal quality control, external quality control, and inter-institutional proficiency for 41 items, excluding HBs-Ab, across the six participating institutions. As a result, ATA, AMA, renin, and estradiol belonged to the same low-performing subgroup, so they require attention in quality control and a higher acceptable range should be considered for them.
It is recommended that each laboratory set and manage its own acceptable range for the internal quality control CV in consideration of its particular circumstances, since reagents and instruments differ and results vary depending on the tester's proficiency and the quality control materials. If systematic quality control is implemented based on the acceptable range thus set, the accuracy and reliability of radioimmunoassay results can be improved.
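
A minimal sketch of the CV calculation and grouping used throughout this study: CV = SD / mean × 100, assigned to the upper, middle, or lower group. The QC measurement values below are hypothetical.

```python
# Hedged sketch: CV of repeated QC measurements and its group assignment (hypothetical data).
from statistics import mean, stdev

def cv_percent(values):
    """Coefficient of variation of QC results, in percent."""
    return 100.0 * stdev(values) / mean(values)

def cv_group(cv):
    if cv <= 5.0:
        return "upper (<=5.0%)"
    if cv < 10.0:
        return "middle (5.0-10.0%)"
    return "lower (>=10.0%)"

qc_runs = {"Ferritin": [98, 101, 99, 102, 100], "Estradiol": [45, 52, 40, 58, 47]}
for item, values in qc_runs.items():
    cv = cv_percent(values)
    print(f"{item}: CV = {cv:.1f}% -> {cv_group(cv)}")
```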

A Study of Six Sigma and Total Error Allowable in Chematology Laboratory (6 시그마와 총 오차 허용범위의 개발에 대한 연구)

  • Chang, Sang-Wu;Kim, Nam-Yong;Choi, Ho-Sung;Kim, Yong-Whan;Chu, Kyung-Bok;Jung, Hae-Jin;Park, Byong-Ok
    • Korean Journal of Clinical Laboratory Science
    • /
    • v.37 no.2
    • /
    • pp.65-70
    • /
    • 2005
  • The specifications of the CLIA analytical tolerance limits are consistent with the performance goals of Six Sigma Quality Management. Six sigma analysis determines performance quality from bias and precision statistics and shows whether a method meets the criteria for six sigma performance. Performance standards calculate the allowable total error from several different criteria. Six sigma means six standard deviations from the target or mean value, corresponding to about 3.4 failures per million opportunities. The Sigma Quality Level is an indicator of process centering and process variation relative to the total error allowable. The tolerance specification is replaced by a Total Error specification, which is a common form of quality specification for a laboratory test. The CLIA criteria for acceptable performance in proficiency testing events are given in the form of an allowable total error, TEa; thus there is a published list of TEa specifications for regulated analytes. In terms of TEa, Six Sigma Quality Management sets a precision goal of TEa/6 and an accuracy goal of 1.5 (TEa/6). This concept is based on the proficiency testing specification of target value ±3s, TEa from reference intervals, biological variation, and peer-group median surveys. We have found rules to calculate TEa as a fraction of a reference interval and from peer-group median surveys. We studied the development of allowable total error from peer-group survey results and the US CLIA 88 rules for 19 chematology items (TP, ALB, T.B, ALP, AST, ALT, CL, LD, K, Na, CRE, BUN, T.C, GLU, GGT, CA, phosphorus, UA, and TG); the results were as follows. The sigma level versus TEa for each item, derived from the peer-group median CV by group mean and assessed as process performance within the six sigma tolerance limits, was: TP (6.1σ/9.3%), ALB (6.9σ/11.3%), T.B (3.4σ/25.6%), ALP (6.8σ/31.5%), AST (4.5σ/16.8%), ALT (1.6σ/19.3%), CL (4.6σ/8.4%), LD (11.5σ/20.07%), K (2.5σ/0.39 mmol/L), Na (3.6σ/6.87 mmol/L), CRE (9.9σ/21.8%), BUN (4.3σ/13.3%), UA (5.9σ/11.5%), T.C (2.2σ/10.7%), GLU (4.8σ/10.2%), GGT (7.5σ/27.3%), CA (5.5σ/0.87 mmol/L), IP (8.5σ/13.17%), and TG (9.6σ/17.7%). Peer-group survey median CVs in the Korean External Assessment that were greater than the CLIA criteria were CL (8.45%/5%), BUN (13.3%/9%), CRE (21.8%/15%), T.B (25.6%/20%), and Na (6.87 mmol/L / 4 mmol/L). Those less than the CLIA criteria were TP (9.3%/10%), AST (16.8%/20%), ALT (19.3%/20%), K (0.39 mmol/L / 0.5 mmol/L), UA (11.5%/17%), Ca (0.87 mg/dL / 1 mg/dL), and TG (17.7%/25%). The TEa was the same in 14 of 17 items (82.35%). We found that the sigma level increases as the allowable total error increases, and we confirmed that the goal set for the allowable total error affects the evaluation of sigma metrics for the same process.
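
A minimal sketch of the sigma-metric calculation underlying the figures above, using the standard form sigma = (TEa − |bias|) / CV with all terms in percent. The example values are hypothetical, not taken from the study.

```python
# Hedged sketch of the standard sigma-metric formula with made-up inputs.
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Sigma quality level of an assay: (allowable total error - |bias|) / imprecision."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# e.g. TEa = 10%, bias = 1.5%, CV = 2.0% gives about 4.25 sigma, short of the six sigma
# precision goal of CV <= TEa/6 (about 1.7% here).
print(round(sigma_metric(10.0, 1.5, 2.0), 2))
```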


Diagnostic Efficacy of Anorectal Manometry for the Diagnosis of Hirschsprung's Disease (Hirschsprung병에서 항문직장 내압검사의 진단적 유용성)

  • Chang, Soo-Hee;Min, Uoo-Gyung;Choi, Ok-Ja;Kim, Dae-Yeon;Kim, Seong-Chul;Yu, Chang-Sik;Kim, Jin-Cheon;Kim, In-Koo;Yoon, Jong-Hyun;Kim, Kyung-Mo
    • Pediatric Gastroenterology, Hepatology & Nutrition
    • /
    • v.6 no.1
    • /
    • pp.24-31
    • /
    • 2003
  • Purpose: As diagnostic tools for Hirschsprung's disease (HD), barium enema and rectal biopsy involve radiation exposure and invasiveness, respectively, whereas anorectal manometry has neither of these disadvantages. We therefore performed this study to evaluate the diagnostic efficacy of anorectal manometry. Methods: We reviewed the medical records of infants with one or two symptoms of vomiting, abdominal distension, chronic diarrhea, or constipation who had anorectal manometry followed by barium enema and/or biopsy from July 1995 to May 2002. We evaluated the sensitivity, specificity, and predictive values of anorectal manometry and barium enema for the diagnosis of HD. We also measured sphincter length and the median balloon volume at which the rectoanal inhibitory reflex (RAIR) occurred. Results: All 61 patients received anorectal manometry, and 33 of 61 received barium enema. Eighteen of 61 were diagnosed with HD according to histology, and 43 of 61 were evaluated as controls. The sensitivity, specificity, positive predictive value, and negative predictive value for the diagnosis of HD were 1.00, 0.91, 0.82, and 1.00 for anorectal manometry and 0.93, 0.67, 0.70, and 0.92 for barium enema, respectively. The mean sphincter length in the controls was 1.68±0.67 cm and correlated significantly with age, weight, and longitudinal length. The median balloon volume at which the RAIR occurred was 10 mL and did not correlate with age, weight, or longitudinal length. Conclusion: This study suggests that anorectal manometry is an excellent initial screening test for Hirschsprung's disease because of its safety and accuracy.
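
A minimal sketch of the four diagnostic accuracy measures reported above, computed from a 2×2 table of test result versus histological diagnosis. The counts below are not from the paper; they are chosen only to be consistent with the reported manometry figures (18 HD patients, 43 controls).

```python
# Hedged sketch: sensitivity, specificity, PPV, and NPV from a 2x2 table of counts.
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Counts chosen only to be consistent with the reported manometry figures (18 HD, 43 controls).
print(diagnostic_metrics(tp=18, fp=4, fn=0, tn=39))
```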
