• Title/Summary/Keyword: 통계 오류 (statistical errors)

Search results: 384

Modulation Code for Removing Error Patterns on 4-Level NAND Flash Memory (4-레벨 낸드 플래시 메모리에서 오류 발생 패턴 제거 변조 부호)

  • Park, Dong-Hyuk;Lee, Jae-Jin;Yang, Gi-Ju
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.12C / pp.965-970 / 2010
  • In NAND flash memory storing two bits per cell, data are discriminated among four levels of electrical charge, referred to as E, P1, P2, and P3 in order of increasing voltage. Statistically, many errors occur when E and P3 are stored in adjacent cells. We therefore propose a coding scheme that avoids E-P3 and P3-E data patterns, and investigate two modulation codes: a 9/10 code (9-bit input, 5-symbol codeword) and an 11/12 code (11-bit input, 6-symbol codeword).
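
The constraint the abstract describes can be checked mechanically. The sketch below (Python, not from the paper; the symbol names follow the abstract) tests a codeword for the forbidden adjacent pairs and counts the valid 5-symbol codewords by brute force, confirming that enough exist to cover the 2^9 inputs of the 9/10 code.

```python
from itertools import product

# Symbol alphabet from the abstract, low to high voltage.
SYMBOLS = ("E", "P1", "P2", "P3")
FORBIDDEN = {("E", "P3"), ("P3", "E")}

def is_valid_codeword(word):
    """True if no adjacent symbol pair in the codeword is E-P3 or P3-E."""
    return all(pair not in FORBIDDEN for pair in zip(word, word[1:]))

# Feasibility check for the 9/10 code: enough valid 5-symbol codewords
# must exist to encode all 2^9 = 512 input blocks.
valid = [w for w in product(SYMBOLS, repeat=5) if is_valid_codeword(w)]
print(len(valid))            # 634 valid 5-symbol codewords
print(len(valid) >= 2 ** 9)  # True: a 9/10 mapping is possible
```

The same count for 6-symbol codewords exceeds 2^11, which is what makes the 11/12 code feasible as well.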

Aggregating Prediction Outputs of Multiple Classification Techniques Using Mixed Integer Programming (다수의 분류 기법의 예측 결과를 결합하기 위한 혼합 정수 계획법의 사용)

  • Jo, Hongkyu;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.9 no.1 / pp.71-89 / 2003
  • Although many studies demonstrate that one technique outperforms the others for a given data set, there is often no way to tell a priori which technique will be most effective for a classification problem. Alternatively, it has been suggested that a better approach might be to integrate several different forecasting techniques. This study proposes a methodology for linearly combining different classification techniques. The methodology finds the optimal combining weights and computes the weighted average of the techniques' outputs, and is formulated as a mixed integer program whose objective is to minimize the total misclassification cost, the weighted sum of the two types of misclassification. To simplify the solution process, the cutoff value is fixed and the threshold function is removed; the mixed integer program is then solved with branch-and-bound methods. The results showed that the proposed methodology classified more accurately than any individual technique, and it was confirmed that it predicts significantly better than both the individual techniques and the other combining methods.
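
As a rough illustration of the combining idea, the sketch below replaces the paper's mixed-integer program with a toy grid search over combining weights, minimizing the same kind of weighted misclassification cost at a fixed cutoff. All data, costs, and the grid resolution are made up for illustration.

```python
import itertools

def misclassification_cost(weights, outputs, labels, cutoff=0.5,
                           cost_fp=1.0, cost_fn=1.0):
    """Weighted sum of the two misclassification types at a fixed cutoff."""
    cost = 0.0
    for probs, y in zip(outputs, labels):
        score = sum(w * p for w, p in zip(weights, probs))
        pred = 1 if score >= cutoff else 0
        if pred == 1 and y == 0:
            cost += cost_fp          # false positive
        elif pred == 0 and y == 1:
            cost += cost_fn          # false negative
    return cost

def best_weights(outputs, labels, step=0.1):
    """Toy stand-in for the MIP: search weight vectors summing to 1."""
    k = len(outputs[0])
    grid = [i * step for i in range(int(1 / step) + 1)]
    best, best_c = None, float("inf")
    for w in itertools.product(grid, repeat=k):
        if abs(sum(w) - 1.0) > 1e-9:
            continue
        c = misclassification_cost(w, outputs, labels)
        if c < best_c:
            best, best_c = w, c
    return best, best_c

# Two hypothetical classifiers' outputs per sample, and true labels.
outputs = [(0.9, 0.1), (0.8, 0.9), (0.2, 0.1), (0.1, 0.8)]
labels = [1, 1, 0, 0]
w, c = best_weights(outputs, labels)
print(c)  # 0.0: some weighting separates this toy set perfectly
```

The actual paper solves this exactly with branch-and-bound rather than a grid; the objective being searched is the same.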


A Comparative Study on Statistical Process Control Approaches for an NHPP Software Reliability Model Based on a Polynomial Hazard Function (다항 위험함수에 근거한 NHPP 소프트웨어 신뢰모형에 관한 통계적 공정관리 접근방법 비교연구)

  • Kim, Hee-Cheul;Shin, Hyun-Cheul
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.8 no.5 / pp.345-353 / 2015
  • Many software reliability models are based on the times of occurrence of errors during software debugging. Parameter inference is possible for software reliability models based on the finite failure model and non-homogeneous Poisson processes (NHPP). For someone deciding whether to market software, the conditional failure rate is an important variable, and finite failure models are used in a wide variety of practical situations; their use in characterization problems, outlier detection, linear estimation, system reliability studies, life-testing, survival analysis, data compression, and many other fields can be seen in numerous studies. Statistical process control (SPC) can monitor the forecasting of software failures and thereby contribute significantly to improving software reliability, and control charts are widely used for software process control in the software industry. In this paper, we propose a control mechanism based on an NHPP using the mean value function of a polynomial hazard function.
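
One way to read the proposed mechanism: for an NHPP, the failure count up to time t is Poisson with mean m(t), so Shewhart-style limits m(t) ± 3√m(t) can flag unusual counts. The quadratic m(t) below is a hypothetical stand-in, not the paper's fitted polynomial-hazard model.

```python
import math

def mean_value(t, a=10.0, b=0.5):
    # Hypothetical polynomial mean value function m(t) = a*t + b*t^2.
    return a * t + b * t * t

def control_limits(t):
    """3-sigma limits for a Poisson count with mean m(t)."""
    m = mean_value(t)
    sigma = math.sqrt(m)
    return max(0.0, m - 3 * sigma), m + 3 * sigma

def out_of_control(t, observed_count):
    """Flag an observed cumulative failure count outside the limits."""
    lcl, ucl = control_limits(t)
    return not (lcl <= observed_count <= ucl)

print(control_limits(4.0))   # limits on the cumulative count at t = 4
print(out_of_control(1.0, 30))
```

In an actual application the coefficients would be estimated from failure data rather than fixed by hand.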

Usability of the National Science and Technology Information System (웹 사용성 개선에 관한 연구 - 국가과학기술정보시스템을 중심으로 -)

  • Park, Min-Soo;Hyun, Mi-Hwan
    • Journal of the Korean BIBLIA Society for Library and Information Science / v.22 no.4 / pp.5-19 / 2011
  • The purpose of this study is to identify possible needs for system improvements and to reflect them in the operation and development of the system, based on a usability assessment of an information site in science and technology. A variety of data collection techniques were used, including search logs, interviews, and think-alouds. The search log data were processed to quantify four evaluation aspects: effectiveness, efficiency, satisfaction, and errors. The verbal data collected through think-alouds and post-interviews were used to identify possible enhancements in a qualitative analysis. Comparing usability before and after the system enhancement revealed a 15-point increase in effectiveness, a 35-second decrease in task time (efficiency), a 5-point increase in satisfaction, and a decrease of 1.1 errors, implying an overall improvement in the usability of the current system.

Analysis and Prediction of Prosodic Phrase Boundary (운율구 경계현상 분석 및 텍스트에서의 운율구 추출)

  • Kim, Sang-Hun;Seong, Cheol-Jae;Lee, Jung-Chul
    • The Journal of the Acoustical Society of Korea / v.16 no.1 / pp.24-32 / 1997
  • This study aims, on the one hand, to describe the relationship between syntactic structure and prosodic phrasing, and on the other, to establish a suitable phrasing pattern for producing more natural synthetic speech. To obtain meaningful results, all word boundaries in the prosodic database were statistically analyzed and assigned the proper boundary type. The resulting 10 types of prosodic boundaries were classified into 3 types according to the strength of the break: zero, minor, and major. We found that durational information was a main cue for determining the major prosodic boundary. Using bigrams and trigrams of syntactic information, we predicted the major/minor classification of boundary types. With the bigram model, we obtained correct major-break prediction rates of 4.60% and 38.2% and insertion error rates of 22.8% and 8.4% on the Test-I and Test-II text databases, respectively. With the trigram model, we obtained correct major-break prediction rates of 58.3% and 42.8% and insertion error rates of 30.8% and 11.8%, respectively.
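
The bigram prediction step can be illustrated with a toy model: estimate the most frequent break type for each syntactic (POS) pair from labeled boundaries, then predict that type at test time. The corpus, tags, and break labels below are invented for illustration and are not the paper's data.

```python
from collections import Counter, defaultdict

def train(corpus):
    """corpus: list of ((left_pos, right_pos), break_type) pairs."""
    counts = defaultdict(Counter)
    for bigram, brk in corpus:
        counts[bigram][brk] += 1
    return counts

def predict(counts, bigram, default="zero"):
    """Most frequent break type seen for this POS pair, else a default."""
    if bigram not in counts:
        return default
    return counts[bigram].most_common(1)[0][0]

# Tiny made-up training set: (POS pair) -> observed break type.
corpus = [(("N", "V"), "minor"), (("N", "V"), "minor"),
          (("N", "V"), "zero"), (("V", "E"), "major")]
model = train(corpus)
print(predict(model, ("N", "V")))   # minor
print(predict(model, ("V", "E")))   # major
```

A trigram model extends the key to three syntactic tags; the counting and argmax prediction are otherwise identical.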


An Automatic Test Case Generation Method from Checklist (한글 체크리스트로부터 테스트 케이스 자동 생성 방안)

  • Kim, Hyun Dong;Kim, Dae Joon;Chung, Ki Hyun;Choi, Kyung Hee;Park, Ho Joon;Lee, Yong Yoon
    • KIPS Transactions on Software and Data Engineering / v.6 no.8 / pp.401-410 / 2017
  • This paper proposes a method to automatically generate test cases based on a checklist containing the test items used for testing embedded systems. In general, the items to be tested are defined in a checklist; however, most test case generation strategies recommend testing a system not only with the defined test items but also with various mutated test conditions. The proposed method parses a checklist file written in Korean and extracts the system inputs, outputs, and operation information. With this information and a user-defined test case generation strategy, test cases are generated automatically. The proposed method may reduce the errors introduced during manual test case generation, and it can generate various test cases not defined in the checklist. The method was implemented and an experiment was performed with the checklist for a medical embedded system. The feasibility of the proposed method is shown through the test cases generated from the checklist: they are adequate for the coverage criteria and their statistics are correct.

The Verification of Causality among Accident, Depression, and Cognitive Failure of the Train Drivers (철도기관사의 사고, 우울감, 인지실패 간의 인과관계 검증)

  • Ro, Choon-Ho;Shin, Tack-Hyun
    • Journal of the Korea Society for Simulation / v.25 no.4 / pp.109-115 / 2016
  • This study intended to verify the causality among three variables of train drivers: accident, depression, and cognitive failure. For this purpose, two research models were suggested. Model 1 hypothesized the causality as 'depression → cognitive failure → accident', while Model 2 hypothesized it as 'accident → depression → cognitive failure'. Results based on AMOS, using questionnaires from 416 train drivers, showed that Model 2 is more valid than Model 1. In Model 1, depression had a positive effect on cognitive failure, but there was no significant relationship between depression and accident or between cognitive failure and accident. In Model 2, accident had a positive effect on cognitive failure, mediated by depression. These results suggest the need to establish countermeasures that mitigate the mistakes and cognitive failures of train drivers in a wider context, considering the causality between accident and depression.

Application of Patient-based Real-time Quality Control (환자 기반 실시간 정도관리의 적용)

  • Seung Mo LEE;Kyung-A SHIN
    • Korean Journal of Clinical Laboratory Science / v.56 no.2 / pp.105-114 / 2024
  • Clinical laboratories endeavor to secure quality by establishing effective quality management systems. However, laboratory environments are complex, and a single quality control procedure may fail to detect many errors. Patient-based real-time quality control (PBRTQC) is a laboratory tool that monitors the testing process using algorithms such as Bull's algorithm and statistics such as the average of normals, moving median, moving average, and exponentially weighted moving average. PBRTQC has many advantages over conventional quality control, including low cost, commutability, continuous real-time performance monitoring, and sensitivity to pre-analytical errors. However, PBRTQC is not easily implemented, as it requires selecting a statistical algorithm, designing appropriate rules and protocols, and verifying performance. This review describes the basic concepts, methods, and procedures of PBRTQC and presents guidelines for implementing a patient-based quality management system. Furthermore, we propose the combined use of PBRTQC when the performance of internal quality control is limited. Clinical evaluations were not conducted in this review, however, so future evaluation is required.
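
As one concrete example of the statistics the review names, the sketch below monitors an exponentially weighted moving average (EWMA) of consecutive patient results against fixed limits. The target value, smoothing factor, and limit are illustrative placeholders, not clinical recommendations.

```python
def ewma_monitor(results, lam=0.1, target=100.0, limit=5.0):
    """Return a list of (ewma, flagged) pairs, one per patient result.

    The EWMA starts at the target; a result stream drifting away from
    the target pulls the EWMA outside target +/- limit and raises a flag.
    """
    ewma = target
    out = []
    for x in results:
        ewma = lam * x + (1 - lam) * ewma
        out.append((ewma, abs(ewma - target) > limit))
    return out

# In-control results hover near the target; a sustained analytical
# shift to 160 drags the EWMA out of the limits.
readings = [100.0, 101.0, 99.0, 160.0, 160.0]
for e, flag in ewma_monitor(readings):
    print(round(e, 2), flag)
```

Real PBRTQC implementations also truncate outliers and tune lambda and the limits per analyte from historical patient data, which is the performance-verification burden the review mentions.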

STANDARDIZATION OF WORD/NONWORD READING TEST AND LETTER-SYMBOL DISCRIMINATION TASK FOR THE DIAGNOSIS OF DEVELOPMENTAL READING DISABILITY (발달성 읽기 장애 진단을 위한 단어/비단어 읽기 검사와 글자기호감별검사의 표준화 연구)

  • Cho, Soo-Churl;Lee, Jung-Bun;Chungh, Dong-Seon;Shin, Sung-Woong
    • Journal of the Korean Academy of Child and Adolescent Psychiatry / v.14 no.1 / pp.81-94 / 2003
  • Objectives: Developmental reading disorder is a condition that manifests as a significant developmental delay in reading ability or as persistent errors; about 3-7% of school-age children have this condition. The purpose of the present study was to validate the diagnostic value of the Word/Nonword Reading Test and the Letter-Symbol Discrimination Task, in order to overcome the caveats of the Basic Learning Skills Test. Methods: Sixty-three reading-disordered patients (mean age 10.48 years) and 77 sex- and age-matched normal children (mean age 10.33 years) were selected by clinical evaluation and DSM-IV criteria. Reading I and II of the Basic Learning Skills Test, the Word/Nonword Reading Test, and the Letter-Symbol Discrimination Task were administered to them. Word/Nonword Reading Test: one hundred common high-frequency words and one hundred meaningless nonwords were presented to the subjects within 1.2 and 2.4 seconds, respectively; from the results, automatized phonological processing ability and conscious letter-sound matching ability were estimated. Letter-Symbol Discrimination Task: mirror-image letters that reading-disordered patients tend to confuse were used. Reliability, concurrent validity, construct validity, and discriminant validity tests were conducted. Results: For the Word/Nonword Reading Test, the reliability (alpha) was 0.96 and the concurrent validity with the Basic Learning Skills Test was 0.94. Patients with developmental reading disorders differed significantly from normal children in Word/Nonword Reading Test performance, and discriminant analysis correctly classified 83.0% of the original cases. For the Letter-Symbol Discrimination Task, the reliability (alpha) was 0.86 and the concurrent validity with the Basic Learning Skills Test was 0.86, with significant score differences between the patients and normal children. Factor analysis revealed that this test was composed of saccadic mirror-image processing, global accuracy, mirror-image processing deficit, static image processing, global vigilance deficit, and inattention-impulsivity factors. By discriminant analysis, 87.3% of the patients and normal children were correctly classified. Conclusion: Patients with developmental reading disorders had deficits in the automatized visual-lexical route, the morpheme-phoneme conversion mechanism, and visual information processing. These deficits were reliably and validly evaluated by the Word/Nonword Reading Test and the Letter-Symbol Discrimination Task.


A Development of Generalized Coupled Markov Chain Model for Stochastic Prediction on Two-Dimensional Space (수정 연쇄 말콥체인을 이용한 2차원 공간의 추계론적 예측기법의 개발)

  • Park Eun-Gyu
    • Journal of Soil and Groundwater Environment / v.10 no.5 / pp.52-60 / 2005
  • The conceptual model of an under-sampled study area includes a great amount of uncertainty. In this study, we investigate the applicability of a Markov chain model in a spatial domain as a tool for minimizing the uncertainty arising from the lack of data. A new formulation is developed that generalizes the previous two-dimensional coupled Markov chain model and has more versatility to fit any computational sequence. Furthermore, the computational algorithm is improved to utilize more conditioning information and to reduce artifacts, such as artificial parcel inclination, caused by sequential computation. The generalized 2D coupled Markov chain (GCMC) is tested on a hypothetical soil map to evaluate its appropriateness as a substitute for conventional geostatistical models. Compared to sequential indicator simulation (SIS), the simulation results from GCMC show lower entropy at indicator boundaries, which is closer to real soil maps. For under-sampled indicators, however, GCMC under-estimates the presence of the indicators, a common shortcoming of all other geostatistical models. To improve this under-estimation, further study on including data fusion (or assimilation) in the GCMC is required.
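
The coupled-chain idea being generalized can be sketched as follows: each interior cell's state is drawn from the product of a horizontal transition probability (from the left neighbor) and a vertical one (from the upper neighbor), renormalized. The two-state transition matrices below are invented for illustration; the paper's formulation additionally conditions on more neighbors and on arbitrary visiting sequences.

```python
import random

P_H = [[0.8, 0.2], [0.3, 0.7]]   # hypothetical horizontal transitions
P_V = [[0.7, 0.3], [0.4, 0.6]]   # hypothetical vertical transitions

def coupled_probs(left_state, up_state):
    """Normalized product of horizontal and vertical transition rows."""
    w = [P_H[left_state][s] * P_V[up_state][s] for s in range(2)]
    total = sum(w)
    return [x / total for x in w]

def simulate(nrows, ncols, seed=0):
    """Fill a grid row by row; edges follow the single 1D chains."""
    rng = random.Random(seed)
    grid = [[0] * ncols for _ in range(nrows)]
    for j in range(1, ncols):
        grid[0][j] = rng.choices([0, 1], weights=P_H[grid[0][j - 1]])[0]
    for i in range(1, nrows):
        grid[i][0] = rng.choices([0, 1], weights=P_V[grid[i - 1][0]])[0]
        for j in range(1, ncols):
            probs = coupled_probs(grid[i][j - 1], grid[i - 1][j])
            grid[i][j] = rng.choices([0, 1], weights=probs)[0]
    return grid

print(simulate(3, 4))  # a small seeded realization of the indicator field
```

Conditioning on observed cells, which GCMC extends, amounts to fixing those grid entries before the sequential sweep.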