• Title/Summary/Keyword: YOON's test

The Comparison of the Unconditional and Conditional Exact Power of Fisher's Exact Test

  • Kang, Seung-Ho; Park, Yoon-Soo
    • The Korean Journal of Applied Statistics / v.23 no.5 / pp.883-890 / 2010
  • Since Fisher's exact test is conducted conditional on the observed value of the margin, there are two kinds of exact power: the conditional and the unconditional exact power. The conditional exact power is computed at a given value of the margin, whereas the unconditional exact power is calculated by incorporating the uncertainty of the margin. Although the sample size is determined from the unconditional exact power, the power that Fisher's exact test actually attains once the experiment is finished is the conditional power. This paper investigates the differences between the conditional and unconditional exact power of Fisher's exact test. We conclude that this discrepancy is a disadvantage of Fisher's exact test. (A sketch contrasting the two power computations follows.)
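A minimal sketch, not the paper's code, contrasting the two power notions for Fisher's exact test on two independent binomials; the group sizes, success rates, margins, and alpha below are illustrative assumptions.

```python
# Conditional vs. unconditional exact power of Fisher's exact test
# for H0: p1 == p2 against the two-sided alternative.
from itertools import product

from scipy.stats import binom, fisher_exact

n1, n2 = 20, 20          # group sizes (assumed for illustration)
p1, p2 = 0.7, 0.3        # true success rates under the alternative
alpha = 0.05

def rejects(x1, x2):
    """True if Fisher's exact test rejects H0 at level alpha."""
    _, pval = fisher_exact([[x1, n1 - x1], [x2, n2 - x2]])
    return pval <= alpha

# Joint probability of each outcome (x1, x2) under the alternative.
joint = {(x1, x2): binom.pmf(x1, n1, p1) * binom.pmf(x2, n2, p2)
         for x1, x2 in product(range(n1 + 1), range(n2 + 1))}

# Unconditional exact power: weight the rejection region by the
# joint binomial probabilities over all possible outcomes.
uncond_power = sum(p for (x1, x2), p in joint.items() if rejects(x1, x2))

def conditional_power(m):
    """Exact power conditional on the observed margin x1 + x2 == m."""
    margin = {k: p for k, p in joint.items() if k[0] + k[1] == m}
    total = sum(margin.values())
    return sum(p for (x1, x2), p in margin.items() if rejects(x1, x2)) / total

print(f"unconditional exact power: {uncond_power:.3f}")
for m in (10, 20, 30):   # a few possible values of the margin
    print(f"conditional power at margin {m}: {conditional_power(m):.3f}")
```

The unconditional power averages the rejection indicator over every outcome, while the conditional power renormalizes the same joint probabilities within a single margin, so the two can differ noticeably at margins far from their expected value.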

PDA Based Audiometric System and its Database Structure (PDA 기반의 청력 검사 시스템 및 데이터베이스 구성)

  • Kim, K.S.; Lee, J.H.; Shin, S.W.; Yoon, T.H.; Lee, S.M.
    • The Transactions of the Korean Institute of Electrical Engineers D / v.55 no.1 / pp.42-44 / 2006
  • In this paper, we implement a PDA (Personal Digital Assistant)-based audiometric system for testing hearing disorders. Owing to the inherently portable nature of a PDA, the hearing test can easily be performed in a user's local environment, and the measured audiometric data can consequently be stored and queried locally via a built-in PDA database system. (A hypothetical schema sketch follows.)
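The paper's actual database structure is not reproduced here, so the following is a purely hypothetical sketch of one plausible local store for audiogram results, with SQLite standing in for the PDA's built-in database; every table and column name is an assumption.

```python
import sqlite3

conn = sqlite3.connect("audiometry.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS audiogram (
        patient_id   TEXT NOT NULL,
        tested_at    TEXT NOT NULL,          -- ISO timestamp of the test
        ear          TEXT CHECK (ear IN ('L', 'R')),
        frequency_hz INTEGER NOT NULL,       -- e.g. 250, 500, ..., 8000
        threshold_db REAL NOT NULL           -- hearing level in dB HL
    )
""")

# Store one measured threshold locally (hypothetical record).
conn.execute(
    "INSERT INTO audiogram VALUES (?, ?, ?, ?, ?)",
    ("P001", "2006-01-15T10:30", "R", 1000, 25.0),
)
conn.commit()

# Query the stored thresholds for one ear, ordered by frequency.
rows = conn.execute(
    "SELECT frequency_hz, threshold_db FROM audiogram "
    "WHERE patient_id = ? AND ear = ? ORDER BY frequency_hz",
    ("P001", "R"),
).fetchall()
conn.close()
```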

Finding Unexpected Test Accuracy by Cross Validation in Machine Learning

  • Yoon, Hoijin
    • International Journal of Computer Science & Network Security / v.21 no.12spc / pp.549-555 / 2021
  • Machine learning (ML) splits data into three parts, usually 60% for training, 20% for validation, and 20% for testing. The split is purely quantitative rather than selecting each set by a criterion, even though such selection is very important for the adequacy of the test data. ML measures a model's accuracy on the validation set and revises the model until the validation accuracy reaches a certain level. After the validation process, the finished model is evaluated on the test set, which the model has not yet seen. If the test set covers the model's attributes well, the test accuracy will be close to the model's validation accuracy. To check whether ML's test sets work adequately, we design an experiment to see whether a model's test accuracy is always close to its validation accuracy, as expected. The experiment builds 100 different SVM models for each of six data sets published in the UCI ML repository. Among the resulting 600 pairs of test and validation accuracies, we find some unexpected cases in which the test accuracy differs greatly from the validation accuracy. Consequently, it is not always true that ML's test set is adequate to assure a model's quality. (A sketch of the experiment's shape follows.)
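A minimal sketch of the experiment's shape, not the paper's code: repeatedly re-split one data set 60/20/20, fit an SVM, and compare validation against test accuracy. The data set (UCI's Breast Cancer Wisconsin, loaded via scikit-learn), the default SVC kernel, and the 20 repetitions are assumptions; the paper builds 100 models on each of six UCI data sets.

```python
from sklearn.datasets import load_breast_cancer   # a UCI ML repository data set
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

gaps = []
for seed in range(20):
    # 60% train, 20% validation, 20% test, re-split each repetition.
    X_train, X_rest, y_train, y_rest = train_test_split(
        X, y, train_size=0.6, random_state=seed)
    X_val, X_test, y_val, y_test = train_test_split(
        X_rest, y_rest, test_size=0.5, random_state=seed)

    model = SVC().fit(X_train, y_train)
    val_acc = model.score(X_val, y_val)
    test_acc = model.score(X_test, y_test)
    gaps.append(abs(val_acc - test_acc))
    print(f"seed {seed:2d}: validation {val_acc:.3f}  test {test_acc:.3f}")

# A large gap flags a test set that does not cover the model's behaviour well.
print(f"max |validation - test| gap: {max(gaps):.3f}")
```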