• Title/Summary/Keyword: test data


Statistical Estimation of Specified Concrete Strength by Applying Non-Destructive Test Data (비파괴시험 자료를 적용한 콘크리트 기준강도의 통계적 추정)

  • Paik, Inyeol
    • Journal of the Korean Society of Safety
    • /
    • v.30 no.1
    • /
    • pp.52-59
    • /
    • 2015
  • The aim of this paper is to introduce the statistical definition of the specified compressive strength of concrete used for the safety evaluation of existing structures in domestic practice, and to present a practical method for obtaining the specified strength from non-destructive test data as well as a limited number of core test data. The statistical definition of the specified compressive strength of concrete in the design codes is reviewed, and consistent formulations for statistically estimating the specified strength for assessment are described. To prevent an unrealistically small estimate of the specified strength caused by a limited number of data, it is proposed that the information from the non-destructive test data be combined with that of the minimum core test data. The sample mean, standard deviation, and total number of concrete tests are then obtained from the combined test data. The proposed procedures are applied to example test data composed of artificial numerical values and to actual evaluation data collected from bridge assessment reports. The calculation results show that the proposed statistical estimation procedures yield reasonable values of the specified strength for assessment when the non-destructive test data are applied in addition to the limited number of core test data.
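
A minimal sketch of the combination step described in this abstract: core-test strengths and NDT-derived strengths are pooled into one sample, and the specified strength is taken as a lower fractile of the pooled sample. The pooling rule, the 1.645 fractile factor (5% fractile under a normality assumption), and all numbers are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def combined_specified_strength(core_mpa, ndt_mpa, k=1.645):
    """Pool core-test strengths with NDT-estimated strengths and
    return (n, mean, std, estimated specified strength).

    The specified (characteristic) strength is taken here as the
    5% fractile under a normality assumption, mean - k*std with
    k = 1.645; the paper's tolerance factor may differ."""
    combined = np.concatenate([np.asarray(core_mpa, float),
                               np.asarray(ndt_mpa, float)])
    n = combined.size
    mean = combined.mean()
    std = combined.std(ddof=1)          # sample standard deviation
    f_ck = mean - k * std               # estimated specified strength
    return n, mean, std, f_ck

# Illustrative numbers only (not from the paper):
core = [27.1, 29.4, 30.2]                       # limited core tests
ndt = [26.5, 28.0, 28.8, 29.9, 31.2, 27.6]      # rebound/UPV-based estimates
print(combined_specified_strength(core, ndt))
```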

New Test for IDMRL(DIMRL) Alternatives using Censored Data

  • Na, Myung-Hwan;Lee, Hyun-Woo
    • Journal of the Korean Data and Information Science Society
    • /
    • v.10 no.1
    • /
    • pp.57-65
    • /
    • 1999
  • In a recent paper, Na, Lee and Kim (1998) developed a test statistic for testing whether or not the mean residual life changes its trend, based on complete data, and showed that the new test performs better than previously known tests. In this paper, we extend their test to randomly censored data. The asymptotic normality of the test statistic is established. Monte Carlo simulations are conducted to compare our test with a previously known test in terms of power.


Finding Unexpected Test Accuracy by Cross Validation in Machine Learning

  • Yoon, Hoijin
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.12spc
    • /
    • pp.549-555
    • /
    • 2021
  • Machine learning (ML) typically splits data into three parts, usually 60% for training, 20% for validation, and 20% for testing. The split is purely quantitative rather than selecting each set by a criterion, which is a very important concept for the adequacy of test data. ML measures a model's accuracy on the validation data and revises the model until the validation accuracy reaches a certain level. After the validation process, the completed model is tested with the test data, which the model has not yet seen. If the test data cover the model's attributes well, the test accuracy will be close to the model's validation accuracy. To check whether ML's test data work adequately, we design an experiment to see if the test accuracy of a model is always close to its validation accuracy, as expected. The experiment builds 100 different SVM models for each of six data sets published in the UCI ML repository. Among the test and validation accuracies of these 600 cases, we find some unexpected cases in which the test accuracy differs greatly from the validation accuracy. Consequently, it is not always true that ML's test data are adequate to assure a model's quality.
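
A small sketch of the experimental setup described here, assuming scikit-learn and one UCI-origin data set (breast cancer) rather than the paper's six: repeated 60/20/20 splits, an SVM per split, and a comparison of validation versus test accuracy.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)   # a UCI-origin data set

gaps = []
for seed in range(100):                      # 100 random 60/20/20 splits
    X_tr, X_tmp, y_tr, y_tmp = train_test_split(
        X, y, test_size=0.4, random_state=seed)
    X_val, X_te, y_val, y_te = train_test_split(
        X_tmp, y_tmp, test_size=0.5, random_state=seed)
    model = SVC().fit(X_tr, y_tr)
    val_acc = model.score(X_val, y_val)      # accuracy on validation data
    test_acc = model.score(X_te, y_te)       # accuracy on unseen test data
    gaps.append(abs(val_acc - test_acc))

print("max |validation - test| accuracy gap:", max(gaps))
```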

Comprehensive comparison of normality tests: Empirical study using many different types of data

  • Lee, Chanmi;Park, Suhwi;Jeong, Jaesik
    • Journal of the Korean Data and Information Science Society
    • /
    • v.27 no.5
    • /
    • pp.1399-1412
    • /
    • 2016
  • We compare many normality tests that use different sources of information extracted from the given data: the Anderson-Darling test, Kolmogorov-Smirnov test, Cramér-von Mises test, Shapiro-Wilk test, Shapiro-Francia test, Lilliefors test, Jarque-Bera test, D'Agostino's D, Doornik-Hansen test, energy test, and Martinez-Iglewicz test. For the purpose of comparison, these tests are applied to various types of data generated from skewed distributions, asymmetric distributions, and distributions with different lengths of support. We then summarize the comparison results in terms of two things: type I error control and power. The selection of the best test depends on the shape of the distribution of the data, implying that there is no test that is the most powerful for all distributions.
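
A minimal Monte Carlo sketch of this kind of comparison, using only three of the listed tests (those with readily available p-values in scipy) and two illustrative data-generating distributions; the paper's full test list and distribution settings are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tests = {
    "Shapiro-Wilk": lambda x: stats.shapiro(x).pvalue,
    "Jarque-Bera": lambda x: stats.jarque_bera(x).pvalue,
    "D'Agostino K^2": lambda x: stats.normaltest(x).pvalue,
}

def rejection_rate(sampler, n=50, reps=2000, alpha=0.05):
    """Fraction of replications in which each test rejects normality."""
    counts = {name: 0 for name in tests}
    for _ in range(reps):
        x = sampler(n)
        for name, test in tests.items():
            if test(x) < alpha:
                counts[name] += 1
    return {name: c / reps for name, c in counts.items()}

# Type I error: data actually normal; power: a skewed alternative.
print("type I error:", rejection_rate(lambda n: rng.normal(size=n)))
print("power (lognormal):", rejection_rate(lambda n: rng.lognormal(size=n)))
```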

DESIGN OF COMMON TEST HARNESS SYSTEM FOR SATELLITE GROUND SEGMENT DEVELOPMENT

  • Seo, Seok-Bae;Kim, Su-Jin;Koo, In-Hoi;Ahn, Sang-Il
    • Proceedings of the KSRS Conference
    • /
    • 2007.10a
    • /
    • pp.544-547
    • /
    • 2007
  • Because recent data processing systems have become more complicated, the main function of data processing is divided into several sub-functions, which are implemented and verified in each subsystem of the data processing system. Verifying the data processing system therefore requires many interface tests among subsystems as well as a large number of simulation systems. This paper proposes the CTHS (Common Test Harness System) for satellite ground segment development, which provides all of the functions needed for interface testing of the data processing system on a single PC. The main functions of the CTHS software are data interfacing, system log generation, and system information display. For the interface test of the data processing system, all actions of the CTHS are executed according to a pre-defined operation scenario written for the purpose of the data processing system test.
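
A rough sketch of the scenario-driven pattern the abstract describes (data interface, log generation, information display, all driven by a pre-defined scenario). The step names, file format, and functions are hypothetical placeholders, not the actual CTHS design.

```python
import json
import logging

logging.basicConfig(filename="cths.log", level=logging.INFO)

def send_data(target, payload):
    """Placeholder for the data-interface function (e.g., a socket send)."""
    logging.info("sent %s to %s", payload, target)

def show_status(message):
    """Placeholder for the system-information display."""
    print("STATUS:", message)

HANDLERS = {"send": send_data, "display": show_status}

def run_scenario(path):
    """Execute each step of a pre-defined operation scenario in order."""
    with open(path) as f:
        scenario = json.load(f)
    for step in scenario["steps"]:
        action = step.pop("action")
        HANDLERS[action](**step)
        logging.info("completed step: %s", action)

# Example scenario file (hypothetical format):
# {"steps": [{"action": "send", "target": "L0-processor", "payload": "raw.dat"},
#            {"action": "display", "message": "L0 processing triggered"}]}
# run_scenario("scenario.json")
```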


Test of Homogeneity Based on Complex Survey Data: Discussion Based on Power of Test

  • Heo, Sun-Yeong;Yi, Su-Cheol
    • Journal of the Korean Data and Information Science Society
    • /
    • v.16 no.3
    • /
    • pp.609-620
    • /
    • 2005
  • In secondary analysis of categorical data, situations often arise in which the estimated cell variances are available but the full variance matrix is not. In this case, researchers are often inclined to use Pearson-type test statistics for homogeneity. However, for a complex sample the observed cell proportions are not multinomially distributed, and the Pearson-type test statistic generally does not have an asymptotic chi-square distribution. This paper evaluates the power of the Wald test, the Pearson-type test, and the first-order corrected Pearson-type test for homogeneity. The resulting power curves indicate that, as the misspecification effect increases, the inflation of the significance level and the loss of power of the Pearson-type test become more severe.
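
First-order corrections of this kind are commonly implemented as a Rao-Scott-type adjustment: the Pearson statistic is divided by an average estimated design effect. The sketch below illustrates that idea with made-up proportions, sample sizes, and design effects; it is not necessarily the authors' exact formulation.

```python
import numpy as np
from scipy.stats import chi2

def pearson_homogeneity(p1, p2, n1, n2):
    """Pearson-type statistic for homogeneity of two sets of cell proportions."""
    pooled = (n1 * p1 + n2 * p2) / (n1 + n2)
    return (n1 * np.sum((p1 - pooled) ** 2 / pooled)
            + n2 * np.sum((p2 - pooled) ** 2 / pooled))

def first_order_corrected(x2, deffs, df):
    """Divide the Pearson statistic by the mean design effect (Rao-Scott style)."""
    x2_adj = x2 / np.mean(deffs)
    return x2_adj, chi2.sf(x2_adj, df)

# Made-up cell proportions, sample sizes, and estimated design effects:
p1 = np.array([0.50, 0.30, 0.20]); n1 = 400
p2 = np.array([0.42, 0.33, 0.25]); n2 = 350
deffs = np.array([1.8, 2.1, 1.6])         # complex-sample variance inflation
x2 = pearson_homogeneity(p1, p2, n1, n2)
print(first_order_corrected(x2, deffs, df=len(p1) - 1))
```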


Nonparametric Test for Money and Income Causality

  • Jeong, Ki-Ho
    • Journal of the Korean Data and Information Science Society
    • /
    • v.15 no.2
    • /
    • pp.485-493
    • /
    • 2004
  • This paper considers the test of causality between money and income. Jeong (1991, 2003) developed a nonparametric causality test based on the kernel estimation method. We apply the nonparametric test to US money and income data and compare the results with those of the conventional parametric test.
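
A generic kernel-regression causality check, offered only to illustrate the idea of nonparametric (Granger-type) causality testing; it is not Jeong's specific statistic. It compares leave-one-out Nadaraya-Watson fits of income on its own lag, with and without lagged money, on simulated data.

```python
import numpy as np

def nw_loo_residuals(X, y, h=1.0):
    """Leave-one-out Nadaraya-Watson residuals with a Gaussian product kernel."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-0.5 * d2 / h ** 2)
    np.fill_diagonal(w, 0.0)          # leave each point out of its own fit
    return y - (w @ y) / w.sum(axis=1)

def causality_gap(money, income, h=1.0):
    """Residual-variance reduction from adding lagged money to a kernel
    regression of income on its own lag; a clearly positive gap is informal
    evidence of Granger-type causality from money to income."""
    y = income[1:]
    own = income[:-1, None]
    both = np.column_stack([income[:-1], money[:-1]])
    return nw_loo_residuals(own, y, h).var() - nw_loo_residuals(both, y, h).var()

# Simulated series in which lagged money does drive income (illustration only):
rng = np.random.default_rng(1)
money = rng.normal(size=200)
income = np.zeros(200)
for t in range(1, 200):
    income[t] = 0.4 * income[t - 1] + 0.5 * money[t - 1] + rng.normal()
print(causality_gap(money, income))
```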


Evaluation of Tensile Properties of Cast Stainless Steel Using Ball Indentation Test

  • Kim, Jin Weon
    • Nuclear Engineering and Technology
    • /
    • v.36 no.3
    • /
    • pp.237-247
    • /
    • 2004
  • To investigate the applicability of automated ball indentation (ABI) tests for evaluating the tensile properties of cast stainless steel (CSS), ABI tests were performed on four types of unaged CSS and on 316 stainless steel, each with a different microstructure and strength. The reliability of the ABI test data was analyzed by evaluating the scatter of the ABI data and by comparing the tensile properties obtained from the ABI test with those from the tensile test. The results show that the scatter of the ABI test data is reasonably acceptable in comparison with that of standard tensile data when, of the five to seven data points tested on a specimen, two out-of-trend points are excluded. In addition, the scatter decreases slightly as the δ-ferrite content of the CSS increases. Moreover, the ABI test can directly measure the flow parameters of CSS with error bounds of about ±10% for the ultimate tensile stress and the strength coefficient, and about ±15% for the yield stress and the strain hardening exponent. The accuracy of the ABI test data is independent of the amount of δ-ferrite in the CSS.

Model Checking for Time-Series Count Data

  • Lee, Sung-Im
    • Communications for Statistical Applications and Methods
    • /
    • v.12 no.2
    • /
    • pp.359-364
    • /
    • 2005
  • This paper considers a specification test of the conditional Poisson regression model for time-series count data. Although conditional models for count data have received attention and have been proposed in several forms, few studies have focused on checking their adequacy. Motivated by tests of the martingale difference assumption, a specification test based on the Ljung-Box statistic is proposed for the conditional model of time-series count data. Simulation results are provided to illustrate the performance of the Ljung-Box test.
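
One common way to operationalize such a check, sketched here under assumptions (a lag-1 conditional Poisson model, Ljung-Box applied to Pearson residuals); the paper's exact model and test statistic may differ.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(0)

# Simulate a simple conditional (lag-1) Poisson count series for illustration.
n = 300
y = np.zeros(n, dtype=int)
for t in range(1, n):
    lam = np.exp(0.5 + 0.3 * np.log1p(y[t - 1]))
    y[t] = rng.poisson(lam)

# Fit a conditional Poisson regression with the lagged count as covariate.
X = sm.add_constant(np.log1p(y[:-1]))
fit = sm.GLM(y[1:], X, family=sm.families.Poisson()).fit()

# Ljung-Box test on the Pearson residuals: little remaining autocorrelation
# suggests the conditional specification is adequate.
print(acorr_ljungbox(fit.resid_pearson, lags=[10]))
```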

Software Test Automation Using Data-Driven Approach : A Case Study on the Payment System for Online Shopping (데이터 주도 접근법을 활용한 소프트웨어 테스트 자동화 : 온라인 쇼핑몰 결제시스템 사례)

  • Kim, Sungyong;Min, Daihwan;Rim, Seongtaek
    • Journal of Information Technology Services
    • /
    • v.17 no.1
    • /
    • pp.155-170
    • /
    • 2018
  • This study examines a data-driven approach to software test automation at an online shopping site. Online shopping sites typically change prices dynamically, offer various discounts or coupons, and provide diverse delivery and payment options such as electronic funds transfer, credit cards, and mobile payments (KakaoPay, NaverPay, SyrupPay, ApplePay, SamsungPay, etc.). As a result, they have to test numerous combinations of possible customer choices continuously and repetitively; the total number of test cases is almost 584 billion. This makes some form of test automation for payment settlement necessary. However, the record-playback approach has difficulty maintaining automation scripts because of frequent changes and complicated component identification. In contrast, the data-driven approach minimizes changes in scripts and component identification. This study shows that the data-driven approach to test automation is more effective than the traditional record-playback method. In 2014, before the test automation, the monthly average number of defects was 5.6 during testing and 12.5 during operation. In 2015, after the test automation, the monthly averages were 9.4 during testing and 2.8 during operation. A comparison of live defects with errors detected during testing shows statistically significant differences before and after introducing the data-driven test automation.
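
A toy illustration of the data-driven style the abstract contrasts with record-playback: test cases live in a data table, and one parameterized test consumes them. The settle_payment function, the coupon codes, and the expected amounts are hypothetical, not the shopping site's actual system.

```python
import pytest

# Hypothetical system under test: returns the amount actually charged.
def settle_payment(price, coupon, method):
    discount = {"NONE": 0, "WELCOME10": 0.10}[coupon]
    assert method in {"CARD", "TRANSFER", "KakaoPay", "NaverPay"}
    return round(price * (1 - discount))

# Test cases are data, not script logic: a new payment combination is a new
# row here, with no change to the test code or component identification.
CASES = [
    (10000, "NONE", "CARD", 10000),
    (10000, "WELCOME10", "KakaoPay", 9000),
    (25000, "WELCOME10", "TRANSFER", 22500),
    (25000, "NONE", "NaverPay", 25000),
]

@pytest.mark.parametrize("price,coupon,method,expected", CASES)
def test_settle_payment(price, coupon, method, expected):
    assert settle_payment(price, coupon, method) == expected
```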