• Title/Abstract/Keyword: testing data

Search results: 4,708 items (processing time: 0.027 s)

공진주 실험의 이론적 모델링에 의한 자료분석 및 해석기법의 제안 (Data Reduction and Analysis Technique for the Resonant Column Testing by Its Theoretical Modeling)

  • 조성호;황선근;강태호;권병성
    • 한국지반공학회:학술대회논문집 / 한국지반공학회 2003년도 봄 학술발표회 논문집 / pp.291-298 / 2003
  • The resonant column test is a laboratory testing method for determining the shear modulus and the material damping factor of soils. The method has been widely used in many applications, and its importance has grown. Since the establishment of the testing method in 1963, the low-technology electronic devices used for testing and data acquisition have limited the measurement to the amplitude of the linear spectrum. The limitations of the testing method were also attributed to the assumption of a linear-elastic material in the theory of the resonant column test and to the use of the wave equation for the dynamic response of the specimen. For a better theoretical formulation of the resonant column test, this study derived the equation of motion and provided its solution. This study also proposed an improved data reduction and analysis method for the resonant column test, based on an advanced data acquisition system and the proposed theoretical solution for the resonant column testing system. For verification of the proposed data reduction and analysis method, a numerical simulation of the resonant column test was performed by finite element analysis. Also, a series of resonant column tests was performed on Joomunjin sand, which verified the feasibility of the proposed method and showed the limitations of the conventional data reduction and analysis method.
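For background, the classical reduction that this study generalizes rests on the one-dimensional torsional wave equation for the specimen. A minimal sketch of the standard linear-elastic, fixed-free idealization with a drive-system inertia at the free end (notation assumed here, not taken from the paper):

```latex
% Torsional vibration of the specimen (linear-elastic idealization)
\frac{\partial^2 \theta}{\partial t^2} = V_s^2\,\frac{\partial^2 \theta}{\partial z^2},
\qquad V_s = \sqrt{G/\rho}

% Frequency equation for a fixed-free column with end inertia:
% I = specimen polar inertia, I_0 = drive-system inertia,
% \omega_n = measured resonant frequency, L = specimen length
\frac{I}{I_0} = \beta \tan\beta,
\qquad \beta = \frac{\omega_n L}{V_s}
```

Solving the frequency equation for β from the measured resonant frequency yields the shear wave velocity V_s, and hence the shear modulus G = ρV_s².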


공업통계분야에서 동등성 검정 및 그 응용 (Equivalence testing and its applications in industry)

  • 백재욱
    • 품질경영학회지 / Vol.36 No.4 / pp.1-6 / 2008
  • As more and more data are collected, one may ask whether the data collected within a short period of time are the same. In this case, the traditional hypothesis test of $H_o:{\mu}_1={\mu}_2$ vs $H_1:{\mu}_1{\neq}{\mu}_2$ is used to determine whether the data are the same when there is no knowledge of equivalence testing. However, this type of hypothesis test has the undesirable property of penalizing higher precision. So the TOST (two one-sided tests) procedure is to be performed in the case of equivalence testing. In this study, equivalence testing is introduced together with its applications in industry. The traditional two-sample t test is compared with the equivalence test, and the procedure for performing the equivalence test is presented along with an example. Finally, equivalence testing in terms of other parameters, such as the variance, proportion, or failure rate, is discussed.
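The TOST procedure mentioned above can be sketched as follows. This is a minimal equal-variance version; the data, equivalence margins, and significance level are illustrative, not taken from the paper:

```python
import numpy as np
from scipy import stats

def tost_ind(x, y, low, high):
    """Two one-sided tests (TOST) for equivalence of two means.

    Equivalence is declared at level alpha when both one-sided tests
    reject, i.e. when max(p_lower, p_upper) < alpha.
    low/high are the equivalence margins for mu_x - mu_y.
    """
    nx, ny = len(x), len(y)
    diff = np.mean(x) - np.mean(y)
    # pooled standard error (equal-variance two-sample t statistic)
    sp2 = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    se = np.sqrt(sp2 * (1.0 / nx + 1.0 / ny))
    df = nx + ny - 2
    # H0: diff <= low   vs  H1: diff > low
    p_lower = 1.0 - stats.t.cdf((diff - low) / se, df)
    # H0: diff >= high  vs  H1: diff < high
    p_upper = stats.t.cdf((diff - high) / se, df)
    return max(p_lower, p_upper)

rng = np.random.default_rng(0)
x = rng.normal(10.0, 1.0, 50)
y = rng.normal(10.1, 1.0, 50)
p = tost_ind(x, y, -0.5, 0.5)  # hypothetical margin of +/-0.5
print(p)
```

Note how this inverts the usual logic: higher precision (smaller standard error) makes it *easier* to demonstrate equivalence, which removes the penalty on precision that the traditional two-sided test suffers from.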

선체구조 모델 데이터의 교환 표준에 따른 적합성 시험 기준의 개발 (Development of Conformance Testing Criteria for STEP AP218 (Ship Structure))

  • 황호진
    • 한국해양공학회지 / Vol.24 No.2 / pp.74-81 / 2010
  • Ship STEP is the international standard for the exchange of ship modeling data between heterogeneous systems. It is expected that STEP AP218 can be used for seamless data exchange among the various CAD/CAM/CAE systems used in the shipbuilding design process. Although conformance assessment against the standard would maximize the performance of, and confidence in, data exchanges, most research has been directed toward interoperability testing. ISO SC4/TC184 only provides the method for conformance testing, which can be used with test cases on application protocols. Even though standards have been defined for conformance assessment and testing, there is no organization or association that carries them out; CAD vendors have focused on interoperability testing to evaluate the performance of their own systems. In this paper, conformance testing criteria for AP218 have been developed with abstract test cases for ship structures. The requested STEP translator was also reviewed against the developed item pool of testing criteria. The criteria methodology can serve as a guideline for the development of translators and interfaces. The item-pool method of testing criteria for conformance assessment would increase the performance and efficiency of data translators for Ship STEP and other standards.
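As a rough illustration of what one layer of such conformance checking involves, the sketch below runs a few purely syntactic checks against the STEP Part 21 (ISO 10303-21) clear-text file structure. The checks and the sample file are hypothetical simplifications, far short of real AP218 conformance criteria:

```python
import re

def check_p21_structure(text):
    """Minimal syntactic conformance checks for a STEP Part 21 file.

    Covers only a small illustrative subset of the ISO 10303-21 grammar;
    returns a list of error messages (empty means all checks passed).
    """
    errors = []
    if not text.lstrip().startswith("ISO-10303-21;"):
        errors.append("missing 'ISO-10303-21;' start marker")
    if "END-ISO-10303-21;" not in text:
        errors.append("missing 'END-ISO-10303-21;' end marker")
    for section in ("HEADER;", "DATA;"):
        if section not in text:
            errors.append(f"missing {section} section")
    if text.count("ENDSEC;") < 2:
        errors.append("each section must be closed with ENDSEC;")
    # every DATA-section instance should look like '#<id> = ENTITY(...);'
    data = re.search(r"DATA;(.*?)ENDSEC;", text, re.S)
    if data:
        for line in filter(None, map(str.strip, data.group(1).splitlines())):
            if not re.match(r"#\d+\s*=\s*[A-Z0-9_]+\s*\(.*\)\s*;$", line):
                errors.append(f"malformed instance line: {line!r}")
    return errors

# hypothetical sample file; 'PLATE' is an illustrative entity name
sample = """ISO-10303-21;
HEADER;
FILE_DESCRIPTION(('ship structure'),'2;1');
ENDSEC;
DATA;
#10 = PLATE('deck',#20);
ENDSEC;
END-ISO-10303-21;
"""
print(check_p21_structure(sample))
```

Real conformance criteria go much further, checking semantic constraints of the application protocol against abstract test cases rather than just file syntax, which is what the item pool developed in the paper addresses.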

로지스틱 테스트 노력함수를 이용한 소프트웨어의 최적인도시기 결정에 관한 연구 (A Study on the Optimal Release Time Decision of a Developed Software by using Logistic Testing Effort Function)

  • 최규식;김용경
    • Journal of Information Technology Applications and Management / Vol.12 No.2 / pp.1-13 / 2005
  • This paper proposes a software reliability growth model incorporating the amount of testing effort expended during the software testing phase after development. The time-dependent behavior of testing-effort expenditure is described by a logistic curve. Assuming that the error detection rate with respect to the amount of testing effort spent during the testing phase is proportional to the current error content, a software reliability growth model is formulated as a nonhomogeneous Poisson process. Using this model, a method of data analysis for software reliability measurement is developed. After defining software reliability, the relations between testing time and reliability, and between the duration following failure fixing and reliability, are studied. SRGMs in the literature have used the exponential, Rayleigh, or Weibull curve for the amount of testing effort during the software testing phase. However, it might not be appropriate to represent the consumption curve for testing effort by one of the already proposed curves in some software development environments. Therefore, this paper shows that a logistic testing-effort function can adequately express the software development/testing effort curve and that it gives a good predictive capability based on real failure data.
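The components described above fit together as sketched below: a logistic cumulative-effort curve drives an NHPP mean value function of the common effort-dependent form m(t) = a(1 − e^(−r[W(t) − W(0)])), from which a conditional reliability follows. The parameter values are hypothetical, not fitted to the paper's failure data:

```python
import math

def logistic_effort(t, alpha, A, k):
    """Cumulative testing effort W(t) consumed by time t (logistic curve)."""
    return alpha / (1.0 + A * math.exp(-k * t))

def expected_faults(t, a, r, alpha, A, k):
    """NHPP mean value function driven by expended effort:
    m(t) = a * (1 - exp(-r * [W(t) - W(0)]))."""
    w = logistic_effort(t, alpha, A, k) - logistic_effort(0.0, alpha, A, k)
    return a * (1.0 - math.exp(-r * w))

def reliability(x, t, a, r, alpha, A, k):
    """Probability of no failure in (t, t+x]: R(x|t) = exp(-[m(t+x) - m(t)])."""
    return math.exp(-(expected_faults(t + x, a, r, alpha, A, k)
                      - expected_faults(t, a, r, alpha, A, k)))

# illustrative parameters (assumed for the sketch)
a, r = 100.0, 0.05             # total fault content, detection rate per unit effort
alpha, A, k = 60.0, 20.0, 0.4  # logistic effort-curve parameters
print(expected_faults(10.0, a, r, alpha, A, k))
print(reliability(1.0, 10.0, a, r, alpha, A, k))
```

The S-shape of the logistic curve is the point of the paper: effort ramps up slowly, peaks mid-phase, and tails off, which exponential or Weibull effort curves may fit poorly in some development environments.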


국제표준 ISO/IEC 25023 을 기반으로 한 소프트웨어 품질평가 (The Software Quality Testing on the basis of the International Standard ISO/IEC 25023)

  • 정혜정
    • 한국융합학회논문지 / Vol.7 No.6 / pp.35-41 / 2016
  • As software becomes more important, interest in software quality evaluation is increasing. This study compares and analyzes the international standard documents for software quality evaluation and presents an evaluation approach based on the analysis of test data. The differences between the evaluation model of ISO/IEC 9126-2 and that of ISO/IEC 25023 were compared. Evaluation metrics are presented for the eight quality characteristics of the ISO/IEC 25023 evaluation model: functionality, reliability, usability, maintainability, portability, efficiency, interoperability, and security. By analyzing 331 records obtained from actual tests, the characteristics of the errors found in the test data were identified; the defect data were also analyzed and their differences examined. It was demonstrated that the number of test days and the number of errors found per quality characteristic differ by tester gender, the number of test days was predicted from functionality, usability, and gender, and it was also demonstrated that the number of errors differs by product type.

An Adequacy Based Test Data Generation Technique Using Genetic Algorithms

  • Malhotra, Ruchika;Garg, Mohit
    • Journal of Information Processing Systems / Vol.7 No.2 / pp.363-384 / 2011
  • As the complexity of software increases, generating effective test data has become a necessity, which has increased the demand for techniques that can generate test data effectively. This paper proposes a test data generation technique based on adequacy-based testing criteria. Adequacy-based testing criteria use the concept of mutation analysis to check the adequacy of test data. In general, mutation analysis is applied after the test data have been generated. In this work, however, we propose a technique that applies mutation analysis at the time of test data generation, rather than after the test data have been generated. This saves a significant amount of time (required to generate adequate test cases) compared with the latter case, since the total time in the latter case is the sum of the time to generate test data and the time to apply mutation analysis to the generated test data. We also use genetic algorithms, which explore the complete domain of the program to provide a near-global optimum solution. In this paper, we first define and explain the proposed technique. Then we validate it using ten real-time programs. The proposed technique is compared with the path testing technique (which uses reliability-based testing criteria) for these ten programs. The results show that the adequacy-based proposed technique is better than the reliability-based path testing technique, with a significant reduction in the number of generated test cases and in the time taken to generate them.
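The core loop, evaluating mutation adequacy *during* the genetic search rather than afterwards, can be sketched on a toy program. The program, the seeded mutant, and the GA parameters below are all hypothetical, chosen only to make the idea concrete:

```python
import random

def mid(x, y, z):
    """Program under test: return the middle of three numbers."""
    if (y <= x <= z) or (z <= x <= y):
        return x
    if (x <= y <= z) or (z <= y <= x):
        return y
    return z

def mid_mutant(x, y, z):
    """Mutant: relational operator '<=' replaced by '<' (a seeded fault)."""
    if (y < x < z) or (z < x < y):
        return x
    if (x < y < z) or (z < y < x):
        return y
    return z

def fitness(ind):
    """Adequacy-based fitness: 1.0 if the test input kills the mutant,
    i.e. the original and the mutant produce different outputs."""
    return 1.0 if mid(*ind) != mid_mutant(*ind) else 0.0

def evolve(pop_size=30, gens=100, lo=-10, hi=10, seed=1):
    """Evolve an integer triple that kills the mutant (None if not found)."""
    rng = random.Random(seed)
    pop = [tuple(rng.randint(lo, hi) for _ in range(3)) for _ in range(pop_size)]
    for _ in range(gens):
        killer = next((ind for ind in pop if fitness(ind) == 1.0), None)
        if killer:
            return killer
        # tournament selection + one-point crossover + random-reset mutation
        nxt = []
        while len(nxt) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randint(1, 2)
            child = list(p1[:cut] + p2[cut:])
            if rng.random() < 0.3:
                child[rng.randrange(3)] = rng.randint(lo, hi)
            nxt.append(tuple(child))
        pop = nxt
    return None

test_input = evolve()
print(test_input)
```

For this mutant only inputs exercising the boundary (e.g. two equal values) are adequate, which is exactly the kind of test case that reliability-based path coverage alone need not produce.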

Bayesian hypothesis testing for homogeneity of coefficients of variation in k normal populations

  • Kang, Sang-Gil
    • Journal of the Korean Data and Information Science Society / Vol.21 No.1 / pp.163-172 / 2010
  • In this paper, we deal with the problem of testing the homogeneity of coefficients of variation in several normal distributions. We propose Bayesian hypothesis testing procedures based on the Bayes factor under a noninformative prior. The noninformative prior is usually improper, which yields a calibration problem that leaves the Bayes factor defined only up to a multiplicative constant. So we propose objective Bayesian hypothesis testing procedures based on the fractional Bayes factor and the intrinsic Bayes factor under the reference prior. A simulation study and a real data example are provided.

Bayesian Hypothesis Testing for the Difference of Quantiles in Exponential Models

  • Kang, Sang-Gil
    • Journal of the Korean Data and Information Science Society / Vol.19 No.4 / pp.1379-1390 / 2008
  • This article deals with the problem of testing the difference of quantiles in exponential distributions. We propose Bayesian hypothesis testing procedures for the difference of two quantiles under a noninformative prior. The noninformative prior is usually improper, which yields a calibration problem that leaves the Bayes factor defined only up to a multiplicative constant. So we propose objective Bayesian hypothesis testing procedures based on the fractional Bayes factor and the intrinsic Bayes factor under the matching prior. A simulation study and a real data example are provided.


AUTOSAR XML을 이용한 테스팅 자동화 시스템 개발 (Automated Testing System Using AUTOSAR XML)

  • 금대현;이성훈;박광민;조정훈
    • 대한임베디드공학회논문지 / Vol.4 No.4 / pp.156-163 / 2009
  • Recently, a standard software platform for automotive systems, AUTOSAR, has been developed to manage growing software complexity and improve software reusability. However, reuse of testing systems and test data is difficult because they depend on the implementation language and testing phase. In this paper, we suggest an automated testing approach for AUTOSAR software components using a standardized testing language, TTCN-3. AUTOSAR defines the AUTOSAR XML schema as its data exchange format, so it is possible to automatically convert an AUTOSAR model into a TTCN-3 testing model. Our approach is therefore to present generation techniques for the TTCN-3 testing system from an AUTOSAR XML description. With the proposed testing techniques, we can reduce the time and effort needed to build the testing system and can reuse the testing environment.
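The model-to-test-skeleton mapping can be illustrated roughly as below. The ARXML fragment is a hypothetical, heavily simplified stand-in (real AUTOSAR XML is namespaced and far richer), and the emitted TTCN-3 is only a component/testcase skeleton, not a complete test suite:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified ARXML fragment for illustration only
ARXML = """
<AUTOSAR>
  <SW-COMPONENT-TYPE>
    <SHORT-NAME>WiperControl</SHORT-NAME>
    <PORTS>
      <P-PORT><SHORT-NAME>Speed</SHORT-NAME></P-PORT>
      <R-PORT><SHORT-NAME>RainSensor</SHORT-NAME></R-PORT>
    </PORTS>
  </SW-COMPONENT-TYPE>
</AUTOSAR>
"""

def generate_ttcn3(arxml):
    """Emit one TTCN-3 test-component skeleton per software component."""
    root = ET.fromstring(arxml)
    modules = []
    for comp in root.iter("SW-COMPONENT-TYPE"):
        name = comp.findtext("SHORT-NAME")
        # map each provided/required port of the component to a TTCN-3 port
        ports = [p.findtext("SHORT-NAME")
                 for p in comp.iter() if p.tag in ("P-PORT", "R-PORT")]
        lines = [f"module {name}_Test {{",
                 f"  type component {name}_TC {{"]
        lines += [f"    port {p}_PT {p};" for p in ports]
        lines += ["  }",
                  f"  testcase tc_{name}() runs on {name}_TC {{}}",
                  "}"]
        modules.append("\n".join(lines))
    return "\n\n".join(modules)

skeleton = generate_ttcn3(ARXML)
print(skeleton)
```

Because both the ARXML schema and TTCN-3 are standardized, a generator of this shape is reusable across projects and toolchains, which is the reuse argument the paper makes.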


Machine Learning Frameworks for Automated Software Testing Tools : A Study

  • Kim, Jungho;Ryu, Joung Woo;Shin, Hyun-Jeong;Song, Jin-Hee
    • International Journal of Contents / Vol.13 No.1 / pp.38-44 / 2017
  • Increased use of software, the growing complexity of software functions, and shortened software quality evaluation periods have increased the importance and necessity of automating software testing. Automating software testing by using machine learning not only minimizes the errors of manual testing but also allows speedier evaluation. Research on machine learning in automated software testing has so far focused on solving special problems with algorithms, which makes it difficult for software developers and testers to apply machine learning to software testing automation. Based on a review of related studies, this paper proposes a new machine learning framework for software testing automation. To maximize the performance of software testing, we analyzed and categorized the machine learning algorithms applicable to each software test phase, including the diverse data that can be used in the algorithms. We believe that our framework allows software developers or testers to choose a machine learning algorithm suitable for their purpose.