• Title/Summary/Keyword: failure time data

Search Results: 1,081

Anomaly Detection of Big Time Series Data Using Machine Learning (머신러닝 기법을 활용한 대용량 시계열 데이터 이상 시점탐지 방법론 : 발전기 부품신호 사례 중심)

  • Kwon, Sehyug
    • Journal of Korean Society of Industrial and Systems Engineering / v.43 no.2 / pp.33-38 / 2020
  • Machine-learning anomaly detection methods such as PCA anomaly detection and CNN image classification have mostly been applied to cross-sectional data. In this paper, two approaches are suggested for applying ML techniques to identify the failure time in big time series data. PCA anomaly detection is used to classify time rows as normal or abnormal by converting the subject-identification problem to the time domain. CNN image classification is applied to identify the failure time after restructuring the time series data: the correlation matrix of each minute of data is computed and converted to TIFF image format. In addition, LASSO, a feature selection method, is applied to select the variables that best identify the failure status. For the empirical study, time series data were collected every second from a power generator with 214 components over 25 minutes, including the 20 minutes before the failure. PCA anomaly detection detected the failure 9 minutes 17 seconds in advance, but the combination of LASSO and PCA did not detect it, because the target variable was a binary variable assigned on the basis of the failure time. CNN image classification, trained on 10 normal-status images and 5 failure-status images, detected the failure just one minute in advance.
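The PCA step described above can be sketched as a reconstruction-error detector: fit principal directions on normal-operation rows, then flag test rows the subspace cannot reconstruct. This is an illustrative toy, not the paper's code; the matrix sizes and the injected shift are invented.

```python
import numpy as np

def fit_pca(X_train, k):
    """Learn a mean and the top-k principal directions from normal rows."""
    mu = X_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    return mu, Vt[:k].T                       # (n_sensors, k) loading matrix

def anomaly_scores(X, mu, V):
    """Score each time row by its PCA reconstruction error."""
    Xc = X - mu
    resid = Xc - Xc @ V @ V.T                 # part not explained by the subspace
    return np.linalg.norm(resid, axis=1)

# Toy data: rows are one-second sensor snapshots, columns are components.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 20))          # normal operation only
X_test = rng.normal(size=(11, 20))
X_test[-1] += 5.0                             # inject a "failure" row

mu, V = fit_pca(X_train, k=5)
scores = anomaly_scores(X_test, mu, V)
flagged = int(np.argmax(scores))              # index of the most anomalous row
```

Thresholding the scores (e.g. at a high quantile of the training scores) then turns this into a normal/abnormal label per time row.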

Modeling of Rate-of-Occurrence-of-Failure According to the Failure Data Type of Water Distribution Cast Iron Pipes and Estimation of Optimal Replacement Time Using the Modified Time Scale (상수도 주철 배수관로의 파손자료 유형에 따른 파손율 모형화와 수정된 시간척도를 이용한 최적교체시기의 산정)

  • Park, Su-Wan; Jun, Hwan-Don; Kim, Jung-Wook
    • Journal of Korea Water Resources Association / v.40 no.1 s.174 / pp.39-50 / 2007
  • This paper presents applications of the log-linear ROCOF (rate-of-occurrence-of-failure) and the Weibull ROCOF to model the failure rate of individual cast iron pipes in a water distribution system, and provides a method of estimating the economically optimal replacement time of the pipes using the 'modified time-scale'. The performance of the two ROCOFs is examined using the maximized log-likelihood estimates of the ROCOFs for two types of failure data: 'failure-time data' and 'failure-number data'. The optimal replacement time equations for the two models are developed by applying the 'modified time-scale' to ensure the numerical convergence of the estimated model parameters. The methodology is applied to the case-study water distribution cast iron pipes, and the log-linear ROCOF is found to have better modeling capability than the Weibull ROCOF when the 'failure-time data' are used. Furthermore, the 'failure-time data' are determined to be more appropriate for both ROCOFs than the 'failure-number data' in terms of ROCOF modeling performance for the water mains under study, implying that recording each failure time results in better modeling of the failure rate than recording failure counts over time intervals.
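For the log-linear ROCOF the intensity is lambda(t) = exp(b0 + b1*t), and with 'failure-time data' the NHPP log-likelihood can be maximized by profiling out b0, leaving a one-dimensional search over b1. A numpy sketch on simulated data (not the paper's pipe records):

```python
import numpy as np

def loglinear_rocof_mle(times, T, beta1_grid):
    """Profile-likelihood MLE for an NHPP with intensity
    lambda(t) = exp(b0 + b1*t), observed on (0, T]."""
    times = np.asarray(times)
    n, s = len(times), times.sum()
    # For fixed b1 > 0 the score equation for b0 has a closed form:
    # exp(b0) = n*b1 / (exp(b1*T) - 1), so only b1 needs a search.
    b0_hat = np.log(n * beta1_grid / np.expm1(beta1_grid * T))
    profile_ll = n * b0_hat + beta1_grid * s - n
    i = int(np.argmax(profile_ll))
    return b0_hat[i], beta1_grid[i]

# Simulate 'failure-time data' from lambda(t) = exp(-1 + 0.05*t) by thinning.
rng = np.random.default_rng(1)
T = 100.0
lam_max = np.exp(-1.0 + 0.05 * T)             # intensity peaks at t = T
cand = np.sort(rng.uniform(0.0, T, rng.poisson(lam_max * T)))
keep = rng.uniform(size=len(cand)) < np.exp(-1.0 + 0.05 * cand) / lam_max
times = cand[keep]

b0, b1 = loglinear_rocof_mle(times, T, np.linspace(0.001, 0.2, 400))
```

The recovered (b0, b1) should land near the simulation's (-1, 0.05); a Weibull ROCOF would replace the exponential intensity with lambda(t) = (b/a)*(t/a)**(b-1) and need its own likelihood.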

The Study for Software Future Forecasting Failure Time Using ARIMA AR(1) (ARIMA AR(1) 모형을 이용한 소프트웨어 미래 고장 시간 예측에 관한 연구)

  • Kim, Hee-Cheul; Shin, Hyun-Cheul
    • Convergence Security Journal / v.8 no.2 / pp.35-40 / 2008
  • Software failure time data presented in the literature exhibit constant, monotonically increasing, or monotonically decreasing trends. For data analysis with software reliability models, trend-analysis tools have been developed; the main methods are the arithmetic mean test and the Laplace trend test, but trend analysis offers only an outline of the data. In this paper, we discuss forecasting the failure time in the presence of failure-time censoring. The software failure time data used for the forecast are random numbers from a Weibull distribution (shape parameter 1, scale parameter 0.5). Using these data, we propose an ARIMA (AR(1)) model and a simulation method for forecasting the failure time, and a practical ARIMA procedure is presented.
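A minimal version of this setup can be reproduced with a hand-rolled least-squares AR(1) fit in place of a full ARIMA package; the seed and sample size below are arbitrary, and the Weibull(shape 1, scale 0.5) data match the paper's stated simulation design.

```python
import numpy as np

def fit_ar1(x):
    """Least-squares fit of the AR(1) model x_t = c + phi * x_{t-1} + e_t."""
    y, z = x[1:], x[:-1]
    A = np.column_stack([np.ones_like(z), z])
    (c, phi), *_ = np.linalg.lstsq(A, y, rcond=None)
    return c, phi

def forecast_ar1(x, c, phi, steps=1):
    """Iterate the fitted recursion forward from the last observation."""
    out, last = [], x[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return np.array(out)

# Failure-time data as in the paper's setup: Weibull random numbers with
# shape parameter 1 and scale parameter 0.5 (i.e. exponential, mean 0.5).
rng = np.random.default_rng(7)
gaps = 0.5 * rng.weibull(1.0, size=100)
c, phi = fit_ar1(gaps)
next_gap = forecast_ar1(gaps, c, phi, steps=1)[0]
```

Since the simulated gaps are independent, phi should come out near zero and the one-step forecast near the mean gap of 0.5.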


Development of Reliability Analysis Procedures for Repairable Systems with Interval Failure Time Data and a Related Case Study (구간 고장 데이터가 주어진 수리가능 시스템의 신뢰도 분석절차 개발 및 사례연구)

  • Cho, Cha-Hyun; Yum, Bong-Jin
    • Journal of the Korea Institute of Military Science and Technology / v.14 no.5 / pp.859-870 / 2011
  • The purpose of this paper is to develop reliability analysis procedures for repairable systems with interval failure time data and to apply them to assessing the storage reliability of a subsystem of a certain type of guided missile. In the procedures, the interval failure time data are converted to pseudo failure times using the uniform random generation method, the mid-point method, or the equispaced intervals method. Then, analytic trend tests such as the Laplace, Lewis-Robinson, and Pairwise Comparison Nonparametric tests are used to determine whether the failure process follows a renewal or non-renewal process. Monte Carlo simulation experiments are conducted to compare the three conversion methods in terms of the statistical performance of each trend test when the underlying process is homogeneous Poisson, renewal, or non-homogeneous Poisson. The simulation results show that the uniform random generation method is the best of the three. These results are applied to actual field data collected for a subsystem of a certain type of guided missile to identify its failure process and to estimate its mean time to failure and annual mean repair cost.
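The uniform-random-generation conversion and the Laplace trend test in this procedure can be sketched as follows; the interval boundaries and failure counts are invented, not the missile data.

```python
import numpy as np

def pseudo_failure_times(intervals, counts, rng):
    """Uniform random generation: scatter each interval's failures
    uniformly inside that interval (one of the three conversion methods)."""
    times = [rng.uniform(a, b, size=k) for (a, b), k in zip(intervals, counts)]
    return np.sort(np.concatenate(times))

def laplace_test(times, T):
    """Laplace trend statistic, approximately N(0,1) under an HPP.
    Large positive values indicate a deteriorating (non-renewal) system."""
    n = len(times)
    return (times.mean() - T / 2.0) / (T / np.sqrt(12.0 * n))

rng = np.random.default_rng(3)
intervals = [(0, 100), (100, 200), (200, 300), (300, 400)]
counts = [2, 3, 6, 11]                 # rising counts suggest deterioration
t = pseudo_failure_times(intervals, counts, rng)
u = laplace_test(t, T=400.0)           # clearly positive here
```

Comparing u against a normal critical value (e.g. 1.645 at the 5% level, one-sided) gives the renewal/non-renewal decision used before choosing a model.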

Obtaining bootstrap data for the joint distribution of bivariate survival times

  • Kwon, Se-Hyug
    • Journal of the Korean Data and Information Science Society / v.20 no.5 / pp.933-939 / 2009
  • Bivariate data in clinical research often involve two types of failure times: a mark variable for the first failure time, and the final failure time. This paper shows how to generate bootstrap data for Bayesian estimation of the joint distribution of bivariate survival times. The observed data were generated from Frank's family, and fake data were simulated using a Gamma prior on the survival time. The bootstrap data were obtained by combining the observed data with the fake data simulated from them.
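Dependent uniforms from Frank's family can be drawn by conditional inversion and then mapped to survival times; the theta value and exponential marginals below are arbitrary choices for illustration, not the paper's settings.

```python
import numpy as np

def frank_copula_sample(n, theta, rng):
    """Draw (u, v) uniform pairs with Frank-copula dependence by
    inverting the conditional distribution C(v | u) at a uniform w."""
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    num = w * np.expm1(-theta)
    den = 1.0 + (1.0 - w) * np.expm1(-theta * u)
    v = -np.log1p(num / den) / theta
    return u, v

rng = np.random.default_rng(11)
u, v = frank_copula_sample(4000, theta=8.0, rng=rng)   # theta > 0: positive dependence
# Map the dependent uniforms to exponential survival times (rates 0.1, 0.2).
t1, t2 = -np.log(1.0 - u) / 0.1, -np.log(1.0 - v) / 0.2
```

Resampling such pairs with replacement then yields the bootstrap replicates over which posterior summaries can be computed.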


Semiparametric accelerated failure time model for the analysis of right censored data

  • Jin, Zhezhen
    • Communications for Statistical Applications and Methods / v.23 no.6 / pp.467-478 / 2016
  • The accelerated failure time model, or accelerated life model, relates the logarithm of the failure time linearly to the covariates, so the parameters in the model have a direct interpretation. In this paper, we review some newly developed, practically useful estimation and inference methods for the model in the analysis of right-censored data.
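The direct interpretation shows up in a tiny simulation: under log(T) = beta*x + eps, a unit increase in x multiplies survival time by exp(beta), the acceleration factor. The effect size beta = 0.7 is made up, and no censoring is applied in this sketch.

```python
import numpy as np

rng = np.random.default_rng(5)
n, beta = 4000, 0.7                   # hypothetical covariate effect
x = rng.integers(0, 2, size=n)        # binary covariate (e.g. treatment group)
T = np.exp(beta * x + rng.normal(size=n))   # AFT with log-normal errors

# Median survival time in group 1 vs group 0: should sit near exp(0.7) ~ 2.01.
ratio = np.median(T[x == 1]) / np.median(T[x == 0])
```

With right censoring, naive group medians break down, which is exactly why the rank-based and least-squares-type estimators reviewed in the paper are needed.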

Maximizing Mean Time to the Catastrophic Failure through Burn-In

  • Cha, Ji-Hwan
    • Journal of the Korean Data and Information Science Society / v.14 no.4 / pp.997-1005 / 2003
  • In this paper, the problem of determining the optimal burn-in time is considered under a general failure model with two types of failure. A Type I failure (minor failure) can be removed by a minimal repair, while a Type II failure (catastrophic failure) can be removed only by a complete repair. In this model, when the unit fails at age t, a Type I failure occurs with probability 1 − p(t) and a Type II failure occurs with probability p(t), where 0 ≤ p(t) ≤ 1. Under the model, properties of the burn-in time that maximizes the mean time to catastrophic failure during field operation are obtained, and the results are applied to some illustrative examples.
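Since minimal repairs leave the underlying failure process intact, catastrophic failures occur as a thinned process with intensity p(t)·r(t), where r(t) is the unit's failure rate; the post-burn-in mean time to catastrophic failure can then be evaluated numerically. The bathtub rate and constant p(t) below are invented for illustration, not taken from the paper.

```python
import numpy as np

def mttf_after_burn_in(b, r, p, t_max=200.0, dt=0.01):
    """MTTF(b) = integral_0^inf exp(-integral_b^{b+s} p(u) r(u) du) ds,
    the mean time from the end of burn-in to the first Type II failure."""
    s = np.arange(0.0, t_max, dt)
    lam = p(b + s) * r(b + s)             # catastrophic-failure intensity
    cum = np.cumsum(lam) * dt             # cumulative intensity from age b
    return np.sum(np.exp(-cum)) * dt

# Hypothetical bathtub failure rate and constant Type II probability.
r = lambda t: 2.0 * np.exp(-5.0 * t) + 0.05 + 0.002 * t
p = lambda t: 0.3 + 0.0 * t

burn_ins = np.linspace(0.0, 3.0, 61)
mttfs = np.array([mttf_after_burn_in(b, r, p) for b in burn_ins])
b_star = burn_ins[int(np.argmax(mttfs))]  # burn-in maximizing the MTTF
```

The maximum lands after the infant-mortality phase has been burned off but before wear-out dominates, matching the qualitative properties the paper derives.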


A Study on Revision Method of Historical Fault Data Considering Maintenance Effect to Use Proportional Aging Reduction(PAR) (PAR기법을 이용하여 유지보수 영향을 고려한 고장 데이터의 보정기법에 관한 연구)

  • Chu, Cheol-Min; Kim, Jae-Chul; Moon, Jong-Fil; Lee, Hee-Tae; Park, Chang-Ho
    • Proceedings of the KIEE Conference / 2006.11a / pp.9-11 / 2006
  • This paper suggests a revision method for historical fault data that uses Proportional Aging Reduction (PAR) to account for maintenance effects in a time-varying failure rate. Historical fault data are needed to produce a time-varying failure rate, but maintenance records may be missing from the history because of mistakes by field operators. In that case, the computed failure rate comes out lower than the true average failure rate, since maintenance extends equipment life-time. Hence, to obtain a correct time-varying failure rate, the maintenance effect must be extracted from the existing fault data. This paper proposes a revision method that reduces equipment life-time by applying PAR inversely, one of three techniques for considering maintenance effects.


Synthesizing Failure Data of Pump in PCB Manufacturing using Bayesian Method (베이지안 방법을 이용한 PCB 제조공정의 펌프 고장 데이터 합성)

  • Woo, Jeong Jae; Kim, Min Hwan; Chu, Chang Yeop; Baek, Jong Bae
    • Journal of the Korean Society of Safety / v.35 no.1 / pp.79-86 / 2020
  • Failure data that have been systematically managed over a long period yield estimates of high reliability. However, because much cost and effort are needed to secure reliability data, overseas data are used for quantitative risk analysis in many workplaces. The reliability of data that can be collected in a workplace may be low because of insufficient samples or a short observation time, so such estimates are difficult to use as they are; conversely, overseas data do not reflect the environment and characteristics of the workplace. This study therefore used a Bayesian method that can combine overseas reliability data with a workplace's smaller body of failure data. To model situations in which sufficient failure data cannot be secured, workplace failure data equivalent to 20% (t = 17,000), 40% (t = 24,000), 60% (t = 31,000), and 80% (t = 38,000) of the full observation time were composed and combined with IEEE data using the Bayesian method.
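The core Bayesian step in such studies is commonly a conjugate update: a Gamma prior on the failure rate (built from generic or overseas data) combined with a Poisson count of site failures. The prior parameters and failure count below are hypothetical, chosen only to echo the t = 17,000 observation time mentioned above.

```python
def gamma_poisson_update(a0, b0, k, T):
    """Combine a Gamma(a0, b0) failure-rate prior with k observed
    failures in T component-hours (Poisson likelihood).
    Returns the posterior Gamma parameters and the posterior mean rate."""
    a, b = a0 + k, b0 + T
    return a, b, a / b

# Hypothetical numbers: a generic-data prior with mean 2e-4 failures/hour,
# updated with 3 site failures observed over t = 17,000 hours.
a0, b0 = 2.0, 10000.0                 # prior mean a0/b0 = 2e-4
a, b, rate = gamma_poisson_update(a0, b0, k=3, T=17000.0)
# posterior mean = (2 + 3) / (10000 + 17000) = 5/27000
```

As the workplace observation time grows (the 20% to 80% settings in the abstract), the posterior shifts from the generic prior toward the site's own failure experience.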

Scalable Approach to Failure Analysis of High-Performance Computing Systems

  • Shawky, Doaa
    • ETRI Journal / v.36 no.6 / pp.1023-1031 / 2014
  • Failure analysis is necessary to clarify the root cause of a failure, predict the next time a failure may occur, and improve the performance and reliability of a system. However, it is not an easy task to analyze and interpret failure data, especially for complex systems. Usually, these data are represented using many attributes, and sometimes they are inconsistent and ambiguous. In this paper, we present a scalable approach for the analysis and interpretation of failure data of high-performance computing systems. The approach employs rough sets theory (RST) for this task. The application of RST to a large publicly available set of failure data highlights the main attributes responsible for the root cause of a failure. In addition, it is used to analyze other failure characteristics, such as time between failures, repair times, workload running on a failed node, and failure category. Experimental results show the scalability of the presented approach and its ability to reveal dependencies among different failure characteristics.
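The rough-sets machinery behind such an approach reduces to indiscernibility classes and lower/upper approximations of a target concept (say, failures with a hardware root cause). A toy sketch with invented failure records and attributes:

```python
from collections import defaultdict

def approximations(objects, attrs, target):
    """Rough-set lower/upper approximation of a target set of objects,
    under the indiscernibility relation induced by the given attributes."""
    classes = defaultdict(set)
    for name, row in objects.items():
        classes[tuple(row[a] for a in attrs)].add(name)
    # Lower: classes certainly inside the concept; upper: possibly inside.
    lower = set().union(*[c for c in classes.values() if c <= target])
    upper = set().union(*[c for c in classes.values() if c & target])
    return lower, upper

# Hypothetical failure records described by workload and node type.
records = {
    "f1": {"workload": "high", "node": "io"},
    "f2": {"workload": "high", "node": "io"},
    "f3": {"workload": "low",  "node": "io"},
    "f4": {"workload": "low",  "node": "cpu"},
}
hw_failures = {"f1", "f3", "f4"}      # target concept: hardware root cause
low, up = approximations(records, ["workload", "node"], hw_failures)
# f1 and f2 are indiscernible but disagree on the concept, so they fall
# in the boundary region: low == {f3, f4}, up == {f1, f2, f3, f4}.
```

Attributes whose removal leaves the approximations unchanged are dispensable; the ones that remain highlight the failure characteristics driving the root cause, which is the analysis the paper scales up.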