• Title/Abstract/Keywords: failure data

Search Results: 3,986

Analysis of Marginal Count Failure Data by using Covariates

  • Karim, Md.Rezaul;Suzuki, Kazuyuki
    • International Journal of Reliability and Applications
    • /
    • v.4 no.2
    • /
    • pp.79-95
    • /
    • 2003
  • Manufacturers collect and analyze field reliability data to enhance the quality and reliability of their products and to improve customer satisfaction. To reduce the data collecting and maintenance costs, the amount of data maintained for evaluating product quality and reliability should be minimized. With this in mind, some industrial companies assemble warranty databases by gathering data from different sources for a particular time period. This “marginal count failure data” does not provide (i) the number of failures by when the product entered service, (ii) the number of failures by product age, or (iii) information about the effects of the operating season or environment. This article describes a method for estimating age-based claim rates from marginal count failure data. It uses covariates to identify variations in claims relative to variables such as manufacturing characteristics, time of manufacture, operating season or environment. A Poisson model is presented, and the method is illustrated using warranty claims data for two electrical products.

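
To make the covariate-based Poisson modelling described in the entry above concrete, here is a minimal sketch that fits a Poisson regression of monthly warranty claim counts on a seasonal covariate, with units in service as exposure. The data, covariate names, and coefficients are invented for illustration and are not the paper's electrical-product warranty database.

```python
# Hedged sketch: Poisson regression of warranty claim counts on covariates.
# The data and covariate names are hypothetical, not the paper's warranty data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical monthly claim counts with two covariates:
# units in service (exposure) and a summer-season indicator.
months = pd.DataFrame({
    "units_in_service": rng.integers(800, 1200, size=24),
    "summer": ([0] * 5 + [1] * 4 + [0] * 3) * 2,
})
true_rate = 0.01 * np.exp(0.5 * months["summer"])
months["claims"] = rng.poisson(true_rate * months["units_in_service"])

# Poisson GLM with log link; exposure enters as an offset.
X = sm.add_constant(months[["summer"]])
model = sm.GLM(
    months["claims"],
    X,
    family=sm.families.Poisson(),
    offset=np.log(months["units_in_service"]),
)
result = model.fit()
print(result.summary())        # coefficient on "summer" captures the seasonal effect
print(np.exp(result.params))   # claim-rate ratios per covariate
```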

An Event-Driven Failure Analysis System for Real-Time Prognosis (실시간 고장 예방을 위한 이벤트 기반 결함원인분석 시스템)

  • Lee, Yang Ji;Kim, Duck Young;Hwang, Min Soon;Cheong, Young Soo
    • Korean Journal of Computational Design and Engineering
    • /
    • v.18 no.4
    • /
    • pp.250-257
    • /
    • 2013
  • This paper introduces a failure analysis procedure that underpins real-time fault prognosis. In a previous study, we developed a systematic eventization procedure that reduces the original data to a manageable size in the form of event logs, so that failure patterns can be extracted efficiently from the reduced data. Failure patterns are then extracted in the form of event sequences by sequence-mining algorithms (e.g., the FP-Tree algorithm). Extracted patterns are stored in a failure pattern library, and the stored pattern information is then used to predict potential failures. Two practical case studies (a marine diesel engine and a SIRIUS-II car engine) provide empirical support for the performance of the proposed failure analysis procedure. The procedure can easily be extended to a wide range of failure analysis applications, such as vehicle and machine diagnostics. Furthermore, it can be applied to human health monitoring and prognosis, so that human body signals can be analyzed efficiently.
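
The eventization-and-pattern-mining idea in the entry above can be sketched minimally as follows: raw sensor readings are reduced to threshold-crossing events, and frequently recurring event pairs that precede a failure are counted. The thresholds, event labels, and sequences below are invented, and the simple pair counter stands in for the FP-Tree sequence mining used in the paper.

```python
# Hedged sketch of eventization + simple frequent-pattern counting.
# Thresholds, event labels, and data are hypothetical; the paper itself
# applies FP-Tree-style sequence mining to real engine data.
from collections import Counter
from itertools import combinations

def eventize(readings, high=90.0, low=10.0):
    """Reduce a numeric sensor series to a compact event log."""
    events = []
    for t, value in enumerate(readings):
        if value > high:
            events.append((t, "TEMP_HIGH"))
        elif value < low:
            events.append((t, "PRESSURE_LOW"))
    return events

raw = [20, 95, 40, 5, 50, 92]
print(eventize(raw))   # [(1, 'TEMP_HIGH'), (3, 'PRESSURE_LOW'), (5, 'TEMP_HIGH')]

# Hypothetical event sequences, each ending in an observed failure.
sequences = [
    ["TEMP_HIGH", "PRESSURE_LOW", "VIBRATION", "FAILURE"],
    ["TEMP_HIGH", "VIBRATION", "FAILURE"],
    ["PRESSURE_LOW", "TEMP_HIGH", "VIBRATION", "FAILURE"],
]

# Count ordered event pairs that occur before the failure event.
pattern_counts = Counter()
for seq in sequences:
    prefix = seq[:seq.index("FAILURE")]
    for pair in combinations(prefix, 2):   # pairs keep their order within the sequence
        pattern_counts[pair] += 1

min_support = 2
failure_patterns = {p: c for p, c in pattern_counts.items() if c >= min_support}
print(failure_patterns)   # ('TEMP_HIGH', 'VIBRATION') appears in all three sequences
```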

Frailty, Sarcopenia, Cachexia, and Malnutrition in Heart Failure

  • Daichi Maeda;Yudai Fujimoto;Taisuke Nakade;Takuro Abe;Shiro Ishihara;Kentaro Jujo;Yuya Matsue
    • Korean Circulation Journal
    • /
    • v.54 no.7
    • /
    • pp.363-381
    • /
    • 2024
  • With global aging, the number of patients with heart failure has increased markedly. Heart failure is a complex condition intricately associated with aging, organ damage, frailty, and cognitive decline, resulting in a poor prognosis. The relationship among frailty, sarcopenia, cachexia, malnutrition, and heart failure has recently received considerable attention. Although these conditions are distinct, they often exhibit a remarkably close relationship. Overlapping diagnostic criteria have been observed in the recently proposed guidelines and position statements, suggesting that several of these conditions may coexist in patients with heart failure. Therefore, a comprehensive understanding of these conditions is essential, and interventions must not only target these conditions individually, but also provide comprehensive management strategies. This review article provides an overview of the epidemiology, diagnostic methods, overlap, and prognosis of frailty, sarcopenia, cachexia, and malnutrition in patients with heart failure, incorporating insights from the FRAGILEHF study data. Additionally, based on existing literature, this article discusses the impact of these conditions on the effectiveness of guideline-directed medical therapy for patients with heart failure. While recognizing these conditions early and promptly implementing interventions may be advantageous, further data, particularly from well-powered, large-scale, randomized controlled trials, are necessary to refine personalized treatment strategies for patients with heart failure.

A Study of Failure Rate Calculation Methods in Distribution System Reliability (배전 계통 신뢰도에서 고장률 산출 기법에 관한 연구)

  • Chai, Hui-Seok;Shin, Hee-Sang;Kang, Byoung-Wook;Ryu, Ki-Hwan;Kim, Jae-Chul;Choo, Dong-Wook
    • Proceedings of the KIEE Conference
    • /
    • 2011.07a
    • /
    • pp.326-327
    • /
    • 2011
  • Failure rate plays a pivotal role in reliability studies. When a system operates, its failure data reflect the actual operating environment. Therefore, estimating the system failure rate from component failure data yields a more accurate failure rate, one that reflects the system's operating environment. In this paper, we use component fault data to calculate the failure rate of a distribution system.

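
The basic calculation behind the entry above is the classical point estimate of a failure rate: observed failures divided by cumulative operating time. A minimal sketch with hypothetical component records (not the paper's distribution-system data):

```python
# Hedged sketch: point estimate of failure rate from component fault records.
# The component data are hypothetical, not the distribution-system data
# used in the paper.
components = [
    # (name, observed failures, cumulative operating hours)
    ("transformer",     3, 8760 * 50),    # 50 unit-years of operation
    ("circuit_breaker", 1, 8760 * 120),
    ("feeder_section",  7, 8760 * 200),
]

for name, failures, hours in components:
    rate_per_hour = failures / hours
    rate_per_year = rate_per_hour * 8760           # convert to failures per unit-year
    print(f"{name:16s} lambda = {rate_per_year:.4f} failures per unit-year")
```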

Risk Evaluation of Failure Cause for FMEA under a Weibull Time Delay Model (와이블 지연시간 모형 하에서의 FMEA를 위한 고장원인의 위험평가)

  • Kwon, Hyuck Moo;Lee, Min Koo;Hong, Sung Hoon
    • Journal of the Korean Society of Safety
    • /
    • v.33 no.3
    • /
    • pp.83-91
    • /
    • 2018
  • This paper suggests a Weibull time delay model to evaluate failure risks in FMEA (failure modes and effects analysis). Assuming three types of loss functions for the delay in detecting a failure cause, the risk of each failure cause is evaluated in terms of its occurrence frequency and expected loss. Since a closed-form solution of the risk metric cannot be obtained, the statistical software R is used for numerical calculation. When the occurrence and detection times share a common shape parameter, however, some simple mathematical results can also be derived. As enormous quantities of field data become available with the recent progress of data acquisition systems, the proposed risk metric provides a more practical and reasonable tool for evaluating the risks of failure causes in FMEA.
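
As a rough illustration of the risk metric described in the entry above, the sketch below estimates the expected loss of delayed detection by Monte Carlo, assuming the detection delay is Weibull distributed and the loss grows linearly with the delay. The parameters and the single linear loss function are assumptions; the paper works with three loss types and evaluates its metric numerically in R.

```python
# Hedged sketch: Monte Carlo evaluation of expected delay loss under a
# Weibull time-delay model. Parameters and the linear loss are assumed for
# illustration; they are not the paper's loss functions or data.
import math
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Detection delay (time from occurrence of the failure cause to its
# detection), assumed Weibull distributed with made-up shape and scale.
shape_d, scale_d = 1.2, 200.0
delay = scale_d * rng.weibull(shape_d, size=n)

loss_per_hour = 0.5                    # assumed linear loss coefficient
expected_loss = np.mean(loss_per_hour * delay)

# Closed-form check for the linear loss: E[delay] = scale * Gamma(1 + 1/shape).
analytic = loss_per_hour * scale_d * math.gamma(1 + 1 / shape_d)

occurrence_rate = 0.02                 # assumed occurrences per unit-year
risk = occurrence_rate * expected_loss
print(f"Monte Carlo expected loss per occurrence: {expected_loss:.1f}")
print(f"analytic expected loss per occurrence:   {analytic:.1f}")
print(f"risk metric (frequency x expected loss): {risk:.2f}")
```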

Design of Reliability Qualification Test based on Performance Distribution (성능분포에 기초한 신뢰성 인정시험 설계)

  • Kwon, Young-Il
    • Journal of Applied Reliability
    • /
    • v.10 no.1
    • /
    • pp.1-9
    • /
    • 2010
  • In general, the performance of a component degrades over time, and failure occurs when the degradation reaches a pre-specified level. It is difficult to obtain failure time distribution data, or a sufficient number of failures, especially for metal or machine parts. Thus, a reliability qualification test designed around the performance distribution is more effective than one based on the failure time distribution. In this study, a performance-based reliability qualification test is developed, and a numerical example illustrates its use. This approach can be applied to many kinds of metal or machine parts whose strength cannot be measured at arbitrary points in time but can only be judged by failure of the part. In addition, any part with similar failure characteristics could be tested with the developed reliability qualification test.
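
A minimal sketch of the idea in the entry above, under the illustrative assumption that the performance characteristic is normally distributed: reliability is estimated as the probability that measured performance stays above a specified limit, rather than from failure times. The distribution, limit, sample, and acceptance rule are all assumptions, not the paper's test design.

```python
# Hedged sketch: reliability from a performance distribution instead of a
# failure-time distribution. Normality, the spec limit, and all numbers are
# assumed for illustration only.
import numpy as np
from scipy import stats

# Hypothetical performance measurements (e.g., residual strength) of a
# sample of parts after a fixed test duration.
rng = np.random.default_rng(7)
performance = rng.normal(loc=120.0, scale=8.0, size=30)

lower_spec = 100.0                 # failure = performance below this limit

mu_hat = performance.mean()
sigma_hat = performance.std(ddof=1)

# Estimated reliability: probability the performance exceeds the limit.
reliability = 1.0 - stats.norm.cdf(lower_spec, loc=mu_hat, scale=sigma_hat)
print(f"estimated reliability at the test point: {reliability:.4f}")

# A simple (assumed) qualification rule: accept if the estimate meets the target.
target = 0.99
print("qualified" if reliability >= target else "not qualified")
```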

Estimating the Population Variability Distribution Using Dependent Estimates From Generic Sources (종속적 문헌 추정치를 이용한 모집단 변이 분포의 추정)

  • 임태진
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.20 no.3
    • /
    • pp.43-59
    • /
    • 1995
  • This paper presents a method for estimating the population variability distribution of the failure parameter (failure rate or failure probability) for each failure mode considered in PSA (Probabilistic Safety Assessment). We focus on the use of generic estimates from various industry compendia. These estimates are complicated statistics of failure data from plants. When the failure data referenced by two or more sources overlap, dependency arises among the estimates provided by those sources; this type of problem is addressed here for the first time. We propose methods based on ML-II estimation in a Bayesian framework and discuss the characteristics of the proposed estimators. The proposed methods are easy to apply in the field. Numerical examples are also provided.

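
The ML-II approach in the entry above can be sketched, under strong simplifying assumptions, as empirical-Bayes fitting of a gamma population-variability distribution to plant-level Poisson failure counts by maximizing the marginal (gamma-Poisson) likelihood. The gamma/Poisson model, the independence of the plant data, and the numbers are illustrative; the paper's actual contribution, handling dependence among overlapping generic estimates, is not reproduced here.

```python
# Hedged sketch: ML-II (empirical Bayes) estimate of a gamma population
# variability distribution for failure rates from independent plant-level
# Poisson data. The dependence between overlapping generic sources treated
# in the paper is NOT modelled; data are hypothetical.
import numpy as np
from scipy import optimize, special

# Hypothetical plant data: observed failure counts and exposure times (years).
counts = np.array([2, 0, 5, 1, 3, 0, 4])
exposure = np.array([10.0, 8.0, 20.0, 6.0, 15.0, 5.0, 18.0])

def neg_marginal_loglik(log_params):
    """Negative log marginal likelihood of a gamma(alpha, beta) prior on the
    failure rate, integrated against Poisson likelihoods (gamma-Poisson)."""
    alpha, beta = np.exp(log_params)     # log scale enforces positivity
    ll = (special.gammaln(alpha + counts) - special.gammaln(alpha)
          - special.gammaln(counts + 1)
          + alpha * np.log(beta / (beta + exposure))
          + counts * np.log(exposure / (beta + exposure)))
    return -ll.sum()

res = optimize.minimize(neg_marginal_loglik, x0=np.log([1.0, 5.0]),
                        method="Nelder-Mead")
alpha_hat, beta_hat = np.exp(res.x)
print(f"population variability: gamma(shape={alpha_hat:.2f}, rate={beta_hat:.2f})")
print(f"mean failure rate: {alpha_hat / beta_hat:.4f} per year")
```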

A Study on a Reliability Prognosis based on Censored Failure Data (정시중단 고장자료를 이용한 신뢰성예측 연구)

  • Baek, Jae-Jin;Rhie, Kwang-Won;Meyna, Arno
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.18 no.1
    • /
    • pp.31-36
    • /
    • 2010
  • Collecting all failures over a vehicle's life cycle is not easy, because the life cycle normally exceeds 10 years. The warranty period helps in gathering failure data, because most customers have failures repaired during the warranty period, even minor ones. This warranty data, that is, the failures reported during the warranty period, can be a good resource for predicting both initial and long-term reliability. However, uncertainty in the reliability prediction remains because the data are censored. The University of Wuppertal and a major automotive supplier developed a reliability prognosis model that accounts for censored data and introduces the notion of a "failure candidate" to refine the reliability estimate. This paper uses the model to predict the reliability of an in-vehicle telecommunications system and describes the data structure required for the prediction.
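
A hedged sketch of the basic building block behind the entry above: maximum-likelihood fitting of a Weibull life distribution to right-censored warranty-style data, where failed units contribute the density and still-surviving units contribute the survival function. The data and the two-parameter Weibull are assumptions; the Wuppertal "failure candidate" prognosis model itself is not implemented here.

```python
# Hedged sketch: Weibull MLE with right-censored data, the basic building
# block behind warranty-based reliability prediction. Data are hypothetical.
import numpy as np
from scipy import optimize

# Hypothetical times in service (months); event=1 means a failure was observed,
# event=0 means the unit was right-censored at the end of the warranty period.
time = np.array([3.0, 7.5, 12.0, 24.0, 24.0, 15.0, 24.0, 9.0, 24.0, 24.0])
event = np.array([1,   1,   1,    0,    0,    1,    0,    1,   0,    0])

def neg_loglik(log_params):
    shape, scale = np.exp(log_params)                  # enforce positivity
    z = time / scale
    log_pdf = np.log(shape / scale) + (shape - 1) * np.log(z) - z**shape
    log_surv = -z**shape
    return -np.sum(event * log_pdf + (1 - event) * log_surv)

res = optimize.minimize(neg_loglik, x0=np.log([1.0, 20.0]), method="Nelder-Mead")
shape_hat, scale_hat = np.exp(res.x)

# Predicted probability of surviving the 24-month warranty period.
warranty = 24.0
reliability_24 = np.exp(-(warranty / scale_hat) ** shape_hat)
print(f"Weibull shape={shape_hat:.2f}, scale={scale_hat:.1f} months")
print(f"predicted 24-month reliability: {reliability_24:.3f}")
```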

Analysis of Failure in Product Design Experiments by using Product Data Analytics (제품자료 분석을 통한 제품설계 실험 실패 요인 분석)

  • Do, Namchul
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.40 no.4
    • /
    • pp.366-374
    • /
    • 2014
  • This study assessed and analysed the results of a product design experiment through Product Data Analytics (PDA) to find reasons for the failure of some projects in the experiment. PDA is a computer-based data analysis approach that uses Product Data Management (PDM) databases as its operational databases. The study examines 20 product design projects in the experiment, all prepared to follow the same product development process using an identical PDM system. The design results in the PDM database are assessed and analysed with On-Line Analytical Processing (OLAP) and data mining tools in PDA. The assessment and analysis reveal late creation of 3D CAD models as the main reason for the failure.
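
To illustrate the kind of OLAP-style aggregation described in the entry above, the sketch below pivots hypothetical PDM records and compares when 3D CAD models were created in failed versus successful projects. The column names and data are invented and are not the experiment's PDM database.

```python
# Hedged sketch: OLAP-style aggregation of PDM records with pandas.
# Column names and data are hypothetical, not the experiment's PDM database.
import pandas as pd

records = pd.DataFrame({
    "project":       ["P01", "P01", "P02", "P02", "P03", "P03"],
    "outcome":       ["fail", "fail", "success", "success", "fail", "fail"],
    "item_type":     ["3D_CAD", "drawing", "3D_CAD", "drawing", "3D_CAD", "drawing"],
    "creation_week": [6, 7, 2, 3, 5, 6],   # project week in which the item was created
})

# OLAP-style slice: outcome x item_type -> mean creation week.
pivot = records.pivot_table(
    index="outcome", columns="item_type", values="creation_week", aggfunc="mean"
)
print(pivot)
# In this toy data, failed projects created their 3D CAD models later
# (week ~5.5) than successful ones (week 2), mirroring the study's finding.
```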

Study on Failure Classification of Missile Seekers Using Inspection Data from Production and Manufacturing Phases (생산 및 제조 단계의 검사 데이터를 이용한 유도탄 탐색기의 고장 분류 연구)

  • Ye-Eun Jeong;Kihyun Kim;Seong-Mok Kim;Youn-Ho Lee;Ji-Won Kim;Hwa-Young Yong;Jae-Woo Jung;Jung-Won Park;Yong Soo Kim
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.47 no.2
    • /
    • pp.30-39
    • /
    • 2024
  • This study introduces a novel approach for identifying potential failure risks in missile manufacturing by leveraging Quality Inspection Management (QIM) data, addressing the challenges presented by a dataset comprising 666 variables and by data imbalance. The use of SMOTE for data augmentation and Lasso regression for dimensionality reduction, followed by a Random Forest model, results in a 99.40% accuracy rate in classifying missiles with a high likelihood of failure. Such measures enable the preemptive identification of missiles at heightened risk of failure, thereby mitigating the risk of field failures and extending missile life. The integration of Lasso regression and Random Forest is employed to pinpoint the critical variables and test items that significantly impact failure, with particular emphasis on variables related to performance and connection resistance. Moreover, the research highlights the potential for broadening the scope of data-driven decision-making within quality control systems, including the refinement of maintenance strategies and the adjustment of control limits for essential test items.
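
A hedged sketch of the pipeline described in the entry above: SMOTE oversampling of the minority (failure-prone) class, Lasso-style feature selection, and a Random Forest classifier, run on synthetic data. The dataset, split, and estimator settings are illustrative assumptions; the 666-variable QIM inspection data are not reproduced.

```python
# Hedged sketch: SMOTE + Lasso-style feature selection + Random Forest on
# synthetic, imbalanced data. Settings and data are illustrative assumptions,
# not the paper's QIM inspection dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE

# Synthetic imbalanced data standing in for the many-variable QIM records.
X, y = make_classification(n_samples=2000, n_features=100, n_informative=10,
                           weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.3, random_state=0)

# 1) Balance the training set with SMOTE.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)

# 2) Lasso-based dimensionality reduction (keep features with nonzero weights).
selector = SelectFromModel(Lasso(alpha=0.01, random_state=0)).fit(X_res, y_res)
X_res_sel = selector.transform(X_res)
X_test_sel = selector.transform(X_test)

# 3) Random Forest classifier on the reduced, balanced data.
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_res_sel, y_res)
print(classification_report(y_test, clf.predict(X_test_sel)))
```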