• Title/Summary/Keyword: Human failure event

An Event-Driven Failure Analysis System for Real-Time Prognosis (실시간 고장 예방을 위한 이벤트 기반 결함원인분석 시스템)

  • Lee, Yang Ji; Kim, Duck Young; Hwang, Min Soon; Cheong, Young Soo
    • Korean Journal of Computational Design and Engineering, v.18 no.4, pp.250-257, 2013
  • This paper introduces a failure analysis procedure that underpins real-time fault prognosis. In a previous study, we developed a systematic eventization procedure that reduces the original data to a manageable size in the form of event logs, making it possible to extract failure patterns efficiently from the reduced data. Failure patterns are then extracted in the form of event sequences by sequence-mining algorithms (e.g., the FP-Tree algorithm). Extracted patterns are stored in a failure pattern library, and the stored failure pattern information is eventually used to predict potential failures. Two practical case studies (a marine diesel engine and a SIRIUS-II car engine) provide empirical support for the performance of the proposed failure analysis procedure. The procedure can easily be extended to a wide range of failure analysis applications, such as vehicle and machine diagnostics. Furthermore, it can be applied to human health monitoring and prognosis, so that human body signals can be analyzed efficiently.
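
A minimal sketch of the eventization-plus-pattern-mining idea described above, assuming hypothetical event logs; it enumerates ordered subsequences directly as a simplified stand-in for the FP-Tree-based miner the paper refers to.

```python
from collections import Counter
from itertools import combinations

def mine_failure_patterns(event_logs, failure_code="F", min_support=2, max_len=2):
    """Count event subsequences that precede a failure event across event logs.

    Simplified stand-in for the sequence-mining step described in the paper
    (which uses an FP-Tree-based miner); here ordered subsequences are
    enumerated directly, which is only practical for short logs.
    """
    counts = Counter()
    for log in event_logs:
        if failure_code not in log:
            continue
        prefix = log[:log.index(failure_code)]                 # events before the failure
        for length in range(1, max_len + 1):
            # combinations() preserves input order, i.e. yields ordered subsequences
            for pattern in set(combinations(prefix, length)):  # dedupe within one log
                counts[pattern] += 1
    # Patterns seen in enough failed runs form the "failure pattern library"
    return {pattern: n for pattern, n in counts.items() if n >= min_support}

# Hypothetical eventized logs: letters are discretized sensor states, "F" marks a failure.
logs = [list("abcaF"), list("bcadF"), list("abcd"), list("cbaF")]
print(mine_failure_patterns(logs))
```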

How to incorporate human failure event recovery into minimal cut set generation stage for efficient probabilistic safety assessments of nuclear power plants

  • Jung, Woo Sik; Park, Seong Kyu; Weglian, John E.; Riley, Jeff
    • Nuclear Engineering and Technology, v.54 no.1, pp.110-116, 2022
  • Human failure event (HFE) dependency analysis is a part of human reliability analysis (HRA). For efficient HFE dependency analysis, as many minimal cut sets (MCSs) containing HFE combinations as possible are generated from the fault trees for the probabilistic safety assessment (PSA) of nuclear power plants (NPPs). After collecting potential HFE combinations, the dependency levels of subsequent HFEs on the preceding HFEs in each MCS are analyzed and assigned as conditional probabilities. Then, HFE recovery is performed to reflect these conditional probabilities by modifying the MCSs. Inappropriate HFE dependency analysis and HFE recovery might lead to an inaccurate core damage frequency (CDF). In the current process, HFE recovery is performed on MCSs that are generated with a non-zero truncation limit, where many MCSs that have HFE combinations are truncated; as a result, the CDF might be underestimated. In this paper, a new method is suggested to incorporate HFE recovery into the MCS generation stage. Compared to the current approach with a separate HFE recovery after MCS generation, this new method can (1) reduce the total time and burden for MCS generation and HFE recovery, (2) prevent the truncation of MCSs that have dependent HFEs, and (3) avoid CDF underestimation. This new method is a simple but very effective means of performing MCS generation and HFE recovery simultaneously and improving CDF accuracy. The effectiveness and strength of the new method are clearly demonstrated and discussed with fault trees and HFE combinations that have joint probabilities.
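
A toy numerical sketch (not the paper's algorithm; the cut sets, probabilities, conditional probability, and truncation limit below are hypothetical) of why applying HFE recovery only after truncated MCS generation can underestimate the CDF, and why applying the conditional probabilities during generation avoids this.

```python
# Hypothetical basic-event probabilities; names starting with "HFE" are human failure events.
P = {"B1": 1e-3, "B2": 5e-4, "HFE-A": 1e-2, "HFE-B": 1e-2}

# Hypothetical dependency table from HFE dependency analysis: P(HFE-B | HFE-A).
COND = {("HFE-A", "HFE-B"): 0.5}

def cutset_prob(cutset, use_dependency):
    prob, seen_hfes = 1.0, []
    for ev in cutset:
        p = P[ev]
        if use_dependency and ev.startswith("HFE"):
            for prior in seen_hfes:
                p = COND.get((prior, ev), p)   # substitute the conditional probability if dependent
            seen_hfes.append(ev)
        prob *= p
    return prob

mcs_list = [["B1", "HFE-A", "HFE-B"], ["B2", "HFE-A"]]
TRUNC = 1e-6   # hypothetical truncation limit

# (a) Recovery after generation: the dependent cut set is truncated while still
#     carrying independent probabilities, so it never receives HFE recovery.
kept_a = [c for c in mcs_list if cutset_prob(c, use_dependency=False) >= TRUNC]
cdf_a = sum(cutset_prob(c, use_dependency=True) for c in kept_a)

# (b) Recovery during generation: the conditional probability is applied before
#     the truncation test, so the dependent cut set survives.
kept_b = [c for c in mcs_list if cutset_prob(c, use_dependency=True) >= TRUNC]
cdf_b = sum(cutset_prob(c, use_dependency=True) for c in kept_b)

print(cdf_a, cdf_b)   # 5e-06 vs 1e-05: approach (a) underestimates the CDF
```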

A Study on Data Pre-filtering Methods for Fault Diagnosis (시스템 결함원인분석을 위한 데이터 로그 전처리 기법 연구)

  • Lee, Yang-Ji; Kim, Duck-Young; Hwang, Min-Soon; Cheong, Young-Soo
    • Korean Journal of Computational Design and Engineering, v.17 no.2, pp.97-110, 2012
  • High-performance sensors and modern data-logging technology with real-time telemetry facilitate system fault diagnosis in a very precise manner. Fault detection, isolation, and identification are the typical steps used in fault diagnosis systems to analyze the root cause of failures. This systematic failure analysis provides not only useful clues to rectify the abnormal behaviors of a system, but also key information for redesigning the current system for retrofit. The main barriers to effective failure analysis are that (i) the gathered data (event) logs are generally too large, and (ii) they usually contain noise and redundant data that make precise analysis difficult. This paper therefore applies suitable pre-processing techniques for data reduction and feature extraction, and then converts the reduced data log into a new format of event sequence information. Finally, the event sequence information is decoded to investigate the correlation between specific event patterns and various system faults. The efficiency of the developed pre-filtering procedure is examined with a terminal box data log from a marine diesel engine.
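
A brief illustrative sketch, under assumed thresholds and an assumed sensor trace, of two pre-filtering ideas in the spirit of the paper: discretizing raw samples into coarse states and keeping only state changes. The details are not the authors' exact procedure.

```python
import numpy as np

def eventize(signal, bins=(-np.inf, 0.3, 0.7, np.inf), labels=("LOW", "MID", "HIGH")):
    """Reduce a raw sensor trace to a compact event sequence.

    Two simple pre-filtering steps (illustrative, not the authors' exact procedure):
      1. discretize each sample into a coarse state label, and
      2. keep only state changes, dropping redundant consecutive samples.
    """
    states = [labels[np.digitize(x, bins) - 1] for x in signal]
    return [states[0]] + [s for prev, s in zip(states, states[1:]) if s != prev]

# Hypothetical normalized sensor trace, e.g. one terminal-box channel.
trace = [0.10, 0.12, 0.11, 0.45, 0.48, 0.52, 0.81, 0.83, 0.79, 0.40]
print(eventize(trace))   # ['LOW', 'MID', 'HIGH', 'MID'] -- far fewer items than raw samples
```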

Direct fault-tree modeling of human failure event dependency in probabilistic safety assessment

  • Ji Suk Kim; Sang Hoon Han; Man Cheol Kim
    • Nuclear Engineering and Technology, v.55 no.1, pp.119-130, 2023
  • Among the various elements of probabilistic safety assessment (PSA), human failure events (HFEs) and their dependencies are major contributors to the quantification of the risk of a nuclear power plant. Currently, the dependency among HFEs is reflected using a post-processing method in PSA, which has several drawbacks, such as limited propagation of minimal cutsets through the fault tree and improper truncation of minimal cutsets. In this paper, we propose a method to model the HFE dependency directly in a fault tree using if-then-else logic. The proposed method proved to be equivalent to the conventional post-processing method while addressing the drawbacks of the latter. We also developed a software tool to facilitate the implementation of the proposed method, considering the need to model the dependency between multiple HFEs. We applied the proposed method to a specific case to demonstrate the drawbacks of the conventional post-processing method and the advantages of the proposed method. When applied appropriately under specific conditions, the direct fault-tree modeling of HFE dependency enhances the accuracy of the risk quantification and facilitates the analysis of minimal cutsets.
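
A minimal sketch of the if-then-else (ITE) idea, assuming a toy fault tree in which the dependent HFE value is modeled as a separate basic event inside an ITE construct; this is an illustrative reading of the approach, not the authors' implementation or tool.

```python
from itertools import product

# A toy fault tree with an if-then-else (ITE) construct for HFE dependency,
# where ITE(c, hi, lo) means "(c AND hi) OR (NOT c AND lo)". The tree, event
# names, and probabilities are hypothetical.

P = {"B1": 1e-3, "HFE_A": 1e-2, "HFE_B_ind": 1e-2, "HFE_B_dep": 0.5}

# TOP requires B1, HFE_A, and HFE_B; the ITE selects the dependent value of
# HFE_B whenever HFE_A has occurred, and the independent value otherwise.
TOP = ("AND", "B1", "HFE_A", ("ITE", "HFE_A", "HFE_B_dep", "HFE_B_ind"))

def evaluate(node, state):
    if isinstance(node, str):
        return state[node]
    op, *args = node
    if op == "AND":
        return all(evaluate(a, state) for a in args)
    if op == "OR":
        return any(evaluate(a, state) for a in args)
    if op == "ITE":
        cond, hi, lo = args
        return evaluate(hi, state) if evaluate(cond, state) else evaluate(lo, state)
    raise ValueError(op)

# Exact top-event probability by enumerating every basic-event state.
names = list(P)
p_top = 0.0
for bits in product([False, True], repeat=len(names)):
    state = dict(zip(names, bits))
    if evaluate(TOP, state):
        weight = 1.0
        for name, occurred in state.items():
            weight *= P[name] if occurred else 1.0 - P[name]
        p_top += weight

print(p_top)   # ~5e-06, i.e. P(B1) * P(HFE_A) * P(HFE_B | HFE_A)
```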

A classification of electrical component failures and their human error types in South Korean NPPs during last 10 years

  • Cho, Won Chul; Ahn, Tae Ho
    • Nuclear Engineering and Technology, v.51 no.3, pp.709-718, 2019
  • The international nuclear industry has undergone many changes since the Fukushima, Chernobyl, and TMI nuclear power plant accidents. However, component deficiencies, both large and small, still occur at nuclear power plants around the world. Electrical equipment defects have many causes, including component failures caused by human error. This paper analyzed the root causes of failure and the types of human error in 300 cases of electrical component failures. We analyzed the operating experience of electrical components using the root-cause methods of K-HPES (Korean version of the Human Performance Enhancement System) and the human error typology of HuRAM+ (Human error-Related event root cause Analysis Method Plus). The analysis showed that most electrical component failures involved circuit breakers and emergency generators. The major causes of failure were deterioration and contact failure of electrical components resulting from human error in operations management, and the direct causes of failure were aged components. The types of human error affecting the causes of electrical equipment failure are as follows: in human error type group I, errors of commission (EOC) accounted for 97%; in group II, slip/lapse errors accounted for 74%; and in group III, latent errors accounted for 95%. This paper is meaningful in that it approaches the causes of electrical equipment failures from a comprehensive human error perspective and identifies countermeasures against the root causes. This study will help enhance human performance in nuclear power plants. However, it focuses on improving human performance in the maintenance field rather than in the design and construction stages. In the future, continuous research on types of human error and prevention measures in the design and construction sector will be required.

An Investigation of Fire Human Reliability Analysis (HRA) Factors for Quantification of Post-fire Operator Manual Actions (OMA) (화재 후 운전원수동조치(OMA) 정량화를 위한 화재 인간신뢰도분석 (HRA) 요소에 대한 고찰)

  • Sun Yeong Choi; Dae Il Kang; Yong Hun Jung
    • Journal of the Korean Society of Safety, v.38 no.6, pp.72-78, 2023
  • The purpose of this paper is to derive a quantification approach for Operator Manual Actions (OMAs) based on the existing fire Human Reliability Analysis (HRA) methodology developed by the Korea Atomic Energy Research Institute (KAERI). The existing fire HRA method was reviewed, and supplementary considerations for OMA quantification were established through a comparative analysis with the NUREG-1852 criteria and a review of the existing literature. The OMA quantification approach involves a timeline that considers the occurrence of Multiple Spurious Operations (MSOs) during the Main Control Room Abandonment (MCRA) decision and the movement toward the Remote Shutdown Panel (RSP) in the event of a Main Control Room (MCR) fire. The OMA failure probability derived from the proposed approach is expected to enhance the understanding of OMA reliability, allowing analysts to move beyond the deterministic classification of "reliable" or "unreliable" in NUREG-1852. Also, in the event of a nuclear power plant fire where multiple OMAs are required within a critical time range, the OMA failure probability could serve as a criterion for prioritizing OMAs and determining their order of importance.
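
A hedged sketch of the kind of timeline bookkeeping such an OMA quantification rests on: comparing the time available with the sum of diagnosis, travel, and execution times. The phase breakdown, numbers, and feasibility rule below are assumptions for illustration, not the KAERI method or NUREG-1852 values.

```python
from dataclasses import dataclass

# Illustrative timeline bookkeeping for a single OMA during an MCR fire.
# The phase breakdown, numbers, and feasibility rule are assumptions.

@dataclass
class OMATimeline:
    t_available: float   # time from the fire-induced cue until the action is too late (min)
    t_diagnosis: float   # diagnosis / MCR abandonment decision time (min)
    t_travel: float      # travel time to the remote shutdown panel or local station (min)
    t_execution: float   # time to carry out the manual action itself (min)

    @property
    def t_required(self) -> float:
        return self.t_diagnosis + self.t_travel + self.t_execution

    @property
    def margin(self) -> float:
        return self.t_available - self.t_required

# Hypothetical OMA whose timing is complicated by MSOs during MCR abandonment.
oma = OMATimeline(t_available=60, t_diagnosis=15, t_travel=10, t_execution=20)
print(oma.t_required, oma.margin)                     # 45 min required, 15 min margin
print("feasible" if oma.margin > 0 else "infeasible")
```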

A Safety Assessment Methodology for a Digital Reactor Protection System

  • Lee Dong-Young; Choi Jong-Gyun; Lyou Joon
    • International Journal of Control, Automation, and Systems, v.4 no.1, pp.105-112, 2006
  • The main function of a reactor protection system is to maintain the reactor core integrity and the reactor coolant system pressure boundary. Generally, the reactor protection system adopts a 2-out-of-m redundant architecture to assure reliable operation. This paper describes the safety assessment of a digital reactor protection system using the fault tree analysis technique. The fault tree can be expressed in terms of combinations of basic event failures, such as random hardware failures, common cause failures, and operator errors, together with the fault tolerance mechanisms implemented in the reactor protection system. In this paper, a method for predicting the hardware failure rate of a digital reactor protection system is suggested and applied to the reactor protection system being developed in Korea to identify design weak points from a safety point of view.
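
A short worked sketch of the 2-out-of-m idea mentioned above: the trip function fails if fewer than 2 of m redundant channels work. The channel failure probability and the crude beta-factor treatment of common cause failure are assumptions, not values from the paper.

```python
from math import comb

def k_out_of_m_failure(k, m, p_channel_fail):
    """P(fewer than k of m independent channels work)."""
    q = 1.0 - p_channel_fail
    return sum(comb(m, i) * q**i * p_channel_fail**(m - i) for i in range(k))

p_independent = k_out_of_m_failure(k=2, m=4, p_channel_fail=1e-3)
p_ccf = 0.05 * 1e-3            # crude beta-factor common cause contribution (beta = 0.05 assumed)

print(p_independent)           # ~4e-09: independent channel failures alone
print(p_independent + p_ccf)   # ~5e-05: the common cause term dominates the result
```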

Feasibility Study on the Fault Tree Analysis Approach for the Management of the Faults in Running PCR Analysis (PCR 과정의 오류 관리를 위한 Fault Tree Analysis 적용에 관한 시범적 연구)

  • Lim, Ji-Su; Park, Ae-Ri; Lee, Seung-Ju; Hong, Kwang-Won
    • Applied Biological Chemistry, v.50 no.4, pp.245-252, 2007
  • FTA (fault tree analysis), an analytical method for system failure management, was employed in the management of faults in running PCR analysis. PCR is executed through several processes, among which the process of PCR machine operation was selected for analysis by FTA. The simplest process in the PCR analysis was chosen as a first trial to test the feasibility of the FTA approach. First, fault events (the top event, intermediate events, and basic events) were identified by a survey of expert knowledge of PCR. Those events were then correlated deductively to build a fault tree in a hierarchical structure. The fault tree was evaluated qualitatively and quantitatively, yielding minimal cut sets, structural importance, common cause vulnerability, a simulation of the probability of occurrence of the top event, cut set importance, item importance, and sensitivity. The top event was 'errors in the step of PCR machine operation in running PCR analysis'. The major intermediate events were 'failures in instrument' and 'errors in actions in experiment'. The basic events comprised four events based on human errors, one based on instrument failure, and one based on energy source failure. Those events were combined with Boolean logic gates (AND or OR) to construct the fault tree. In the qualitative evaluation of the tree, the basic events 'errors in preparing the reaction mixture', 'errors in setting temperature and time of PCR machine', 'failure of electrical power during running PCR machine', and 'errors in selecting adequate PCR machine' proved the most critical to the occurrence of the top event. In the quantitative evaluation, the list of critical events was not the same as that from the qualitative evaluation, because the probability of PCR machine failure, though not on the list above, increased with usage time, and the probabilities of the electrical power failure and defective PCR machine events were set to zero because such events are generally rare. It was concluded that this feasibility study is a worthwhile means of introducing the novel technique, FTA, to the management of faults in running PCR analysis.
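
A small illustration of the qualitative and quantitative FTA steps described above, on a toy tree shaped like the one in the abstract; the event names are shortened and the probabilities are hypothetical, not the paper's values.

```python
import math
from itertools import product

# TOP ("errors in PCR machine operation") = OR(instrument failures, action errors)
tree = ("OR",
        ("OR", "machine_defect", "power_failure"),   # 'failures in instrument'
        ("OR", "mixture_error", "setting_error"))    # 'errors in actions in experiment'

P = {"machine_defect": 1e-4, "power_failure": 1e-4,
     "mixture_error": 5e-3, "setting_error": 2e-3}

def cutsets(node):
    """Expand the gate structure into cut sets (sets of basic events)."""
    if isinstance(node, str):
        return [frozenset([node])]
    op, *kids = node
    kid_sets = [cutsets(k) for k in kids]
    if op == "OR":       # union of the children's cut sets
        return [c for ks in kid_sets for c in ks]
    if op == "AND":      # cross-product: combine one cut set from each child
        return [frozenset().union(*combo) for combo in product(*kid_sets)]
    raise ValueError(op)

mcs = cutsets(tree)      # with only OR gates every cut set is a single basic event,
                         # so no subsumption (minimization) step is needed here
p_top = sum(math.prod(P[e] for e in c) for c in mcs)   # rare-event approximation

print(sorted(tuple(sorted(c)) for c in mcs))
print(p_top)   # dominated by the human-action events, matching the qualitative finding
```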

A Modification of Human Error Analysis Technique for Designing Man-Machine Interface in Nuclear Power Plants (원자력 발전소 주제어실 인터페이스 설계를 위한 인적오류 분석 기법의 보완)

  • Lee, Yong-Hui; Jang, Tong-Il; Im, Hyeon-Gyo
    • Journal of the Ergonomics Society of Korea, v.22 no.1, pp.31-42, 2003
  • This study describes a modification of the human error analysis technique for nuclear power plants (NPPs) that adopt advanced Man-Machine Interface (MMI) features based on a computerized working environment, such as LCDs, flat panels, large wall boards, and computerized procedures. First, the state of the art in human error analysis methods was briefly reviewed. The human error analysis methods applied to NPP design have mainly been THERP and ASEP, based on Swain's HRA handbook, which are not well suited to incorporating the varied characteristics of the MMI into the HRA process. The basic concepts of human error and the system safety approach were revisited, the FMEA process was adopted with a new definition of the Error Segment (ES), and a modified human error analysis process was suggested. The suggested method was then applied to the failure of manual pump actuation through an LCD touch screen during a loss-of-feedwater event to verify its applicability in practice. The example showed that the method makes it easier to consider the concerns raised by the introduction of advanced MMI devices and to integrate the human error analysis process not only into HRA/PRA but also into MMI and interface design. Finally, possible extensions and the further efforts required to ensure the applicability of the suggested method were discussed.
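
A hedged sketch of an FMEA-style worksheet for the touch-screen pump-actuation example mentioned above. The error-segment rows, scores, and the generic severity x occurrence x detection ranking are illustrative assumptions, not the paper's actual worksheet or scoring scheme.

```python
from dataclasses import dataclass

@dataclass
class ErrorSegment:
    task: str
    error_mode: str
    severity: int     # 1 (negligible) .. 10 (catastrophic)
    occurrence: int   # 1 (rare) .. 10 (frequent)
    detection: int    # 1 (always detected) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

# Hypothetical error segments for manual pump actuation through an LCD touch screen.
worksheet = [
    ErrorSegment("Navigate to the pump control page on the LCD", "wrong page selected", 4, 3, 4),
    ErrorSegment("Select the pump icon on the touch screen", "wrong component selected", 7, 3, 5),
    ErrorSegment("Confirm the start command", "confirmation step omitted", 6, 4, 6),
]

# Rank error segments by risk priority number to focus interface design effort.
for es in sorted(worksheet, key=lambda e: e.rpn, reverse=True):
    print(f"{es.task:45s} {es.error_mode:25s} RPN={es.rpn}")
```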

An Analysis of Supervisory Control Performance under Urgent Environments (감시제어작업에서 긴급상황의 수행도 분석)

  • 오영진; 이근희
    • Journal of Korean Society of Industrial and Systems Engineering, v.17 no.32, pp.243-253, 1994
  • Work environments have changed with the advent of new technologies, such as computer technology. The newer the technologies, the more our work conditions change. However, human cognitive limits cannot keep up with these changes in the work environment. Mental workload has become an important factor in designing modern work environments such as human-computer interaction. Designing man-machine systems requires knowledge and evaluation of the human cognitive processes that control information flow and workload. Furthermore, in an urgent situation, a human operator may suffer work stress, work errors, and consequently degraded work performance. To describe work performance in urgent work situations, with time stress and dynamic event occurrence, a new concept of information density was introduced. In a series of experiments performed for this study, three independent variables (information amount, system processing time, and information density) were evaluated using dependent variables such as reaction time, number of errors, and number of failures. The results of statistical analysis indicate that the amount of information affected all five dependent measures. The number of failures and the secondary task score were affected by both the amount of information and the operational speed of the system, while the reaction time of the secondary task was affected by both the amount of information and the information density.
