• Title/Summary/Keyword: fault trees

Development of a Method for Uncertainty Analysis in the Top Event Unavailability (고장수목 정점사상 이용 불능도의 불확실성 분석용 방법 개발)

  • Sang Hoon Han; Chang Hyun Chung; Kun Joong Yoo
    • Nuclear Engineering and Technology, v.16 no.2, pp.97-105, 1984
  • A method and computer code for uncertainty analysis of the top event unavailability are developed and tested by combining the Monte Carlo method and the method of moments with a fault tree reduction technique. Using system fault trees and unavailability data selected from WASH-1400, the efficiency of the proposed method is tested, and the results are compared with those obtained by the Monte Carlo method alone. The results are shown to be sufficiently accurate, while computation time is considerably reduced compared with the pure Monte Carlo approach.
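
A rough, self-contained illustration of the pure Monte Carlo baseline that such a method is benchmarked against: propagate lognormal uncertainty through a toy two-gate fault tree. The tree structure and all distribution parameters below are invented for demonstration, not the WASH-1400 data used in the paper, and the moments/reduction speedup the paper develops is not shown.

```python
import random

# Hypothetical fault tree: TOP = (A AND B) OR C.
def top_unavailability(qa, qb, qc):
    # Exact top event unavailability by inclusion-exclusion.
    return qa * qb + qc - qa * qb * qc

def monte_carlo_top(n_trials=100_000, seed=0):
    """Propagate lognormal uncertainty in the basic event
    unavailabilities to the top event by direct sampling."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_trials):
        qa = rng.lognormvariate(-6.9, 0.5)   # median ~1e-3 (invented)
        qb = rng.lognormvariate(-6.9, 0.5)
        qc = rng.lognormvariate(-9.2, 0.5)   # median ~1e-4 (invented)
        samples.append(top_unavailability(qa, qb, qc))
    samples.sort()
    mean = sum(samples) / n_trials
    return mean, samples[n_trials // 2], samples[int(0.95 * n_trials)]

mean, median, p95 = monte_carlo_top()
print(f"mean={mean:.3e}  median={median:.3e}  95th={p95:.3e}")
```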

An Application of Decision Tree Method for Fault Diagnosis of Induction Motors

  • Tran, Van Tung; Yang, Bo-Suk; Oh, Myung-Suck
    • Proceedings of the Korea Committee for Ocean Resources and Engineering Conference, 2006.11a, pp.54-59, 2006
  • The decision tree is one of the most effective and widely used methods for building classification models. Researchers in disciplines such as statistics, machine learning, pattern recognition, and data mining have adopted decision trees as an effective solution to problems in their fields. In this paper, an application of the decision tree method to classifying the faults of induction motors is proposed. The raw experimental data are processed through feature calculation to extract useful information as attributes. These records are then assigned classes based on our experience before becoming inputs to the decision tree; nine classes are defined in total. An implementation of the decision tree written in Matlab is used for these data.
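
A minimal Python analogue of this workflow (the paper's implementation is in Matlab): feature vectors standing in for the calculated attributes, labeled with nine fault classes and fed to a decision tree. The synthetic features below are placeholders for the paper's experimental data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in for the feature-calculation step: each row is one motor
# measurement reduced to attributes (e.g. RMS, kurtosis, crest factor).
n_per_class, n_features, n_classes = 40, 6, 9
X = np.vstack([rng.normal(loc=c, scale=0.8, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)   # 9 fault classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = DecisionTreeClassifier(max_depth=6, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```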

Vital Area Identification for the Physical Protection of Nuclear Power Plants during Low Power and Shutdown Operation (원자력발전소 정지저출력 운전 기간의 물리적방호를 위한 핵심구역파악)

  • Kwak, Myung Woong; Jung, Woo Sik; Lee, Jeong-ho; Baek, Min
    • Journal of the Korean Society of Safety, v.35 no.1, pp.107-115, 2020
  • This paper introduces the first vital area identification (VAI) process for the physical protection of nuclear power plants (NPPs) during low power and shutdown (LPSD) operation. This LPSD VAI is based on the 3rd generation VAI method, which very efficiently utilizes probabilistic safety assessment (PSA) event trees (ETs). In this study, the LPSD VAI process was applied to a virtual NPP during LPSD operation. The Korea Atomic Energy Research Institute (KAERI) had previously developed the 2nd generation full-power VAI method, which utilizes the whole internal and external (fire and flooding) PSA results of NPPs during full power operation. In order to reduce the heavy burden of the 2nd generation full-power VAI method, the 3rd generation full-power VAI method was developed; it utilizes ETs and minimal PSA fault trees instead of the whole PSA fault tree. In the 3rd generation full-power VAI method, (1) PSA ETs are analyzed, (2) minimal mitigation systems for avoiding core damage are selected from the ETs by calculating system-level target sets and prevention sets, (3) a relatively small sabotage fault tree containing the systems in the shortest system-level prevention set is composed, (4) room-level target sets and prevention sets are calculated from this small sabotage fault tree, and (5) the rooms in the shortest prevention set are defined as vital areas that should be protected. Currently, the 3rd generation full-power VAI method is being employed for the VAI of Korean NPPs. This study is the first development and application of the 3rd generation VAI method to the LPSD VAI of an NPP. For the LPSD VAI, (1) the many LPSD ETs are classified into a few representative LPSD ETs based on the functional similarity of accident scenarios, (2) these representative LPSD ETs are simplified with some VAI rules, and then (3) the 3rd generation VAI is performed as described above. It is well known that the shortest room-level prevention sets calculated by the 2nd and 3rd generation VAI methods are identical.
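
Steps (4) and (5) reduce to computing cut sets and their minimal hitting sets: a prevention set must intersect every room-level target set. A brute-force toy sketch of that calculation is below; the room names and cut sets are hypothetical, and real VAI models require far more scalable algorithms than exhaustive search.

```python
from itertools import combinations

# Toy sabotage fault tree in minimal cut set form: each frozenset is a
# set of rooms whose combined loss leads to core damage (hypothetical).
cut_sets = [frozenset({"R1", "R2"}),
            frozenset({"R1", "R3"}),
            frozenset({"R4"})]
rooms = sorted(set().union(*cut_sets))

def is_prevention_set(candidate):
    # A prevention set hits every cut set, so protecting these rooms
    # blocks every room-level target set.
    return all(candidate & cs for cs in cut_sets)

# Find the shortest prevention sets by exhaustive search.
for size in range(1, len(rooms) + 1):
    hits = [set(c) for c in combinations(rooms, size)
            if is_prevention_set(frozenset(c))]
    if hits:
        print("shortest prevention sets (vital areas):", hits)
        break
```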

Integrated Level 1-Level 2 decommissioning probabilistic risk assessment for boiling water reactors

  • Mercurio, Davide; Andersen, Vincent M.; Wagner, Kenneth C.
    • Nuclear Engineering and Technology, v.50 no.5, pp.627-638, 2018
  • This article describes an integrated Level 1-Level 2 probabilistic risk assessment (PRA) methodology to evaluate the radiological risk during postulated accident scenarios initiated during the decommissioning phase of a typical Mark I containment boiling water reactor. The fuel damage scenarios include those initiated while the reactor is permanently shut down and defueled and the spent fuel has been relocated to the spent fuel storage pool. This article focuses on the integrated Level 1-Level 2 PRA aspects of the analysis, from the beginning of the accident to the radiological release into the environment. The integrated Level 1-Level 2 decommissioning PRA uses event trees and fault trees that assess the accident progression up to and after fuel damage. Detailed deterministic severe accident analyses are performed to support the fault tree/event tree development and to provide source term information for the various pieces of the Level 1-Level 2 model. Source term information is collected from accidents occurring in both the reactor pressure vessel and the spent fuel pool, including simultaneous accidents. The Level 1-Level 2 PRA model evaluates the temporal and physical changes in plant conditions, including consideration of major uncertainties. The goal of this article is to provide a methodological framework for performing a decommissioning PRA, and an application to a real case study is provided to demonstrate the use of the methodology. Results are derived from the integrated Level 1-Level 2 decommissioning PRA event tree in terms of fuel damage frequency, large release frequency, and large early release frequency, including uncertainties.
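
As a minimal, invented illustration of quantifying a single decommissioning sequence with uncertainty (not the paper's BWR model, which covers many sequences in both the vessel and the pool), the sketch below samples an initiating event frequency and two mitigation failure probabilities and reports a fuel damage frequency distribution.

```python
import math
import random

rng = random.Random(1)

def sample_fdf():
    """One Monte Carlo sample of fuel damage frequency (per year)
    for a toy loss-of-pool-cooling sequence; all numbers invented."""
    ie = rng.lognormvariate(math.log(1e-2), 0.7)   # initiator freq /yr
    p1 = rng.lognormvariate(math.log(1e-2), 0.5)   # makeup system fails
    p2 = rng.lognormvariate(math.log(5e-2), 0.5)   # spray system fails
    # Fuel damage end state: initiator AND both mitigation failures.
    return ie * p1 * p2

samples = sorted(sample_fdf() for _ in range(50_000))
n = len(samples)
print(f"FDF mean={sum(samples) / n:.2e}/yr  "
      f"5th={samples[int(0.05 * n)]:.2e}  "
      f"95th={samples[int(0.95 * n)]:.2e}")
```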

How to incorporate human failure event recovery into minimal cut set generation stage for efficient probabilistic safety assessments of nuclear power plants

  • Jung, Woo Sik; Park, Seong Kyu; Weglian, John E.; Riley, Jeff
    • Nuclear Engineering and Technology, v.54 no.1, pp.110-116, 2022
  • Human failure event (HFE) dependency analysis is a part of human reliability analysis (HRA). For efficient HFE dependency analysis, the maximum possible number of minimal cut sets (MCSs) containing HFE combinations is generated from the fault trees of the probabilistic safety assessment (PSA) of nuclear power plants (NPPs). After collecting potential HFE combinations, the dependency levels of subsequent HFEs on the preceding HFEs in each MCS are analyzed and assigned as conditional probabilities. Then, HFE recovery is performed to reflect these conditional probabilities in the MCSs by modifying them. Inappropriate HFE dependency analysis and HFE recovery might lead to an inaccurate core damage frequency (CDF). In the current process, HFE recovery is performed on MCSs that are generated with a non-zero truncation limit, so many MCSs that have HFE combinations are truncated and the resulting CDF might be underestimated. In this paper, a new method is suggested that incorporates HFE recovery into the MCS generation stage. Compared with the current approach of a separate HFE recovery after MCS generation, the new method can (1) reduce the total time and burden of MCS generation and HFE recovery, (2) prevent the truncation of MCSs that have dependent HFEs, and (3) avoid CDF underestimation. The new method is a simple but very effective means of performing MCS generation and HFE recovery simultaneously and improving CDF accuracy. Its effectiveness and strengths are demonstrated and discussed with fault trees and HFE combinations that have joint probabilities.
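
A toy numerical sketch of the underestimation mechanism, with invented events and probabilities: a two-HFE cut set whose screened value falls below the truncation limit is discarded before recovery can apply the much larger dependency-adjusted joint probability.

```python
# Each MCS: (probability of its non-HFE part, HFEs it contains).
mcs_list = [
    (1e-4, ["HFE-A"]),
    (5e-6, ["HFE-A", "HFE-B"]),    # dependent HFE pair
]
# Screening values used during generation, and the dependency-adjusted
# joint probabilities assigned by HRA (all numbers invented).
p_screen = {"HFE-A": 1e-3, "HFE-B": 1e-3}
p_joint = {("HFE-A",): 1e-3,
           ("HFE-A", "HFE-B"): 1e-3 * 0.5}   # high dependence of B on A

def cdf(truncation=0.0):
    total = 0.0
    for base, hfes in mcs_list:
        screened = base
        for h in hfes:
            screened *= p_screen[h]
        if screened < truncation:
            continue                # cut set lost before recovery
        total += base * p_joint[tuple(hfes)]
    return total

# Recovery after truncated generation: the two-HFE cut set
# (5e-6 * 1e-3 * 1e-3 = 5e-12) never survives to be recovered.
print(f"truncate, then recover:    {cdf(truncation=1e-11):.3e}")
# Recovery folded into generation: nothing is lost.
print(f"recover during generation: {cdf():.3e}")
```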

High-level Modeling and Test Generation With VHDL for Sequential Circuits (상위레벨에서의 VHDL에 의한 순차회로 모델링과 테스트생성)

  • Lee, Jae-Min; Lee, Jong-Han
    • The Transactions of the Korea Information Processing Society, v.3 no.5, pp.1346-1353, 1996
  • In this paper, we propose a modeling method for flip-flops and test generation algorithms to detect faults in sequential circuits using VHDL in a high-level design environment. RS, JK, D, and T flip-flops are modeled using data-flow descriptions. A sequence of micro-operations, the basic structure of a chip-level description, leads to a control point at which branching occurs into one of two micro-operation sequences. In order to model a fault of one micro-operation (FMOP) that perturbs another micro-operation effectively, the concept of goal trees and some heuristic rules are used. Given an FMOP or a fault of a control point (FCON), a test pattern is generated by fault sensitization, path sensitization, and determination of the input combinations that justify the path sensitization. The fault models are restricted to the data-flow model in the ARCHITECTURE statement of VHDL. The proposed algorithm is implemented in the C language and its efficiency is confirmed with examples.
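
The paper targets micro-operation faults in behavioral VHDL; as a much-simplified gate-level analogue of the sensitize-and-justify idea only, the sketch below searches exhaustively for an input vector that both activates an injected stuck-at fault and propagates its effect to the output.

```python
from itertools import product

def circuit(a, b, c, stuck=None):
    """y = (a AND b) OR c, with an optional stuck-at value forced
    onto the internal net n1 (a toy fault model, not FMOP/FCON)."""
    n1 = a and b
    if stuck is not None:
        n1 = stuck                 # inject the fault
    return n1 or c

# A test pattern must make the fault-free and faulty outputs differ,
# i.e. sensitize n1 and justify propagation past the OR gate.
for vec in product([0, 1], repeat=3):
    if circuit(*vec) != circuit(*vec, stuck=0):   # n1 stuck-at-0
        print("test pattern (a, b, c) =", vec)
        break
```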

Evaluation of Uncertainty Importance Measure by Experimental Method in Fault Tree Analysis (결점나무 분석에서 실험적 방법을 이용한 불확실성 중요도 측도의 평가)

  • Cho, Jae-Gyeun
    • Journal of Korea Society of Industrial Information Systems, v.14 no.5, pp.187-195, 2009
  • In a fault tree analysis, an uncertainty importance measure is often used to assess how much of the uncertainty in the top event probability $Q$ is attributable to the uncertainty of a basic event probability $q_i$, and thus to identify those basic events whose uncertainties need to be reduced to effectively reduce the uncertainty of $Q$. To evaluate the measures suggested by many authors, which assess the percentage change in the variance $V$ of $Q$ with respect to a unit percentage change in the variance $v_i$ of $q_i$, both $V$ and $\partial V/\partial v_i$ must be estimated analytically or by Monte Carlo simulation. However, it is very complicated to compute $V$ and $\partial V/\partial v_i$ analytically for large fault trees, and difficult to estimate them robustly by Monte Carlo simulation. In this paper, we propose a method for evaluating the measure experimentally using a Taguchi orthogonal array. The proposed method is computationally very efficient compared with the Monte Carlo approach, and provides a stable uncertainty importance for each basic event.
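
A brute-force simulation sketch of the quantity at issue: estimate how the variance $V$ of $Q$ responds when the spread of one basic event's distribution is perturbed (here via its lognormal $\sigma_i$ as a stand-in for $v_i$, with common random numbers). The fault tree and parameters are invented, and this Monte Carlo approach is exactly what the paper's Taguchi orthogonal array is meant to replace.

```python
import random

def top(q1, q2, q3):
    return q1 * q2 + q3 - q1 * q2 * q3    # TOP = (E1 AND E2) OR E3

def variance_of_Q(sigmas, n=100_000, seed=42):
    rng = random.Random(seed)             # common random numbers
    vals = []
    for _ in range(n):
        qs = [min(1.0, rng.lognormvariate(mu, s))
              for mu, s in zip((-7.0, -7.0, -9.0), sigmas)]
        vals.append(top(*qs))
    m = sum(vals) / n
    return sum((v - m) ** 2 for v in vals) / (n - 1)

base = (0.5, 0.5, 0.5)
V = variance_of_Q(base)
for i in range(3):
    bumped = list(base)
    bumped[i] *= 1.05                     # +5% spread for event i only
    print(f"event {i + 1}: relative change in V = "
          f"{(variance_of_Q(tuple(bumped)) - V) / V:+.4f}")
```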

A Framework for Wide-area Monitoring of Tree-related High Impedance Faults in Medium-voltage Networks

  • Bahador, Nooshin; Matinfar, Hamid Reza; Namdari, Farhad
    • Journal of Electrical Engineering and Technology, v.13 no.1, pp.1-10, 2018
  • Wide-area monitoring of tree-related high impedance faults (THIFs) contributes efficiently to increasing the reliability of large-scale networks, since failure to locate such faults early may result in critical line tripping and, consequently, large blackouts. Wide-area monitoring of THIFs first requires managing the placement of sensors across the large power grid according to the THIF detection objective. For this purpose, this paper presents a framework in which sensors are distributed according to a predetermined risk map. The proposed risk map gives the possibility of THIF occurrence on every branch of a power network, based on the electrical conductivity of trees and their positions relative to power lines, both extracted from spectral data. The obtained possibility value can be treated as a weight coefficient assigned to each branch in the sensor placement problem. The next step after sensor deployment is on-line monitoring based on a moving data window. In this on-line process, each received data window is evaluated to obtain a correlation between the low-frequency and high-frequency components of the signal. If the obtained correlation follows a specified pattern, the received signal is classified as a THIF. Thereafter, if several faulted-section candidates are identified by the deployed sensors, the most likely location is chosen from the candidate list based on the predetermined THIF risk map.
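
An illustrative toy version of the moving-window check (the paper's actual feature and decision pattern are not reproduced here): split each window of a synthetic feeder current into low- and high-frequency bands and correlate their magnitudes, since arcing bursts synchronized to the fundamental couple the two bands.

```python
import numpy as np

fs, seconds = 2000, 2.0
t = np.arange(int(fs * seconds)) / fs
fundamental = np.sin(2 * np.pi * 60 * t)
# Crude stand-in for THIF arcing: noise bursts that follow the
# magnitude of the fundamental (all parameters invented).
arcing = 0.2 * np.abs(fundamental) * np.random.default_rng(0).normal(size=t.size)
signal = fundamental + arcing

def band(x, lo, hi):
    """Keep only FFT bins in [lo, hi] Hz and transform back."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(spec, n=x.size)

win = fs // 2                                  # 0.5 s moving window
for start in range(0, signal.size - win + 1, win):
    w = signal[start:start + win]
    low, high = np.abs(band(w, 0, 120)), np.abs(band(w, 300, 900))
    r = np.corrcoef(low, high)[0, 1]
    print(f"window @ {start / fs:.1f}s: LF/HF correlation = {r:+.2f}")
```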

Ensemble Methods Applied to Classification Problem

  • Kim, ByungJoo
    • International Journal of Internet, Broadcasting and Communication, v.11 no.1, pp.47-53, 2019
  • The idea of ensemble learning is to train multiple models, each with the objective of predicting or classifying a set of results. Most of a model's errors come from three main factors: variance, noise, and bias. By using ensemble methods, we can increase the stability of the final model and reduce those errors, and by combining many models we can reduce the variance even when the individual models are not strong. In this paper we propose an ensemble model and apply it to classification problems. On the iris, Pima Indian diabetes, and semiconductor fault detection problems, the proposed model classifies well compared with traditional single classifiers such as logistic regression, SVM, and random forest.
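
The abstract does not specify the ensemble's construction, so as a plausible sketch on one of the cited datasets (iris, which ships with scikit-learn), here is a soft-voting combination of the three single classifiers the paper compares against.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
singles = [
    ("logistic regression", LogisticRegression(max_iter=1000)),
    ("SVM", SVC(probability=True, random_state=0)),
    ("random forest", RandomForestClassifier(random_state=0)),
]
for name, model in singles:
    print(f"{name}: {cross_val_score(model, X, y, cv=5).mean():.3f}")

# Averaging predicted probabilities reduces variance across models.
ensemble = VotingClassifier(estimators=singles, voting="soft")
print(f"soft-voting ensemble: {cross_val_score(ensemble, X, y, cv=5).mean():.3f}")
```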

GAM: A Criticality Prediction Model for Large Telecommunication Systems (GAM: 대형 통신 시스템을 위한 위험도 예측 모델)

  • Hong, Euy-Seok
    • The Journal of Korean Association of Computer Education, v.6 no.2, pp.33-40, 2003
  • Criticality prediction models that determine whether a design entity is fault-prone or non-fault-prone play an important role in reducing system development costs, because problems in the early phases largely affect the quality of the late products. Real-time systems such as telecommunication systems are so large that criticality prediction is even more important in real-time system design. Current models are based on techniques such as discriminant analysis, neural networks, and classification trees. These models have problems with analyzing the causes of their prediction results and with low extensibility. This paper builds a new prediction model, GAM, based on a genetic algorithm. GAM differs from other models in that it produces a criticality function, so GAM can be used to compare entities by criticality. GAM is implemented and compared with a well-known prediction model, the back-propagation neural network model (BPM), with respect to internal characteristics and prediction accuracy.
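
A toy genetic algorithm in the spirit described above (GAM's actual encoding, operators, and fitness are defined in the paper): evolve the weights of a linear criticality function over design metrics so that known fault-prone entities rank above non-fault-prone ones. All data and settings are invented.

```python
import random

rng = random.Random(0)

# Stand-in dataset: (metric vector, fault-prone flag) per design entity;
# fault-prone entities draw their metrics from a wider range on average.
data = [([rng.uniform(0, 2.0 if fp else 1.2) for _ in range(4)], fp)
        for fp in [True] * 20 + [False] * 20]

def criticality(w, m):
    """The linear criticality function whose weights the GA evolves."""
    return sum(wi * mi for wi, mi in zip(w, m))

def fitness(w):
    # Fraction of (fault-prone, non-fault-prone) pairs ranked correctly.
    pos = [criticality(w, m) for m, fp in data if fp]
    neg = [criticality(w, m) for m, fp in data if not fp]
    return sum(p > q for p in pos for q in neg) / (len(pos) * len(neg))

pop = [[rng.uniform(-1, 1) for _ in range(4)] for _ in range(30)]
for _ in range(40):                                  # generations
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                                 # selection
    pop = elite[:]
    while len(pop) < 30:
        a, b = rng.sample(elite, 2)
        child = [rng.choice(g) for g in zip(a, b)]   # uniform crossover
        child[rng.randrange(4)] += rng.gauss(0, 0.1) # mutation
        pop.append(child)

best = max(pop, key=fitness)
print("evolved weights:", [round(w, 2) for w in best])
print("pairwise ranking accuracy:", round(fitness(best), 3))
```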
