• Title/Summary/Keyword: Fault Event


Improvement of the Reliability Graph with General Gates to Analyze the Reliability of Dynamic Systems That Have Various Operation Modes

  • Shin, Seung Ki;No, Young Gyu;Seong, Poong Hyun
    • Nuclear Engineering and Technology
    • /
    • v.48 no.2
    • /
    • pp.386-403
    • /
    • 2016
  • The safety of nuclear power plants is analyzed by probabilistic risk assessment, and fault tree analysis, together with event tree analysis, is the most widely used method for such an assessment. One well-known disadvantage of the fault tree is that drawing a fault tree for a complex system is a very cumbersome task. Thus, several graphical modeling methods have been proposed for the convenient and intuitive modeling of complex systems. In this paper, the reliability graph with general gates (RGGG) method, one of the intuitive graphical modeling methods based on Bayesian networks, is improved for the reliability analysis of dynamic systems whose operation modes change over time. A reliability matrix is proposed, and it is explained how to utilize it in the RGGG for various cases of operation mode changes. The proposed RGGG with a reliability matrix provides convenient and intuitive modeling of the various operation modes of complex systems, and can also be utilized with dynamic nodes that analyze the failure sequences of subcomponents. The combined use of a reliability matrix with dynamic nodes is illustrated through an application to a shutdown cooling system in a nuclear power plant.
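
The abstract describes the idea only at a high level. Purely as an illustration of evaluating success logic per operation mode (this is not the RGGG reliability matrix itself; component names and probabilities below are hypothetical), a minimal Python sketch:

```python
# Illustrative sketch only: a toy success-logic evaluation in the spirit of
# graphical reliability models. Component names, probabilities, and the
# "mode table" below are hypothetical and are NOT the paper's reliability matrix.

def series(*reliabilities):
    """All components must work (logical AND of success)."""
    r = 1.0
    for x in reliabilities:
        r *= x
    return r

def parallel(*reliabilities):
    """At least one component must work (logical OR of success)."""
    q = 1.0
    for x in reliabilities:
        q *= (1.0 - x)
    return 1.0 - q

# Hypothetical per-mode component reliabilities (one row per operation mode).
mode_table = {
    "normal":   {"pump_A": 0.98, "pump_B": 0.95, "controller": 0.99},
    "shutdown": {"pump_A": 0.90, "pump_B": 0.90, "controller": 0.99},
}

def system_reliability(mode):
    c = mode_table[mode]
    # Two redundant pumps in parallel, in series with the controller.
    return series(parallel(c["pump_A"], c["pump_B"]), c["controller"])

if __name__ == "__main__":
    for mode in mode_table:
        print(mode, round(system_reliability(mode), 5))
```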

A Stochastic Differential Equation Model for Software Reliability Assessment and Its Goodness-of-Fit

  • Shigeru Yamada;Akio Nishigaki;Mitsuhiro Kimura
    • International Journal of Reliability and Applications
    • /
    • v.4 no.1
    • /
    • pp.1-12
    • /
    • 2003
  • Numerous software reliability growth models (SRGMs) based on a nonhomogeneous Poisson process (NHPP) have been proposed. Most SRGMs proposed to date treat the event of software fault detection in the testing and operational phases as a counting process. However, when the software system is large, the number of faults detected during the testing phase is also large, and the change in the number of faults detected and removed through debugging activities becomes small compared with the initial fault content at the beginning of the testing phase. In such a situation, we can therefore model the software fault-detection process as a stochastic process with a continuous state space. In this paper, we propose a new software reliability growth model describing the fault-detection process by applying the mathematical technique of Itô-type stochastic differential equations. We also compare our model with existing SRGMs in terms of goodness-of-fit for actual data sets.
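
As a reading aid only, a minimal sketch of simulating an Itô-type fault-detection SDE with the Euler-Maruyama scheme; the drift/diffusion form and all parameter values are illustrative assumptions, not the specific model proposed in the paper:

```python
# Illustrative sketch: Euler-Maruyama simulation of an Ito-type SDE of the
# generic form dN = b*(a - N)*dt + sigma*(a - N)*dW, where N(t) is the
# cumulative number of detected faults. Parameters are hypothetical.
import numpy as np

a, b, sigma = 100.0, 0.1, 0.05   # assumed: total faults, detection rate, noise level
T, steps, paths = 50.0, 5000, 200
dt = T / steps

rng = np.random.default_rng(0)
N = np.zeros((paths, steps + 1))          # cumulative detected faults per sample path
for k in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=paths)
    drift = b * (a - N[:, k]) * dt
    diffusion = sigma * (a - N[:, k]) * dW
    N[:, k + 1] = N[:, k] + drift + diffusion

print("simulated mean detected faults at T:", N[:, -1].mean())
print("expected value a*(1 - exp(-b*T))   :", a * (1 - np.exp(-b * T)))
```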


Hydroacoustic Observation on the 2011 Tohoku Earthquake (2011년 토호쿠 대지진의 수중음향 관측)

  • Yun, Sukyoung;Lee, Won Sang
    • Geophysics and Geophysical Exploration
    • /
    • v.16 no.4
    • /
    • pp.234-239
    • /
    • 2013
  • The $M_W$ 9.0 thrust-fault earthquake occurred off the Pacific coast of Tohoku, Japan, on March 11, 2011. We present the detection of this great earthquake and analyze the T-waves associated with the main event and two large aftershocks ($M_W$ > 7) recorded by a hydroacoustic array (H11N) in the Pacific Ocean, performing array and spectral analyses to examine the characteristics of the T-waves generated by these events. The complex rupture process of the main event directly influences the shape of the T-waves, and the peak is located where the T-waves excited by the fast rupture process arrive. We compare the two aftershocks, which have different fault types, and show that the fault type and source depth change the shape and spectral content of the T-waves.
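
For readers unfamiliar with the workflow, a minimal sketch of the kind of spectral analysis applied to T-wave records; the signal below is synthetic and is not the H11N data, and the sampling rate and packet parameters are assumptions:

```python
# Illustrative sketch: spectrogram of a synthetic "T-wave packet" using SciPy.
import numpy as np
from scipy.signal import spectrogram

fs = 250.0                                  # hypothetical hydrophone sampling rate (Hz)
t = np.arange(0, 600, 1 / fs)               # 10 minutes of record
rng = np.random.default_rng(1)
signal = 0.01 * rng.standard_normal(t.size)           # background noise
signal += np.exp(-((t - 300) / 30) ** 2) * np.sin(2 * np.pi * 8 * t)  # packet at ~300 s, ~8 Hz

f, tau, Sxx = spectrogram(signal, fs=fs, nperseg=1024, noverlap=512)
i_f, i_t = np.unravel_index(np.argmax(Sxx), Sxx.shape)
print(f"energy peak near {tau[i_t]:.0f} s and {f[i_f]:.1f} Hz")
```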

Optical Network Monitoring System Using Smart Phone (스마트 폰을 이용한 광 통신망 감시 시스템)

  • Jung, So-Ki
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.42 no.1
    • /
    • pp.218-226
    • /
    • 2017
  • This paper presents a real-time monitoring system for optical transport networks using a smartphone. In the existing approach, monitoring events at the access switches of an optical transport network in real time required newly installing a separate recognition system. This paper solves that problem through real-time maintenance using a smartphone application together with optical cable closure switches. The smartphone web interface is useful for locating faults in optical cable closures: when a fiber is disconnected from the fiber switch or the spare board, a push message is generated. Internal (housing and access) and external failures can be located easily using OTDR measurement and GPS positioning, and events such as unauthorized work on and faults in optical cable closures can be detected with the smartphone OTDR function. By managing optical cable sections in real time with a smartphone, the fault repair time is reduced and the transport quality of the network can be maintained efficiently.
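
Purely as an illustration of the push-message idea described above (event sources, names, and the notification transport are hypothetical, not the system built in the paper):

```python
# Illustrative sketch only: a toy event loop that raises a push message when a
# closure event is detected. All names and data are hypothetical placeholders.
import time
from dataclasses import dataclass

@dataclass
class ClosureEvent:
    closure_id: str
    kind: str        # e.g., "fiber_disconnected", "unauthorized_open"
    gps: tuple       # (latitude, longitude) reported with the event

def notify(event: ClosureEvent) -> None:
    # Stand-in for a real push-notification call to the smartphone app.
    print(f"PUSH: {event.kind} at closure {event.closure_id}, GPS {event.gps}")

def poll_closures():
    # Stand-in for reading switch/spare-board sensors; yields a fake event.
    yield ClosureEvent("CL-017", "fiber_disconnected", (37.56, 126.97))

if __name__ == "__main__":
    for event in poll_closures():
        notify(event)
        time.sleep(0.1)
```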

Understanding radiation effects in SRAM-based field programmable gate arrays for implementing instrumentation and control systems of nuclear power plants

  • Nidhin, T.S.;Bhattacharyya, Anindya;Behera, R.P.;Jayanthi, T.;Velusamy, K.
    • Nuclear Engineering and Technology
    • /
    • v.49 no.8
    • /
    • pp.1589-1599
    • /
    • 2017
  • Field programmable gate arrays (FPGAs) are getting more attention in safety-related and safety-critical application development for nuclear power plant instrumentation and control systems. The high logic density and advancements in architectural features make static random access memory (SRAM)-based FPGAs suitable for complex design implementations. Devices deployed in the nuclear environment face radiation particle strikes that cause transient and permanent failures. The major causes of failure are total ionizing dose effects, displacement damage dose effects, and single event effects. Unlike the case of space applications, soft errors are the major concern in terrestrial applications. In this article, a review of radiation effects on FPGAs is presented, especially soft errors in SRAM-based FPGAs. Single event upsets (SEUs) present a high probability of error in dependable application development with FPGAs. This survey covers the main sources of radiation and their effects on FPGAs, with emphasis on SEUs, as well as the measurement of radiation upset sensitivity and irradiation experimental results from various facilities. This article also presents a comparison of the major SEU mitigation techniques for the configuration memory and user logic of SRAM-based FPGAs.
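
The abstract does not detail individual mitigation techniques; as one illustration, triple modular redundancy (TMR) with majority voting is a widely used SEU mitigation for FPGA user logic. A toy software model of the voter is sketched below (real TMR is implemented in the FPGA fabric, typically in an HDL; this model is not from the paper):

```python
# Illustrative software model of a TMR majority voter: three redundant copies
# of a value are combined with a bitwise 2-out-of-3 vote so that a single
# event upset in one replica is masked at the output.

def majority(a: int, b: int, c: int) -> int:
    """Bitwise 2-out-of-3 vote over three redundant copies of a value."""
    return (a & b) | (a & c) | (b & c)

golden = 0b1011_0010
upset = golden ^ 0b0000_1000          # an SEU flips one bit in replica B only
assert majority(golden, upset, golden) == golden   # the vote masks the upset
print("voted output:", bin(majority(golden, upset, golden)))
```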

Study on a Quantitative Risk Assessment of a Large-scale Hydrogen Liquefaction Plant (대형 수소 액화 플랜트의 정량적 위험도 평가에 관한 연구)

  • Do, Kyu Hyung;Han, Yong-Shik;Kim, Myung-Bae;Kim, Taehoon;Choi, Byung-Il
    • Transactions of the Korean hydrogen and new energy society
    • /
    • v.25 no.6
    • /
    • pp.609-619
    • /
    • 2014
  • In the present study, the frequency of undesired accidents was estimated for a quantitative risk assessment of a large-scale hydrogen liquefaction plant. As a representative example, the hydrogen liquefaction plant located in Ingolstadt, Germany, was chosen. From the analysis of the liquefaction process and operating conditions, it was found that an $LH_2$ storage tank was one of the most dangerous facilities. Based on the accident scenarios, the frequencies of possible accidents were quantitatively evaluated by using both fault tree analysis and event tree analysis. The overall expected frequency of the loss of containment of hydrogen from the $LH_2$ storage tank was $6.83 \times 10^{-1}$ times/yr (once per 1.5 years). It showed that only 0.1% of the hydrogen releases from the $LH_2$ storage tank occurred instantaneously. The incident outcome frequencies were then calculated by multiplying the expected frequencies by the conditional probabilities resulting from the event tree diagram for hydrogen release. The results showed that most of the incident outcomes were dominated by fire, which accounted for 71.8% of all accident outcomes. The rest of the accident outcomes (about 27.7%) might have no effect on the population.
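
As a reading aid, the incident-outcome frequencies implied by the numbers quoted in the abstract (the overall release frequency multiplied by the stated outcome shares) can be tabulated as follows; the underlying event-tree branch probabilities are not given in the abstract:

```python
# Worked numbers from the abstract: overall loss-of-containment frequency for
# the LH2 storage tank and the stated shares of the incident outcomes.
release_frequency = 6.83e-1          # times/yr (about once per 1.5 years)

outcome_fraction = {                 # outcome shares as quoted in the abstract
    "fire":      0.718,
    "no effect": 0.277,
    "other":     1.0 - 0.718 - 0.277,
}

for outcome, fraction in outcome_fraction.items():
    print(f"{outcome:10s}: {release_frequency * fraction:.3f} /yr")
```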

A study of the Kern County earthquake (Kern County 지진에 대한 연구)

  • 김준경
    • The Journal of Engineering Geology
    • /
    • v.2 no.2
    • /
    • pp.155-165
    • /
    • 1992
  • The purpose of this study is to evaluate the compatibility of the seismic source characteristics of the Kern County earthquake with those of Korean Peninsula seismotectonics. This compatibility could be used to construct a Korean-type response spectrum from the strong ground motions observed during the assigned earthquake. The July 21, 1952, Kern County, California, earthquake is the largest earthquake to occur in the western U.S. since 1906, and a repeat of this event poses a significant seismic hazard. The Kern County event was a complex thrusting event, with a surface rupture pattern that varied from pure left-lateral strike-slip to pure dip-slip. A time-dependent moment tensor inversion was applied to ten observed teleseismic long-period body waves to investigate the source complexity. Since a conventional moment tensor inversion (constant geometry through time) returns a non-double-couple source when the seismic source geometry (fault orientation and direction of slip) changes with time, a time-dependent moment tensor is required, which allows a first-order mapping of the geometric and temporal complexity. From the moment tensor inversion, a two-point seismic source model with significant overlap on the White Wolf fault, which propagates upward (20 km to 5 km) from SW to NE, fits most of the observed seismic waveforms in the least-squares sense. Comparison of the P, T, and B axes of the focal mechanisms and of the focal depths suggests that the seismic source characteristics of the Kern County earthquake are consistent with those of Korean Peninsula seismotectonics.
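
As an illustration of the linear least-squares step underlying a moment tensor inversion (synthetic Green's functions and data, not the teleseismic records used in the paper):

```python
# Illustrative sketch: observed waveforms u are modeled as G @ m, where the
# columns of G are Green's functions for the six independent moment tensor
# components, and m is recovered by linear least squares. G and u are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_samples = 500                                    # waveform samples stacked over stations
G = rng.standard_normal((n_samples, 6))            # synthetic Green's function matrix
m_true = np.array([1.0, -0.5, -0.5, 0.3, 0.0, 0.2])   # hypothetical tensor components
u = G @ m_true + 0.01 * rng.standard_normal(n_samples)  # noisy "observations"

m_est, *_ = np.linalg.lstsq(G, u, rcond=None)      # least-squares inversion
print("recovered moment tensor components:", np.round(m_est, 3))
```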


Feasibility Study on the Risk Quantification Methodology of Railway Level Crossings (철도건널목 위험도 정량평가 방법론 적용성 연구)

  • Kang, Hyun-Gook;Kim, Man-Cheol;Park, Joo-Nam;Wang, Jong-Bae
    • Proceedings of the KSR Conference
    • /
    • 2007.05a
    • /
    • pp.605-613
    • /
    • 2007
  • In order to overcome the difficulties of quantitative risk analysis, such as model complexity, we propose a systematic methodology for risk quantification of railway systems that consists of six steps: identification of risk factors, determination of the major scenarios for each risk factor using event trees, development of supplementary fault trees for evaluating branch probabilities, evaluation of event probabilities, quantification of risk, and analysis that takes the accident situation into consideration. In this study, to demonstrate the feasibility of the proposed methodology, the framework is applied to a prototype risk model of nationwide railway level crossings, and the quantification results based on 2005 data for Korea are also presented.
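
A minimal sketch of the quantification step of such a methodology, with hypothetical initiating frequencies, branch probabilities, and consequences (not the paper's model or data):

```python
# Illustrative sketch: each event-tree scenario contributes
# (initiating frequency) x (product of branch probabilities) x (consequence).
# All scenario names and numbers below are hypothetical.

scenarios = [
    # (name, initiating frequency /yr, branch probabilities, consequence [fatalities])
    ("barrier fails, train present", 0.20, [0.05, 0.5], 1.0),
    ("driver ignores warning",       0.50, [0.02, 0.3], 1.0),
    ("pedestrian on crossing",       0.80, [0.01, 0.1], 0.5),
]

total_risk = 0.0
for name, freq, branches, consequence in scenarios:
    p = 1.0
    for b in branches:            # branch probabilities from supplementary fault trees
        p *= b
    contribution = freq * p * consequence
    total_risk += contribution
    print(f"{name:28s}: {contribution:.2e} fatalities/yr")

print(f"{'total societal risk':28s}: {total_risk:.2e} fatalities/yr")
```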


Integrated Level 1-Level 2 decommissioning probabilistic risk assessment for boiling water reactors

  • Mercurio, Davide;Andersen, Vincent M.;Wagner, Kenneth C.
    • Nuclear Engineering and Technology
    • /
    • v.50 no.5
    • /
    • pp.627-638
    • /
    • 2018
  • This article describes an integrated Level 1-Level 2 probabilistic risk assessment (PRA) methodology to evaluate the radiological risk of postulated accident scenarios initiated during the decommissioning phase of a typical Mark I containment boiling water reactor. The fuel damage scenarios include those initiated while the reactor is permanently shut down and defueled and the spent fuel is located in the spent fuel storage pool. This article focuses on the integrated Level 1-Level 2 PRA aspects of the analysis, from the beginning of the accident to the radiological release into the environment. The integrated Level 1-Level 2 decommissioning PRA uses event trees and fault trees that assess the accident progression both up to and after fuel damage. Detailed deterministic severe accident analyses are performed to support the fault tree/event tree development and to provide source term information for the various pieces of the Level 1-Level 2 model. Source term information is collected from accidents occurring in both the reactor pressure vessel and the spent fuel pool, including simultaneous accidents. The Level 1-Level 2 PRA model evaluates the temporal and physical changes in plant conditions, including consideration of major uncertainties. The goal of this article is to provide a methodology framework for performing a decommissioning PRA, and an application to a real case study is provided to show the use of the methodology. Results are derived from the integrated Level 1-Level 2 decommissioning PRA event tree in terms of fuel damage frequency, large release frequency, and large early release frequency, including uncertainties.
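
As an illustration of how Level 1 sequence frequencies roll up into Level 2 release metrics (sequence names, frequencies, and category assignments are hypothetical, not results from the paper):

```python
# Illustrative sketch: aggregating Level 1 fuel-damage sequences into fuel
# damage frequency (FDF), large release frequency (LRF), and large early
# release frequency (LERF). All entries below are hypothetical placeholders.

# Each sequence: (frequency /yr, Level 2 release category, early release?)
sequences = {
    "RPV boil-off, no injection":     (2.0e-6, "large release", True),
    "SFP drain-down, late recovery":  (5.0e-7, "large release", False),
    "SFP loss of cooling, recovered": (3.0e-6, "no/minor release", False),
}

fdf = sum(f for f, _, _ in sequences.values())
lrf = sum(f for f, cat, _ in sequences.values() if cat == "large release")
lerf = sum(f for f, cat, early in sequences.values() if cat == "large release" and early)

print(f"FDF  = {fdf:.2e} /yr")
print(f"LRF  = {lrf:.2e} /yr")
print(f"LERF = {lerf:.2e} /yr")
```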

How to incorporate human failure event recovery into minimal cut set generation stage for efficient probabilistic safety assessments of nuclear power plants

  • Jung, Woo Sik;Park, Seong Kyu;Weglian, John E.;Riley, Jeff
    • Nuclear Engineering and Technology
    • /
    • v.54 no.1
    • /
    • pp.110-116
    • /
    • 2022
  • Human failure event (HFE) dependency analysis is a part of human reliability analysis (HRA). For efficient HFE dependency analysis, a maximum number of minimal cut sets (MCSs) that have HFE combinations are generated from the fault trees for the probabilistic safety assessment (PSA) of nuclear power plants (NPPs). After collecting potential HFE combinations, dependency levels of subsequent HFEs on the preceding HFEs in each MCS are analyzed and assigned as conditional probabilities. Then, HFE recovery is performed to reflect these conditional probabilities in MCSs by modifying MCSs. Inappropriate HFE dependency analysis and HFE recovery might lead to an inaccurate core damage frequency (CDF). Using the above process, HFE recovery is performed on MCSs that are generated with a non-zero truncation limit, where many MCSs that have HFE combinations are truncated. As a result, the resultant CDF might be underestimated. In this paper, a new method is suggested to incorporate HFE recovery into the MCS generation stage. Compared to the current approach with a separate HFE recovery after MCS generation, this new method can (1) reduce the total time and burden for MCS generation and HFE recovery, (2) prevent the truncation of MCSs that have dependent HFEs, and (3) avoid CDF underestimation. This new method is a simple but very effective means of performing MCS generation and HFE recovery simultaneously and improving CDF accuracy. The effectiveness and strength of the new method are clearly demonstrated and discussed with fault trees and HFE combinations that have joint probabilities.
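
As an illustration of the HFE-recovery idea described above, with a hypothetical event list, probabilities, and joint-probability table (not the paper's data or algorithm):

```python
# Illustrative sketch: when a minimal cut set contains a known HFE combination,
# the dependent (joint) probability replaces the product of independent HFE
# probabilities before the truncation limit is applied, so the cut set is not
# truncated prematurely. All names and numbers are hypothetical.

basic_event_prob = {"HFE-A": 1e-2, "HFE-B": 1e-2, "PUMP-F": 1e-3, "VALVE-F": 5e-4}

# Joint probabilities assigned by the HFE dependency analysis.
hfe_joint_prob = {frozenset({"HFE-A", "HFE-B"}): 5e-3}   # much larger than 1e-2 * 1e-2

def mcs_probability(mcs):
    events = set(mcs)
    prob = 1.0
    for combo, joint in hfe_joint_prob.items():
        if combo <= events:               # dependent HFE combination present
            prob *= joint
            events -= combo
    for e in events:                      # remaining events treated as independent
        prob *= basic_event_prob[e]
    return prob

cut_sets = [{"HFE-A", "HFE-B"}, {"PUMP-F", "VALVE-F"}]
truncation = 1e-6
kept = [(mcs, p) for mcs in cut_sets if (p := mcs_probability(mcs)) >= truncation]
print(kept)   # the dependent-HFE cut set survives truncation (5e-3 >= 1e-6)
```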