• Title/Summary/Keyword: Failure Detection


Estimation of Effect Zone for the Establishment of Damage-Minimizing Plan of Chemical Plants (화학공장의 사고피해 최소화 대책수립을 위한 영향범위 평가)

  • Lee, Hern-Chang;Han, Seong-Hwan;Cho, Ji-Hoon;Shin, Dong-Il;Kim, Tae-Ok
    • Journal of the Korean Institute of Gas / v.15 no.2 / pp.69-74 / 2011
  • To propose a practical method for establishing a damage-minimizing plan for chemical plants, release scenarios were constructed using API-581 BRD, and the effect zones were estimated with the KS-RBI program, which supports quantitative cause analysis; a risk assessment was then performed. The results show that, to minimize damage in a chemical plant, it is effective to use four release hole diameters (small, medium, large, and rupture) together with release times estimated according to the classes of the detection and isolation systems. In addition, by applying both the damage area weighted by failure frequency and the worst-case damage area, industrial sites can establish an effective emergency response plan.
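The combination of a frequency-weighted damage area with the worst case, as described in this abstract, can be sketched numerically. The four hole-size classes follow the API-581 convention the paper uses, but the frequencies and areas below are illustrative assumptions, not values from the paper.

```python
# Sketch: combining per-hole-size damage areas into a failure-frequency-
# weighted average alongside the worst case. Numbers are illustrative.

def damage_summary(scenarios):
    """scenarios: list of (failure frequency per year, damage area in m^2)."""
    total_freq = sum(f for f, _ in scenarios)
    weighted = sum(f * a for f, a in scenarios) / total_freq
    worst = max(a for _, a in scenarios)
    return weighted, worst

# Four API-581-style release hole classes: small, medium, large, rupture.
scenarios = [
    (1e-3, 50.0),    # small hole: frequent, small effect zone
    (3e-4, 400.0),   # medium
    (1e-4, 1500.0),  # large
    (1e-5, 6000.0),  # rupture: rare, worst case
]
avg_area, worst_area = damage_summary(scenarios)
```

Using both numbers together, as the abstract suggests, lets a site size routine mitigation on the weighted average while planning emergency response against the worst case.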

The Comparative Study for Property of Learning Effect based on Software Reliability Model using Doubly Bounded Power Law Distribution (이중 결합 파우어 분포 특성을 이용한 유한고장 NHPP모형에 근거한 소프트웨어 학습효과 비교 연구)

  • Kim, Hee Cheul;Kim, Kyung-Soo
    • Convergence Security Journal / v.13 no.1 / pp.71-78 / 2013
  • In this study, the learning effect that software managers and test tools gain during the testing process is examined using NHPP software reliability models. The doubly bounded power law distribution, a variant of the Weibull distribution, is applied to a finite-failure NHPP. Although software error detection techniques are known in advance, the model considers both an autonomous error-detection factor and a learning factor gained from prior experience, and compares them so that the testing manager can set the error-detection factor precisely. The results confirm that the model is generally efficient when the learning factor is greater than the autonomous error-detection factor. As a numerical example, time-between-failures data are applied: parameters are estimated by maximum likelihood, the data are screened by trend analysis, and model efficiency for model selection is compared using the mean squared error and $R^2$.
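The model-selection step this abstract describes, comparing a fitted NHPP mean value function against observed cumulative failures by mean squared error and $R^2$, can be sketched as follows. The Goel-Okumoto exponential form and the data are illustrative stand-ins; the paper's actual model uses the doubly bounded power law distribution.

```python
import math

# Sketch: scoring a finite-failure NHPP fit by MSE and R^2, as in the
# paper's model-selection step. The mean value function m(t)=a(1-e^{-bt})
# and the data below are illustrative, not the paper's model or data.

def mean_value(t, a, b):
    return a * (1.0 - math.exp(-b * t))

def mse_r2(times, observed, a, b):
    pred = [mean_value(t, a, b) for t in times]
    n = len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, pred))
    mean_o = sum(observed) / n
    ss_tot = sum((o - mean_o) ** 2 for o in observed)
    return ss_res / n, 1.0 - ss_res / ss_tot

times = [1, 2, 3, 4, 5, 6]
observed = [9, 16, 22, 26, 29, 31]   # cumulative failures (illustrative)
mse, r2 = mse_r2(times, observed, a=35.0, b=0.35)
```

A lower MSE and an $R^2$ closer to 1 indicate the better-fitting candidate model.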

An update of preimplantation genetic diagnosis in gene diseases, chromosomal translocation, and aneuploidy screening

  • Chang, Li-Jung;Chen, Shee-Uan;Tsai, Yi-Yi;Hung, Chia-Cheng;Fang, Mei-Ya;Su, Yi-Ning;Yang, Yu-Shih
    • Clinical and Experimental Reproductive Medicine / v.38 no.3 / pp.126-134 / 2011
  • Preimplantation genetic diagnosis (PGD) is increasingly widely used in the prevention of gene diseases and chromosomal abnormalities. Much improvement has been achieved in biopsy technique and molecular diagnosis. Blastocyst biopsy can increase diagnostic accuracy and reduce allele dropout; it is cost-effective and currently plays an important role. Whole genome amplification permits subsequent individual detection of multiple gene loci and screening of all 23 pairs of chromosomes. For PGD of chromosomal translocation, fluorescence in situ hybridization (FISH) is traditionally used, but with technical difficulty. Array comparative genomic hybridization (CGH) can detect translocations and screen 23 pairs of chromosomes, and may replace FISH. Single nucleotide polymorphism arrays with haplotyping can further distinguish between normal chromosomes and balanced translocations. PGD may shorten the time to conceive and reduce miscarriage for patients with chromosomal translocation. PGD also has potential value for mitochondrial diseases. Preimplantation genetic haplotyping has been applied to single gene diseases with unknown mutation sites. Preimplantation genetic screening (PGS) using limited FISH probes in cleavage-stage embryos did not increase live birth rates for patients with advanced maternal age, unexplained recurrent abortions, or repeated implantation failure. Polar body and blastocyst biopsy may circumvent the problem of mosaicism. PGS using blastocyst biopsy and array CGH is encouraging and merits further study. Cryopreservation of biopsied blastocysts instead of fresh transfer permits sufficient time for transportation and genetic analysis. Cryopreservation of embryos may also avoid ovarian hyperstimulation syndrome and a possibly suboptimal endometrium.

Development of Automatic Inspection System for ALC Block Using Distortion Correction Technique (왜곡 보정 기법을 이용한 ALC 블럭의 자동 검사 시스템 개발)

  • Han, Kwang-Hee;Huh, Kyung-Moo
    • Journal of the Institute of Electronics Engineers of Korea SC / v.47 no.1 / pp.1-6 / 2010
  • Lens distortion is an inevitable phenomenon in machine vision systems. As lenses are selected to reduce the price and size of such systems, distortion becomes worse, and distortion correction becomes correspondingly more important. Traditional correction methods, however, suffer from complex modeling, massive computation, and marginal information loss. Since effective correction of distorted digital images is a precondition for target detection and recognition in vision-based inspection, this paper proposes an image distortion correction algorithm based on a photogrammetric method that overcomes these disadvantages. In our method, a lattice image is used as the measurement target. Experimental results show that the processing time is reduced by 4 ms, and the inspection failure rate of our method is 2.3% lower than that of human visual inspection.
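The kind of correction the abstract describes can be sketched in miniature with a one-parameter radial lens model. The paper calibrates against a lattice target photogrammetrically; here we simply invert a known first-order radial distortion by fixed-point iteration, with an assumed coefficient and test point.

```python
# Sketch: undistorting a point under the radial model r_d = r_u(1 + k1*r_u^2).
# The coefficient k1 and the sample point are illustrative assumptions.

def distort(xu, yu, k1):
    """Apply first-order radial distortion to an undistorted point."""
    r2 = xu * xu + yu * yu
    f = 1.0 + k1 * r2
    return xu * f, yu * f

def undistort(xd, yd, k1, iters=20):
    """Recover the undistorted point by fixed-point iteration."""
    xu, yu = xd, yd
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        f = 1.0 + k1 * r2
        xu, yu = xd / f, yd / f
    return xu, yu

xd, yd = distort(0.3, 0.4, k1=0.1)
xu, yu = undistort(xd, yd, k1=0.1)
```

For the mild distortion typical of inspection optics, the iteration converges in a handful of steps, which is why simple inversion schemes can beat heavier models on processing time.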

Modified Adaptive Random Testing through Iterative Partitioning (반복 분할 기반의 적응적 랜덤 테스팅 향상 기법)

  • Lee, Kwang-Kyu;Shin, Seung-Hun;Park, Seung-Kyu
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.5 / pp.180-191 / 2008
  • Adaptive Random Testing (ART) is a family of test case generation algorithms designed to detect common failure patterns within the input domain, and it performs better than pure Random Testing (RT). Distance-based ART (D-ART) and Restricted Random Testing (RRT) are well-known ART algorithms reported to perform well, but they have significant drawbacks: quadratic runtime and a non-uniform distribution of test cases, mainly caused by the huge number of distance computations these distance-based methods need to generate each test case. ART through Iterative Partitioning (IP-ART) significantly reduces the computation of D-ART and RRT by iteratively partitioning the input domain. However, the non-uniform distribution of test cases remains, which hinders the development of a scalable algorithm. In this paper we propose a new ART method that mitigates this drawback of IP-ART while achieving improved fault-detection capability. Simulation results show that the proposed method improves the F-measure by about 9% with respect to the other algorithms.
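The distance-computation cost the abstract attributes to D-ART is easy to see in a minimal sketch of the fixed-size-candidate-set idea: every new test case requires distances to all previously executed tests. The 2-D unit-square input domain and candidate count are illustrative assumptions.

```python
import random

# Sketch: fixed-size-candidate-set ART (the D-ART idea the paper builds on).
# Each new test case is the random candidate farthest from all previously
# executed tests, spreading tests across the input domain.

def dart_next(executed, k=10, rng=random):
    """Pick the candidate maximizing its min distance to executed tests."""
    candidates = [(rng.random(), rng.random()) for _ in range(k)]
    if not executed:
        return candidates[0]
    def min_dist(c):
        # Squared distance suffices for comparison; this inner loop over
        # `executed` is the source of D-ART's quadratic runtime.
        return min((c[0] - e[0]) ** 2 + (c[1] - e[1]) ** 2 for e in executed)
    return max(candidates, key=min_dist)

rng = random.Random(1)
executed = []
for _ in range(20):
    executed.append(dart_next(executed, k=10, rng=rng))
```

IP-ART avoids the inner distance loop by partitioning the domain instead, which is the computation saving the abstract refers to.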

The Study of NHPP Software Reliability Model from the Perspective of Learning Effects (학습 효과 기법을 이용한 NHPP 소프트웨어 신뢰도 모형에 관한 연구)

  • Kim, Hee-Cheul;Shin, Hyun-Cheul
    • Convergence Security Journal / v.11 no.1 / pp.25-32 / 2011
  • In this study, the learning effect that software managers and test tools gain during the testing process is examined using NHPP software reliability models. The Weibull distribution is applied to a finite-failure NHPP. Although software error detection techniques are known in advance, the model considers both an autonomous error-detection factor and a learning factor gained from prior experience, and compares them so that the testing manager can set the error-detection factor precisely. The results confirm that the model is generally efficient when the learning factor is greater than the autonomous error-detection factor. As a numerical example, time-between-failures data are applied: parameters are estimated by maximum likelihood, the data are screened by trend analysis, and model efficiency for model selection is compared using the mean squared error and $R^2$.
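The trend-analysis screening step mentioned in the abstract is commonly performed with the Laplace trend test before fitting an NHPP model; a negative Laplace factor indicates reliability growth. The sketch below assumes that test and uses illustrative failure data, not the paper's own procedure or data.

```python
import math

# Sketch: Laplace trend test on cumulative failure times over an
# observation period T. A clearly negative factor suggests reliability
# growth, i.e. fitting a reliability-growth NHPP model is sensible.

def laplace_factor(failure_times, T):
    n = len(failure_times)
    mean_t = sum(failure_times) / n
    return (mean_t - T / 2.0) / (T * math.sqrt(1.0 / (12.0 * n)))

# Inter-failure gaps that grow over time -> improving reliability.
gaps = [5, 7, 10, 14, 20, 28, 40]
times, acc = [], 0.0
for g in gaps:
    acc += g
    times.append(acc)
u = laplace_factor(times, T=times[-1])
```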

A Kernel-Level Group Communication System for Highly Available Linux Cluster (리눅스 클러스터의 고가용성 보장을 위한 커널 수준 그룹 통신 시스템)

  • 이상균;박성용
    • Journal of KIISE:Computer Systems and Theory / v.30 no.10 / pp.533-543 / 2003
  • With the increase of interest in clusters, there have been a number of research efforts to address high-availability issues on clusters. However, no kernel-level group communication system exists to support the development of kernel-level application programs, and it is not easy to use traditional user-level group communication systems for kernel-level applications. This paper presents the design and implementation issues of KCGCS (Kernel-level Cluster Group Communication System), a kernel-level group communication module for Linux clusters. Unlike traditional user-level group communication systems, KCGCS uses lightweight heartbeat messages and a ring-based heartbeat mechanism, which allows users to implement scalable failure detection mechanisms. Moreover, KCGCS improves reliability by using distributed coordinators to maintain membership information.
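A ring-based heartbeat mechanism like the one the abstract describes can be sketched in miniature: each node monitors only its successor on the ring, and a node whose heartbeat timestamp is stale beyond a timeout is suspected of failure. Node names, timestamps, and the timeout are illustrative; KCGCS itself works at kernel level.

```python
# Sketch: ring-based heartbeat failure detection. Monitoring only the
# ring successor keeps the per-node message load constant, which is what
# makes the scheme scalable.

def successor(ring, node):
    """The node that `node` is responsible for monitoring."""
    i = ring.index(node)
    return ring[(i + 1) % len(ring)]

def suspected(ring, last_heartbeat, now, timeout):
    """Nodes whose most recent heartbeat has expired."""
    return [n for n in ring if now - last_heartbeat[n] > timeout]

ring = ["n1", "n2", "n3", "n4"]
last_heartbeat = {"n1": 9.8, "n2": 9.9, "n3": 7.1, "n4": 9.7}
failed = suspected(ring, last_heartbeat, now=10.0, timeout=1.0)
```

In the real system a suspected node would trigger a membership change, with the distributed coordinators agreeing on the new view.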

Test Item Prioritizing Metrics for a Selective Software Testing (차별화된 소프트웨어 시험을 위한 시험항목 우선순위 조정)

  • Lee, Jae-Ki;Lee, Jae-Jeong
    • The Journal of Korean Institute of Communications and Information Sciences / v.33 no.1B / pp.38-47 / 2008
  • System testing must be completed by the delivery date while satisfying the various requirements of the software market. In particular, critical faults close to the main functions of the target system must be detected and removed. Generally, existing test methods are scheduled by calendar time rather than by a competitive and effective method such as selective software testing; they are inapplicable to short-term testing or early stages of system development, and they incur heavy cost. To overcome these problems, a new software test method focusing on the core functions of the system under test is needed. The proposed selective software testing method prioritizes test items by mixing three pieces of information: the frequency of use, the complexity of the usage scenario, and the fault impact. Using this information, test scenarios are executed so that fatal errors are found and the system is tested effectively. In this paper, we propose this new test method and verify the test results by detecting critical faults and fatal errors in the system's main functions.
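The three-factor mix of usage frequency, scenario complexity, and fault impact can be sketched as a weighted score used to rank test items. The weights, the 1-5 scales, and the test items below are illustrative assumptions, not the paper's metric.

```python
# Sketch: prioritizing test items by a weighted mix of usage frequency,
# scenario complexity, and fault impact, each rated 1-5 here.

def priority(freq, complexity, impact, weights=(0.4, 0.3, 0.3)):
    wf, wc, wi = weights
    return wf * freq + wc * complexity + wi * impact

# item -> (frequency, complexity, impact), illustrative ratings
items = {
    "login":        (5, 2, 5),  # used constantly, simple, failure is critical
    "export_pdf":   (2, 4, 2),
    "admin_backup": (1, 5, 5),
}
ranked = sorted(items, key=lambda k: priority(*items[k]), reverse=True)
```

Under a tight schedule, testing proceeds down the ranked list, so the items most likely to expose critical faults are exercised first.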

A Study on the Optimum Release Model of a Developed Software with Weibull Testing Efforts (웨이블 시험노력을 이용한 개발 소프트웨어의 최적발행 모델에 관한 연구)

  • Choe, Gyu-Sik;Jang, Yun-Seung
    • The KIPS Transactions:PartD / v.8D no.6 / pp.835-842 / 2001
  • We propose a software reliability growth model incorporating the amount of testing effort expended during the software testing phase. The time-dependent behavior of testing-effort expenditure is described by a Weibull curve. Assuming that the error detection rate with respect to the amount of testing effort spent during the testing phase is proportional to the current error content, the software reliability growth model is formulated as a nonhomogeneous Poisson process, and a method of data analysis for software reliability measurement is developed from it. After defining software reliability, we study the relations between testing time and reliability, and between the duration following failure fixing and reliability. The release time that minimizes the testing cost is determined by studying the cost under each condition, and the release time satisfying a specified reliability is also determined. The optimum release time is then obtained by considering both the cost-related time and the specified-reliability-related time simultaneously.
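The cost-minimizing release time can be sketched numerically: a Weibull testing-effort curve drives the NHPP mean value function, and the total cost trades fixing faults during testing against the higher cost of fixing them in the field plus the cost of continued testing effort. All coefficients and curve parameters below are illustrative; the paper treats the optimum analytically.

```python
import math

# Sketch: testing-effort-dependent SRGM with a grid search for the
# cost-minimizing release time T.

def effort(t, alpha=100.0, beta=0.1, gamma=1.5):
    """Cumulative Weibull testing effort W(t)."""
    return alpha * (1.0 - math.exp(-beta * t ** gamma))

def expected_faults(t, a=100.0, r=0.05):
    """NHPP mean value function m(t) = a(1 - e^{-r W(t)})."""
    return a * (1.0 - math.exp(-r * effort(t)))

def total_cost(t, c_fix_test=1.0, c_fix_field=10.0, c_effort=0.5, a=100.0):
    m = expected_faults(t, a=a)
    # faults fixed in testing + faults escaping to the field + effort spent
    return c_fix_test * m + c_fix_field * (a - m) + c_effort * effort(t)

# Grid search over t = 0.1 .. 50.0 for the minimum-cost release time.
release = min((t / 10.0 for t in range(1, 501)), key=total_cost)
```

Releasing too early leaves expensive field faults; testing too long pays effort cost for few remaining faults, so the cost curve has an interior minimum.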


Customer Barcode Support System for the Cost Saving of Mail Items (우편물 처리원가 절감을 위한 고객 바코드 지원 시스템)

  • Hwang, Jae-Gak;Park, Moon-Sung;Song, Jae-Gwan;Woo, Dong-Chin
    • The Transactions of the Korea Information Processing Society / v.6 no.10 / pp.2563-2573 / 1999
  • In most automated mail processing centers, after facing and canceling, letter mail is passed through an Optical Character Recognition/Barcode Sorter (OCR/BS) to read the postal code, and a 3-of-5 fluorescent (luminescent) barcode is applied. Normally, 31%∼35% of this mail is rejected. The main reasons for reading failures are poor printing quality of addresses and barcodes, script printing, cursive handwriting, a wide variety of fonts, and failure to locate the address. Our goal is to provide mailers with top-quality service and a customer barcode service as we move toward 100% barcoding automation of letter mail. In this paper, we propose a method of printing the 3-of-5 customer barcode, managing postal codes, and detecting the postal code from the postal address to increase the performance of automated mail processing. Postal codes are created using code-generation rules automatically extracted from postal addresses and address numbers. The customer barcode support system is implemented in C++ and runs on an IBM PC under Windows 95.
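A "3 of 5" symbology can represent decimal digits because exactly three tall bars out of five give C(5,3) = 10 distinct patterns, one per digit, with built-in error detection (any symbol without exactly three tall bars is invalid). The digit-to-pattern assignment below is a made-up illustration; the abstract does not give Korea Post's actual table.

```python
from itertools import combinations

# Sketch: a hypothetical 3-of-5 digit encoding. '1' marks a tall bar,
# '0' a short bar; each symbol has exactly three tall bars.

PATTERNS = ["".join("1" if i in combo else "0" for i in range(5))
            for combo in combinations(range(5), 3)]

def encode(postal_code):
    """Map each decimal digit of a postal code to its 5-bar symbol."""
    return [PATTERNS[int(d)] for d in postal_code]

def is_valid_symbol(bits):
    """Error detection: a symbol must have exactly three tall bars."""
    return len(bits) == 5 and bits.count("1") == 3

bars = encode("305")
```

The fixed tall-bar count is what lets the sorter reject misread symbols without a separate checksum digit.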
