• Title/Summary/Keyword: Poisson process (Poisson과정)


The Study for NHPP Software Reliability Model based on Chi-Square Distribution (카이제곱 NHPP에 의한 소프트웨어 신뢰성 모형에 관한 연구)

  • Kim, Hee-Cheul
    • Journal of the Korea Society of Computer and Information / v.11 no.1 s.39 / pp.45-53 / 2006
  • Finite-failure NHPP models presented in the literature exhibit either constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. In this paper, the Goel-Okumoto and Yamada-Ohba-Osaki models are reviewed, and a $\chi^2$ reliability model is proposed that can capture the increasing nature of the failure occurrence rate per fault. The parameters were estimated with the maximum likelihood estimator and the bisection method, and model selection for the sake of efficiency was based on SSE, AIC statistics, and the Kolmogorov distance. Failures were analyzed using a real data set, SYS2 (Allen P. Nikora and Michael R. Lyu), with the shape parameter of the $\chi^2$ distribution specified through its degrees of freedom. This analysis compares the $\chi^2$ model with the existing models using arithmetic and Laplace trend tests and the Kolmogorov test.
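A minimal sketch of how a finite-failure NHPP with a chi-square mean value function, as described in the abstract above, could be fit by maximum likelihood. The failure times, starting values, and the use of a general-purpose optimizer (rather than the paper's bisection method) are illustrative assumptions.

```python
import numpy as np
from scipy import stats, optimize

# Finite-failure NHPP: m(t) = a * F(t), lambda(t) = a * f(t),
# where F is the chi-square CDF with k degrees of freedom.
def neg_log_lik(params, times, T):
    a, k = params
    if a <= 0 or k <= 0:
        return np.inf
    lam = a * stats.chi2.pdf(times, df=k)   # intensity at each observed failure time
    m_T = a * stats.chi2.cdf(T, df=k)       # expected cumulative failures by time T
    return -(np.sum(np.log(lam)) - m_T)     # NHPP log-likelihood, negated

# Hypothetical failure times (hours) observed up to time T -- illustration only.
times = np.array([3.0, 7.5, 12.1, 20.4, 31.0, 45.2, 60.7, 80.3])
T = 100.0

res = optimize.minimize(neg_log_lik, x0=[10.0, 20.0], args=(times, T),
                        method="Nelder-Mead")
a_hat, k_hat = res.x
print(f"expected total faults a = {a_hat:.2f}, degrees of freedom k = {k_hat:.2f}")
```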

The Assessing Comparative Study for Statistical Process Control of Software Reliability Model Based on Musa-Okumoto and Power-law Type (Musa-Okumoto와 Power-law형 NHPP 소프트웨어 신뢰모형에 관한 통계적 공정관리 접근방법 비교연구)

  • Kim, Hee-Cheul
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.8 no.6 / pp.483-490 / 2015
  • Many software reliability models are based on the times of occurrence of errors during the debugging of software. It is shown that likelihood inference is possible for software reliability models based on finite failure models and non-homogeneous Poisson processes (NHPP). For someone deciding when to market software, the conditional failure rate is an important variable. Infinite failure models are used in a wide variety of practical situations; their use in characterization problems, outlier detection, linear estimation, studies of system reliability, life testing, survival analysis, data compression, and many other fields can be seen in numerous studies. Statistical process control (SPC) can monitor the forecasting of software failures and thereby contribute significantly to the improvement of software reliability. Control charts are widely used for software process control in the software industry. In this paper, a control mechanism based on an NHPP is proposed using the mean value functions of the Musa-Okumoto and power-law type models.
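An illustrative sketch of the two mean value functions named in the abstract, together with a simple 3-sigma-style control limit on the cumulative failure count (using the Poisson property that the variance equals the mean). The parameter values and the specific control-limit rule are assumptions, not the paper's chart design.

```python
import numpy as np

# Mean value functions of two infinite-failure NHPP models.
def m_musa_okumoto(t, lam0, theta):
    # Musa-Okumoto logarithmic Poisson: m(t) = (1/theta) * ln(1 + lam0*theta*t)
    return np.log1p(lam0 * theta * t) / theta

def m_power_law(t, alpha, beta):
    # Power-law (Duane) process: m(t) = alpha * t**beta
    return alpha * t ** beta

# Illustrative 3-sigma style limits on the cumulative count N(t), Var[N(t)] = m(t).
t = np.linspace(1, 100, 100)
m = m_musa_okumoto(t, lam0=0.5, theta=0.1)          # made-up parameter values
ucl = m + 3 * np.sqrt(m)
lcl = np.clip(m - 3 * np.sqrt(m), 0, None)
print(f"m(100) = {m[-1]:.1f}, UCL = {ucl[-1]:.1f}, LCL = {lcl[-1]:.1f}")
```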

A study of epidemic model using SEIR model (SEIR 모형을 이용한 전염병 모형 예측 연구)

  • Do, Mijin;Kim, Jongtae;Choi, Boseung
    • Journal of the Korean Data and Information Science Society / v.28 no.2 / pp.297-307 / 2017
  • The epidemic model is used to model the spread of disease and to control it. In this research, we utilize the SEIR model, an extension of the SIR model that adds an Exposed compartment. The SEIR model assumes that a susceptible person who has contact with an infected person moves into the exposed period; after that period, the individual proceeds sequentially through the infected, recovered, and removed states. This type of model can be used for infectious diseases that have a latency period. In this research, we collected respiratory infectious disease data for the Middle East Respiratory Syndrome coronavirus (MERS-CoV). Assuming that the spread of disease follows a stochastic rather than a deterministic process, we used a Poisson process for the variation in infections and formulated the epidemic model as a stochastic chemical reaction model. Using the observed outbreak data, we estimated three parameters of the SEIR model: the exposure rate, the transmission rate, and the recovery rate. After estimating the model, we applied the fitted model to explain the spread of the disease. Additionally, because the exact trajectory of the Exposed compartment is not observed, we include a process for generating the Exposed trajectory during model estimation.
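A minimal Gillespie-style simulation of a stochastic SEIR model in which each transition fires according to a Poisson process, in the spirit of the stochastic chemical reaction formulation described above. The population size and rate parameters are hypothetical, not the MERS-CoV estimates.

```python
import numpy as np

def seir_gillespie(S, E, I, R, beta, sigma, gamma, t_max, rng):
    """Exact stochastic simulation (Gillespie) of an SEIR model where each
    event (exposure, onset, recovery) fires according to a Poisson process."""
    N = S + E + I + R
    t, path = 0.0, [(0.0, S, E, I, R)]
    while t < t_max and (E + I) > 0:
        rates = np.array([beta * S * I / N,   # S -> E (exposure)
                          sigma * E,          # E -> I (onset of infectiousness)
                          gamma * I])         # I -> R (recovery)
        total = rates.sum()
        t += rng.exponential(1.0 / total)            # waiting time to next event
        event = rng.choice(3, p=rates / total)       # which event occurs
        if event == 0:   S, E = S - 1, E + 1
        elif event == 1: E, I = E - 1, I + 1
        else:            I, R = I - 1, R + 1
        path.append((t, S, E, I, R))
    return path

# Hypothetical parameters for illustration only (not MERS-CoV estimates).
rng = np.random.default_rng(0)
path = seir_gillespie(S=990, E=5, I=5, R=0, beta=0.6, sigma=0.2, gamma=0.3,
                      t_max=120.0, rng=rng)
print(f"events simulated: {len(path) - 1}, final state: {path[-1]}")
```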

A Study on the Estimating Visitor's Economic Value of the Mt. Kumjung by Using Individual Travel Cost Model (개인여행비용법(Individual Travel Cost Model)에 의한 금정산 방문객의 경제적 가치추정)

  • Joo, Soo-Hyun;Lee, Dong-Cheol;Hur, Yoon-Jung
    • Management & Information Systems Review / v.33 no.2 / pp.301-315 / 2014
  • The purpose of this study is to estimate the economic value of Kumjung Mountain using an Individual Travel Cost Model (ITCM). This paper compares Poisson and negative binomial count data models to measure tourism demand. Interviewers were instructed to interview only individuals, and the sample size was 700. A dependent variable that is defined on the non-negative integers and subject to sampling truncation is the result of a truncated count data process. The results suggest that the truncated negative binomial model handles the overdispersion problem better and is preferred over the other models in this study. This study emphasizes in particular that 'travel cost' includes not only monetary cost but also the opportunity cost of travel time. According to the truncated negative binomial model, the consumer surplus (CS) per trip is estimated at about 60,669 Korean won, and the total economic value was estimated to be 252,383 Korean won.
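A small sketch of a Poisson count-data travel cost model; in such models the consumer surplus per trip is commonly taken as −1/β for the travel-cost coefficient β. The survey data below are made up, and a plain (untruncated) Poisson model is used instead of the paper's truncated negative binomial.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical survey data: trips per year vs. travel cost (1,000 KRW) -- illustration only.
cost  = np.array([5, 8, 10, 12, 15, 20, 25, 30, 40, 50], dtype=float)
trips = np.array([9, 8,  7,  6,  5,  4,  3,  3,  2,  1])

X = sm.add_constant(cost)                 # intercept + travel-cost regressor
fit = sm.Poisson(trips, X).fit(disp=0)    # Poisson count-data demand model

beta_tc = fit.params[1]                   # travel-cost coefficient (negative)
cs_per_trip = -1.0 / beta_tc              # consumer surplus per trip in count-data TCM
print(f"beta_tc = {beta_tc:.4f}, CS per trip = {cs_per_trip:.1f} thousand KRW")
```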

Development and validation of poisson cluster stochastic rainfall generation web application across South Korea (포아송 클러스터 가상강우생성 웹 어플리케이션 개발 및 검증 - 우리나라에 대해서)

  • Han, Jaemoon;Kim, Dongkyun
    • Journal of Korea Water Resources Association / v.49 no.4 / pp.335-346 / 2016
  • This study produced parameter maps of the Modified Bartlett-Lewis Rectangular Pulse (MBLRP) stochastic rainfall generation model across South Korea and developed and validated a web application that automates rainfall generation based on the produced parameter maps. To achieve this purpose, three different sets of MBLRP parameters were estimated at 62 ground gage locations in South Korea, depending on the purpose for which the synthetic rainfall time series is to be used in hydrologic modeling (i.e., flood modeling, runoff modeling, and general purpose). The estimated parameters were spatially interpolated using Ordinary Kriging to produce the parameter maps across South Korea. A web application was then developed to automate synthetic rainfall generation based on the parameter maps. For validation, synthetic rainfall time series were generated using the web application, and various rainfall statistics, including the mean, variance, autocorrelation, probability of zero rainfall, extreme rainfall, extreme flood, and runoff depth, were calculated and compared to those based on the observed rainfall time series. The mean, variance, autocorrelation, and probability of zero rainfall of the synthetic rainfall were similar to those of the observed rainfall, while the extreme rainfall and extreme flood values were 16%-40% smaller than those derived from the observed rainfall. Lastly, because the web application developed in this study automates the entire process of synthetic rainfall generation, we expect it to be used in a variety of hydrologic analyses requiring rainfall data.
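A heavily simplified Poisson-cluster rainfall generator in the spirit of the Bartlett-Lewis rectangular pulse model: storm origins arrive as a Poisson process, and each storm spawns rain cells with exponential offsets, durations, and intensities. This is not the full MBLRP formulation or its calibrated parameter maps; all parameter values are hypothetical.

```python
import numpy as np

def poisson_cluster_rainfall(t_max, lam, beta, eta_cells, mu_dur, mu_int, rng):
    """Simplified Bartlett-Lewis style generator: storm origins arrive as a
    Poisson process (rate lam per hour); each storm spawns a Poisson number of
    rain cells (mean eta_cells) with exponential start offsets, durations, and
    intensities, accumulated into an hourly rainfall series."""
    n_storms = rng.poisson(lam * t_max)
    storm_origins = rng.uniform(0.0, t_max, n_storms)
    hours = np.zeros(int(t_max))                      # hourly rainfall depths
    for t0 in storm_origins:
        for _ in range(rng.poisson(eta_cells)):
            start = t0 + rng.exponential(1.0 / beta)  # cell start after storm origin
            dur = rng.exponential(mu_dur)             # rectangular-pulse duration
            inten = rng.exponential(mu_int)           # pulse intensity (mm/h)
            lo, hi = int(start), int(min(start + dur, t_max))
            hours[lo:hi] += inten
    return hours

# Hypothetical parameter values for illustration only (not the calibrated maps).
rng = np.random.default_rng(1)
series = poisson_cluster_rainfall(t_max=24 * 365, lam=0.01, beta=1.0,
                                  eta_cells=3.0, mu_dur=2.0, mu_int=2.5, rng=rng)
print(f"mean = {series.mean():.3f} mm/h, P(dry hour) = {(series == 0).mean():.2f}")
```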

Hybrid Integration of P-Wave Velocity and Resistivity for High-Quality Investigation of In Situ Shear-Wave Velocities at Urban Areas (도심지 지반 전단파속도 탐사를 위한 P-파 속도와 전기비저항의 이종 결합)

  • Joh, Sung-Ho;Kim, Bong-Chan
    • KSCE Journal of Civil and Environmental Engineering Research / v.30 no.1C / pp.45-51 / 2010
  • In urban areas, the design and construction of civil engineering structures such as subway tunnels, underground spaces, and deep excavations are impeded by unreliable site investigation. A variety of embedded objects, electrical noise, and traffic vibrations degrade the quality of site investigation, whatever site-investigation technique is used. In this research, a preliminary study was performed to develop a dedicated site-investigation technique for urban geotechnical sites that can overcome these limitations. The HiRAS (Hybrid Integration of Surface Waves and Resistivity) technique, the first outcome of the preliminary study, is proposed in this paper. The technique combines surface-wave and electrical-resistivity measurements: the CapSASW method for the surface-wave survey and the PDC-R technique for the electrical-resistivity survey were incorporated to develop HiRAS. The CapSASW method is well suited to evaluating material stiffness, and the PDC-R technique reliably determines underground stratification even at sites with electrical noise. For the inversion analysis of the HiRAS technique, a site-specific relationship between stress-wave velocity and resistivity was employed. As an outgrowth of this research, the 2-D distribution of Poisson's ratio could also be determined.
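For reference, Poisson's ratio in an isotropic elastic medium follows directly from the P- and S-wave velocities; a short sketch (the example velocities are hypothetical):

```python
import numpy as np

def poissons_ratio(vp, vs):
    """Poisson's ratio from P- and S-wave velocities (isotropic elastic medium):
    nu = (vp^2 - 2*vs^2) / (2*(vp^2 - vs^2))."""
    vp, vs = np.asarray(vp, float), np.asarray(vs, float)
    return (vp**2 - 2.0 * vs**2) / (2.0 * (vp**2 - vs**2))

# Hypothetical velocities (m/s) for a saturated soil layer -- illustration only.
print(f"nu = {poissons_ratio(1500.0, 250.0):.3f}")
```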

An Efficient CT Image Denoising using WT-GAN Model

  • Hae Chan Jeong;Dong Hoon Lim
    • Journal of the Korea Society of Computer and Information / v.29 no.5 / pp.21-29 / 2024
  • Reducing the radiation dose during CT scanning lowers the risk of radiation exposure, but the image resolution deteriorates significantly and the generated noise reduces diagnostic effectiveness. Therefore, noise removal is a very important and essential step in CT image restoration. Approaches that separate the noise from the original signal in the image (spatial) domain have so far been limited in their ability to remove only the noise. In this paper, we aim to effectively remove noise from CT images using a wavelet-transform-based GAN model, the WT-GAN model, in the frequency domain. The GAN model used here generates denoised images through a U-Net-structured generator and a PatchGAN-structured discriminator. To evaluate the performance of the proposed WT-GAN model, experiments were conducted on CT images corrupted by various noises, namely Gaussian noise, Poisson noise, and speckle noise. The experiments show that the WT-GAN model outperforms a traditional filter (the BM3D filter) as well as existing deep learning models such as DnCNN, the CDAE model, and the U-Net GAN model in both qualitative and quantitative measures, namely PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index Measure).
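A small sketch of the three test degradations mentioned above (Gaussian, Poisson, and speckle noise) and of the PSNR metric, using a random array as a stand-in for a CT slice; the noise levels are arbitrary and this is not the WT-GAN pipeline itself.

```python
import numpy as np

def add_noise(img, kind, rng, sigma=0.05, peak=255.0):
    """Degrade an image (float array in [0, 1]) with one of the test noise types."""
    if kind == "gaussian":
        noisy = img + rng.normal(0.0, sigma, img.shape)
    elif kind == "poisson":
        noisy = rng.poisson(img * peak) / peak       # photon-count (Poisson) noise
    elif kind == "speckle":
        noisy = img * (1.0 + rng.normal(0.0, sigma, img.shape))
    return np.clip(noisy, 0.0, 1.0)

def psnr(ref, test):
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(1.0 / mse)                # peak value is 1.0 for [0, 1] images

rng = np.random.default_rng(0)
img = rng.uniform(0.2, 0.8, (64, 64))                # stand-in for a CT slice
for kind in ("gaussian", "poisson", "speckle"):
    print(kind, f"PSNR = {psnr(img, add_noise(img, kind, rng)):.1f} dB")
```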

An Introduction to Kinetic Monte Carlo Methods for Nano-scale Diffusion Process Modeling (나노 스케일 확산 공정 모사를 위한 동력학적 몬테칼로 소개)

  • Hwang, Chi-Ok;Seo, Ji-Hyun;Kwon, Oh-Seob;Kim, Ki-Dong;Won, Tae-Young
    • Journal of the Institute of Electronics Engineers of Korea SD / v.41 no.6 / pp.25-31 / 2004
  • In this paper, we introduce kinetic Monte Carlo (kMC) methods for simulating diffusion processes in nano-scale device fabrication. We first review kMC theory and background and give a simple point-defect diffusion model for thermal annealing after ion (electron) implantation into a crystalline Si substrate, to help the reader understand kinetic Monte Carlo methods. kMC is a kind of Monte Carlo method that can simulate the time evolution of a diffusion process as a Poisson probabilistic process. In kMC, instead of solving reaction-diffusion differential equations with conventional finite difference or finite element methods, the simulation proceeds through a series of chemical reaction events (between atoms and/or defects) or diffusion events according to the rates of all possible events. Every event has its own event rate, and the time evolution of the semiconductor diffusion process is simulated directly. The event rates can be derived directly from molecular dynamics (MD) or first-principles (ab initio) calculations, or from experimental data.
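A minimal sketch of the residence-time (rejection-free) kMC step: an event is selected with probability proportional to its rate, and the clock advances by an exponentially distributed waiting time drawn from the total rate. The event rates below are hypothetical placeholders for, e.g., defect-hop rates.

```python
import numpy as np

def kmc_step(rates, rng):
    """One step of the residence-time kinetic Monte Carlo algorithm:
    pick an event with probability proportional to its rate, then advance the
    clock by an exponential (Poisson-process) waiting time."""
    rates = np.asarray(rates, float)
    total = rates.sum()
    event = rng.choice(len(rates), p=rates / total)  # which event fires
    dt = -np.log(rng.uniform()) / total              # waiting time until it fires
    return event, dt

# Hypothetical event rates (1/s), e.g., vacancy hops and a recombination event.
rng = np.random.default_rng(0)
rates = [1.0e3, 1.0e3, 2.5e2, 5.0e1]
t, counts = 0.0, np.zeros(len(rates), int)
for _ in range(10_000):
    ev, dt = kmc_step(rates, rng)
    counts[ev] += 1
    t += dt
print(f"simulated time = {t:.4f} s, event counts = {counts}")
```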

Optimal Release Problems based on a Stochastic Differential Equation Model Under the Distributed Software Development Environments (분산 소프트웨어 개발환경에 대한 확률 미분 방정식 모델을 이용한 최적 배포 문제)

  • Lee Jae-Ki;Nam Sang-Sik
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.7A / pp.649-658 / 2006
  • Recently, software development has adopted new approaches in various forms: client-server systems, web programming, object-oriented concepts, and distributed development over network environments. Interest in distributed development technology and object-oriented methodology continues to grow; these technologies improve software quality, increase software productivity, and reduce development effort. Furthermore, we consider distributed software development across many workstations. In this paper, we discuss an optimal release problem based on a stochastic differential equation model for distributed software development environments. In the past, software reliability was estimated roughly from the software development process and from the progress of testing. Here, we determine optimal release times in two ways: first, with an SRGM using an error counting model in the fault detection phase based on an NHPP; second, by modeling fault detection as a continuous random variable via a stochastic differential equation (SDE). We determine the optimal release time as the one that minimizes cost, given the detected failure data and debugging fault data from the system test phase and the operational phase. In particular, we discuss the limitation on reliability when the probability distribution of the total software cost is considered.
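An illustrative optimal-release calculation using a simple cost model on top of a Goel-Okumoto NHPP: faults fixed during testing cost less than faults fixed in operation, and testing itself costs time. The cost coefficients, model parameters, and the specific cost form are assumptions, not the SDE-based model of the paper.

```python
import numpy as np
from scipy import optimize

# Goel-Okumoto mean value function m(t) = a * (1 - exp(-b*t)).
def m(t, a, b):
    return a * (1.0 - np.exp(-b * t))

def total_cost(T, a, b, c1, c2, c3):
    """Illustrative release-cost model: c1 = cost of fixing a fault found in testing,
    c2 = (larger) cost of fixing a fault found in operation, c3 = testing cost per unit time."""
    return c1 * m(T, a, b) + c2 * (a - m(T, a, b)) + c3 * T

# Hypothetical parameters for illustration only.
a, b = 100.0, 0.05
c1, c2, c3 = 1.0, 5.0, 0.5
res = optimize.minimize_scalar(total_cost, bounds=(0.0, 500.0),
                               args=(a, b, c1, c2, c3), method="bounded")
print(f"optimal release time T* = {res.x:.1f}, minimum cost = {res.fun:.1f}")
```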

The Study for Performance Analysis of Software Reliability Model using Fault Detection Rate based on Logarithmic and Exponential Type (로그 및 지수형 결함 발생률에 따른 소프트웨어 신뢰성 모형에 관한 신뢰도 성능분석 연구)

  • Kim, Hee-Cheul;Shin, Hyun-Cheul
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.9 no.3 / pp.306-311 / 2016
  • Software reliability is an important issue in the software development process. Infinite-failure NHPP software reliability models presented in the literature exhibit either constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. In this paper, a software reliability cost model with logarithmic and exponential fault detection rates, based on observations from the software product testing process, was studied. The probability of introducing new faults was added using the Goel-Okumoto model, which is widely used for reliability problems, giving a finite-failure non-homogeneous Poisson process model for the case where the software is corrected or modified. To analyze the software reliability model with a time-dependent fault detection rate, the parameters were estimated by maximum likelihood from inter-failure time data. The logarithmic and exponential fault detection models were confirmed to be efficient in terms of reliability (the coefficient of determination is 80% or more) and can be used as alternatives to conventional models. Based on this paper, software developers can use prior knowledge of the software's life distribution to identify failure modes, which can be helpful.
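A short sketch of the Goel-Okumoto finite-failure NHPP named in the abstract and the conditional reliability R(x|t) = exp(-(m(t+x) - m(t))) it implies; the parameter estimates below are hypothetical stand-ins for MLE results from inter-failure time data.

```python
import numpy as np

def m_go(t, a, b):
    # Goel-Okumoto finite-failure NHPP mean value function: m(t) = a*(1 - exp(-b*t))
    return a * (1.0 - np.exp(-b * t))

def reliability(x, t, a, b):
    """Conditional software reliability: probability of no failure in (t, t+x]
    given testing up to time t, R(x|t) = exp(-(m(t+x) - m(t)))."""
    return np.exp(-(m_go(t + x, a, b) - m_go(t, a, b)))

# Hypothetical parameter estimates (e.g., from MLE on inter-failure times) -- illustration only.
a_hat, b_hat = 35.0, 0.02
for t in (50.0, 100.0, 200.0):
    print(f"t = {t:5.0f}  R(10 | t) = {reliability(10.0, t, a_hat, b_hat):.3f}")
```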