• Title/Summary/Keyword: Maximum likelihood estimation (MLE)


Statistical models from weigh-in-motion data

  • Chan, Tommy H.T.;Miao, T.J.;Ashebo, Demeke B.
    • Structural Engineering and Mechanics / v.20 no.1 / pp.85-110 / 2005
  • This paper aims at formulating various statistical models for the study of ten years of Weigh-in-Motion (WIM) data collected from various WIM stations in Hong Kong. In order to study the bridge live load model it is important to determine the mathematical distributions of different load-affecting parameters such as gross vehicle weights, axle weights, axle spacings, and the average daily number of trucks. Each of these parameters is analyzed as a stochastic process in order to obtain its mathematical distribution, and the Maximum Likelihood Estimation (MLE) method is adopted to calculate the statistical parameters, expected values and standard deviations from the given samples of data. The Kolmogorov-Smirnov (K-S) test is used to check the suitability of the statistical model selected for each parameter, and the Monte Carlo method is used to simulate the distributions of the maximum-value stochastic processes of a series of given stochastic processes. Using this statistical analysis approach, the maximum values of gross vehicle weight and axle weight over the bridge design life have been determined, and the distribution functions of these parameters are obtained under both free-flowing and dense traffic conditions. The maximum values of bending moments and shears for a wide range of simple spans are obtained by extrapolation. It has been observed that the maximum values of the gross vehicle weight and axle weight obtained in this study are very close to the legal limits in Hong Kong, which are 42 tonnes for gross weight and 10 tonnes for axle weight.
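
The fit-then-test procedure described in the abstract can be sketched with SciPy; the gross-vehicle-weight sample and the lognormal choice below are illustrative assumptions, since the Hong Kong WIM data are not public.

```python
import numpy as np
from scipy import stats

# Simulated gross-vehicle-weight sample (tonnes); illustrative only.
rng = np.random.default_rng(0)
gvw = rng.lognormal(mean=3.0, sigma=0.3, size=500)

# scipy's fit() maximizes the likelihood; fixing loc=0 keeps two parameters.
shape, loc, scale = stats.lognorm.fit(gvw, floc=0)
print(f"MLE: sigma = {shape:.3f}, median = {scale:.2f} t")
print(f"expected value = {stats.lognorm.mean(shape, loc, scale):.2f} t")

# Kolmogorov-Smirnov check of the fitted model against the sample.
d_stat, p_value = stats.kstest(gvw, "lognorm", args=(shape, loc, scale))
print(f"K-S: D = {d_stat:.4f}, p = {p_value:.3f}")
```

Note that the standard K-S p-value is optimistic when the parameters were estimated from the same sample; a Lilliefors-type correction would tighten the test.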

iHaplor: A Hybrid Method for Haplotype Reconstruction

  • Jung, Ho-Youl;Heo, Jee-Yeon;Cho, Hye-Yeung;Ryu, Gil-Mi;Lee, Ju-Young;Koh, In-Song;Kimm, Ku-Chan;Oh, Berm-Seok
    • Proceedings of the Korean Society for Bioinformatics Conference / 2003.10a / pp.221-228 / 2003
  • This paper presents a novel method that can identify an individual's haplotypes from given genotypes. Because of the limitations of conventional single-locus analysis, haplotypes have gained increasing attention in the mapping of complex-disease genes. Conventionally, there are two approaches to resolving an individual's haplotypes. One is molecular haplotyping, which has many potential limitations in cost and convenience. The other is in-silico haplotyping, which phases haplotypes from diploid genotyped populations and is a cost-effective, high-throughput approach. In-silico haplotyping is divided into two sub-categories: statistical and computational methods. The former computes the frequencies of the common haplotypes and then resolves the individual's haplotypes; the latter resolves the individual's haplotypes directly, using the perfect phylogeny model first proposed by Dan Gusfield [7]. Our method combines the two approaches in order to improve both accuracy and running time. The individuals' haplotypes are resolved by considering MLE (Maximum Likelihood Estimation) in the process of computing the frequencies of the common haplotypes.

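The statistical branch described above (estimate common-haplotype frequencies by MLE, then resolve individuals) can be sketched for the simplest two-SNP case with an EM iteration; the haplotype labels, frequencies, and sample below are invented for illustration.

```python
import numpy as np

# EM-based MLE of two-SNP haplotype frequencies from unphased genotypes.
# Genotypes are coded 0/1/2 = copies of the minor allele at each SNP; only
# double heterozygotes (1,1) have ambiguous phase.
rng = np.random.default_rng(5)
true_freq = np.array([0.5, 0.2, 0.2, 0.1])        # haplotypes AB, Ab, aB, ab
haps = np.array([[1, 1], [1, 0], [0, 1], [0, 0]])
pair = rng.choice(4, size=(400, 2), p=true_freq)  # two haplotypes per person
geno = haps[pair[:, 0]] + haps[pair[:, 1]]        # observed genotypes

hap_index = {(1, 1): 0, (1, 0): 1, (0, 1): 2, (0, 0): 3}
p = np.full(4, 0.25)                              # uniform starting point
for _ in range(50):
    counts = np.zeros(4)
    for g in geno:
        if tuple(g) == (1, 1):                    # phase unknown: AB/ab vs Ab/aB
            w = p[0] * p[3] / (p[0] * p[3] + p[1] * p[2])
            counts += [w, 1 - w, 1 - w, w]        # E-step: expected resolution
        else:                                     # phase fully determined
            ha = np.minimum(g, 1)                 # minor alleles on one haplotype
            hb = g - ha                           # the remainder is the other
            counts[hap_index[tuple(ha)]] += 1
            counts[hap_index[tuple(hb)]] += 1
    p = counts / counts.sum()                     # M-step: renormalize

print("estimated haplotype frequencies:", np.round(p, 3))
```

Once the frequencies converge, each double heterozygote is resolved to its most likely phase, which is the resolution step the abstract refers to.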

At-site Low Flow Frequency Analysis Using Bayesian MCMC: II. Application and Comparative Studies (Bayesian MCMC를 이용한 저수량 점 빈도분석: II. 빈도분석의 적용 및 결과의 평가)

  • Kim, Sang-Ug;Lee, Kil-Seong;Kim, Kyung-Tae
    • Proceedings of the Korea Water Resources Association Conference / 2008.05a / pp.1125-1128 / 2008
  • In this study, at-site frequency analysis was performed for the Nakdong, Waegwan, Goryeonggyo, and Jindong main-stream stations of the Nakdong River basin using the Bayesian MCMC method and the maximum likelihood estimation (MLE) method with a second-order approximation, and frequency curves including uncertainty were constructed from the results. For a statistical experiment comparing the two estimation methods, eight synthetic flow data sets of record length 100 were first generated and analyzed, and the results were then compared with estimates from observed flow data of record length 36. The parameter estimates of the two-parameter Weibull distribution at the mean of the Bayesian MCMC method and at the mode of the second-order-approximation MLE method were similar, but the interval between the lower and upper bounds representing uncertainty was narrower for the Bayesian MCMC method than for the second-order-approximation MLE method. Moreover, with the observed flow data, the uncertainty range of the second-order-approximation MLE method increased relative to the synthetic-data case as the record length decreased, whereas the Bayesian MCMC method was hardly affected by record length. Therefore, considering the domestic situation in which sufficient data for low-flow frequency analysis cannot be secured, it can be concluded that the Bayesian MCMC method can be more reasonable than the second-order-approximation MLE method for expressing uncertainty.

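The comparison described above can be sketched on simulated data: a two-parameter Weibull fitted by MLE and by a random-walk Metropolis sampler with a flat prior (a minimal stand-in for the paper's Bayesian MCMC; the Nakdong River flow data are not public, so the sample below is simulated).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated low-flow sample from a 2-parameter Weibull (illustrative only).
flows = stats.weibull_min.rvs(2.0, scale=10.0, size=100, random_state=rng)

# (a) MLE with the location fixed at zero (two-parameter Weibull).
shape_mle, _, scale_mle = stats.weibull_min.fit(flows, floc=0)

def log_post(k, lam):
    """Log-posterior under a flat prior on the positive quadrant."""
    if k <= 0 or lam <= 0:
        return -np.inf
    return stats.weibull_min.logpdf(flows, k, scale=lam).sum()

# (b) Random-walk Metropolis sampler started at the MLE.
theta = np.array([shape_mle, scale_mle])
lp = log_post(*theta)
draws = []
for _ in range(6000):
    prop = theta + rng.normal(scale=[0.15, 0.5])
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:      # accept/reject step
        theta, lp = prop, lp_prop
    draws.append(theta)
draws = np.array(draws[1000:])                    # discard burn-in

print("MLE (shape, scale):", shape_mle, scale_mle)
print("posterior mean     :", draws.mean(axis=0))
print("95% interval, shape:", np.percentile(draws[:, 0], [2.5, 97.5]))
```

The posterior percentiles give the credible bounds that the paper compares against the MLE confidence bounds from the second-order approximation.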

A Study on the Reliability Analysis for the High Precision Pneumatic Actuator within Tape Feeder (테이프 피더 내장 공압 액추에이터에 대한 신뢰성 평가에 관한 연구)

  • Choi Jin-Hwa;Jeon Byung-Cheol;Cho Myeong-Woo;Kang Sung-Min;Lee Soo-Jin
    • Transactions of the Korean Society of Machine Tool Engineers / v.15 no.4 / pp.63-68 / 2006
  • This research presents the reliability analysis of the pneumatic actuator within the tape feeder that is used to transfer the correct force to linked parts during 1.0E+7 cycles. First, a degradation analysis of thrust and air leakage is executed to obtain the failure data of the product based on its performance over time. Second, once the parameters have been calculated using the two-parameter Weibull distribution and MLE (Maximum Likelihood Estimation), life information such as reliability, failure rate, and the probability density function is estimated. Finally, the MTTF (Mean Time To Failure) and $B_{10}$ life of the actuators are calculated. MTTF is the mean life at the given confidence level, and $B_{10}$ life refers to the time by which 10% of the products would fail. In this study, failure causes and solutions are examined using the reliability analysis.
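
The life quantities named above follow in closed form from the two-parameter Weibull MLE fit; the cycles-to-failure sample below is simulated, as the actuator test data are not given.

```python
import numpy as np
from scipy import stats
from scipy.special import gamma

# Simulated cycles-to-failure data (illustrative only).
rng = np.random.default_rng(2)
cycles = stats.weibull_min.rvs(1.8, scale=1.2e7, size=30, random_state=rng)

# Two-parameter Weibull MLE: shape (beta) and scale (eta), location fixed at 0.
beta, _, eta = stats.weibull_min.fit(cycles, floc=0)

mttf = eta * gamma(1 + 1 / beta)          # mean time to failure
b10 = eta * (-np.log(0.9)) ** (1 / beta)  # time by which 10% have failed
print(f"beta = {beta:.2f}, eta = {eta:.3e} cycles")
print(f"MTTF = {mttf:.3e} cycles, B10 = {b10:.3e} cycles")
```

B10 here is just the 10th percentile of the fitted Weibull, obtained by inverting its CDF F(t) = 1 − exp(−(t/η)^β) at F = 0.10.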

Two-Dimensional Model of Hidden Markov Lattice (이차원 은닉 마르코프 격자 모형)

  • 신봉기
    • Journal of Korea Multimedia Society / v.3 no.6 / pp.566-574 / 2000
  • Although a number of variants of 2D HMMs have been proposed in the literature, they are, in a word, too simple to model the variability of images for diverse classes of objects; they do not realize the modeling capability of the 1D HMM in 2D. Thus the author considers them poor substitutes for the HMM in 2D. The new model proposed in this paper is a hidden Markov lattice or, one might say, a 2D HMM with causality in the top-down and left-right directions. With the addition of a lattice constraint, the two algorithms for the evaluation of a model and the maximum likelihood estimation of model parameters are developed from a theoretical perspective. It is a more natural extension of the 1D HMM. The proposed method will provide a useful way of modeling highly variable patterns such as offline cursive characters.

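The evaluation algorithm of the 1D HMM that the paper seeks to extend to 2D is the forward recursion; a minimal sketch with toy parameters (all values below are illustrative, not from the paper):

```python
import numpy as np

# Toy 2-state, 2-symbol HMM.
A = np.array([[0.7, 0.3],   # state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],   # emission probabilities per state
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])   # initial state distribution

def forward_likelihood(obs):
    """Evaluate P(obs | model) with the forward recursion."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then weight by emission
    return alpha.sum()

print(forward_likelihood([0, 1, 0]))
```

In the 2D lattice setting, the same dynamic-programming idea must respect both top-down and left-right causality, which is what the lattice constraint in the paper makes tractable.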

Design of range measurement systems using a sonar and a camera (초음파 센서와 카메라를 이용한 거리측정 시스템 설계)

  • Moon, Chang-Soo;Do, Yong-Tae
    • Journal of Sensor Science and Technology / v.14 no.2 / pp.116-124 / 2005
  • In this paper, range measurement systems are designed using an ultrasonic sensor and a camera. An ultrasonic sensor provides the range to a target quickly and simply, but its low resolution is a disadvantage. We tackle this problem by employing a camera. Instead of using a stereoscopic sensor, which is widely used for 3D sensing but requires computationally intensive stereo matching, the range is measured by focusing and by structured lighting. For focusing, a straightforward focus measure named MMDH (min-max difference in histogram) is proposed and compared with existing techniques. In the structured-lighting method, light stripes projected by a beam projector are used. Compared to systems using a laser beam projector, the designed system can be constructed easily on a low budget. The system equation is derived by analysing the sensor geometry. A sensing scenario using the designed systems has two steps. First, when better accuracy is required, measurements from ultrasonic sensing and camera focusing are fused by MLE (maximum likelihood estimation). Second, when the target is in a range of particular interest, a range map of the target scene is obtained using the structured-lighting technique. The designed systems showed measurement accuracy of up to approximately 0.3 mm in experiments.
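
For independent Gaussian measurement errors, MLE fusion of two range readings reduces to an inverse-variance weighted mean; a minimal sketch with hypothetical sonar and camera-focus readings (the numbers are not from the paper):

```python
def mle_fuse(z1, var1, z2, var2):
    """MLE fusion of two independent Gaussian range readings.
    Maximizing the joint Gaussian likelihood gives the inverse-variance
    weighted mean, and the fused variance is smaller than either input."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    z = (w1 * z1 + w2 * z2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return z, var

# Hypothetical readings: sonar (coarse, var 4.0) and focus (finer, var 1.0).
z, var = mle_fuse(512.0, 4.0, 510.0, 1.0)
print(z, var)  # weighted mean 510.4, fused variance 0.8
```

The fused estimate leans toward the lower-variance camera reading, which matches the paper's motivation for adding a camera to the sonar.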

Comparison of the Korean and US Stock Markets Using Continuous-time Stochastic Volatility Models

  • CHOI, SEUNGMOON
    • KDI Journal of Economic Policy / v.40 no.4 / pp.1-22 / 2018
  • We estimate three continuous-time stochastic volatility models following the approach of Aït-Sahalia and Kimmel (2007) to compare the Korean and US stock markets. To do this, the Heston, GARCH, and CEV models are applied to the KOSPI 200 and S&P 500 indices. For the latent volatility variable, we generate and use an integrated volatility proxy based on the implied volatility of short-dated at-the-money option prices. We conduct MLE to estimate the parameters of the stochastic volatility models. This requires the transition probability density function (TPDF), but the true TPDF is not available for any of the models in this paper; therefore, the TPDFs are approximated using the irreducible method introduced in Aït-Sahalia (2008). Among the three stochastic volatility models, the Heston model and the CEV model are found to be best for the Korean and US stock markets, respectively. There exist relatively strong leverage effects in both countries. Although the long-run mean level of the integrated volatility proxy (IV) is not statistically significant in either market, the mean-reversion speed parameters are statistically significant and meaningful in both markets. The IV is found to return to its long-run mean value more rapidly in Korea than in the US. All parameters related to the volatility function of the IV are statistically significant. Although the volatility of the IV is more elastic in the US stock market, the volatility itself is greater in Korea than in the US over the range of the observed IV.
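
The MLE step can be illustrated on the variance process of the Heston model, dV = κ(θ − V)dt + σ√V dW. The paper uses Aït-Sahalia's closed-form TPDF expansions; the sketch below substitutes a much cruder Euler (Gaussian) pseudo-likelihood on simulated data, so it only shows the shape of the estimation problem, not the paper's method.

```python
import numpy as np
from scipy.optimize import minimize

# Simulate a CIR variance path by Euler discretization (daily steps).
rng = np.random.default_rng(6)
kappa, theta, sigma, dt = 5.0, 0.04, 0.3, 1 / 252
V = [0.04]
for _ in range(2000):
    v = V[-1]
    V.append(max(1e-8, v + kappa * (theta - v) * dt
                 + sigma * np.sqrt(v * dt) * rng.normal()))
V = np.array(V)

def neg_ll(params):
    """Euler pseudo-likelihood: each increment ~ Normal(mean, var)."""
    k, th, s = params
    if k <= 0 or th <= 0 or s <= 0:
        return np.inf
    mean = V[:-1] + k * (th - V[:-1]) * dt
    var = s ** 2 * V[:-1] * dt
    resid = V[1:] - mean
    return 0.5 * np.sum(np.log(2 * np.pi * var) + resid ** 2 / var)

res = minimize(neg_ll, x0=[2.0, 0.05, 0.2], method="Nelder-Mead")
print("kappa, theta, sigma estimates:", res.x)
```

Note the familiar pattern in the estimates: the diffusion parameter σ is pinned down sharply by the quadratic variation, while the mean-reversion speed κ is the hardest to estimate from a path of this length.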

A Parameter Estimation of Software Reliability Growth Model with Change-Point (변화점을 고려한 소프트웨어 신뢰도 성장모형의 모수추정)

  • Kim, Do-Hoon;Park, Chun-Gun;Nam, Kyung-H.
    • The Korean Journal of Applied Statistics / v.21 no.5 / pp.813-823 / 2008
  • Non-homogeneous Poisson process (NHPP) based software reliability growth models have proved quite successful in practical software reliability engineering. The fault detection rate is usually assumed to be a continuous and monotonic function. However, the fault detection rate can be affected by many factors such as the testing strategy, running environment and resource allocation. This paper describes parameter estimation for a software reliability growth model with a change-point. We obtain the maximum likelihood estimate (MLE) and the least squares estimate (LSE), and compare their goodness-of-fit.
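
The two estimators compared in the paper can be sketched on a plain Goel-Okumoto NHPP without the change-point, with mean value function m(t) = a(1 − exp(−bt)); the failure times below are simulated, and the model choice is an assumption for illustration.

```python
import numpy as np
from scipy.optimize import minimize, curve_fit

rng = np.random.default_rng(3)
a_true, b_true, T = 100.0, 0.02, 150.0

# Simulate the NHPP: N ~ Poisson(m(T)); given N, the event times are iid
# with density lambda(t)/m(T) on [0, T] (a truncated exponential).
n = rng.poisson(a_true * (1 - np.exp(-b_true * T)))
u = rng.uniform(size=n)
t = np.sort(-np.log(1 - u * (1 - np.exp(-b_true * T))) / b_true)

def neg_log_lik(params):
    """NHPP log-likelihood: sum log lambda(t_i) - m(T), lambda = a b e^{-bt}."""
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    return -(n * np.log(a * b) - b * t.sum() - a * (1 - np.exp(-b * T)))

mle = minimize(neg_log_lik, x0=[50.0, 0.01], method="Nelder-Mead").x

# LSE: fit the mean value function to the cumulative fault counts.
lse, _ = curve_fit(lambda s, a, b: a * (1 - np.exp(-b * s)),
                   t, np.arange(1, n + 1), p0=[50.0, 0.01])

print("MLE (a, b):", mle)
print("LSE (a, b):", lse)
```

A change-point version would split the log-likelihood at the change time and use separate detection-rate parameters on each side, but the estimation machinery is the same.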

An Effective Method for Selection of WGN Band in Man Made Noise(MMN) Environment (인공 잡음 환경하에서의 효율적인 백색 가우시안 잡음 대역 선정 방법)

  • Shin, Seung-Min;Kim, Young-Soo
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.21 no.11 / pp.1295-1303 / 2010
  • In this paper, an effective method is proposed for selecting the white Gaussian noise (WGN) band for a radio background noise measurement system in a broadband noise environment. MMN from industrial devices and equipment mostly appears as broadband, impulsive-like noise, and it is the main cause of elevated levels in current radio noise measurements. The existing method based on singular value decomposition has the weakness that it performs poorly for broadband signals, because it relies only on the whiteness of the signal. The proposed method overcomes this weakness by additionally exploiting the signal's Gaussian property in parallel. Moreover, it employs a model-based approach using a parameter estimation algorithm such as maximum likelihood estimation (MLE), which gives more accurate results than methods based on the amplitude probability distribution (APD) graph. Experiments in a real environment were conducted to verify the feasibility of the proposed method.
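
The Gaussianity screening idea can be sketched with a standard normality test; the band samples and the impulsive-noise model below are invented, and `normaltest` is a generic stand-in rather than the paper's algorithm.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Hypothetical band samples: one pure WGN band, one contaminated by
# impulsive man-made noise (modeled here as sparse large spikes).
wgn = rng.normal(0, 1, 4000)
impulsive = rng.normal(0, 1, 4000)
impulsive[rng.integers(0, 4000, 40)] += rng.normal(0, 15, 40)

# D'Agostino's normality test: a high p-value is consistent with WGN,
# while impulsive MMN shows up as heavy tails (large excess kurtosis).
results = {}
for name, x in [("clean", wgn), ("impulsive", impulsive)]:
    stat, p = stats.normaltest(x)
    results[name] = p
    print(f"{name} band: normality p = {p:.3g}, "
          f"excess kurtosis = {stats.kurtosis(x):.2f}")
```

Bands whose samples pass such a Gaussianity screen (and a whiteness screen) would be retained as candidate WGN bands for the background-noise measurement.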

A Review on the Analysis of Life Data Based on Bayesian Method: 2000~2016 (베이지안 기법에 기반한 수명자료 분석에 관한 문헌 연구: 2000~2016)

  • Won, Dong-Yeon;Lim, Jun Hyoung;Sim, Hyun Su;Sung, Si-il;Lim, Heonsang;Kim, Yong Soo
    • Journal of Applied Reliability / v.17 no.3 / pp.213-223 / 2017
  • Purpose: The purpose of this study is to quantitatively organize the literature on life data analysis based on the Bayesian method and present it in tables. Methods: The Bayesian method produces more accurate estimates than traditional methods for small sample sizes, and it requires a specific algorithm and prior information. Based on these three characteristics of the Bayesian method, the criteria for classifying the literature were established. Results: Many studies compare the Bayesian method with maximum likelihood estimation (MLE), and sample sizes were greater than 10 and not more than 25. Among probability distributions, a variety of distributions were found in addition to the Weibull distribution commonly used in life data analysis, and MCMC and Lindley's approximation were used evenly. Finally, the Gamma, Uniform, and Jeffreys priors and an extension of the Jeffreys prior were evenly used as prior information. Conclusion: To verify the advantage of the Bayesian method over other methods at smaller sample sizes, studies with fewer than 10 samples should be carried out. Comparative studies across various distributions are also required to provide the necessary guidelines.