• Title/Abstract/Keyword: Local likelihood

가능도함수를 이용한 불연속점 수의 추정 (Estimation of the number of discontinuity points based on likelihood)

  • 허집
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 21, No. 1
    • /
    • pp.51-59
    • /
    • 2010
  • When the regression function in a generalized linear model has a single discontinuity point, Huh (2009) estimated its location and jump size by applying one-sided kernels to the likelihood of a one-parameter exponential family. For a regression function with an unknown number q of discontinuity points, this paper introduces a hypothesis test based on the asymptotic distribution of the jump-size estimator proposed by Huh (2009), proposes an algorithm that uses this test to estimate the number of discontinuity points, and examines the accuracy of the estimation through a simulation study (a one-sided jump-detection sketch follows below).
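  • Illustrative sketch: a minimal one-sided jump scan in Python, assuming a uniform one-sided kernel, a crude difference-based noise-variance estimate, and an arbitrary threshold z_crit. It only conveys the general idea of comparing left and right local averages; it is not the test statistic or algorithm of Huh (2009).

```python
import numpy as np

def jump_scan(x, y, h=0.05, z_crit=3.0):
    """Flag candidate discontinuity points by comparing left and right
    one-sided local averages of y over windows of width h."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    sigma2 = np.var(np.diff(y)) / 2.0                  # crude noise-variance estimate
    candidates = []
    for t in np.linspace(x.min() + h, x.max() - h, 200):
        left = (x >= t - h) & (x < t)                  # one-sided windows around t
        right = (x > t) & (x <= t + h)
        nl, nr = left.sum(), right.sum()
        if nl < 5 or nr < 5:
            continue
        jump = y[right].mean() - y[left].mean()
        se = np.sqrt(sigma2 * (1.0 / nl + 1.0 / nr))
        if abs(jump / se) > z_crit:                    # ad-hoc threshold, not Huh's test
            candidates.append((round(t, 3), round(jump, 3)))
    return candidates

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 1, 400)
    y = np.sin(2 * np.pi * x) + 1.5 * (x > 0.6) + rng.normal(0, 0.3, x.size)
    print(jump_scan(x, y)[:5])                         # candidates cluster near x = 0.6
```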

An EM Algorithm for a Doubly Smoothed MLE in Normal Mixture Models

  • Seo, Byung-Tae
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 19, No. 1
    • /
    • pp.135-145
    • /
    • 2012
  • It is well known that the maximum likelihood estimator (MLE) in normal mixture models with unequal variances does not fall in the interior of the parameter space. Recently, a doubly smoothed maximum likelihood estimator (DS-MLE) (Seo and Lindsay, 2010) was proposed as a general alternative to the ordinary MLE. Although this method gives a natural modification to the ordinary MLE, its computation is cumbersome due to intractable integrations. In this paper, we derive an EM algorithm for the DS-MLE under normal mixture models and propose a fast computational tool using a local quadratic approximation. The accuracy and speed of the proposed method are then presented via some numerical studies (an ordinary-EM baseline sketch follows below).
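  • Illustrative sketch: an ordinary EM iteration for a univariate normal mixture with unequal variances, i.e. the baseline that the DS-MLE modifies. The variance floor below is only an ad-hoc guard against the degeneracy mentioned in the abstract; the double smoothing and the local quadratic approximation of the paper are not implemented here.

```python
import numpy as np
from scipy.stats import norm

def em_normal_mixture(y, pi, mu, sigma, n_iter=200, var_floor=1e-4):
    """Ordinary EM for a K-component univariate normal mixture.
    var_floor is an ad-hoc guard against variance collapse, not the DS-MLE."""
    y = np.asarray(y, dtype=float)
    for _ in range(n_iter):
        # E-step: posterior component responsibilities
        dens = np.array([p * norm.pdf(y, m, s) for p, m, s in zip(pi, mu, sigma)])
        resp = dens / dens.sum(axis=0, keepdims=True)
        # M-step: weighted updates of weights, means, and variances
        nk = resp.sum(axis=1)
        pi = nk / y.size
        mu = resp @ y / nk
        var = np.array([(r * (y - m) ** 2).sum() / n for r, m, n in zip(resp, mu, nk)])
        sigma = np.sqrt(np.maximum(var, var_floor))
    return pi, mu, sigma

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    y = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 2, 200)])
    print(em_normal_mixture(y, [0.5, 0.5], [0.0, 1.0], [1.0, 1.0]))
```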

Maximum Likelihood (ML)-Based Quantizer Design for Distributed Systems

  • Kim, Yoon Hak
    • Journal of information and communication convergence engineering
    • /
    • Vol. 13, No. 3
    • /
    • pp.152-158
    • /
    • 2015
  • We consider the problem of designing independently operating local quantizers at nodes in distributed estimation systems, where many spatially distributed sensor nodes measure a parameter of interest, quantize these measurements, and send the quantized data to a fusion node, which conducts the parameter estimation. Motivated by the observation that the estimation accuracy can be improved by using quantized data with a high probability of occurrence, we propose an iterative algorithm with a simple design rule that produces quantizers by searching boundary values with an increased likelihood. We prove that this design rule generates a considerably reduced interval for finding the next boundary values, yielding a low design complexity. We demonstrate through extensive simulations that the proposed algorithm achieves a significant performance gain with respect to traditional quantizer designs. A comparison with recently published algorithms further illustrates the benefit of the proposed technique in terms of performance and design complexity (a classical Lloyd-Max baseline sketch follows below).
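  • Illustrative sketch: the classical Lloyd-Max design for a scalar quantizer, i.e. the kind of traditional baseline the proposed ML-based boundary search is compared against. The likelihood-driven design rule of the paper itself is not reproduced here; the initialization and iteration count are arbitrary choices.

```python
import numpy as np

def lloyd_max(samples, n_levels=4, n_iter=50):
    """Classical Lloyd-Max scalar quantizer design: alternate between
    midpoint boundaries and centroid (conditional-mean) codeword updates."""
    samples = np.sort(np.asarray(samples, dtype=float))
    # initialize codewords at empirical quantiles
    codewords = np.quantile(samples, (np.arange(n_levels) + 0.5) / n_levels)
    for _ in range(n_iter):
        boundaries = (codewords[:-1] + codewords[1:]) / 2.0
        idx = np.searchsorted(boundaries, samples)       # assign samples to cells
        for k in range(n_levels):
            cell = samples[idx == k]
            if cell.size:
                codewords[k] = cell.mean()               # centroid update
    return boundaries, codewords

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    b, c = lloyd_max(rng.normal(size=5000), n_levels=4)
    print("boundaries:", b, "\ncodewords:", c)
```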

SMUCE와 FDR segmentation 방법에 의한 다중변화점 추정법 비교 (Comparison of multiscale multiple change-points estimators)

  • 김재희
    • The Korean Journal of Applied Statistics
    • /
    • Vol. 32, No. 4
    • /
    • pp.561-572
    • /
    • 2019
  • This study examines the theoretical properties of two multiscale multiple change-point estimators, FDRSeg and SMUCE, and compares their empirical behavior through simulation. FDRSeg (false discovery rate segmentation) estimates change points with FDR-based control, whereas SMUCE (simultaneous multiscale change-point estimator) estimates change points via multiple testing based on a local likelihood function. When the number of change points is small, the two methods show similar estimation performance; as the number of change points grows, FDRSeg tends to do better in terms of both the estimated number of change points and the estimation measures. As a real-data analysis, multiple change points are estimated and compared for well-log data with each method (a generic binary-segmentation sketch follows below).
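  • Illustrative sketch: generic binary segmentation with a CUSUM statistic for mean shifts, shown only to fix ideas about multiple change-point estimation. It implements neither SMUCE's simultaneous multiscale test nor FDRSeg's FDR control, and the threshold z_crit and minimum segment length are arbitrary.

```python
import numpy as np

def cusum_stat(y):
    """Maximum absolute CUSUM statistic for a single mean shift and its split index."""
    n = y.size
    k = np.arange(1, n)
    cs = np.cumsum(y)
    stat = np.abs(cs[:-1] - k * cs[-1] / n) / np.sqrt(k * (n - k) / n)
    j = int(np.argmax(stat))
    return stat[j], j + 1           # j + 1 = first index of the right segment

def binary_segmentation(y, sigma, z_crit=3.0, min_len=10):
    """Recursively split wherever the standardized CUSUM exceeds z_crit."""
    changepoints = []
    def recurse(lo, hi):
        if hi - lo < 2 * min_len:
            return
        stat, j = cusum_stat(y[lo:hi])
        if stat / sigma > z_crit:
            cp = lo + j
            changepoints.append(cp)
            recurse(lo, cp)
            recurse(cp, hi)
    recurse(0, y.size)
    return sorted(changepoints)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    y = np.concatenate([rng.normal(m, 1, 200) for m in (0, 2, -1)])
    print(binary_segmentation(y, sigma=1.0))   # true change points at 200 and 400
```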

추세계수 국소선형근사법의 특성과 해석 (Mathematical Review on the Local Linearizing Method of Drift Coefficient)

  • 윤민;최영수;이윤동
    • The Korean Journal of Applied Statistics
    • /
    • Vol. 21, No. 5
    • /
    • pp.801-811
    • /
    • 2008
  • Diffusion models are frequently used to model financial phenomena. The various diffusion models proposed in recent years call for sophisticated inference methods, and research on high-precision inference procedures has been carried out accordingly. Among the various methods used for inference on diffusion processes described by stochastic differential equations, this paper focuses on likelihood-based inference. In particular, we review the mathematical properties of the local linearization method for the drift coefficient, a type of approximate likelihood inference (an Ozaki-type sketch follows below).
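  • Illustrative sketch: an Ozaki-type local linearization of the drift for a diffusion with constant diffusion coefficient, giving a Gaussian approximate transition density whose mean and variance use the local slope of the drift. The drift derivative is taken numerically, and the example uses an Ornstein-Uhlenbeck path, for which the scheme is exact. This is one member of the family the paper reviews, not its full treatment.

```python
import numpy as np
from scipy.stats import norm

def loclin_loglik(x, dt, drift, sigma, eps=1e-6):
    """Approximate log-likelihood of a discretely observed diffusion
    dX = drift(X) dt + sigma dW via local linearization of the drift."""
    x0, x1 = x[:-1], x[1:]
    f = drift(x0)
    J = (drift(x0 + eps) - drift(x0 - eps)) / (2 * eps)   # local slope of the drift
    J = np.where(np.abs(J) < 1e-10, 1e-10, J)             # guard against division by zero
    mean = x0 + f / J * np.expm1(J * dt)
    var = sigma**2 * np.expm1(2 * J * dt) / (2 * J)
    return norm.logpdf(x1, loc=mean, scale=np.sqrt(var)).sum()

if __name__ == "__main__":
    # simulate an Ornstein-Uhlenbeck path (Euler scheme) and evaluate the likelihood
    rng = np.random.default_rng(4)
    theta, sigma, dt, n = 1.5, 0.5, 0.1, 2000
    x = np.empty(n); x[0] = 0.0
    for i in range(1, n):
        x[i] = x[i-1] - theta * x[i-1] * dt + sigma * np.sqrt(dt) * rng.normal()
    print(loclin_loglik(x, dt, lambda u: -theta * u, sigma))
```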

Plagiarism Detection among Source Codes using Adaptive Methods

  • Lee, Yun-Jung;Lim, Jin-Su;Ji, Jeong-Hoon;Cho, Hwaun-Gue;Woo, Gyun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 6, No. 6
    • /
    • pp.1627-1648
    • /
    • 2012
  • We propose an adaptive method for detecting plagiarized pairs from a large set of source code. This method is adaptive in that it uses an adaptive algorithm and provides an adaptive threshold for determining plagiarism. Conventional algorithms are based on greedy string tiling or on local alignments of two code strings. However, most of them are not adaptive; they do not consider the characteristics of the program set, thereby causing problems for a program set in which all the programs are inherently similar. We propose adaptive local alignment, a variant of local alignment that uses an adaptive similarity matrix. Each entry of this matrix is the logarithm of the probability of a keyword based on its frequency in the given program set. We also propose an adaptive threshold based on the local outlier factor (LOF), which represents the likelihood of an entity being an outlier. Experimental results indicate that our method is more sensitive than JPlag, which uses greedy string tiling, for detecting plagiarism-suspected code pairs. Further, the adaptive threshold based on the LOF is shown to be effective, and the detection performance shows high sensitivity with negligible loss of specificity compared with a fixed threshold (a frequency-weighted local-alignment sketch follows below).
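  • Illustrative sketch: Smith-Waterman local alignment over keyword sequences with a substitution score derived from keyword frequencies in the program set, so that matches on rare keywords count more. The weighting (negative log relative frequency), mismatch score, and gap penalty are assumptions for illustration, not the paper's exact similarity matrix, and the LOF-based threshold is omitted.

```python
import math
from collections import Counter

def smith_waterman(a, b, score, gap=-2.0):
    """Local alignment score of token sequences a and b with a supplied
    substitution score function and a linear gap penalty."""
    m, n = len(a), len(b)
    H = [[0.0] * (n + 1) for _ in range(m + 1)]
    best = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            H[i][j] = max(0.0,
                          H[i-1][j-1] + score(a[i-1], b[j-1]),
                          H[i-1][j] + gap,
                          H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

def adaptive_score(programs):
    """Substitution scores from keyword frequencies in the program set:
    matches on rare keywords are rewarded more (illustrative weighting)."""
    freq = Counter(tok for p in programs for tok in p)
    total = sum(freq.values())
    def score(s, t):
        if s != t:
            return -1.0
        return -math.log(freq[s] / total)   # rarer token -> larger positive score
    return score

if __name__ == "__main__":
    p1 = "for i in range n if x mod i eq 0 return false".split()
    p2 = "for j in range m if y mod j eq 0 return false".split()
    programs = [p1, p2, "print hello world".split()]
    print(smith_waterman(p1, p2, adaptive_score(programs)))
```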

Variational Expectation-Maximization Algorithm in Posterior Distribution of a Latent Dirichlet Allocation Model for Research Topic Analysis

  • Kim, Jong Nam
    • Journal of Korea Multimedia Society
    • /
    • Vol. 23, No. 7
    • /
    • pp.883-890
    • /
    • 2020
  • In this paper, we propose a variational expectation-maximization algorithm that computes posterior probabilities for a Latent Dirichlet Allocation (LDA) model. The algorithm approximates the intractable posterior distribution of a document-term matrix generated from a corpus made up of 50 papers. It approximates the posterior by searching for local optima using a lower bound on the true posterior distribution. Moreover, it maximizes the lower bound of the log-likelihood of the true posterior by minimizing the relative entropy (KL divergence) between the prior and the posterior distribution. The experimental results indicate that documents clustered to image classification and segmentation are correlated at 0.79, while those clustered to object detection and image segmentation are highly correlated at 0.96. The proposed variational inference algorithm performs efficiently and faster than Gibbs sampling, at a computational time of 0.029 s (a sketch of the standard mean-field updates follows below).
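  • Illustrative sketch: the standard mean-field variational E-step for one document under LDA (the phi/gamma coordinate-ascent updates in the style of Blei et al., 2003), which is the kind of update a variational EM for LDA iterates. The corpus-level M-step and the specific 50-paper experiment are not reproduced; the toy vocabulary, topics, and priors are made up for the demonstration.

```python
import numpy as np
from scipy.special import digamma

def lda_variational_e_step(doc_words, beta, alpha, n_iter=50):
    """Mean-field updates for a single document under LDA:
    phi[n, k] ~ beta[k, w_n] * exp(digamma(gamma[k])),
    gamma[k]  = alpha[k] + sum_n phi[n, k]."""
    n_topics = beta.shape[0]
    gamma = alpha + len(doc_words) / n_topics             # standard initialization
    phi = np.full((len(doc_words), n_topics), 1.0 / n_topics)
    for _ in range(n_iter):
        log_phi = np.log(beta[:, doc_words].T) + digamma(gamma)   # shape (N, K)
        log_phi -= log_phi.max(axis=1, keepdims=True)             # stabilize
        phi = np.exp(log_phi)
        phi /= phi.sum(axis=1, keepdims=True)
        gamma = alpha + phi.sum(axis=0)
    return gamma, phi

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    V, K = 20, 3
    beta = rng.dirichlet(np.ones(V), size=K)              # topic-word distributions
    alpha = np.full(K, 0.1)
    doc = rng.integers(0, V, size=30)                     # word ids of one toy document
    gamma, _ = lda_variational_e_step(doc, beta, alpha)
    print("approximate topic proportions:", gamma / gamma.sum())
```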

ROBUST TEST BASED ON NONLINEAR REGRESSION QUANTILE ESTIMATORS

  • CHOI, SEUNG-HOE;KIM, KYUNG-JOONG;LEE, MYUNG-SOOK
    • Communications of the Korean Mathematical Society
    • /
    • Vol. 20, No. 1
    • /
    • pp.145-159
    • /
    • 2005
  • In this paper we consider the problem of testing statistical hypotheses about unknown parameters in nonlinear regression models and propose three asymptotically equivalent tests based on regression quantile estimators: the Wald test, the Lagrange multiplier test, and the likelihood ratio test. We also derive the asymptotic distributions of the three test statistics both under the null hypotheses and under a sequence of local alternatives, and verify that the asymptotic relative efficiency of the proposed test statistics relative to the classical test based on least squares depends on the error distribution of the regression model. We give some examples to illustrate that the test based on the regression quantile estimators performs better than the tests based on the least squares or least absolute deviation estimators when the disturbance has an asymmetric and heavy-tailed distribution (a Wald-type illustration follows below).
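  • Illustrative sketch: a Wald-type test of a slope coefficient from a median (0.5-quantile) regression fit, using statsmodels' QuantReg on a linear model with heavy-tailed, asymmetric errors. The paper's setting is nonlinear and also covers the Lagrange multiplier and likelihood ratio tests, which are not shown; the simulated model is an assumption for the demonstration.

```python
import numpy as np
import statsmodels.api as sm

# Wald-type test of H0: beta_1 = 0 based on a median regression fit.
rng = np.random.default_rng(6)
n = 500
x = rng.uniform(-2, 2, n)
# heavy-tailed, asymmetric errors: the setting where quantile-based tests shine
err = rng.standard_t(df=3, size=n) + 0.5 * rng.exponential(size=n)
y = 1.0 + 0.8 * x + err

X = sm.add_constant(x)
fit = sm.QuantReg(y, X).fit(q=0.5)
beta1, se1 = fit.params[1], fit.bse[1]
wald = (beta1 / se1) ** 2                     # asymptotically chi-square(1) under H0
print(f"beta1 = {beta1:.3f}, Wald statistic = {wald:.2f}")

ols = sm.OLS(y, X).fit()                      # least-squares comparison
print(f"OLS beta1 = {ols.params[1]:.3f}, se = {ols.bse[1]:.3f} vs QR se = {se1:.3f}")
```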

일반화최대우도함수에 의해 추정된 평활모수에 대한 진단 (Diagnostics for Estimated Smoothing Parameter by Generalized Maximum Likelihood Function)

  • 정원태;이인석;정혜정
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 7, No. 2
    • /
    • pp.257-262
    • /
    • 1996
  • This paper deals with influence diagnostics as a preliminary step in estimating the smoothing parameter of a spline regression model. When the generalized maximum likelihood method is used to estimate the smoothing parameter, we propose a measure for diagnosing observations that influence the resulting estimate, and introduce a procedure that adjusts the detected influential observations to obtain a proper smoothing parameter estimate (a case-deletion sketch follows below).
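  • Illustrative sketch: a case-deletion influence diagnostic for a data-driven smoothing choice. As a simplification it uses a Nadaraya-Watson smoother with leave-one-out cross-validation in place of the paper's spline model and generalized maximum likelihood criterion; the bandwidth grid and the planted outlier are assumptions for the demonstration.

```python
import numpy as np

def nw_fit(x, y, xq, h):
    """Nadaraya-Watson estimate at points xq with Gaussian kernel bandwidth h."""
    w = np.exp(-0.5 * ((xq[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def loo_cv_bandwidth(x, y, grid):
    """Leave-one-out CV bandwidth choice (stand-in for the GML criterion)."""
    best_h, best_err = None, np.inf
    for h in grid:
        err = 0.0
        for i in range(x.size):
            mask = np.arange(x.size) != i
            pred = nw_fit(x[mask], y[mask], x[i:i + 1], h)[0]
            err += (y[i] - pred) ** 2
        if err < best_err:
            best_h, best_err = h, err
    return best_h

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    x = np.sort(rng.uniform(0, 1, 60))
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)
    y[10] += 3.0                                   # plant an outlier
    grid = np.linspace(0.02, 0.2, 10)
    h_full = loo_cv_bandwidth(x, y, grid)
    # case-deletion diagnostic: change in the selected smoothing without observation i
    influence = [abs(loo_cv_bandwidth(np.delete(x, i), np.delete(y, i), grid) - h_full)
                 for i in range(x.size)]
    print("most influential observation:", int(np.argmax(influence)))
```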

Smoothed Local PCA by BYY data smoothing learning

  • Liu, Zhiyong;Xu, Lei
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • ICCAS 2001
    • /
    • pp.109.3-109
    • /
    • 2001
  • The so-called curse of dimensionality arises when a Gaussian mixture is used on high-dimensional, small-sample-size data, since the number of free elements that needs to be specified in each covariance matrix of the mixture grows quadratically with the dimension d. In this paper, by constraining each covariance matrix to a decomposed orthonormal form, we obtain a local PCA model that reduces the number of free elements to be specified. Moreover, to cope with the small sample size problem, we adopt BYY data smoothing learning, a regularization of maximum likelihood learning derived from BYY harmony learning, to implement this local PCA model (a generic local-PCA mixture sketch follows below).
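  • Illustrative sketch: EM for a Gaussian mixture whose covariance matrices are constrained to a rank-q principal subspace plus isotropic noise, i.e. a local-PCA-style parsimonious form. This is plain maximum likelihood EM; the BYY data smoothing regularization described above is not included, and the initialization and toy data are assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

def local_pca_covariance(S, q, ridge=1e-6):
    """Project a covariance onto a rank-q principal subspace plus isotropic noise."""
    vals, vecs = np.linalg.eigh(S)
    vals, vecs = vals[::-1], vecs[:, ::-1]                # descending eigenvalues
    noise = max(vals[q:].mean(), ridge)
    top = np.maximum(vals[:q] - noise, 0.0)
    return (vecs[:, :q] * top) @ vecs[:, :q].T + noise * np.eye(S.shape[0])

def local_pca_mixture_em(X, K=2, q=1, n_iter=100):
    """Plain ML EM for a Gaussian mixture with local-PCA-constrained covariances."""
    n, d = X.shape
    idx = np.argsort(X[:, 0])[np.linspace(0, n - 1, K).astype(int)]
    mu = X[idx].copy()                                    # spread initial means
    cov = np.stack([np.cov(X.T) + 1e-3 * np.eye(d)] * K)
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        dens = np.stack([pi[k] * multivariate_normal.pdf(X, mu[k], cov[k])
                         for k in range(K)])              # E-step
        resp = dens / dens.sum(axis=0, keepdims=True)
        nk = resp.sum(axis=1)                             # M-step
        pi = nk / n
        mu = (resp @ X) / nk[:, None]
        for k in range(K):
            diff = X - mu[k]
            S = (resp[k][:, None] * diff).T @ diff / nk[k]
            cov[k] = local_pca_covariance(S, q)
    return pi, mu, cov

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    A = rng.normal([0.0, 0.0, 0.0], [2.0, 0.2, 0.2], size=(150, 3))
    B = rng.normal([5.0, 5.0, 5.0], [0.2, 2.0, 0.2], size=(150, 3))
    pi, mu, cov = local_pca_mixture_em(np.vstack([A, B]), K=2, q=1)
    print("mixing weights:", np.round(pi, 3), "\nmeans:\n", np.round(mu, 2))
```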
