• Title/Summary/Keyword: Likelihood Analysis

Search Results: 1,331

Semiparametric kernel logistic regression with longitudinal data

  • Shim, Joo-Yong;Seok, Kyung-Ha
    • Journal of the Korean Data and Information Science Society
    • /
    • v.23 no.2
    • /
    • pp.385-392
    • /
    • 2012
  • Logistic regression is a well-known binary classification method in the field of statistical learning. Mixed-effect regression models are widely used for the analysis of correlated data such as those found in longitudinal studies. We consider kernel extensions with semiparametric fixed effects and parametric random effects for logistic regression. Estimation is performed through the penalized likelihood method based on the kernel trick, with a focus on efficient computation and effective hyperparameter selection. Cross-validation techniques are employed to select the optimal hyperparameters. Numerical results are presented to illustrate the performance of the proposed procedure.
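
The penalized-likelihood estimation with cross-validated hyperparameter selection described above can be sketched as follows. This is a minimal sketch of plain kernel logistic regression, not the authors' semiparametric mixed-effect implementation, and all function names are illustrative:

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gram matrix of the Gaussian (RBF) kernel
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_kernel_logistic(X, y, lam=1.0, gamma=1.0, n_iter=30):
    # Penalized likelihood in the dual (kernel trick):
    # minimize  sum_i [log(1 + e^{f_i}) - y_i f_i] + (lam/2) a' K a,  f = K a,
    # by damped Newton-Raphson.
    K = rbf_kernel(X, X, gamma)
    y = np.asarray(y, float)

    def obj(a):
        f = K @ a
        return np.logaddexp(0.0, f).sum() - y @ f + 0.5 * lam * a @ (K @ a)

    a = np.zeros(len(y))
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-np.clip(K @ a, -30, 30)))
        w = p * (1.0 - p)
        # Newton direction: (W K + lam I) delta = p - y + lam a
        delta = np.linalg.solve(w[:, None] * K + lam * np.eye(len(y)),
                                p - y + lam * a)
        t = 1.0
        while obj(a - t * delta) > obj(a) and t > 1e-8:  # backtracking step
            t *= 0.5
        a -= t * delta
    return a

def predict_kernel_logistic(a, Xtrain, Xnew, gamma=1.0):
    return (rbf_kernel(Xnew, Xtrain, gamma) @ a > 0).astype(int)

def cv_select_lambda(X, y, lams, gamma=1.0, n_folds=5, seed=0):
    # Cross-validated choice of the ridge penalty, as in the
    # hyperparameter selection step of the abstract.
    folds = np.array_split(np.random.default_rng(seed).permutation(len(y)),
                           n_folds)
    def cv_acc(lam):
        accs = []
        for k in range(n_folds):
            tr = np.concatenate([folds[j] for j in range(n_folds) if j != k])
            a = fit_kernel_logistic(X[tr], y[tr], lam, gamma)
            accs.append(np.mean(
                predict_kernel_logistic(a, X[tr], X[folds[k]], gamma)
                == y[folds[k]]))
        return np.mean(accs)
    return max(lams, key=cv_acc)
```

The backtracking line search simply guards the Newton step so the penalized objective decreases monotonically.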

A Comparison of the Reliability Estimation Accuracy between Bayesian Methods and Classical Methods Based on Weibull Distribution (와이블분포 하에서 베이지안 기법과 전통적 기법 간의 신뢰도 추정 정확도 비교)

  • Cho, HyungJun;Lim, JunHyoung;Kim, YongSoo
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.42 no.4
    • /
    • pp.256-262
    • /
    • 2016
  • The Weibull distribution is widely used in reliability analysis, and several studies have attempted to improve estimation of its parameters. Least squares estimation (LSE) and maximum likelihood estimation (MLE) are often used to estimate distribution parameters; however, Bayesian methods have been shown to be more suitable for small sample sizes. In this work, the Weibull parameter estimation accuracies of LSE, MLE, and a Bayesian method are compared for sample sets with 3 to 30 data points. The Bayesian method was most accurate for sample sizes under 25, and its accuracy approached that of LSE and MLE as the sample size increased.
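
For reference, the two classical estimators compared in the paper can be sketched for the two-parameter Weibull. This is a hedged illustration (the Bayesian competitor requires a prior specification and is omitted, and function names are mine):

```python
import numpy as np
from scipy import stats

def weibull_mle(x):
    # Maximum likelihood fit of the two-parameter Weibull (location fixed at 0)
    shape, _, scale = stats.weibull_min.fit(x, floc=0)
    return shape, scale

def weibull_lse(x):
    # Median-rank (least squares) fit on the linearized CDF:
    #   ln(-ln(1 - F)) = k * ln(x) - k * ln(scale)
    x = np.sort(np.asarray(x, float))
    n = len(x)
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)  # Bernard's approximation
    yv = np.log(-np.log(1.0 - F))
    k, b = np.polyfit(np.log(x), yv, 1)          # slope = shape
    return k, np.exp(-b / k)
```

Running both estimators on repeated small samples (3 to 30 points, as in the paper) and comparing bias/MSE against the true parameters reproduces the kind of accuracy comparison described above.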

Comparison of Parameter Estimation Methods in A Kappa Distribution

  • Park Jeong-Soo;Hwang Young-A
    • Communications for Statistical Applications and Methods
    • /
    • v.12 no.2
    • /
    • pp.285-294
    • /
    • 2005
  • This paper compares parameter estimation methods for the 3-parameter Kappa distribution, which is sometimes used in flood frequency analysis. Method of moments estimation (MME), L-moment estimation (L-ME), and maximum likelihood estimation (MLE) are applied to estimate the three parameters, and the performance of these methods is compared by Monte Carlo simulation. In particular, for computing MME and L-ME, the three-dimensional nonlinear equations are reduced to a one-dimensional equation, which is solved by Newton-Raphson iteration under a constraint. Based on the mean squared error criterion, L-ME (or MME) is recommended for small sample sizes (n ≤ 100), while MLE is preferable for large sample sizes.
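
The computational trick of collapsing the estimating equations to a one-dimensional root-finding problem solved by Newton-Raphson under a constraint can be illustrated generically. This is a hypothetical sketch of the numerical device, not the Kappa-specific equations:

```python
def newton_constrained(g, dg, x0, lo, hi, tol=1e-10, max_iter=100):
    # 1-D Newton-Raphson with a box constraint: any step that would
    # leave (lo, hi) is halved until the iterate is feasible again.
    x = x0
    for _ in range(max_iter):
        step = g(x) / dg(x)
        xn = x - step
        while not (lo < xn < hi):   # pull the overshooting step back
            step *= 0.5
            xn = x - step
        if abs(xn - x) < tol:
            return xn
        x = xn
    return x
```

In the paper's setting, g would be the one-dimensional equation left after substituting out two of the three Kappa parameters, and (lo, hi) the feasibility constraint on the remaining one.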

Performance Analysis of ICI reduction in OFDM system (OFDM시스템에서 ICI 감소 기술의 성능해석)

  • Jang, Eun-Young;Byon, Kun-Sik
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.6
    • /
    • pp.1150-1155
    • /
    • 2007
  • Orthogonal Frequency Division Multiplexing (OFDM) is an emerging multi-carrier modulation scheme, which has been adopted for several wireless standards such as IEEE 802.11a and HiperLAN2. A well-known problem of OFDM is its sensitivity to frequency offset between the transmitted and received carrier frequencies. This frequency offset introduces inter-carrier interference (ICI) in the OFDM symbol. This paper investigates three methods for combating the effects of ICI: ICI self-cancellation (SC), maximum likelihood (ML) estimation, and extended Kalman filter (EKF) method. These three methods are compared in terms of bit error rate performance.
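
The first of the three methods, ICI self-cancellation, can be sketched in a toy baseband simulation: each data symbol is mapped onto an adjacent subcarrier pair as (d, -d), and the receiver differences each pair so that adjacent-carrier interference largely cancels. This is an illustrative BPSK sketch with no noise or channel, and the function names are mine:

```python
import numpy as np

def ofdm_sc_tx(bits, n_sub):
    # ICI self-cancellation mapping: symbol d occupies an adjacent
    # subcarrier pair as (d, -d), halving spectral efficiency.
    d = 2.0 * np.asarray(bits) - 1.0          # BPSK
    X = np.zeros(n_sub, dtype=complex)
    X[0:2 * len(d):2] = d
    X[1:2 * len(d):2] = -d
    return np.fft.ifft(X) * np.sqrt(n_sub)

def apply_cfo(x, eps):
    # Carrier frequency offset of eps (fraction of the subcarrier spacing)
    n = np.arange(len(x))
    return x * np.exp(2j * np.pi * eps * n / len(x))

def ofdm_sc_rx(y, n_bits):
    Y = np.fft.fft(y) / np.sqrt(len(y))
    # differencing each pair cancels most adjacent-carrier ICI terms
    z = Y[0:2 * n_bits:2] - Y[1:2 * n_bits:2]
    return (z.real > 0).astype(int)
```

With a moderate normalized offset (e.g. eps = 0.1) the differenced decision variable keeps a large real part on the correct sign, which is the self-cancellation effect the paper benchmarks against ML estimation and the EKF.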

Linear regression under log-concave and Gaussian scale mixture errors: comparative study

  • Kim, Sunyul;Seo, Byungtae
    • Communications for Statistical Applications and Methods
    • /
    • v.25 no.6
    • /
    • pp.633-645
    • /
    • 2018
  • Gaussian error distributions are a common choice in traditional regression models fitted by the maximum likelihood (ML) method. However, this distributional assumption is often questionable, especially when the error distribution is skewed or heavy-tailed; in both cases, the ML method under normality can break down or lose efficiency. In this paper, we consider log-concave and Gaussian scale mixture distributions for the errors. For log-concave errors, we propose a smoothed maximum likelihood estimator for stable and faster computation. Based on this, we perform comparative simulation studies of the coefficient estimates under normal, Gaussian scale mixture, and log-concave errors. In addition, we present real data analyses using the stack loss plant data and the Korean labor and income panel data.
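
As a simple illustration of the Gaussian scale mixture idea, ML regression under Student-t errors (a particular Gaussian scale mixture) can be set against Gaussian ML, i.e. ordinary least squares. This is a sketch under assumed t errors, not the paper's smoothed log-concave estimator:

```python
import numpy as np
from scipy import optimize, stats

def ols(X, y):
    # Gaussian ML for the coefficients = ordinary least squares
    return np.linalg.lstsq(X, y, rcond=None)[0]

def t_mle(X, y, df=3.0):
    # ML regression under Student-t errors (a Gaussian scale mixture),
    # which downweights heavy-tailed outliers relative to Gaussian ML.
    def nll(par):
        beta, log_s = par[:-1], par[-1]
        r = y - X @ beta
        return -stats.t.logpdf(r, df, scale=np.exp(log_s)).sum()
    beta0 = np.append(ols(X, y), 0.0)          # start from the OLS fit
    res = optimize.minimize(nll, beta0, method="Nelder-Mead",
                            options={"maxiter": 5000,
                                     "xatol": 1e-8, "fatol": 1e-8})
    return res.x[:-1]
```

When the simulated errors are heavy-tailed, the t-based ML coefficients are typically much more stable across replications than OLS, which mirrors the efficiency loss of Gaussian ML that the abstract describes.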

Revisiting a Gravity Model of Immigration: A Panel Data Analysis of Economic Determinants

  • Kim, Kyunghun
    • East Asian Economic Review
    • /
    • v.26 no.2
    • /
    • pp.143-169
    • /
    • 2022
  • This study investigates the effect of economic factors on immigration using the gravity model of immigration. Cross-sectional regression and panel data analyses are conducted for 2000 to 2019 using the OECD International Migration Database, which covers 36 destination countries and 201 countries of origin. The Poisson pseudo-maximum-likelihood method, which can effectively correct potentially biased estimates caused by zeros in the immigration data, is used for estimation. The results indicate that the effect of economic factors strengthened after the global financial crisis, and that this effect varies with the type of immigration (the income level of the origin country). The gravity model applied to immigration performs reasonably well, but country-specific and time-varying characteristics need to be taken into account.
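
The Poisson pseudo-maximum-likelihood estimator can be sketched as a plain IRLS fit of E[y|x] = exp(x'b), which remains well defined even when many migration flows are zero. This is an illustrative implementation of the estimator itself, not tied to the paper's dataset:

```python
import numpy as np

def ppml(X, y, n_iter=50, tol=1e-10):
    # Poisson pseudo-maximum-likelihood: E[y|X] = exp(X b),
    # fitted by iteratively reweighted least squares (IRLS).
    # Zeros in y pose no problem, unlike log-linearized OLS.
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ b)
        z = X @ b + (y - mu) / mu              # working response
        bn = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
        if np.max(np.abs(bn - b)) < tol:
            return bn
        b = bn
    return b
```

In a gravity application the columns of X would hold log GDPs, log distance, and the other determinants; the sketch simply shows why PPML handles the zero flows that break a log-linear specification.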

Maximum Likelihood Estimator of the Segregation Parameter $\theta$ under Multiple Ascertainment with Known $\pi$ (Multiple Ascertainment $\pi$가 존재할 때 분리확률모수 $\theta$치의 우도추정치로서 통계모형의 구성과 유전병에 감염된 출생아의 예측)

  • Shin, Han Poong
    • Journal of the Korean Statistical Society
    • /
    • v.6 no.2
    • /
    • pp.167-177
    • /
    • 1977
  • Using a likelihood estimator to predict the probability that a child born into a family with a hereditary disease carries the genetic disorder plays an important role in segregation analysis. Elston and Stewart (1971) established a general statistical model for this type of analysis, and the author (1974) and Morton et al. (1974) proposed four statistical models for the analysis of complex segregation. The purposes of this study are, first, to derive the likelihood estimator of the segregation parameter $\theta$ when multiple ascertainment $\pi$ is present and, second, to examine the theoretical background of the oligogenic case.
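
As a simplified illustration of ascertainment-corrected segregation analysis, the MLE of $\theta$ under a truncated-binomial likelihood (families observed only because they contain at least one affected child) can be sketched as follows. This is a textbook-style stand-in under complete ascertainment, not the paper's multiple-ascertainment model with known $\pi$:

```python
import numpy as np
from scipy import optimize

def segregation_mle(r, s):
    # MLE of theta from families with s children, r of them affected,
    # ascertained through at least one affected child. Per-family
    # likelihood: C(s,r) theta^r (1-theta)^(s-r) / (1 - (1-theta)^s),
    # where the denominator corrects for the ascertainment truncation.
    r = np.asarray(r, float)
    s = np.asarray(s, float)
    def nll(th):
        log_trunc = np.log(-np.expm1(s * np.log1p(-th)))  # log(1-(1-th)^s)
        return -(r * np.log(th) + (s - r) * np.log1p(-th) - log_trunc).sum()
    res = optimize.minimize_scalar(nll, bounds=(1e-6, 1 - 1e-6),
                                   method="bounded")
    return res.x
```

Without the truncation correction, the naive binomial MLE r̄/s̄ would overestimate the segregation ratio, since families with no affected children are never observed.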

Maximum product of spacings under a generalized Type-II progressive hybrid censoring scheme

  • Jeon, Young Eun;Kang, Suk-Bok;Seo, Jung-In
    • Communications for Statistical Applications and Methods
    • /
    • v.29 no.6
    • /
    • pp.665-677
    • /
    • 2022
  • This paper proposes a new estimation method based on the maximum product of spacings for estimating the unknown parameters of the three-parameter Weibull distribution under a generalized Type-II progressive hybrid censoring scheme, which guarantees a constant number of observations and an appropriate experiment duration. The proposed approach is suitable for situations where maximum likelihood estimation is invalid, especially when the shape parameter is less than unity. Monte Carlo simulation shows that it also performs better in terms of bias, and this superiority holds even under conditions where maximum likelihood estimation satisfies the classical asymptotic properties. Finally, to illustrate the practical application of the proposed approach, a real data analysis is conducted, and the superiority of the proposed method is demonstrated through a simple goodness-of-fit test.
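
The maximum product of spacings principle, choosing parameters that make the ordered CDF spacings as uniform as possible, can be sketched for the two-parameter Weibull. This is a simplified stand-in for the paper's three-parameter, progressively censored setting, with illustrative function names:

```python
import numpy as np
from scipy import optimize, stats

def weibull_mps(x):
    # Maximum product of spacings: maximize sum_i log(F(x_(i)) - F(x_(i-1)))
    # with F(x_(0)) = 0 and F(x_(n+1)) = 1. Unlike the MLE, this stays
    # well behaved for shape parameters below unity.
    x = np.sort(np.asarray(x, float))

    def neg_log_spacings(par):
        k, s = np.exp(par)                       # optimize on the log scale
        F = stats.weibull_min.cdf(x, k, scale=s)
        sp = np.diff(np.concatenate([[0.0], F, [1.0]]))
        if np.any(sp <= 0):                      # degenerate spacing: reject
            return np.inf
        return -np.log(sp).sum()

    res = optimize.minimize(neg_log_spacings, np.log([1.0, x.mean()]),
                            method="Nelder-Mead")
    return np.exp(res.x)
```

The log-scale parameterization keeps both parameters positive without explicit constraints, and the spacing guard rejects parameter values whose CDF collapses numerically.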

Genetic classification of various familial relationships using the stacking ensemble machine learning approaches

  • Su Jin Jeong;Hyo-Jung Lee;Soong Deok Lee;Ji Eun Park;Jae Won Lee
    • Communications for Statistical Applications and Methods
    • /
    • v.31 no.3
    • /
    • pp.279-289
    • /
    • 2024
  • Familial searching is a useful technique in forensic investigation. Using genetic information, it is possible to identify individuals, determine familial relationships, and obtain racial/ethnic information. The total number of shared alleles (TNSA) and likelihood ratio (LR) methods have traditionally been used, and data-mining classification methods have recently been applied as well. However, these methods have difficulty identifying familial relationships beyond the third degree (e.g., uncle-nephew and first cousins). We therefore propose a stacking ensemble machine learning algorithm to improve the accuracy of familial relationship identification. In a real data analysis, meta-classifiers with the stacking algorithm yield better identification results than the traditional TNSA and LR methods and existing data-mining techniques.
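
A stacking ensemble of the kind described, base classifiers combined by a meta-classifier fitted on cross-validated predictions, can be sketched with scikit-learn (assuming scikit-learn is available; the synthetic features below merely stand in for per-locus shared-allele counts and likelihood ratios):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical stand-in data: in the paper these features would be
# TNSA- and LR-type summaries per pair of individuals.
X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(),   # the meta-classifier
    cv=5)                                   # out-of-fold base predictions
stack.fit(Xtr, ytr)
acc = stack.score(Xte, yte)
```

The `cv=5` argument is what makes this stacking rather than naive blending: the meta-classifier is trained on out-of-fold base-model predictions, which limits leakage from the base learners.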

A Longitudinal Analysis of Changing Beliefs about IT Use among Educatees Based on the Elaboration Likelihood Model (정교화 가능성 모형에 의한 IT 피교육자 신용 믿음 변화의 종단분석)

  • Lee, Woong-Kyu
    • Asia pacific journal of information systems
    • /
    • v.18 no.3
    • /
    • pp.147-165
    • /
    • 2008
  • IT education can be summarized as persuading the educatee to accept IT. Persuasion is carried out by delivering messages about how to use and where to use IT, which leads to the formation of a belief structure for using IT. Therefore, message-based persuasion theory, as well as IT acceptance theories such as the technology acceptance model (TAM), plays a very important role in explaining IT education. According to the elaboration likelihood model (ELM), one of the most influential persuasion theories, people change attitudes or perceptions through two routes: the central route and the peripheral route. In the central route, people think critically about issue-related arguments in an informational message; in the peripheral route, they rely on cues about the target behavior with less cognitive effort. Moreover, the persuasion process is not a one-shot event but a continuous repetition with feedback, which leads to changes in the belief structure for using IT. An educatee gains more knowledge and experience of using IT as an education program proceeds, and comes to depend more on the central route than on the peripheral route. Such change reformulates the belief structure into one different from the initial one. The objectives of this study are twofold. First, we identify the relationship between ELM and belief structures for using IT. In particular, we analyze the effects of message interpretation through both the central and peripheral routes on perceived usefulness, an important explanatory variable in TAM, and on perceived use control, which has perceived ease of use and perceived controllability as sub-dimensions. Second, we analyze these effects longitudinally; that is, we examine how the relationship between the interpretation of messages delivered by IT education and beliefs about IT use changes over time. To achieve these objectives, we propose a three-layered research model.
The first layer contains the dependent variable, use intention; the second contains perceived usefulness and perceived use control with its two sub-concepts, perceived ease of use and perceived controllability; and the third contains source credibility and argument quality, which operationalize the peripheral and central routes of ELM respectively. From these variables we derive five hypotheses, plus two additional hypotheses on the moderating effect of time in the relationships between the two routes and perceived usefulness: source credibility's influence on perceived usefulness decreases over time, while argument quality's influence increases. The research model is then tested empirically. Using measurements validated in other studies, we surveyed students in an Excel class twice for the longitudinal analysis. Data analysis was performed with partial least squares (PLS), an appropriate approach for multi-group comparison with a small sample such as ours. As a result, all hypotheses are statistically supported. One theoretical contribution of this study is the analysis of IT education based on ELM and TAM, which are important theories in psychology and information systems research respectively. The longitudinal analysis comparing two surveys with PLS is also a methodological contribution. In practice, the finding that the peripheral route is important in the early stage of IT education is notable.