• Title/Abstract/Keyword: jackknife resampling

Search results: 8

Comparison of EM with Jackknife Standard Errors and Multiple Imputation Standard Errors

  • Kang, Shin-Soo
    • Journal of the Korean Data and Information Science Society / Vol. 16, No. 4 / pp. 1079-1086 / 2005
  • Most discussions of single imputation methods and the EM algorithm concern point estimation of population quantities in the presence of missing values. A second concern is how to obtain standard errors for the point estimates computed from data filled in by single imputation methods or the EM algorithm. The focus here is on estimating standard errors that incorporate the additional uncertainty due to nonresponse. Two general approaches to accounting for this uncertainty are considered: one is the jackknife, a resampling method; the other is multiple imputation (MI). The two approaches are reviewed and compared through simulation studies.

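Since the abstract above contrasts jackknife standard errors with multiple-imputation standard errors, a minimal Python sketch of the delete-one jackknife standard error under single (mean) imputation may be helpful; the simulated data, missingness rate, and function names are illustrative assumptions, not the paper's EM-based setup.

```python
import numpy as np

def jackknife_se(data, estimator):
    """Delete-one jackknife standard error of a scalar estimator."""
    n = len(data)
    # Leave-one-out replicates of the estimator
    reps = np.array([estimator(np.delete(data, i)) for i in range(n)])
    # Jackknife variance: (n-1)/n * sum((theta_i - theta_bar)^2)
    return np.sqrt((n - 1) / n * np.sum((reps - reps.mean()) ** 2))

rng = np.random.default_rng(0)
y = rng.normal(10.0, 2.0, size=100)          # illustrative data, not the paper's
y[rng.random(100) < 0.2] = np.nan            # ~20% missing completely at random

observed = y[~np.isnan(y)]
filled = np.where(np.isnan(y), observed.mean(), y)   # single (mean) imputation

# The naive SE treats imputed values as real observations; jackknifing over
# the observed cases reflects the extra nonresponse uncertainty.
print("naive SE    :", filled.std(ddof=1) / np.sqrt(len(filled)))
print("jackknife SE:", jackknife_se(observed, np.mean))
```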

Resampling-based Test of Hypothesis in L1-Regression

  • Kim, Bu-Yong
    • Communications for Statistical Applications and Methods / Vol. 11, No. 3 / pp. 643-655 / 2004
  • The L$_1$-estimator in the linear regression model is widely recognized for its superior robustness in the presence of vertical outliers. While L$_1$-estimation procedures and algorithms have been developed quite well, less progress has been made on hypothesis testing in multiple L$_1$-regression. This article suggests computer-intensive resampling approaches, the jackknife and bootstrap methods, for estimating the variance of the L$_1$-estimator and the scale parameter that are required to compute the test statistics. Monte Carlo simulation studies are performed to measure the power of the tests in small samples. The results indicate that the bootstrap estimation method is the most powerful when employed with the likelihood ratio test.
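
As a rough illustration of the resampling variance estimates the abstract describes, the sketch below fits an L$_1$ (least absolute deviations) regression by direct minimization and estimates the coefficient variances with both a pairs bootstrap and a delete-one jackknife; the simulated data, optimizer choice, and replication counts are assumptions for illustration, not the article's algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def lad_fit(X, y):
    """Least absolute deviations (L1) fit by direct minimization."""
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]      # OLS starting point
    obj = lambda b: np.sum(np.abs(y - X @ b))         # L1 criterion
    return minimize(obj, beta0, method="Nelder-Mead").x

rng = np.random.default_rng(1)
n = 60
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=3, size=n)  # heavy-tailed errors

beta_hat = lad_fit(X, y)

# Pairs bootstrap of the coefficient variances
B = 500
boot = np.empty((B, X.shape[1]))
for b in range(B):
    idx = rng.integers(0, n, size=n)
    boot[b] = lad_fit(X[idx], y[idx])
var_boot = boot.var(axis=0, ddof=1)

# Delete-one jackknife alternative
jack = np.array([lad_fit(np.delete(X, i, axis=0), np.delete(y, i))
                 for i in range(n)])
var_jack = (n - 1) / n * np.sum((jack - jack.mean(axis=0)) ** 2, axis=0)

print("LAD fit:", beta_hat)
print("bootstrap var:", var_boot, " jackknife var:", var_jack)
```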

A comparative study of the Gini coefficient estimators based on the regression approach

  • Mirzaei, Shahryar;Borzadaran, Gholam Reza Mohtashami;Amini, Mohammad;Jabbari, Hadi
    • Communications for Statistical Applications and Methods / Vol. 24, No. 4 / pp. 339-351 / 2017
  • Resampling approaches were the first techniques employed to compute a variance for the Gini coefficient; however, many authors have shown that the Gini coefficient and its corresponding variance can also be obtained from a regression model. Despite the simplicity of the regression approach for computing a standard error of the Gini coefficient, the use of the proposed regression model has been challenging in economics. This paper therefore presents a comparative study between the regression approach and resampling techniques. The regression method is shown to overestimate the standard error of the Gini index. Simulations show that the Gini estimator based on the modified regression model is also consistent and asymptotically normal, with less divergence from normality than the resampling techniques.
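
For context on the resampling side of this comparison, here is a minimal sketch of a sample Gini coefficient with a delete-one jackknife standard error; the rank-based Gini formula and the lognormal test data are standard illustrations, not the paper's modified regression model.

```python
import numpy as np

def gini(x):
    """Sample Gini coefficient: G = 2*sum(i*x_(i))/(n*sum(x)) - (n+1)/n."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * x) / (n * x.sum()) - (n + 1.0) / n

def jackknife_se(x, stat):
    """Delete-one jackknife standard error of a scalar statistic."""
    n = len(x)
    reps = np.array([stat(np.delete(x, i)) for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((reps - reps.mean()) ** 2))

rng = np.random.default_rng(2)
income = rng.lognormal(mean=10.0, sigma=0.8, size=200)   # illustrative incomes
print("Gini:", gini(income), " jackknife SE:", jackknife_se(income, gini))
```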

Jensen's Alpha Estimation Models in Capital Asset Pricing Model

  • Phuoc, Le Tan
    • The Journal of Asian Finance, Economics and Business / Vol. 5, No. 3 / pp. 19-29 / 2018
  • This research examined alternatives to Jensen's alpha (α) estimation models in the Capital Asset Pricing Model, discussed by Treynor (1961), Sharpe (1964), and Lintner (1965), using the robust maximum-likelihood-type M-estimator (MM estimator) and a Bayes estimator with a conjugate prior. In the finance literature and in practice, alpha has usually been estimated with the ordinary least square (OLS) regression method on monthly return data. A sample of 50 securities was randomly selected from the S&P 500 index, and their daily and monthly returns were collected over the last five years. This research showed that the robust MM estimator performed much better than the OLS and Bayes estimators in terms of efficiency; the Bayes estimator did not outperform the OLS estimator as expected. Interestingly, daily return data gave more accurate alpha estimates than monthly return data for all three estimators (MM, OLS, and Bayes). An alternative market-efficiency test with the hypothesis H0: α = 0 was also proposed, showing that the S&P 500 index is efficient, though not perfectly so. Importantly, these findings were checked against and validated by jackknife resampling results.
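
As a hedged illustration of the jackknife validation step the abstract mentions, the sketch below estimates Jensen's alpha as the OLS intercept of excess asset returns on excess market returns and jackknifes it; the simulated return series, sample length, and true alpha of zero are assumptions, and the paper's MM and Bayes estimators are not reproduced here.

```python
import numpy as np

def jensens_alpha(r_asset, r_market, r_free):
    """OLS intercept of excess asset returns on excess market returns."""
    y = r_asset - r_free
    x = r_market - r_free
    X = np.column_stack([np.ones_like(x), x])
    coef = np.linalg.lstsq(X, y, rcond=None)[0]
    return coef[0]                               # alpha = intercept

rng = np.random.default_rng(3)
T = 1250                                         # ~5 years of daily returns
rm = rng.normal(0.0004, 0.01, size=T)            # market returns (illustrative)
rf = np.full(T, 0.0001)                          # risk-free rate
ra = rf + 1.1 * (rm - rf) + rng.normal(0.0, 0.008, size=T)   # true alpha = 0

alpha_hat = jensens_alpha(ra, rm, rf)

# Delete-one jackknife SE of alpha, as a check on the point estimate
reps = np.array([jensens_alpha(np.delete(ra, i), np.delete(rm, i),
                               np.delete(rf, i)) for i in range(T)])
se = np.sqrt((T - 1) / T * np.sum((reps - reps.mean()) ** 2))
print(f"alpha = {alpha_hat:.6f}, jackknife SE = {se:.6f}")
```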

Bootstrapping Regression Residuals

  • Imon, A.H.M. Rahmatullah;Ali, M. Masoom
    • Journal of the Korean Data and Information Science Society / Vol. 16, No. 3 / pp. 665-682 / 2005
  • The sample reuse bootstrap technique has attracted both applied and theoretical statisticians since its origination. In recent years a good deal of attention has been focused on applications of bootstrap methods in regression analysis. These computationally simple yet accurate methods depend heavily on high-speed computers and warrant rigorous mathematical justification of their validity. It is now evident that the presence of multiple unusual observations can do a great deal of damage to inferential procedures, and we suspect that bootstrap methods are not free from this problem. We first present a few examples in support of this suspicion and then propose a new diagnostic-before-bootstrap method for regression. The usefulness of the proposed method is investigated through a few well-known examples and a Monte Carlo simulation under a variety of error and leverage structures.

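Below is a minimal sketch of the residual bootstrap for regression coefficients, the baseline procedure the abstract's diagnostic-before-bootstrap method would precede with an outlier and leverage screen; the simulated design and replication count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([2.0, -1.0])
y = X @ beta_true + rng.normal(0.0, 1.0, size=n)

# Fit OLS and keep the fitted values and residuals
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
fitted = X @ beta_hat
resid = y - fitted

# Residual bootstrap: resample residuals, rebuild responses, refit
B = 1000
boot = np.empty((B, 2))
for b in range(B):
    y_star = fitted + rng.choice(resid, size=n, replace=True)
    boot[b] = np.linalg.lstsq(X, y_star, rcond=None)[0]

print("bootstrap SEs:", boot.std(axis=0, ddof=1))
```

If unusual observations contaminate the residuals, every bootstrap sample inherits them, which is the failure mode the paper's diagnostic screen is designed to catch before resampling begins.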

Application of Jackknife Method for Determination of Representative Probability Distribution of Annual Maximum Rainfall

  • 이재준;이상원;곽창재
    • Journal of Korea Water Resources Association / Vol. 42, No. 10 / pp. 857-866 / 2009
  • In this study, annual maximum rainfall data from 56 rain gauge stations of the Korea Meteorological Administration, each with more than 30 years of records nationwide, were used to estimate the parameters of candidate probability distributions by the method of moments, the maximum likelihood method, and the probability weighted moments method, and the adequacy of the estimated parameters and of the range of the random variable was examined. For distributions with adequate parameters, four goodness-of-fit tests were applied: the $\chi^2$-test, the Kolmogorov-Smirnov (K-S) test, the Cramér-von Mises (CVM) test, and the Probability Plot Correlation Coefficient (PPCC) test. Priority was given to distributions whose parameters were estimated by the probability weighted moments method, which is widely used in recent studies and yields relatively stable results even for small or skewed samples, and which passed the correlation-based PPCC test; these were selected for goodness-of-fit evaluation. The selected distributions were then evaluated with three goodness-of-fit criteria, SLSC, MLL, and AIC, to extract a set of candidate representative probability distributions. The jackknife resampling method was applied to the candidates to quantify the variability of their quantile estimates, and the distribution with the smallest variability was chosen as the representative probability distribution for each station. Because of the volume of results, this paper presents details only for the Seoul, Gangneung, Daegu, Jeonju, and Busan stations; the distribution with the smallest variability of the rainfall quantiles was presented as the representative distribution for each of the 56 stations, and the Gumbel distribution (GUM) was selected most often, at rates of 41% and 32% for the 12-hour and 24-hour durations, respectively. The study thus demonstrates a new way of selecting a probability distribution using three objectively quantifiable goodness-of-fit criteria together with the jackknife method.
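
To illustrate the variability criterion described above, the sketch below fits a Gumbel (GUM) distribution to a synthetic annual-maximum series, computes a 100-year quantile, and measures its delete-one jackknife spread; the record length, Gumbel parameters, and return period are assumptions standing in for the 56-station data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Stand-in for an annual-maximum rainfall series (mm); real records would be used
annmax = stats.gumbel_r.rvs(loc=150.0, scale=40.0, size=40, random_state=rng)

def gumbel_quantile(x, T=100):
    """T-year quantile from a Gumbel (GUM) distribution fitted by MLE."""
    loc, scale = stats.gumbel_r.fit(x)
    return stats.gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)

q_hat = gumbel_quantile(annmax)

# Delete-one jackknife spread of the design quantile, used here as the
# variability measure for choosing among candidate distributions
n = len(annmax)
reps = np.array([gumbel_quantile(np.delete(annmax, i)) for i in range(n)])
se = np.sqrt((n - 1) / n * np.sum((reps - reps.mean()) ** 2))
print(f"100-yr quantile = {q_hat:.1f} mm, jackknife SE = {se:.1f} mm")
```

Repeating this for each candidate distribution and picking the one with the smallest jackknife spread mirrors the selection rule the abstract describes.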

Practice of causal inference with the propensity of being zero or one: assessing the effect of arbitrary cutoffs of propensity scores

  • Kang, Joseph;Chan, Wendy;Kim, Mi-Ok;Steiner, Peter M.
    • Communications for Statistical Applications and Methods / Vol. 23, No. 1 / pp. 1-20 / 2016
  • Causal inference methodologies have been developed over the past decade to estimate the unconfounded effect of an exposure under several key assumptions. These assumptions include, but are not limited to, the stable unit treatment value assumption, the strong ignorability of treatment assignment assumption, and the assumption that propensity scores be bounded away from zero and one (the positivity assumption). Of these assumptions, the first two have received much attention in the literature, yet the positivity assumption has been discussed in only a few recent papers. Propensity scores of zero or one indicate deterministic exposure, so causal effects cannot be defined for these subjects; they must therefore be removed because no comparable comparison group can be found for them. In this paper, using currently available causal inference methods, we evaluate the effect of arbitrary cutoffs in the distribution of propensity scores and the impact of those decisions on bias and efficiency. We propose a tree-based method that performs well in terms of bias reduction when the definition of positivity is based on a single confounder. This tree-based method can be easily implemented in the statistical software R. R code for the studies is available online.
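
The sketch below illustrates the sensitivity to arbitrary propensity-score cutoffs using a plain logistic propensity model and an inverse-probability-weighted (IPW) difference estimator; it is not the paper's tree-based method, and the single-confounder data-generating process and cutoff grid are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 2000
x = rng.normal(size=n)                        # single confounder
p_true = 1.0 / (1.0 + np.exp(-2.5 * x))       # some units have scores near 0 or 1
z = rng.binomial(1, p_true)                   # treatment indicator
y = 1.0 * z + 0.5 * x + rng.normal(size=n)    # true treatment effect = 1.0

# Estimated propensity scores from the confounder
ps = (LogisticRegression()
      .fit(x.reshape(-1, 1), z)
      .predict_proba(x.reshape(-1, 1))[:, 1])

def ipw_effect(y, z, ps, cutoff):
    """IPW effect estimate after trimming scores outside [cutoff, 1 - cutoff]."""
    keep = (ps > cutoff) & (ps < 1 - cutoff)
    yk, zk, pk = y[keep], z[keep], ps[keep]
    return np.mean(zk * yk / pk) - np.mean((1 - zk) * yk / (1 - pk))

# Sweep a few arbitrary cutoffs to see how the estimate moves
for c in (0.01, 0.05, 0.10):
    print(f"cutoff {c:.2f}: effect = {ipw_effect(y, z, ps, c):.3f}")
```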

Reexamination of Estimating Beta Coefficient as a Risk Measure in CAPM

  • Phuoc, Le Tan;Kim, Kee S.;Su, Yingcai
    • The Journal of Asian Finance, Economics and Business / Vol. 5, No. 1 / pp. 11-16 / 2018
This research examines alternative ways of estimating the coefficient of non-diversifiable risk, namely the beta coefficient, in the Capital Asset Pricing Model (CAPM) introduced by Sharpe (1964), an essential element of assessing the value of diverse assets. The non-parametric methods used are the robust Least Trimmed Square (LTS) and the maximum-likelihood-type M-estimator (MM-estimator), and the jackknife resampling technique is employed to validate the results. In the finance literature and in common practice, these coefficients have usually been estimated with the Ordinary Least Square (LS) regression method on monthly return data. The empirical results show that the robust LTS and MM estimators performed much better than LS in terms of efficiency for large-cap stocks trading actively in United States markets. Interestingly, the results also show that daily return data give more accurate estimates than monthly return data in the LS, LTS, and MM-estimator regressions alike.
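
Finally, a minimal sketch of the jackknife validation step for a CAPM beta, using the closed-form OLS slope rather than the paper's LTS or MM estimators; the simulated daily returns and true beta are illustrative assumptions.

```python
import numpy as np

def capm_beta(r_asset, r_market):
    """OLS slope of asset returns on market returns (CAPM beta)."""
    x = r_market - r_market.mean()
    return np.sum(x * (r_asset - r_asset.mean())) / np.sum(x ** 2)

rng = np.random.default_rng(7)
T = 504                                          # ~2 years of daily returns
rm = rng.normal(0.0004, 0.01, size=T)            # market returns (illustrative)
ra = 1.3 * rm + rng.normal(0.0, 0.012, size=T)   # true beta = 1.3

beta_hat = capm_beta(ra, rm)

# Delete-one jackknife to validate the point estimate, echoing the paper's use
reps = np.array([capm_beta(np.delete(ra, i), np.delete(rm, i))
                 for i in range(T)])
se = np.sqrt((T - 1) / T * np.sum((reps - reps.mean()) ** 2))
print(f"beta = {beta_hat:.3f}, jackknife SE = {se:.3f}")
```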