• Title/Summary/Keyword: Mixture of Normals

James-Stein Type Estimators Shrinking towards Projection Vector When the Norm is Restricted to an Interval

  • Baek, Hoh Yoo; Park, Su Hyang
    • Journal of Integrative Natural Science, v.10 no.1, pp.33-39, 2017
  • Consider the problem of estimating a $p \times 1$ mean vector $\theta$ ($p - q \geq 3$, $q = \mathrm{rank}(P_V)$) under quadratic loss, based on a sample $X_1, X_2, \cdots, X_n$, where $P_V$ is an idempotent projection matrix. We find a James-Stein type decision rule which shrinks towards the projection vector when the underlying distribution is a variance mixture of normals and when the norm $\|\theta - P_V\theta\|$ is restricted to a known interval. In this case, we characterize a minimal complete class within the class of James-Stein type decision rules. We also characterize the subclass of James-Stein type decision rules that dominate the sample mean.
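
A minimal sketch, in Python, of the kind of James-Stein type rule the abstract describes: the sample mean is shrunk towards its projection $P_V\bar{X}$. The shrinkage constant $(p-q-2)/n$ is an illustrative normal-theory choice and the function name js_towards_projection is hypothetical; the paper's variance-mixture assumptions and norm-restricted refinements are not reproduced here.

```python
import numpy as np

def js_towards_projection(X, P_V):
    """Shrink the sample mean of X towards its projection P_V @ xbar.

    X is an (n, p) sample and P_V an idempotent projection matrix of rank q
    with p - q >= 3.  The constant (p - q - 2) / n is an illustrative
    normal-theory choice, not the paper's norm-restricted rule.
    """
    n, p = X.shape
    q = int(round(np.trace(P_V)))      # rank of an idempotent matrix equals its trace
    xbar = X.mean(axis=0)              # sample mean vector
    resid = xbar - P_V @ xbar          # component orthogonal to the projection target
    c = (p - q - 2) / n                # illustrative shrinkage constant
    shrink = 1.0 - c / np.dot(resid, resid)
    return P_V @ xbar + shrink * resid

# Example: shrink towards the subspace spanned by the all-ones vector (q = 1).
rng = np.random.default_rng(0)
p, n = 8, 20
ones = np.ones((p, 1))
P_V = ones @ ones.T / p                # projection onto span{1}
X = rng.normal(loc=2.0, scale=1.0, size=(n, p))
print(js_towards_projection(X, P_V))
```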

Optimal Estimation within Class of James-Stein Type Decision Rules on the Known Norm

  • Baek, Hoh Yoo
    • Journal of Integrative Natural Science, v.5 no.3, pp.186-189, 2012
  • For the mean vector of a $p$-variate normal distribution ($p \geq 3$), the optimal estimator within the class of James-Stein type decision rules under quadratic loss is given when the underlying distribution is a variance mixture of normals and the norm $\|\underline{\theta}\|$ is known. The optimal estimator within the class of Lindley type decision rules under the same loss is also given when the underlying distribution is of the same type and the norm $\|\theta - \overline{\theta}\underline{1}\|$, with $\overline{\theta} = \frac{1}{p}\sum_{i=1}^{p}\theta_i$ and $\underline{1} = (1, \cdots, 1)^{\prime}$, is known.
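
Read as a special case of the projection rule sketched above, the Lindley type decision rule shrinks the sample mean towards its grand-mean vector, i.e. $P_V = \frac{1}{p}\underline{1}\,\underline{1}^{\prime}$ and $q = \mathrm{rank}(P_V) = 1$. A hedged sketch of the generic form, with an unspecified shrinkage constant $c$ (chosen optimally in the paper when the norm is known), is

$$\delta(\bar{X}) = \bar{X}_{\cdot}\,\underline{1} + \left(1 - \frac{c}{\|\bar{X} - \bar{X}_{\cdot}\underline{1}\|^{2}}\right)(\bar{X} - \bar{X}_{\cdot}\underline{1}), \qquad \bar{X}_{\cdot} = \frac{1}{p}\sum_{i=1}^{p}\bar{X}_{i},$$

where $\bar{X}$ is the sample mean vector.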

Comparison of Laplace and Double Pareto Penalty: LASSO and Elastic Net (라플라스와 이중 파레토 벌점의 비교: LASSO와 Elastic Net)

  • Kyung, Minjung
    • The Korean Journal of Applied Statistics, v.27 no.6, pp.975-989, 2014
  • Lasso (Tibshirani, 1996) and Elastic Net (Zou and Hastie, 2005) have been widely used in various fields for simultaneous variable selection and coefficient estimation. Bayesian methods using a conditional Laplace prior and a double Pareto prior have been discussed in the form of hierarchical specifications, and the full conditional posterior distributions under each prior have been derived. We compare the performance of the Bayesian lasso under the Laplace prior with its performance under the double Pareto prior using simulations. We also apply the proposed Bayesian hierarchical models to real data sets to predict the collapse of governments in Asia.
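
The paper studies the Bayesian hierarchical (Laplace and double Pareto prior) formulations; the sketch below only illustrates the corresponding penalized estimators with scikit-learn, using illustrative penalty weights (alpha, l1_ratio) and simulated data that are not from the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet

# Simulate a sparse regression problem.
rng = np.random.default_rng(42)
n, p = 100, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [3.0, -2.0, 1.5, 0.0, 1.0]          # sparse true coefficients
y = X @ beta + rng.normal(scale=0.5, size=n)

# Lasso: L1 penalty only; Elastic Net: mixture of L1 and L2 penalties.
lasso = Lasso(alpha=0.1).fit(X, y)
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)

print("Lasso nonzero coefficients:      ", np.flatnonzero(lasso.coef_))
print("Elastic Net nonzero coefficients:", np.flatnonzero(enet.coef_))
```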