Empirical Bayes Problem With Random Sample Size Components

  • Published: 1991.06.01

Abstract

The empirical Bayes version involves independent repetitions (a sequence) of the component decision problem. Because the sample size may vary, these components are not identical. However, we impose the usual assumption that the parameter sequence $\theta = (\theta_1, \theta_2, \ldots)$ consists of independent $G$-distributed parameters, where $G$ is unknown. We assume that $G \in \mathcal{G}$, a known family of distributions. The sample size $N_i$ and the decision rule $d_i$ for component $i$ of the sequence are determined in an evolutionary way. The sample size $N_1$ and the decision rule $d_1 \in D_{N_1}$ used in the first component are fixed and chosen in advance. The sample size $N_2$ and the decision rule $d_2$ are functions of $\underline{X}^1$, the observations in the first component. In general, $N_i$ is an integer-valued function of $(\underline{X}^1, \ldots, \underline{X}^{i-1})$ and, given $N_i$, $d_i$ is a $D_{N_i}$-valued function of $(\underline{X}^1, \ldots, \underline{X}^{i-1})$. The action chosen in the $i$-th component is $d_i(\underline{X}^i)$, which hides the display of the dependence on $(\underline{X}^1, \ldots, \underline{X}^{i-1})$. We construct an empirical Bayes decision rule for estimating a normal mean and show that it is asymptotically optimal.
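The evolutionary construction described above, where the decision for component $i$ may draw on the observations from components $1, \ldots, i-1$, can be illustrated with a minimal sketch. The abstract does not give the concrete rule, so the sketch below makes an illustrative parametric assumption: $G = N(\mu, \tau^2)$ with unknown hyperparameters, unit sampling variance, and one observation per component; the hyperparameters are estimated by the method of moments from the past components and plugged into the Bayes estimate of the current component's mean. The function name `eb_normal_mean` and all numeric settings are hypothetical, not from the paper.

```python
import random
import statistics

def eb_normal_mean(xs_past, x_new):
    """Illustrative parametric empirical Bayes estimate of a normal mean.

    Assumes theta_i ~ G = N(mu, tau^2) with unknown mu, tau^2, and
    X_i | theta_i ~ N(theta_i, 1). The hyperparameters are estimated
    from the past observations only, mimicking the evolutionary scheme
    in which component i uses X^1, ..., X^{i-1}.
    """
    mu_hat = statistics.fmean(xs_past)
    # Marginally, Var(X) = tau^2 + 1, so a method-of-moments estimate is:
    tau2_hat = max(statistics.pvariance(xs_past) - 1.0, 0.0)
    # Plug-in Bayes estimate: shrink x_new toward the estimated prior mean.
    shrink = tau2_hat / (tau2_hat + 1.0)
    return mu_hat + shrink * (x_new - mu_hat)

# Simulated sequence with (hypothetical) true G = N(2, 4) and unit noise.
random.seed(0)
thetas = [random.gauss(2.0, 2.0) for _ in range(2000)]
xs = [random.gauss(t, 1.0) for t in thetas]

# The decision for the last component uses only the earlier observations.
est = eb_normal_mean(xs[:-1], xs[-1])
```

Because the shrinkage factor lies in $[0, 1]$, the estimate always falls between the current observation and the estimated prior mean; as the number of past components grows, the hyperparameter estimates stabilize, which is the mechanism behind the asymptotic-optimality claims in this line of work.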

Keywords