We start with \(\theta>0\), \(E[X|\theta]=\theta\), and \(Var[X|\theta]=\theta^2\).

We have a class of decision rules \(S=\{\delta_c(X)=cX : 0 < c < 1\}\)

and we're given the standard squared-error loss function \(L(\theta,a) = (\theta - a)^2\).

Finding a decision rule in \(S\) that is admissible means finding a \(\delta_{c}\) such that no other decision rule in \(S\) is R-better than \(\delta_{c}\).
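For reference, a rule \(\delta'\) is said to be R-better than \(\delta\) if

\begin{align} R(\theta,\delta') \le R(\theta,\delta) \text{ for all } \theta>0, \quad \text{with } R(\theta,\delta') < R(\theta,\delta) \text{ for at least one } \theta. \end{align}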

We can find such a decision rule by finding the \(\delta_{c}\) whose risk is as small as possible for every value of \(\theta\) simultaneously.
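Concretely, if we can exhibit a \(c^*\) with

\begin{align} R(\theta,\delta_{c^*}) \le R(\theta,\delta_c) \quad \text{for all } \theta>0 \text{ and all } c\in(0,1), \end{align}

then no member of \(S\) can be R-better than \(\delta_{c^*}\), so \(\delta_{c^*}\) is admissible within \(S\).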

We know that the risk of a decision rule is defined by \(R(\theta,a)=E[L(\theta,a)]\), where \(a\) is the action taken. (Note: this is the frequentist risk function, not the Bayes risk, since we have not averaged over a prior on \(\theta\).)

Thus if \(L(\theta,a) = (\theta - a)^2\), we can substitute that back into the risk formula and get \(R(\theta,a)=E[(\theta - a)^2]\).

Further, we know that \(a\) has the form \(\delta_c(X)=cX\) for \(c \in (0,1)\); plugging that in, we get \(R(\theta,\delta_{c})=E[(\theta - cX)^2]\). (From here on, all expectations are conditional on \(\theta\); we write \(E[X]\) for \(E[X|\theta]\) and \(Var[X]\) for \(Var[X|\theta]\).)

So we can derive:

\begin{align} R(\theta,\delta_{c}) &= E[(\theta - cX)^2]\\ &= E[\theta^2 -2c\theta X +(cX)^2]\\ &= \theta^2 - 2c\theta E[X] +c^2E[X^2] \end{align}

Now we can use the identity \(Var[\zeta] = E[\zeta^2] - E[\zeta]^2\), which, re-arranged, gives \(E[\zeta^2] = Var[\zeta] + E[\zeta]^2\).
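In particular, taking \(\zeta = X\) gives exactly the term we need:

\begin{align} E[X^2] = Var[X] + E[X]^2 = \theta^2 + \theta^2 = 2\theta^2. \end{align}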

As a consistency check, let us set \(\zeta=\theta -cX\) and put that into our re-arranged identity (note that \(Var[X+c]=Var[X]\) and \(Var[cX]=c^2Var[X]\) when \(c\) is a constant); everything cancels back down to the same identity:

\begin{align} \require{cancel} E[\zeta^2] &= Var[\zeta] + E[\zeta]^2 \\ E[(\theta -cX)^2] &= Var[\theta -cX] + E[\theta -cX]^2 \\ E[\theta^2-2cX\theta +c^2X^2] &= c^2Var[X] + (E[\theta] -cE[X])^2 \\ E[\theta^2]-2c\theta E[X]+c^2E[X^2] &= c^2Var[X] + (\theta -cE[X])^2 \\ \theta^2-2c\theta E[X]+c^2E[X^2] &= c^2Var[X] + \theta^2 -2c\theta E[X] +c^2E[X]^2 \\ \cancel{\theta^2} \bcancel{-2c\theta E[X]}+c^2E[X^2] &= c^2Var[X] + \cancel{\theta^2} \bcancel{-2c\theta E[X]} +c^2E[X]^2 \\ \xcancel{c^2}E[X^2] &= \xcancel{c^2}Var[X] +\xcancel{c^2}E[X]^2 \\ E[X^2] &= Var[X] + E[X]^2 \end{align}

Next, let us substitute \(E[X^2] = Var[X] + E[X]^2\) back into our previous result:

\begin{align} R(\theta,\delta_{c}) &= \theta^2 - 2c\theta E[X] +c^2E[X^2] \\ &= \theta^2 - 2c\theta E[X] + c^2(Var[X] + E[X]^2) \\ &= \theta^2 - 2c\theta E[X] + c^2Var[X] + c^2E[X]^2 \end{align}

Now we plug \(E[X|\theta]=\theta\) and \(Var[X|\theta]=\theta^2\) back into the previous result:

\begin{align} R(\theta,\delta_{c}) &= \theta^2 - 2c\theta E[X] + c^2Var[X] + c^2E[X]^2 \\ &= \theta^2 - 2c\theta \cdot \theta + c^2\theta^2 + c^2\theta^2 \\ &= \theta^2 - 2c\theta^2 + 2c^2\theta^2 \\ &= \theta^2(2c^2 - 2c + 1) \end{align}

The factor \(g(c) = 2c^2 - 2c + 1 = 2(c-\tfrac{1}{2})^2 + \tfrac{1}{2}\) does not depend on \(\theta\), so we can minimize it once and for all: it is smallest at \(c = \tfrac{1}{2}\), where \(g(\tfrac{1}{2}) = \tfrac{1}{2}\). Hence

\begin{align} R(\theta,\delta_{1/2}) = \tfrac{1}{2}\theta^2 < \theta^2(2c^2 - 2c + 1) = R(\theta,\delta_{c}) \quad \text{for all } \theta>0 \text{ and all } c \neq \tfrac{1}{2}, \end{align}

so \(\delta_{1/2}\) is R-better than every other rule in \(S\). No rule in \(S\) can therefore be R-better than \(\delta_{1/2}\), and \(\delta_{1/2}(X) = X/2\) is the admissible rule in \(S\).
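Since the problem only pins down the first two conditional moments of \(X\), any distribution matching them can be used for a numerical sanity check of this closed form. Below is a minimal Monte Carlo sketch in Python, assuming (purely for illustration; this is not part of the problem) that \(X|\theta\) is exponential with mean \(\theta\), which does satisfy \(E[X|\theta]=\theta\) and \(Var[X|\theta]=\theta^2\):

```python
import numpy as np

# Monte Carlo check of R(theta, delta_c) = theta^2 * (2c^2 - 2c + 1).
# The problem only fixes E[X|theta] = theta and Var[X|theta] = theta^2,
# so we pick one distribution with those moments: Exponential with
# scale theta. (Illustrative assumption, not given by the problem.)

rng = np.random.default_rng(0)
theta = 2.0
x = rng.exponential(scale=theta, size=1_000_000)

for c in (0.25, 0.5, 0.75):
    mc_risk = np.mean((theta - c * x) ** 2)          # empirical E[(theta - cX)^2]
    closed_form = theta**2 * (2 * c**2 - 2 * c + 1)  # derived formula
    print(f"c = {c}: Monte Carlo = {mc_risk:.4f}, formula = {closed_form:.4f}")
```

The simulated risk should agree with the closed form for each \(c\), with the minimum at \(c = 1/2\).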