Sufficient Statistics

A statistic \(t\) is called a sufficient statistic for \(\theta\), given data \(\boldsymbol{y}\), if:

\begin{equation} p(\boldsymbol{y} | t, \theta)=p(\boldsymbol{y} | t) \end{equation}

Let \(Y_{i} \sim \text{Bernoulli}(\theta)\) for \(i = 1, \dots, n\), and \(T=\sum_{i=1}^{n} Y_{i}\). Then it can be shown that \(t=\sum_{i=1}^{n} y_{i}\) is a sufficient statistic for \(\theta\) given \(\boldsymbol{y}=\left(y_{1}, \ldots, y_{n}\right)\).
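One way to verify this directly from the definition: conditional on \(T = t\), the probability of observing any particular sequence \(\boldsymbol{y}\) with \(\sum_{i=1}^{n} y_{i} = t\) is

\begin{equation} p(\boldsymbol{y} | t, \theta)=\frac{p(\boldsymbol{y}, t | \theta)}{p(t | \theta)}=\frac{\theta^{t}(1-\theta)^{n-t}}{\binom{n}{t} \theta^{t}(1-\theta)^{n-t}}=\frac{1}{\binom{n}{t}} \end{equation}

which is free of \(\theta\), so the definition of sufficiency is satisfied.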

Fisher-Neyman Theorem

The Fisher-Neyman theorem, also called the factorization theorem, helps us find sufficient statistics more readily. It states that:

A statistic \(t\) is sufficient for \(\theta\) if and only if there are functions \(f\) and \(g\) such that:

\begin{equation} p(\boldsymbol{y} | \theta)=f(t, \theta) g(\boldsymbol{y}) \end{equation}

where \(t=t(\boldsymbol{y})\). Here \(f\) depends on the data only through \(t\), and \(g\) does not depend on \(\theta\).
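
Applying the theorem to the Bernoulli example above, the likelihood factorizes immediately:

\begin{equation} p(\boldsymbol{y} | \theta)=\prod_{i=1}^{n} \theta^{y_{i}}(1-\theta)^{1-y_{i}}=\theta^{t}(1-\theta)^{n-t} \end{equation}

Taking \(f(t, \theta)=\theta^{t}(1-\theta)^{n-t}\) and \(g(\boldsymbol{y})=1\) shows that \(t=\sum_{i=1}^{n} y_{i}\) is sufficient, without computing any conditional distribution.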