Monte Carlo EM

Recall the basic setup for the EM algorithm.

We have a joint model for the latent data \(z\) and the observed data \(x\),

\[ f(z,x \,|\, \theta) \]

and the algorithm produces a sequence of iterates \(\{ \theta^t \}\) of the parameter \(\theta\). The E step computes

\[ Q(\theta \,|\, \theta^t) = E\left[\log f(z,x \,|\, \theta)\right] \]

where \(E\) is over \(Z \,|\, x,\theta^t\), and the M step then sets

\[ \theta^{t+1} = \underset{\theta}{\operatorname{argmax}} \; Q(\theta \,|\, \theta^t) \]
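As a concrete illustration of the two steps (a toy example of my own choosing, not from the text), consider a two-component normal mixture with known component means and an unknown mixing weight \(p\). The latent \(z_i\) is the component indicator, so both the E step and the M step have closed forms:

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(x, mu):
    """Normal density at x with mean mu and standard deviation 1."""
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

# Toy model (my choice): the latent z_i picks a component,
#   x_i | z_i = 1 ~ N(0, 1),  x_i | z_i = 0 ~ N(4, 1),  P(z_i = 1) = p.
p_true, n = 0.3, 1000
z = rng.random(n) < p_true
x = np.where(z, rng.normal(0.0, 1.0, n), rng.normal(4.0, 1.0, n))

p = 0.5                                    # starting value p^0
for t in range(100):
    # E step: E[z_i | x_i, p^t], available in closed form here
    g = p * phi(x, 0.0) / (p * phi(x, 0.0) + (1 - p) * phi(x, 4.0))
    # M step: argmax_p Q(p | p^t) is the average of the E-step weights
    p = g.mean()

print(round(p, 2))  # close to p_true = 0.3
```

Here the expectation defining \(Q\) is analytic, so plain EM suffices; Monte Carlo enters when it is not.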

The idea of Monte Carlo EM is simple.

In some cases the expectation in the E step cannot be computed analytically.
This is exactly what Monte Carlo is for!

So you simply replace \(E\left[\log f(z,x \,|\, \theta)\right]\) with a Monte Carlo approximation.
This means you will have to be able to get iid draws from \(Z \,|\, x,\theta^t\) and then compute \[ E\left[\log f(z,x \,|\, \theta)\right] \approx \frac{1}{m} \sum_{j=1}^{m} \log f(z_j,x \,|\, \theta) \] where \(z_j \overset{iid}{\sim} Z \,|\, x,\theta^t\).
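Here is a minimal sketch of the full MCEM loop on simulated right-censored normal data (the model, cutoff, and sample sizes are my own choices, not taken from the text). The censored values are the latent \(z\); drawing them from \(Z \,|\, x, \theta^t\) means sampling a truncated normal:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)

# Simulated data (my setup): y_i ~ N(theta, 1), but values above the
# cutoff a are censored -- we only learn that they exceeded a.
theta_true, a, n = 1.0, 1.5, 200
y = rng.normal(theta_true, 1.0, n)
obs = y[y <= a]                 # uncensored observations x
n_cens = n - obs.size           # number of censored observations

theta = obs.mean()              # starting value theta^0
m = 2000                        # Monte Carlo sample size per iteration
for t in range(30):
    # Monte Carlo E step: iid draws z_j ~ Z | x, theta^t, i.e. N(theta, 1)
    # truncated to (a, inf); truncnorm takes standardized bounds.
    z = truncnorm.rvs((a - theta) / 1.0, np.inf, loc=theta, scale=1.0,
                      size=m, random_state=rng)
    # M step: the complete-data MLE of theta is the mean of all n values,
    # with each censored value replaced by its Monte Carlo average.
    theta = (obs.sum() + n_cens * z.mean()) / n

print(round(theta, 1))  # close to theta_true = 1.0
```

Note that because of the Monte Carlo noise in the E step, the iterates \(\theta^t\) jitter around the MLE rather than converging exactly; a common remedy is to increase \(m\) as the iterations proceed.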

You can use the data suggested at the end of Example 4.7 and see if you can get something like Figure 4.2.