Assignment 2: Let’s go to Monte Carlo

Discussion: April 25th
Deadline: April 24th, 20:00

NOTE THE CHANGED DEADLINE

In this assignment, we will be investigating Monte Carlo Methods with a few simple examples. There is a lot of text for explanation, but the actual tasks are rather compact.

Some starter code can be found on Gitlab!

Markov Chain Monte Carlo

To understand the basics of MCMC methods, we consider the simple example in section 17.3 of the Deep Learning book, where we are interested in sampling a single integer x from {0, …, n} according to some distribution. Try the following:

Now, we run a Markov chain:
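To illustrate what running such a chain looks like (this is only a hedged sketch, not the starter code from Gitlab; the state count n and the random transition matrix are assumptions for illustration), one can repeatedly resample the state from a fixed transition distribution and compare the empirical state frequencies with the chain's stationary distribution:

```python
# Illustrative sketch: a Markov chain on the integers 0..n with a fixed
# transition matrix T, where T[j, i] = probability of moving from state i
# to state j (columns sum to 1).
import numpy as np

rng = np.random.default_rng(0)

n = 4  # states 0, ..., n (assumed value for illustration)

# A random column-stochastic transition matrix.
T = rng.random((n + 1, n + 1))
T /= T.sum(axis=0, keepdims=True)

# Run the chain: repeatedly resample x from T(. | x).
x = 0
counts = np.zeros(n + 1)
burn_in, steps = 1000, 50_000
for t in range(burn_in + steps):
    x = rng.choice(n + 1, p=T[:, x])
    if t >= burn_in:
        counts[x] += 1

empirical = counts / counts.sum()

# The empirical frequencies should approach the stationary distribution,
# i.e. the eigenvector of T with eigenvalue 1, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(T)
stationary = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
stationary /= stationary.sum()
print(empirical, stationary)
```

Because all entries of this T are strictly positive, the chain mixes quickly and the empirical frequencies match the stationary distribution closely after a few thousand steps.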

Gibbs Sampling & Mixing

In most of our use cases, the situation is not as in the example above: we are not simply handed a transition distribution on which we can run a Markov chain. Instead, we have a desired target distribution (our model distribution) and need to figure out how to get there, i.e. how to sample from it.

Let’s try to sample from a mixture of Gaussians via Gibbs sampling.
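As a hedged sketch of the idea (the concrete parameters and interface here are assumptions, not the starter code): for a two-component 1D mixture, we can introduce the component indicator z as a latent variable and alternate between sampling z given x (the responsibilities) and x given z (the chosen component's Gaussian):

```python
# Minimal Gibbs-sampling sketch for an equal-weight two-component
# 1D Gaussian mixture (assumed setup for illustration).
import numpy as np

rng = np.random.default_rng(0)

# Mixture parameters (assumed for illustration).
weights = np.array([0.5, 0.5])
means = np.array([-2.0, 2.0])
scales = np.array([1.0, 1.0])

def gibbs_sample(n_samples, burn_in=100):
    x, out = 0.0, []
    for t in range(burn_in + n_samples):
        # Sample the component given x: p(z=k | x) ∝ w_k * N(x; mu_k, s_k).
        resp = weights * np.exp(-0.5 * ((x - means) / scales) ** 2) / scales
        resp /= resp.sum()
        z = rng.choice(2, p=resp)
        # Sample x given the component.
        x = rng.normal(means[z], scales[z])
        if t >= burn_in:
            out.append(x)
    return np.array(out)

samples = gibbs_sample(5000)
print(samples.mean(), samples.std())
```

Note that the chain only switches components when x wanders into the region where both Gaussians have appreciable density, which is exactly why the distance between the components matters so much below.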

You should collect a reasonable number of samples (1000 or more) and plot both the target distribution (the mixture of Gaussians) and your samples. Do the samples reflect the distribution well? In particular, are both modes of the Gaussian mixture covered equally? You can check this visually and/or using statistics. Also, experiment with different locations/scales for the Gaussians: move the components further apart or closer together and repeat the sampling process each time. The quality of the samples should vary dramatically with the distance between components!
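One simple statistic for mode coverage is the fraction of samples falling on each side of the midpoint between the two means. A sketch of the separation experiment (again illustrative, with an assumed equal-weight two-component sampler, not the starter code):

```python
# For increasing distance between the two components, measure how evenly
# the Gibbs samples cover both modes.
import numpy as np

rng = np.random.default_rng(0)

def gibbs(means, scale=1.0, n_samples=5000, burn_in=100):
    """Gibbs sampling from an equal-weight two-component 1D mixture."""
    x, out = means[0], []
    for t in range(burn_in + n_samples):
        resp = np.exp(-0.5 * ((x - means) / scale) ** 2)
        resp /= resp.sum()
        z = rng.choice(2, p=resp)
        x = rng.normal(means[z], scale)
        if t >= burn_in:
            out.append(x)
    return np.array(out)

results = {}
for sep in [1.0, 4.0, 10.0]:
    means = np.array([-sep / 2, sep / 2])
    samples = gibbs(means)
    results[sep] = (samples > 0).mean()  # fraction in the right mode
    print(f"separation {sep:4.1f}: fraction in right mode = {results[sep]:.2f}")
```

For small separations the fraction sits near 0.5, while for large separations the chain tends to get stuck in the mode it started in, so one mode ends up badly under-represented; this is the mixing failure the task asks you to observe.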