Assignment 8: Improved GANs

Discussion: June 22nd

This week, we want to try out some advanced architectures. In particular, we want to focus on the implementation of Wasserstein GANs. Although the derivation of WGANs is mathematically sophisticated, their implementation is quite simple. Recall the reading from the lecture; the papers contain pseudocode algorithms you can use as a reference.

Basic WGAN

For the standard WGAN, the differences to a vanilla GAN are as follows:

- The discriminator (now called a critic) outputs a raw, unbounded score: remove the final sigmoid.
- The losses drop the logarithms: the critic maximizes the mean score on real samples minus the mean score on generated samples, and the generator maximizes the mean critic score on its samples.
- After every critic update, clip all critic weights to a small interval (the paper uses [-0.01, 0.01]) to enforce the Lipschitz constraint.
- The critic is trained for several steps (e.g. 5) per generator step, and the paper recommends RMSProp over momentum-based optimizers.

Experiment with hyperparameters, such as the ratio of critic-to-generator training steps and, most importantly, the weight clipping boundaries. See what happens when you use extremely tight bounds, exceedingly loose ones, or no clipping at all. How do the outputs differ in each case?
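To make the clipping step concrete, here is a minimal sketch of the critic update in plain NumPy. It assumes a toy linear critic f(x) = w @ x on 2-D points (so the gradient of the loss with respect to w can be written in closed form); a real implementation would use a neural network and an autodiff framework, but the clipping logic is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a linear "critic" f(x) = w @ x scoring 2-D points.
# "Real" data clusters around (1, 1); "generated" data around (-1, -1).
w = rng.normal(size=2)

def critic(x, w):
    return x @ w  # raw score, no sigmoid: a WGAN critic outputs unbounded values

clip = 0.01      # weight-clipping boundary from the WGAN paper
lr = 0.05
n_critic = 5     # critic updates per generator update (generator omitted here)

for _ in range(n_critic):
    real = rng.normal(loc=1.0, scale=0.1, size=(64, 2))
    fake = rng.normal(loc=-1.0, scale=0.1, size=(64, 2))
    # Critic loss: -(mean score on real - mean score on fake). For the linear
    # critic its gradient w.r.t. w is just the negated mean input difference.
    grad_w = -(real.mean(axis=0) - fake.mean(axis=0))
    w -= lr * grad_w                # gradient step on the Wasserstein estimate
    w = np.clip(w, -clip, clip)     # enforce the Lipschitz constraint by clipping

print(np.abs(w).max() <= clip)
```

Note how `np.clip` is applied after every update, not once at the end; this is the part you will vary when experimenting with tight versus loose bounds.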

Improved WGAN

Next, move on to the WGAN-GP formulation. Compared to the above, there are just two differences:

- The weight clipping is removed. Instead, the critic loss gains a gradient penalty term lambda * (||grad_x_hat D(x_hat)||_2 - 1)^2, where x_hat is sampled uniformly along straight lines between real and generated samples (the paper uses lambda = 10).
- Because the penalty is computed per sample, batch normalization is no longer used in the critic; the paper also switches the optimizer to Adam.
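The penalty term above can be sketched without an autodiff framework if we again assume a hypothetical linear critic f(x) = w @ x: its gradient with respect to the input is w everywhere, so the penalty reduces to lambda * (||w||_2 - 1)^2. In a real network you would obtain the input gradient via backpropagation, but the interpolation and penalty arithmetic look the same.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 10.0   # penalty coefficient lambda from the WGAN-GP paper

# Hypothetical linear critic f(x) = w @ x: its gradient w.r.t. the input is
# simply w, so the gradient penalty has a closed form in this toy case.
w = np.array([3.0, 4.0])

real = rng.normal(loc=1.0, scale=0.1, size=(8, 2))
fake = rng.normal(loc=-1.0, scale=0.1, size=(8, 2))

# Interpolate between paired real and fake samples with random eps in [0, 1];
# the penalty is evaluated at these intermediate points x_hat.
eps = rng.uniform(size=(8, 1))
x_hat = eps * real + (1 - eps) * fake

# For this linear critic, grad_x f(x_hat) = w for every x_hat.
grad_norms = np.full(len(x_hat), np.linalg.norm(w))  # ||w||_2 = 5 here
penalty = lam * np.mean((grad_norms - 1.0) ** 2)     # lambda * (||grad|| - 1)^2

print(penalty)  # 10 * (5 - 1)^2 = 160.0
```

The penalty is added to the critic loss; there is no clipping step anywhere in the update.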

Optional: Advanced Architectures

If you have more time, try to implement other architectures from the reading. In particular,