Learning Generative Models


Summer Semester 2020
Prof. Dr. Sebastian Stober
Jens Johannsmeier, M.Sc.
Lecture: Thu 13-15 c.t.; online (TBA)
Exercise: Mon 11-13 s.t. (!); online (TBA)

Start of lectures: Thursday, April 23

Please note: Due to the Coronavirus (COVID-19) pandemic, this course will be taught virtually. We will follow a mostly synchronous teaching approach by using a video conferencing tool for the lectures and exercises. More details will be announced after the end of the sign-up period on April 7. Please make sure you sign up for this class and the respective exercise in the LSF system by April 6!


“What I cannot create, I do not understand.”
Richard Feynman

Deep Learning techniques have gained popularity and yielded astonishing results in a wide range of applications, such as computer vision or speech recognition. In most of these applications, deep neural nets are trained in a supervised way, i.e., the network is trained to predict a particular label for high-dimensional data. To perform well, such networks need large amounts of labelled data.

In many real-world applications, the amount of data is either limited or it is not annotated with the desired labels. For instance, training a deep neural network to reliably detect a malfunction in a nuclear power plant would require a lot of measurements during a meltdown, which obviously is infeasible.

Generative models can be used to tackle such problems. They learn to represent the probability distribution over multiple variables from training data. While ordinary deep neural networks may discard any information unrelated to the desired prediction, generative models learn to represent the data in its entirety. As such, they can be applied to tasks like imputing missing values, repairing damaged data, detecting anomalies or generating new data from the learnt distribution.
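To make this concrete, here is a minimal sketch (our own illustration, not course material) of the simplest possible generative model: a 2-D Gaussian fitted to training data by maximum likelihood, which can then both generate new samples and impute a missing value via the conditional mean. All names and the library choice (NumPy) are our assumptions.

```python
# Toy generative model: fit a 2-D Gaussian to data, then (a) sample
# new data from the learnt distribution and (b) impute a missing value.
import numpy as np

rng = np.random.default_rng(0)

# "Training data": 1000 correlated 2-D points.
X = rng.multivariate_normal([1.0, -1.0], [[1.0, 0.8], [0.8, 1.0]], size=1000)

# Learn the distribution: maximum-likelihood mean and covariance.
mu = X.mean(axis=0)
cov = np.cov(X, rowvar=False)

# (a) Generate new data from the learnt distribution.
samples = rng.multivariate_normal(mu, cov, size=5)
print(samples.shape)  # (5, 2)

# (b) Impute a missing second coordinate given x0 = 2.0, using the
# conditional Gaussian mean: mu1 + cov10 / cov00 * (x0 - mu0).
x0 = 2.0
x1_imputed = mu[1] + cov[1, 0] / cov[0, 0] * (x0 - mu[0])
print(x1_imputed)  # close to the true conditional mean of -0.2
```

The deep generative models covered in this course (energy-based models, VAEs, GANs) replace the Gaussian with far more expressive learned distributions, but the underlying idea is the same: model the full joint distribution, then query it.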

Course Content

The course will first focus on understanding and applying shallow and deep energy-based models as a particular type of generative model. Furthermore, the course will cover more recent generative models, such as Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs). You will further explore applications of generative models in a team-based project, using real-world data. For implementation, we will use Python and TensorFlow.

This course requires active participation, as it follows a “flipped classroom” design. You will prepare the content before class using selected textbook chapters or recent publications. You will share your learning and project progress with the other participants by writing short blog posts on a weekly basis. The in-class time will be used for discussions and for deepening your understanding.

If you have questions about the course, please contact Sebastian Stober.

Administrative Terms & Conditions (a.k.a. the “fine print”)

Prerequisites for Attending

This is a very advanced course intended for students who have successfully taken the lecture “Introduction to Deep Learning”, which is regularly taught in the winter term. You need to know the concepts and techniques introduced there to be able to follow along. If you did not take this course but think that you still have the required knowledge, please contact Sebastian Stober. Note that “Introduction to Deep Learning” is also taught in the 2020 summer term, but it is NOT recommended to attend both “Introduction to Deep Learning” and “Learning Generative Models” at the same time!

Credits

The course belongs to the topic domain “practical computer science” and amounts to 5 credit points, or an equivalent of 150 hours. Be prepared to spend that much time! If you are into the topic and want to go deeper, rather plan for some extra time - it’s worth it! Master students can obtain 6 credit points by completing an additional task of roughly 30 hours. (Details will be announced in class.)