This reading assignment focuses on unsupervised and self-supervised learning tasks as a main driver for representation learning.
The Deep Learning Book - Chapter 14: Autoencoders (optional reading!) covers the topic very well and in depth. However, to leave room for the second topic, the blog post “Introduction to autoencoders” by Jeremy Jordan provides the most important details. Refer to the book chapter if you would like to learn more!
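To make the core idea concrete before you read: an autoencoder compresses its input through a low-dimensional bottleneck and is trained to reconstruct the input from that code. Below is a minimal sketch of a *linear* autoencoder trained with plain gradient descent on synthetic data; the dimensions, learning rate, and data are illustrative assumptions, not taken from the readings.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, k = 200, 8, 2  # samples, input dim, bottleneck dim (illustrative choices)
# synthetic data that actually lives on a k-dimensional subspace
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, d))

W_enc = rng.normal(scale=0.1, size=(d, k))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(k, d))  # decoder weights
lr = 0.01

def loss(X, W_enc, W_dec):
    """Mean squared reconstruction error."""
    recon = X @ W_enc @ W_dec
    return np.mean((recon - X) ** 2)

initial = loss(X, W_enc, W_dec)
for _ in range(500):
    Z = X @ W_enc        # encode: project input into the bottleneck
    recon = Z @ W_dec    # decode: reconstruct the input from the code
    err = recon - X      # reconstruction error
    # gradients of the mean squared reconstruction loss
    grad_dec = Z.T @ err * (2 / (n * d))
    grad_enc = X.T @ (err @ W_dec.T) * (2 / (n * d))
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final = loss(X, W_enc, W_dec)
print(final < initial)  # reconstruction error decreases with training
```

Real autoencoders add nonlinearities and deeper encoder/decoder stacks, but the training signal is the same: reconstruct the input from a compressed representation.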
Besides the image processing domain that we already covered in reading 3 and 4, deep learning has also had a major impact on the field of Natural Language Processing (NLP). The following references provide a good overview. Feel free to dive deeper if you are interested!
Sebastian Ruder’s blog post “A Review of the Neural History of Natural Language Processing” gives an introductory overview of the most important deep learning developments in NLP and connects nicely with the topics covered in the previous lectures.
Next, read the two overview blog posts by Lilian Weng “Learning Word Embedding” and “Generalized Language Models”.
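As a small taste of the word-embedding topic before reading: words that appear in similar contexts should end up with similar vectors. The sketch below derives toy word vectors by factorizing a co-occurrence matrix with a truncated SVD; this is one classic count-based route to embeddings, and the tiny corpus, window size, and dimensionality here are made-up assumptions for illustration.

```python
import numpy as np

# toy corpus (an illustrative assumption, not from the readings)
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]
tokens = [w for line in corpus for w in line.split()]
vocab = sorted(set(tokens))
idx = {w: i for i, w in enumerate(vocab)}

# symmetric co-occurrence counts within a window of 1
C = np.zeros((len(vocab), len(vocab)))
for line in corpus:
    words = line.split()
    for i in range(len(words)):
        for j in range(max(0, i - 1), min(len(words), i + 2)):
            if i != j:
                C[idx[words[i]], idx[words[j]]] += 1

# low-rank factorization: rows of U * S serve as dense word vectors
U, S, _ = np.linalg.svd(C, full_matrices=False)
k = 2
vectors = U[:, :k] * S[:k]

def sim(a, b):
    """Cosine similarity between two word vectors."""
    va, vb = vectors[idx[a]], vectors[idx[b]]
    return va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))

# "cat" and "dog" occur in similar contexts, so their vectors should be close
print(sim("cat", "dog"))
```

The neural methods in Lilian Weng’s posts (word2vec and successors) learn such vectors by prediction rather than by counting, but the underlying distributional intuition is the same.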