This is the webpage for the course Unsupervised Language Learning: Representation Learning for NLP, offered at the University of Amsterdam.

Course coordinators: Wilker Aziz and Ekaterina Shutova

Teaching assistant: Samira Abnar

Content

The class will cover representation learning methods for natural language processing (NLP). Various kinds of representations will be considered, from discrete and structured representations (e.g., the hidden hierarchical structure of text) to real-valued vectors (as in deep learning). The main focus will be on problems from natural language processing, but most of the methods we will consider have applications in other domains as well (e.g., bioinformatics, vision, and information retrieval). The goal of this class is to give you a perspective on modern research in statistical NLP.

Though the title contains the term “unsupervised”, we will interpret it rather broadly: for example, any representation learning approach is in some sense unsupervised, as features are not pre-specified by a model designer but are instead induced during learning. The class consists of two parts.
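As a concrete illustration of this sense of “unsupervised”, the sketch below induces word vectors from raw text with no labels and no hand-designed features, using co-occurrence counts and a truncated SVD. The toy corpus, window size, and dimensionality are illustrative assumptions, not course material; this is only a minimal example of the general idea.

```python
# Minimal sketch: word representations induced from raw text alone,
# via co-occurrence counts + truncated SVD. No labels, no hand-crafted
# features -- the representation is learned from the data.
import numpy as np

# Toy corpus (illustrative only)
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat and a dog played",
]

tokens = [s.split() for s in corpus]
vocab = sorted({w for sent in tokens for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Count co-occurrences within a symmetric window of size 2
window = 2
C = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for i, w in enumerate(sent):
        lo, hi = max(0, i - window), min(len(sent), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                C[idx[w], idx[sent[j]]] += 1

# Truncated SVD: each row of U[:, :k] * S[:k] is a learned
# k-dimensional vector for one vocabulary word
U, S, _ = np.linalg.svd(C, full_matrices=False)
k = 2
vectors = U[:, :k] * S[:k]

def similarity(w1, w2):
    """Cosine similarity between the learned vectors of two words."""
    v1, v2 = vectors[idx[w1]], vectors[idx[w2]]
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

# Words appearing in similar contexts end up with similar vectors
print(similarity("cat", "dog"))
```

No feature templates are specified anywhere: the dimensions of the learned vectors emerge from the statistics of the corpus, which is exactly the sense in which representation learning is unsupervised.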

In this setup, the course has no exam. The grade is based on participation, including presentations of literature that students give (20%), and on a series of practical assignments culminating in a research report that students submit at the end (80%).