Course program
- Data, features, and embeddings
- Data awareness
- Modeling prior knowledge
- The curse of dimensionality
- Task-driven features and invariances
- Recap of linear algebra
- Vector spaces, bases
- Linear maps
- Matrix notation and matrix algebra
- Tensors and tensor operations
- Parametric models and regression
- Linear and polynomial regression
- Convexity and Lp norms
- Underfitting and overfitting
- Cross validation
- Logistic regression
- Optimization
- Gradient descent
- Stochastic gradient descent
- Learning rate, decay, momentum, batch size
- Forward and reverse-mode automatic differentiation
- Deep neural networks
  - Multi-layer perceptron
- Backpropagation
- Universal approximation theorems
- Autograd and modules
- Invariance, equivariance, compositionality
- Convolutional neural networks
- Pooling
- Double descent
- Regularization: weight penalty, early stopping, dropout, batchnorm
- Generative models
- PCA
- Manifolds and the manifold hypothesis
- Representation learning
- Autoencoders: variational, contractive, denoising
- Generative adversarial networks
- Adversarial learning
- Decision boundaries
- Black-box and white-box attacks
- Adversarial perturbations: universal and one-pixel
- Adversarial training
- Geometric deep learning
- Learning on graphs and point clouds
- Learning on surfaces
- Generative models of structured data
- Adversarial surfaces
Prerequisites
Important: calculus; linear algebra; fundamentals of machine learning.
Mandatory: fundamentals of programming (the Python language).
The course will cover the basics of calculus, linear algebra, and machine learning that are needed to fully understand the lectures.
Books
Due to the highly dynamic nature of this advanced course, classes will not follow a specific textbook. Different sources will be provided throughout the course in the form of scientific papers and book chapters.
For your own reference, the following material may be useful:
Deep Learning
Ian Goodfellow, Yoshua Bengio, Aaron Courville
MIT Press, 2016
Deep Learning with PyTorch
Vishnu Subramanian
Packt, 2018
Teaching mode
The course is held entirely in the classroom.
Classes follow a hybrid format, covering both the theoretical and the more technical aspects of advanced machine learning: frequent practical demos are interleaved throughout the lectures. These tutorials are run live and make up about 40% of the course, using the deep learning framework PyTorch. All students are expected to engage actively in the tutorials, working on the notebooks on their personal computers.
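To give a flavor of the kind of algorithm the tutorials implement (this is an illustrative sketch in plain Python, not actual course material — the in-class notebooks use PyTorch), here is gradient descent minimizing the mean squared error of a 1-D linear regression model, one of the first topics in the program:

```python
# Illustrative sketch: batch gradient descent for 1-D linear regression.
# Toy data generated by y = 2x + 1 (noiseless, so the optimum is exact).
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

w, b = 0.0, 0.0   # model parameters, initialized at zero
lr = 0.05         # learning rate
n = len(xs)

for _ in range(2000):
    # Gradients of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # converges to the true slope and intercept
```

In the tutorials, the hand-written gradient computation above is what PyTorch's automatic differentiation replaces.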
Frequency
Lectures are held in person; attendance is not mandatory.
Exam mode
Evaluation consists of the following steps:
1. A written midterm, acting as a self-assessment test, which does not count toward the final grade.
2. A project (not necessarily individual).
3. An optional oral exam.
The optional oral exam can add or subtract up to 3 points from the final score.
Together, these steps assess technical and theoretical skills, the ability to work in a group, knowledge of the literature, and the ability to formulate deep learning problems and set up experiments.