Announcements
- First class on August 4, 2021, at 3:00 PM.
Syllabus
- Visual and Time Series Modeling: Semantic models, recurrent neural models and LSTM models, encoder-decoder models, attention models.
- Representation Learning, Causality, and Explainability: t-SNE visualization, hierarchical representations, semantic embeddings, gradient and perturbation analysis, topics in explainable learning, structural causal models.
- Unsupervised Learning: Restricted Boltzmann Machines, Variational Autoencoders, Generative Adversarial Networks.
- New Architectures: Capsule networks, end-to-end models, Transformer networks.
- Applications: Applications in the NLP, Speech, and Image/Video domains across all modules.
Grading Details
Component | Weight |
3 monthly research projects from three different domains (Speech/Audio, Text, Images/Videos, Biomedical, Financial, Chemical/Physical Sciences, Mathematical Sciences) | 60% |
Midterm exam | 10% |
Final exam | 30% |
Pre-requisites
- Linear Algebra
- Random Processes
- Basic Machine Learning/Pattern Recognition course
- Good background in Python programming.
References
- A significant portion of the material will come from research papers and tutorials in the domain.
- Lecture notes in PDF format.
- “Deep Learning”, I. Goodfellow, Y. Bengio, and A. Courville, MIT Press, 2016 (HTML version available online).
Slides
Date | Topic | Slides |
04-08-2021 | Introduction. Setting the stage for the course. | slides |