Over the last few years, deep neural networks (DNNs) have fundamentally transformed how people think about machine learning and approach practical problems. Successes of DNNs span traditional AI fields such as computer vision, natural language processing, and interactive games, as well as health care and the physical sciences, touching nearly every corner of theoretical and applied domains. On the other hand, DNNs still largely operate as black boxes, and we have only a limited understanding of when and why they work. This course introduces the basic ingredients of DNNs, samples important applications, and highlights open problems. Emphasis is placed on thinking from first principles, as the field is still evolving rapidly and nothing in it is set in stone.

Instructor: Professor Ju Sun Email: jusun AT umn.edu

When/Where: T/Th 2:30PM–3:45PM, Akerman Hall 225

TAs: Yuan Yao Email: yaoxx340 AT umn.edu   Taihui Li Email: lixx5027 AT umn.edu

The detailed syllabus, containing the office hours, recommended references, assessment, homework and project requirements, programming and computing resources, and other information, can be found here: Syllabus.pdf

References

Lectures

Date Topics Notes Reading
01/21 Overview Slides  
01/23 Neural networks: old and new Slides DLP Ch 1, D2L Ch 3–4, MNDL Ch 2
01/28 Fundamental belief: universal approximation theorem    
01/30      
02/04 (Tutorial) Numpy, Scipy, Colab [Guest: Dr. Ben Lynch of MSI]    
02/06 (Discussion) Project ideas    
02/11      
02/13      
02/18 (Tutorial) Tensorflow, Pytorch, MSI GPU cluster [Guest: Dr. Ben Lynch of MSI]    
02/20      
02/25      
02/27      
03/03      
03/05      
03/10 SPRING BREAK – NO CLASS    
03/12 SPRING BREAK – NO CLASS    
03/17      
03/19      
03/24      
03/26      
03/31      
04/02      
04/07      
04/09      
04/14      
04/16      
04/21      
04/23      
04/28      
04/30      
05/05      
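As a small taste of two scheduled topics, the universal approximation theorem (01/28) and the Numpy tutorial (02/04), here is a minimal sketch, not taken from the course materials: a one-hidden-layer ReLU network (random hidden weights plus a linear readout fit by least squares) approximating a smooth 1-D function. All names and parameter choices below are illustrative assumptions.

```python
import numpy as np

# Illustration of the universal approximation idea: with enough hidden
# ReLU units, a one-hidden-layer network can fit a smooth 1-D function.

rng = np.random.default_rng(0)

# Target: f(x) = sin(pi * x) sampled on [-1, 1]
x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = np.sin(np.pi * x).ravel()

# Hidden layer: random weights and biases, ReLU activation
n_hidden = 50
W1 = rng.normal(size=(1, n_hidden))
b1 = rng.normal(size=n_hidden)
H = np.maximum(0.0, x @ W1 + b1)          # (200, n_hidden) feature matrix

# Output layer: fit by linear least squares (bias column appended)
A = np.column_stack([H, np.ones(len(x))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

mse = np.mean((y_hat - y) ** 2)
print(f"approximation MSE with {n_hidden} hidden units: {mse:.2e}")
```

Increasing `n_hidden` drives the approximation error down further, which is the qualitative content of the theorem; training the hidden weights as well (as done in practice) is covered later in the course.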

Homework assignments

Course project