Over the last few years, deep neural networks (DNNs) have fundamentally transformed how people think about machine learning and approach practical problems. Successes of DNNs range from traditional AI fields such as computer vision, natural language processing, and interactive games to healthcare and the physical sciences, touching nearly every corner of theoretical and applied research. On the other hand, DNNs still largely operate as black boxes, and we have only limited understanding of when and why they work. This course introduces the basic ingredients of DNNs, samples important applications, and highlights open problems. Emphasis is placed on thinking from first principles, as the field is still evolving rapidly and nothing in it is set in stone.

Full syllabus: Syllabus.pdf

Instructor: Professor Ju Sun Email: jusun AT umn.edu (Office Hours: Tue/Thur 5–6pm)

When/Where: Mon 6:30–9:00pm / Keller 3-210

TA: Hengkang Wang Email: wang9881 AT umn.edu (Office hours: Wed 4:30–6:30pm)

Lecture Schedule

Date Topics Notes
Sep 14 Think deep learning: overview [Slides]
Neural networks: old and new [Slides]
Sep 21 Fundamental belief: universal approximation theorems [Slides]
Review of multivariate calculus [Slides]
Sep 28 Basics of numerical optimization: preliminaries [Slides]  
Oct 05 Intro to MSI, Colab, Numpy, Scipy [Notebook]
        (Guest lecturer: Dr. Ben Lynch, MSI)
Course project [Slides]
Oct 12 Basics of numerical optimization: iterative methods [Slides]
Oct 19 Basics of numerical optimization: computing derivatives [Slides]  
Oct 26 Training DNNs: basic methods and tricks [Slides]  
Nov 02 Unsupervised representation learning:
        autoencoders and factorization [Slides]
Nov 09 From fully connected to convolutional neural networks [Slides]  
Nov 16 Applications of CNNs in computer vision [Slides]  
Nov 30    
Dec 07    
Dec 14