Mathematical Foundations of Deep Neural Networks, Spring 2024

This is the primary course website for Mathematical Foundations of Deep Neural Networks (심층신경망의 수학적 기초), M1407.001200, Spring 2024. We will also use eTL as a secondary website.


Announcements

  • No class on 04/10.
  • The midterm exam is open-book in the sense that you may use any non-electronic resource.


Homework

This class will have weekly homework assignments. Submit completed assignments through eTL.


Lecture Plans

  • [Week 1] Optimization and stochastic gradient descent
  • [Week 2] Shallow neural networks and logistic regression
  • [Week 3] Multi-layer perceptron, softmax regression
  • [Week 4] Convolutional layers, pooling layers, GPU computing, LeNet
  • [Week 5] Data augmentation; regularization techniques: dropout, weight decay, early stopping
  • [Week 6] Weight initialization, VGGNet, backpropagation
  • [Week 7] Optimizers (Adam, RMSProp), NiN network, GoogLeNet
  • [Week 8] Batch normalization, ResNet, DenseNet
  • [Week 9] ResNeXt, SENet, DnCNN, super-resolution, inverse problems
  • [Week 10-11] Flow models
  • [Week 12-13] Variational autoencoders
  • [Week 14-15] Generative adversarial networks

Course Information

Course materials will be posted on this website. eTL will be used for announcements, homework submission, and the posting of homework and exam scores.

Instructor

Ernest K. Ryu, 27-205

Graduate Teaching Assistants

Kibeom Myoung

Jisun Park

Jaewook Suh

Undergraduate Teaching Assistants

Yongin Cho

Juhyeok Choi

Chaeju Lee

Gyeongmin Lee

Hyojun Lee

Jihoon Jun

Jongchan Noh

Youngmin Son

Lectures

Mondays and Wednesdays 5:00–6:15pm at 28-102.

Exams

The midterm and final exams will be hand-written (no computers), in-person, and 4 hours long.

Grading

Homework 30%, midterm exam 30%, final exam 40%.

Non-programming prerequisites

Good knowledge of the following subjects is required.

  • Vector calculus: proficiency with gradients and the chain rule.
  • Linear algebra: proficiency with matrix-vector products.
  • Basic probability: random variables, expectation, mean, variance, discrete and continuous random variables, Gaussian random variables.

No prior background in machine learning or deep learning is required.

Programming prerequisites

Although programming is an essential part of deep learning, I do not want to make substantial Python programming background a prerequisite for this course. Therefore, I will record and upload a Python tutorial for students to study before the start of the semester. I expect this tutorial to be about 5 hours long, and it will cover the necessary Python programming concepts such as variables, loops, functions, classes, objects, inheritance, exceptions, lists, tuples, decorators, iterators, the NumPy library, and the Matplotlib library. Of course, students already familiar with these concepts need not spend time on the tutorial.
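
The following minimal sketch (my own illustration, not part of the tutorial) touches several of the listed concepts in a few lines: a class, inheritance, a decorator, NumPy, and Matplotlib.

    import time
    import numpy as np
    import matplotlib.pyplot as plt

    def timed(fn):                          # decorator: wraps a function with timing
        def wrapper(*args, **kwargs):
            start = time.time()
            result = fn(*args, **kwargs)
            print(f"{fn.__name__} took {time.time() - start:.4f} s")
            return result
        return wrapper

    class Polynomial:                       # class: stores coefficients c_0, c_1, ...
        def __init__(self, coeffs):
            self.coeffs = list(coeffs)

        def __call__(self, x):              # evaluates sum_k c_k x^k; works on NumPy arrays
            return sum(c * x**k for k, c in enumerate(self.coeffs))

    class Quadratic(Polynomial):            # inheritance: a special case of Polynomial
        def __init__(self, a, b, c):
            super().__init__([c, b, a])

    @timed
    def plot_poly(p, lo=-2.0, hi=2.0):
        xs = np.linspace(lo, hi, 200)       # NumPy: a vectorized grid of points
        plt.plot(xs, p(xs))                 # Matplotlib: draw the curve
        plt.show()

    plot_poly(Quadratic(1.0, 0.0, -1.0))    # plots x^2 - 1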

Programming in deep learning tends to be relatively simple if one utilizes frameworks such as PyTorch. These frameworks do use advanced programming concepts such as inheritance, decorators, and iterators, but if you are roughly familiar with these concepts, I do not expect the Python programming to be a major challenge in this course.
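
To give a sense of how these concepts appear in practice, here is a minimal sketch (my own example, not course material) of a typical PyTorch training loop: the model is defined by subclassing nn.Module (inheritance), and mini-batches are consumed by iterating over a DataLoader (an iterator). The data is random and purely illustrative.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    class TwoLayerNet(nn.Module):           # inheritance: subclass nn.Module
        def __init__(self):
            super().__init__()
            self.fc1 = nn.Linear(784, 128)
            self.fc2 = nn.Linear(128, 10)

        def forward(self, x):
            return self.fc2(torch.relu(self.fc1(x)))

    # Random stand-in data, just to make the sketch self-contained.
    X, y = torch.randn(256, 784), torch.randint(0, 10, (256,))
    loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

    model = TwoLayerNet()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    for xb, yb in loader:                   # iterator: DataLoader yields mini-batches
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()                     # autograd computes the gradients
        optimizer.step()                    # one SGD step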

Textbooks and other references

The course does not have a designated textbook or reference. However, you may find the following references useful: Fei-Fei Li's CS 231n at Stanford, Pieter Abbeel's Deep Unsupervised Learning at Berkeley, Dive into Deep Learning by Zhang, Lipton, Li, and Smola, and Deep Learning by Goodfellow, Bengio, and Courville.

Throughout this class, you will spend many hours computing gradients. While the chain rule is taught in any vector calculus curriculum, chain rule formulae in vector and matrix forms are not. Fundamentally, you should first carry out the gradient calculations elementwise and then collect the result into a vectorized form. Nevertheless, you may find the following references useful: Matrix calculus notes 1, matrix calculus notes 2, and matrix calculus notes 3 (Chapters 1–3).
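
As a small worked example of the elementwise-then-vectorize approach (my own illustration, not taken from the notes above): let $f(x) = \tfrac{1}{2}\|Ax - b\|^2$ with $A \in \mathbb{R}^{m \times n}$. Differentiating elementwise gives

    \frac{\partial f}{\partial x_j} = \sum_{i=1}^{m} \Big( \sum_{k=1}^{n} A_{ik} x_k - b_i \Big) A_{ij}, \qquad j = 1, \dots, n,

and collecting the n partial derivatives into a vector yields the vectorized form $\nabla f(x) = A^\top (Ax - b)$.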