rasbt/stat479-deep-learning-ss19
Course material for STAT 479: Deep Learning (SS 2019) at the University of Wisconsin-Madison
| item | value |
| --- | --- |
| repo name | rasbt/stat479-deep-learning-ss19 |
| repo link | https://github.com/rasbt/stat479-deep-learning-ss19 |
| homepage | http://pages.stat.wisc.edu/~sraschka/teaching/stat479-ss2019/ |
| language | Jupyter Notebook |
| size (curr.) | 168326 kB |
| stars (curr.) | 349 |
| created | 2019-01-19 |
| license | |
STAT479: Deep Learning (Spring 2019)
Instructor: Sebastian Raschka
Lecture material for the STAT 479 Deep Learning course at the University of Wisconsin-Madison. For details, please see the course website at http://pages.stat.wisc.edu/~sraschka/teaching/stat479-ss2019/
Course Calendar
Please see http://pages.stat.wisc.edu/~sraschka/teaching/stat479-ss2019/#calendar.
Topic Outline
- History of neural networks and what makes deep learning different from “classic machine learning”
- Introduction to the concept of neural networks by connecting it to familiar models such as logistic regression and multinomial logistic regression (both of which can be viewed as special cases of neural networks: single-layer nets)
- Modeling and deriving non-convex loss functions through computation graphs
- Introduction to automatic differentiation and PyTorch for efficient data manipulation using GPUs (see the short sketch after this outline)
- Convolutional neural networks for image analysis
- 1D convolutions for sequence analysis
- Sequence analysis with recurrent neural networks
- Generative models to sample from input distributions
- Autoencoders
- Variational autoencoders
- Generative Adversarial Networks
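To make the connection between logistic regression, single-layer networks, and automatic differentiation concrete, here is a minimal PyTorch sketch (not part of the official course material; the toy data, variable names, and hyperparameters are made up for illustration) that trains logistic regression as a single-layer net and lets autograd derive the gradients from the computation graph:

```python
import torch
import torch.nn.functional as F

# Toy data: 4 examples with 2 features each, binary labels (illustrative only).
X = torch.tensor([[0.5, 1.0],
                  [1.5, -0.5],
                  [-1.0, 2.0],
                  [2.0, 0.3]])
y = torch.tensor([1., 0., 1., 0.])

# Logistic regression as a single-layer net: weights and bias with gradient tracking.
w = torch.zeros(2, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

learning_rate = 0.1
for epoch in range(100):
    # Forward pass: linear layer followed by the logistic (sigmoid) loss.
    logits = X @ w + b
    loss = F.binary_cross_entropy_with_logits(logits, y)

    # Backward pass: autograd traverses the computation graph recorded during
    # the forward pass and fills in d(loss)/dw and d(loss)/db.
    loss.backward()

    # Manual gradient-descent update (no optimizer, to keep the mechanics explicit).
    with torch.no_grad():
        w -= learning_rate * w.grad
        b -= learning_rate * b.grad
        w.grad.zero_()
        b.grad.zero_()

print(loss.item(), w.detach(), b.detach())
```

The same pattern carries over to the deeper architectures listed above: only the forward pass changes, while `loss.backward()` continues to supply the gradients.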
Material
- L01: What are Machine Learning and Deep Learning? An Overview. [Slides]
- L02: A Brief Summary of the History of Neural Networks and Deep Learning. [Slides]
- L04: Linear Algebra for Deep Learning. [Slides]
- L06: Automatic Differentiation with PyTorch. [Slides] [Code]
- L07: Cloud Computing. [Slides]
- L08: Logistic Regression and Multi-class Classification. [Slides] [Code]
- L11: Normalization and Weight Initialization. [Slides]
- L12: Learning Rates and Optimization Algorithms. [Slides]
- L13: Introduction to Convolutional Neural Networks. [Slides (part 1)] [Slides (part 2)] [Slides (part 3)]
- L14: Introduction to Recurrent Neural Networks. [Slides (part 1)] [Slides (part 2)] [Code]
- L16: Variational Autoencoders (skipped due to timing constraints)
- A summary/gallery of some of the awesome projects that students in this class worked on.
Project Presentation Awards
Without exception, we had amazing project presentations this semester. Nonetheless, we have some winners: the top 5 project presentations in each of the 3 categories, as determined by voting among the ~65 students:
Best Oral Presentation:
- Saisharan Chimbiki, Grant Dakovich, Nick Vander Heyden (Creating Tweets inspired by Deepak Chopra), average score: 8.417
- Josh Duchniak, Drew Huang, Jordan Vonderwell (Predicting Blog Authors’ Age and Gender), average score: 7.663
- Sam Berglin, Jiahui Jiang, Zheming Lian (CNNs for 3D Image Classification), average score: 7.595
- Christina Gregis, Wengie Wang, Yezhou Li (Music Genre Classification Based on Lyrics), average score: 7.588
- Ping Yu, Ke Chen, Runfeng Yong (NLP on Amazon Fine Food Reviews), average score: 7.525
Most Creative Project:
- Saisharan Chimbiki, Grant Dakovich, Nick Vander Heyden (Creating Tweets inspired by Deepak Chopra), average score: 8.313
- Yien Xu, Boyang Wei, Jiongyi Cao (Judging a Book by its Cover: A Modern Approach), average score: 7.952
- Xueqian Zhang, Yuhan Meng, Yuchen Zeng (Handwritten Math Symbol Recognization), average score: 7.919
- Jinhyung Ahn, Jiawen Chen, Lu Li (Diagnosing Plant Diseases from Images for Improving Agricultural Food Production), average score: 7.917
- Poet Larsen, Reng Chiz Der, Noah Haselow (Convolutional Neural Networks for Audio Recognition), average score: 7.854
Best Visualizations:
- Ping Yu, Ke Chen, Runfeng Yong (NLP on Amazon Fine Food Reviews), average score: 8.189
- Xueqian Zhang, Yuhan Meng, Yuchen Zeng (Handwritten Math Symbol Recognization), average score: 8.153
- Saisharan Chimbiki, Grant Dakovich, Nick Vander Heyden (Creating Tweets inspired by Deepak Chopra), average score: 7.677
- Poet Larsen, Reng Chiz Der, Noah Haselow (Convolutional Neural Networks for Audio Recognition), average score: 7.656
- Yien Xu, Boyang Wei, Jiongyi Cao (Judging a Book by its Cover: A Modern Approach), average score: 7.490