Vadikus/practicalDL
| repo name | Vadikus/practicalDL |
| repo link | https://github.com/Vadikus/practicalDL |
| language | Jupyter Notebook |
| size | 3453 kB |
| stars | 40 |
| created | 2019-11-13 |
| license | Apache License 2.0 |
Educational materials for the Frontend Masters course “A Practical Guide to Deep Learning with TensorFlow 2.0 and Keras”
Setup
Prerequisite: Python
To use Jupyter Notebooks on your computer, please follow the installation instructions. Note: the Anaconda distribution is recommended if you are not familiar with other Python package management systems.
Guided Steps
- Install dependencies: `pip install -r requirements.txt`
- Run Jupyter Notebook: `jupyter notebook`
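A quick sanity check (not part of the original materials) that you can run in a notebook cell to confirm the setup. It assumes a TensorFlow 2.x install from `requirements.txt`; on TF 2.0 exactly, `list_physical_devices` still lives under `tf.config.experimental`.

```python
# Optional sanity check: confirm TensorFlow 2.x imports and see whether a GPU is visible.
import sys
import tensorflow as tf

print("Python:", sys.version.split()[0])
print("TensorFlow:", tf.__version__)                    # should start with "2."
print("GPUs:", tf.config.list_physical_devices("GPU"))  # empty list on CPU-only machines
```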
Agenda/Curriculum
00) Introductions:
- About myself
- About this course/workshop - quick demo & tools overview
- Whiteboard drawings
- Jupyter Notebooks
- Terminal commands (pip, jupyter -> !cmd, pyenv & conda)
- GitHub repos (for class, TFJS -> pose demo, books repos, TF/Keras demos)
- Websites (TF, TF-Hub)
- Books:
  - “Deep Learning with Python” by François Chollet
  - “Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems” by Aurélien Géron
  - “Hands-On Neural Networks with TensorFlow 2.0” by Paolo Galeone
- (plot) What is the difference between Statistics / Machine Learning / Deep Learning / Artificial Intelligence? @matvelloso. Shoe size example. Information reduction.
- (plot) Compute + Algorithm + IO
- (plot) Why now, AI? Chronological retrospective.
- (plot) Hardware advances: SIMD, Tensor Cores, TPU, FPGA, Quantum Computing
- (plot) HW, compilers, TensorFlow and Keras -> computational graph, memory allocation
0) Don’t be scared of linear regression - it does not “byte”!.. Basic terminology:
- Linear regression Notebook (a minimal Keras sketch follows this section)
- (plot) What is a neuron? What is an activation function?
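A minimal sketch of the idea (not the course notebook itself): a single Dense neuron with no activation is exactly linear regression, so Keras can recover the slope and intercept of noisy synthetic data. The data and hyperparameters below are illustrative.

```python
# Minimal sketch: linear regression as a single Dense neuron in Keras.
# Synthetic data: y = 2x + 1 plus a little noise.
import numpy as np
import tensorflow as tf

x = np.linspace(-1.0, 1.0, 200, dtype="float32").reshape(-1, 1)
y = 2.0 * x + 1.0 + np.random.normal(scale=0.05, size=x.shape).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, input_shape=(1,))  # one neuron, no activation = a straight line
])
model.compile(optimizer="sgd", loss="mse")
model.fit(x, y, epochs=200, verbose=0)

w, b = model.layers[0].get_weights()
print("weight:", w[0, 0], "bias:", b[0])        # expect values close to 2 and 1
```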
1) Computer Vision:
- Handwritten digits (MNIST) recognized with a fully connected neural network (see the sketch after this section)
- (plot) One-hot encoding
- Information theory and representation: MNIST Principal Component Analysis
- (plot) Fully connected vs. convolutional neural networks
- (plot + Notebook) Convolutions, pooling, dropouts
- (plot) Transfer learning and different topologies
- Style transfer
- (Convolutional) neural network attention - ML explainability
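A minimal sketch of the MNIST classifier idea (layer sizes and epoch count are illustrative assumptions, not taken from the course notebooks):

```python
# Minimal sketch: classify MNIST digits with a small fully connected network in tf.keras.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0    # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),   # 28x28 image -> 784-element vector
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"), # one probability per digit class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",  # integer labels, no one-hot needed
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```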
2) Text Analytics - Natural Language Processing (NLP):
- Toxicity demo
- (plot) How to represent text as numbers? Text vectorization: one-hot encoding, tokenization, word embeddings
- IMDB movie review dataset prediction with one-hot encoding in Keras
- Word embeddings and the Embedding Projector
- Embeddings vs. one-hot encoding with a fully connected neural network for IMDB (see the sketch after this section)
- Can an LSTM guess the author?
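A minimal sketch of the embedding approach on IMDB (vocabulary size, sequence length, and layer sizes are illustrative assumptions, not the course's settings):

```python
# Minimal sketch: IMDB sentiment with an Embedding layer instead of one-hot vectors.
import tensorflow as tf

num_words, maxlen = 10000, 200
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.imdb.load_data(num_words=num_words)
x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = tf.keras.preprocessing.sequence.pad_sequences(x_test, maxlen=maxlen)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(num_words, 16, input_length=maxlen),  # learned word vectors
    tf.keras.layers.GlobalAveragePooling1D(),                       # average over the sequence
    tf.keras.layers.Dense(1, activation="sigmoid"),                 # positive / negative review
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```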
3) Can a robot juggle? Reinforcement Learning:
- (plot) Actors and environment
- Reinforcement learning (a minimal actor-environment loop follows this section)
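A minimal sketch of the actor-environment loop with a random policy, just to show the interaction pattern. It assumes the `gym` package (which may not be listed in `requirements.txt`) and the classic pre-0.26 Gym API, where `step` returns four values.

```python
# Minimal sketch of the actor <-> environment loop (random actor, no learning).
# Assumes the `gym` package and its classic pre-0.26 API.
import gym

env = gym.make("CartPole-v1")
obs = env.reset()                                 # initial observation from the environment
total_reward, done = 0.0, False
while not done:
    action = env.action_space.sample()            # random "actor": pick any valid action
    obs, reward, done, info = env.step(action)    # environment returns new state + reward
    total_reward += reward
print("episode return:", total_reward)
env.close()
```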
4) Operationalization, aka “10 ways to put your slapdash code into production…”
- (plot) Data - Training - Deployment aka MLOps or CI/CD for Data Scientists
5) Summary
- Quick recap of what we learned so far