January 19, 2019


tomepel/Technical_Book_DL

This note presents in a technical though hopefully pedagogical way the three most common forms of neural network architectures: Feedforward, Convolutional and Recurrent.

repo name tomepel/Technical_Book_DL
repo link https://github.com/tomepel/Technical_Book_DL
language TeX
size (curr.) 6374 kB
stars (curr.) 1409
created 2017-09-04

Technical Book on Deep Learning

For each architecture, the fundamental building blocks are detailed. The forward pass and the update rules of the backpropagation algorithm are then derived in full.
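To make concrete what "forward pass plus backpropagation update rules" means for the simplest of the three architectures, here is a minimal NumPy sketch of a one-hidden-layer feedforward network. It is an illustration only, not taken from the note: the sigmoid activation, squared-error loss, layer sizes, and variable names are all assumptions made for this example.

```python
import numpy as np

# Toy sketch (not from the note): forward pass and backpropagation
# updates for a one-hidden-layer feedforward network with a sigmoid
# hidden activation and squared-error loss.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 4 samples, 3 input features, scalar target
X = rng.normal(size=(4, 3))
y = rng.normal(size=(4, 1))

# Parameters: 3 inputs -> 5 hidden units -> 1 output
W1 = 0.1 * rng.normal(size=(3, 5)); b1 = np.zeros(5)
W2 = 0.1 * rng.normal(size=(5, 1)); b2 = np.zeros(1)

lr = 0.1
losses = []
for _ in range(200):
    # Forward pass
    h = sigmoid(X @ W1 + b1)            # hidden activations
    y_hat = h @ W2 + b2                 # linear output layer
    losses.append(0.5 * np.mean((y_hat - y) ** 2))

    # Backward pass: chain rule, averaged over the batch
    d_out = (y_hat - y) / len(X)        # dL/dy_hat
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * h * (1 - h)  # back through the sigmoid
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)

    # Gradient-descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

The note carries out exactly these chain-rule computations in full index notation, for feedforward, convolutional, and recurrent networks alike.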

The pdf of the whole document can be downloaded directly: White_book.pdf.

Otherwise, all the figures contained in the note are included in this repo, along with the tex files needed for compilation. Just don’t forget to cite the source if you use any of this material! :)

Hope it can help others!

Acknowledgement

This work adds no value to the deep learning topic on its own. It is just a reformulation of the ideas of brighter researchers to fit a particular mindset: preferring formulas with ten indices, where one knows precisely what one is manipulating, over (in my opinion sometimes opaque) matrix formulations where the dimensions of the objects are rarely if ever specified.

Among the brighter people from whom I learned online is Andrew Ng. His Coursera class (https://www.coursera.org/learn/machine-learning) was my first contact with neural networks, and this pedagogical introduction allowed me to build on solid ground.

I also wish to particularly thank Hugo Larochelle, who not only built a wonderful deep learning class (http://info.usherbrooke.ca/hlarochelle/neural_networks/content.html), but was also kind enough to answer emails from a complete beginner and stranger!

The Stanford class on convolutional networks (http://cs231n.github.io/convolutional-networks/) proved extremely valuable to me, as did the one on Natural Language Processing (http://web.stanford.edu/class/cs224n/).

I also benefited greatly from Sebastian Ruder’s blog (http://ruder.io/#open), both from the blog pages on gradient descent optimization techniques and from the author himself.

I learned more about LSTMs from colah’s blog (http://colah.github.io/posts/2015-08-Understanding-LSTMs/), and some of my drawings are inspired by it.

I also thank Jonathan Del Hoyo for the great articles that he regularly shares on LinkedIn.

Many thanks go to my collaborators at Mediamobile, who let me dig as deep as I wanted on Neural Networks. I am especially indebted to Clément, Nicolas, Jessica, Christine and Céline.

Thanks to Jean-Michel Loubes and Fabrice Gamboa, from whom I learned a great deal on probability theory and statistics.

I end this list with my employer, Mediamobile, which has been kind enough to let me work on this topic with complete freedom. A special thanks to Philippe, who supervised me with the perfect balance of feedback and freedom!

Contact

If you detect any typo or error (as I am sure some unfortunately remain), or feel that I forgot to cite an important source, don’t hesitate to email me: thomas.epelbaum@shift-technology.com
