July 24, 2019

1360 words 7 mins read



An interactive book on deep learning. Much easy, so MXNet. Wow. [Straight Dope is growing up] —> Much of this content has been incorporated into the new Dive into Deep Learning Book available at https://d2l.ai/.

repo name zackchase/mxnet-the-straight-dope
repo link https://github.com/zackchase/mxnet-the-straight-dope
homepage https://d2l.ai/
language Jupyter Notebook
size (curr.) 220622 kB
stars (curr.) 2495
created 2017-07-11
license Apache License 2.0

Deep Learning - The Straight Dope (Deprecated: please see d2l.ai)

This content has been moved to the Dive into Deep Learning book, freely available at https://d2l.ai/.


This repo contains an incremental sequence of notebooks designed to teach deep learning, MXNet, and the gluon interface. Our goal is to leverage the strengths of Jupyter notebooks to present prose, graphics, equations, and code together in one place. If we’re successful, the result will be a resource that could be simultaneously a book, course material, a prop for live tutorials, and a resource for plagiarising (with our blessing) useful code. To our knowledge, no existing source both (1) teaches the full breadth of concepts in modern deep learning and (2) interleaves an engaging textbook with runnable code. We’ll find out by the end of this venture whether or not that void exists for a good reason.

Another unique aspect of this book is its authorship process. We are developing this resource fully in public view and are making it available for free in its entirety. While the book has a few primary authors to set the tone and shape the content, we welcome contributions from the community and hope to coauthor chapters and entire sections with experts and community members. Already we’ve received contributions ranging from typo corrections to full working examples.

Implementation with Apache MXNet

Throughout this book, we rely upon MXNet to teach core concepts, advanced topics, and a full complement of applications. MXNet is widely used in production environments owing to its strong reputation for speed. Now, with gluon, MXNet’s new imperative interface (currently in alpha), doing research in MXNet is easy.
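To give a flavor of that imperative style, here is a minimal sketch of defining, initializing, and differentiating through a small network with gluon (the layer sizes and dummy data are illustrative, not taken from the book):

```python
# Minimal gluon sketch (illustrative; layer sizes and data are made up).
import mxnet as mx
from mxnet import gluon, nd, autograd

# Define a small feed-forward network layer by layer.
net = gluon.nn.Sequential()
with net.name_scope():
    net.add(gluon.nn.Dense(64, activation='relu'))
    net.add(gluon.nn.Dense(10))

# Shapes are inferred lazily, so no input dimensions are needed here.
net.initialize(mx.init.Xavier())

# Forward and backward passes run eagerly, like ordinary Python.
x = nd.random.normal(shape=(32, 784))
with autograd.record():
    loss = gluon.loss.SoftmaxCrossEntropyLoss()(net(x), nd.zeros(32))
loss.backward()
```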


To run these notebooks, you’ll want to build MXNet from source. Fortunately, this is easy (especially on Linux) if you follow these instructions. You’ll also want to install Jupyter and use Python 3 (because it’s 2017).
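After building, a quick smoke test (our suggestion, not part of the official instructions) confirms that MXNet and Python 3 are wired up correctly:

```python
# Post-install smoke test (a suggested check, not from the official docs).
import sys
assert sys.version_info[0] == 3, "These notebooks assume Python 3"

import mxnet as mx
from mxnet import nd

print(mx.__version__)           # e.g. '1.x.x'
print(nd.ones((2, 3)) * 2)      # a small ndarray op to confirm the build works
```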


The authors (& others) are increasingly giving talks based on the content in this book. Some of these slide decks (like the 6-hour KDD 2017 tutorial) are gigantic, so we’re collecting them separately in this repo. Contribute there if you’d like to share tutorials or course material based on this book.


As we write the book, large stable sections are simultaneously being translated into Chinese, available in a web version and via GitHub source.

Table of contents

Part 1: Deep Learning Fundamentals

Part 2: Applications

Part 3: Advanced Methods


  • Appendix 1: Cheatsheets
    • gluon
    • PyTorch to MXNet (work in progress)
    • Tensorflow to MXNet
    • Keras to MXNet
    • Math to MXNet

Choose your own adventure

We’ve designed these tutorials so that you can traverse the curriculum in more than one way.

  • Anarchist - Choose whatever you want to read, whenever you want to read it.
  • Imperialist - Proceed through all tutorials in order. In this fashion you will be exposed to each model first from scratch, writing all the code yourself except for the basic linear algebra primitives and automatic differentiation.
  • Capitalist - If you don’t care how things work (or already know) and just want to see working code in gluon, you can skip the from-scratch tutorials and go straight to the production-like code using the high-level gluon front end (see the sketch after this list).
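To make the contrast concrete, here is a toy linear regression written both ways. This is an illustrative sketch, not code from the book; the data, learning rate, and iteration count are arbitrary:

```python
# The same toy linear regression on two paths (illustrative, not from the book).
from mxnet import nd, autograd, gluon

X = nd.random.normal(shape=(100, 2))
y = 2 * X[:, 0] - 3.4 * X[:, 1] + 4.2

# "Imperialist" path: from scratch, using only ndarray and autograd.
w, b = nd.random.normal(shape=(2,)), nd.zeros(1)
for p in (w, b):
    p.attach_grad()
for _ in range(10):
    with autograd.record():
        loss = ((nd.dot(X, w) + b - y) ** 2).mean()
    loss.backward()
    for p in (w, b):
        p[:] = p - 0.1 * p.grad        # plain SGD update, written by hand

# "Capitalist" path: the same model via the high-level gluon front end.
net = gluon.nn.Dense(1)
net.initialize()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})
l2 = gluon.loss.L2Loss()
for _ in range(10):
    with autograd.record():
        loss = l2(net(X), y)
    loss.backward()
    trainer.step(batch_size=X.shape[0])
```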


This evolving creature is a collaborative effort (see contributors tab). The lead writers, assimilators, and coders include:


In creating these tutorials, we’ve drawn inspiration from some of the resources that helped us learn deep/machine learning with other libraries in the past. These include:


  • Already, in the short time this project has been off the ground, we’ve gotten some helpful PRs from the community with pedagogical suggestions, typo corrections, and other useful fixes. If you’re inclined, please contribute!