October 31, 2018

1581 words 8 mins read

ceobillionaire/WHAT-AI-CAN-DO-FOR-YOU

Breakthrough AI Papers and CODE for Any Industry.

repo name ceobillionaire/WHAT-AI-CAN-DO-FOR-YOU
repo link https://github.com/ceobillionaire/WHAT-AI-CAN-DO-FOR-YOU
homepage http://www.montreal.ai
language
size (curr.) 60526 kB
stars (curr.) 240
created 2016-11-17
license

WHAT-AI-CAN-DO-FOR-YOU

“The business plans of the next 10,000 startups are easy to forecast: Take X and add AI.” — Kevin Kelly

Here are the breakthrough AI papers and CODE for any industry.

“A hundred years ago electricity transformed countless industries; 20 years ago the internet did, too. Artificial intelligence is about to do the same. To take advantage, companies need to understand what AI can do.” — Andrew Ng

If you are a newcomer to AI, the first question you may have is “What can AI do now, and how does it relate to my strategy?” Here are the breakthrough AI papers and CODE for any industry.

Deep Learning BOOKS

0.0 Deep Learning

[0] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. "Deep Learning". An MIT Press book (2016).

“Written by three experts in the field, Deep Learning is the only comprehensive book on the subject.” — Elon Musk, co-chair of OpenAI; co-founder and CEO of Tesla and SpaceX

0.1 Deep Reinforcement Learning

[1] Richard S. Sutton and Andrew G. Barto. "Reinforcement Learning: An Introduction (2nd Edition)"

[2] Pieter Abbeel and John Schulman | OpenAI / Berkeley AI Research Lab. "Deep Reinforcement Learning through Policy Optimization"

[3] Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas. "Learning to learn by gradient descent by gradient descent"

   CODE Learning to Learn in TensorFlow

       arXiv Learning to Learn for Global Optimization of Black Box Functions

0.2 Computer Programming

[4] Antti Laaksonen. "Competitive Programmer’s Handbook"

Notable Deep Learning PAPERS

1.0 Papers Reading Roadmap

[0] "Deep Learning Papers Reading Roadmap"

   CODE Download All Papers

1.1 Neural Information Processing Systems Conference - NIPS 2016

December 5–8, 2016, Centre Convencions Internacional de Barcelona, Barcelona, Spain

The Thirtieth Annual Conference on Neural Information Processing Systems (NIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia, and oral and poster presentations of refereed papers. Following the conference, there are workshops, which provide a less formal setting.

[1] Full Videos "NIPS 2016 : 57 Episodes"

[2] CODE "All Code Implementations for NIPS 2016 papers"

[3] arXiv + CODE "Implementations of Some of the Best arXiv Papers"

1.3 Wasserstein GAN

[4] arXiv "Wasserstein GAN"

[5] CODE "Code accompanying the paper “Wasserstein GAN”"

1.4 The Predictron

[6] arXiv "The Predictron: End-To-End Learning and Planning"

[7] CODE "A TensorFlow implementation of “The Predictron: End-To-End Learning and Planning”"

1.5 Meta-RL

[8] arXiv "Learning to reinforcement learn"

[9] CODE "Meta-RL"

1.6 Neural Architecture Search with RL

[10] arXiv "Neural Architecture Search with Reinforcement Learning"

1.7 Superior Generalizability and Interpretability

Recursion is the key to true generalisation.

[11] arXiv "Making Neural Programming Architectures Generalize via Recursion"

1.8 Seq2seq RL GANs for Dialogue Generation

[12] arXiv "Adversarial Learning for Neural Dialogue Generation"

1.9 DeepMind’s PathNet: Modular Deep Learning Architecture for AGI

For artificial general intelligence (AGI), it would be efficient if multiple users trained the same giant neural network, permitting parameter reuse without catastrophic forgetting. PathNet is a first step in this direction. It is a neural network algorithm that uses agents embedded in the network whose task is to discover which parts of the network to re-use for new tasks.

[13] arXiv "PathNet: Evolution Channels Gradient Descent in Super Neural Networks"
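
To make the pathway idea concrete, here is a toy NumPy sketch (not the paper's implementation): each "path" is a binary mask over a fixed pool of modules, a stand-in fitness function plays the role of task reward, and a copy-and-mutate tournament evolves the population, which is the selection mechanism PathNet describes.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_MODULES, ACTIVE = 12, 4          # modules in the pool, modules a fresh path may use

def random_path():
    """A path is a binary mask selecting which modules are active."""
    mask = np.zeros(NUM_MODULES, dtype=bool)
    mask[rng.choice(NUM_MODULES, ACTIVE, replace=False)] = True
    return mask

def fitness(path):
    """Placeholder: in PathNet this would be task reward after training
    only the modules selected by `path`."""
    target = np.arange(NUM_MODULES) < ACTIVE   # pretend the first few modules are the useful ones
    return float((path & target).sum())

population = [random_path() for _ in range(8)]
for _ in range(100):
    i, j = rng.choice(len(population), 2, replace=False)   # tournament between two agents
    winner, loser = (i, j) if fitness(population[i]) >= fitness(population[j]) else (j, i)
    child = population[winner].copy()                      # overwrite the loser with the winner's path...
    child ^= rng.random(NUM_MODULES) < 0.1                 # ...plus mutation (budget not enforced in this toy)
    population[loser] = child

best = max(population, key=fitness)
print("best path uses modules:", np.flatnonzero(best))
```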

1.10 Outrageously Large Neural Networks

… achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters.

[14] arXiv "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer"
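
As a rough illustration of the sparsely-gated layer (a NumPy sketch, not the paper's code): a gating network scores all experts, only the top-k experts are evaluated, and their outputs are combined with softmax weights, which is why capacity can grow much faster than per-example compute.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_OUT, NUM_EXPERTS, TOP_K = 16, 8, 32, 4

# Each expert is a small linear layer; the gate is another linear layer.
experts = [rng.standard_normal((D_IN, D_OUT)) * 0.1 for _ in range(NUM_EXPERTS)]
gate_w = rng.standard_normal((D_IN, NUM_EXPERTS)) * 0.1

def moe_layer(x):
    """Sparsely-gated MoE: route x only to the top-k experts chosen by the gate."""
    logits = x @ gate_w                               # one score per expert
    top = np.argsort(logits)[-TOP_K:]                 # indices of the k best experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                          # softmax over the selected experts only
    # Only the selected experts are evaluated; the rest of the (huge) model is skipped.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_layer(rng.standard_normal(D_IN))
print(y.shape)  # (8,)
```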

1.11 Emergence of Locomotion Behaviours in Rich Environments

[15] arXiv "Emergence of Locomotion Behaviours in Rich Environments"

1.12 Learning human behaviors from motion capture by adversarial imitation

[16] arXiv "Learning human behaviors from motion capture by adversarial imitation"

1.13 Hindsight Experience Replay

[17] arXiv "Hindsight Experience Replay"

1.14 Cardiologist-Level Arrhythmia Detection with Convolutional Neural Networks

[18] arXiv "Cardiologist-Level Arrhythmia Detection with Convolutional Neural Networks"

1.15 End-to-End Learning of Semantic Grasping

[19] arXiv "End-to-End Learning of Semantic Grasping"

1.16 Programmable Agents

[20] arXiv "Programmable Agents"

1.17 One Model To Learn Them All

[21] arXiv "One Model To Learn Them All"

[22] CODE "T2T: Tensor2Tensor Transformers"

Deep Learning LECTURES and TUTORIALS

2.0 Implementation of Reinforcement Learning Algorithms

[0] CODE "Implementation of Reinforcement Learning Algorithms. Python, OpenAI Gym, Tensorflow. Exercises and Solutions to accompany Sutton’s Book and David Silver’s course."
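
For readers who want a feel for what those exercises contain, here is a minimal tabular Q-learning sketch on a toy chain MDP defined inline (plain NumPy; the environment and hyperparameters are illustrative assumptions, not taken from the repository).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy chain MDP: states 0..4, actions 0 = left, 1 = right; reward 1 on reaching state 4.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4

def step(state, action):
    next_state = min(state + 1, GOAL) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, epsilon = 0.1, 0.95, 0.1

for episode in range(500):
    state, done = 0, False
    while not done:
        # epsilon-greedy action selection
        action = rng.integers(N_ACTIONS) if rng.random() < epsilon else int(Q[state].argmax())
        next_state, reward, done = step(state, action)
        # one-step Q-learning update
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(np.round(Q, 2))  # the learned policy should always move right
```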

2.1 Deep Natural Language Processing

[1] Video + CODE + Slides "Deep Natural Language Processing" course offered in Hilary Term 2017 at the University of Oxford.

2.2 TensorFlow Dev Summit 2017

Join the TensorFlow team and machine learning experts from around the world for a full day of technical talks, demos, and conversations.

[2] Video + CODE "TensorFlow Dev Summit 2017" by Google Developers.

2.3 Python Data Science Handbook

[3] CODE "Jupyter Notebooks for the Python Data Science Handbook" by Jake Vanderplas.

2.4 Learn How to Build State of the Art Models

[4] Video + CODE "Practical Deep Learning For Coders, Part 1" by Jeremy Howard.

2.5 NIPS 2016 Tutorial: Generative Adversarial Networks

[5] arXiv "NIPS 2016 Tutorial: Generative Adversarial Networks" by Ian Goodfellow.

2.6 Data Science IPython Notebooks

[6] CODE "Data Science Python Notebooks: Deep learning (TensorFlow, Theano, Caffe), Scikit-learn, Kaggle, Big Data (Spark, Hadoop MapReduce, HDFS), Pandas, NumPy, SciPy…"

2.7 AI Playbook

[7] TUTORIALS & CODE "AI Playbook"

Deep Learning TOOLS

3.0 TensorFlow

TensorFlow is an Open Source Software Library for Machine Intelligence: https://www.tensorflow.org

[0] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. "WhitePaper - TensorFlow: Large-scale machine learning on heterogeneous systems"

   CODE Installation

   CODE TensorFlow Tutorial and Examples for Beginners

   CODE Models built with TensorFlow
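
For orientation, here is a minimal example in the TensorFlow 1.x graph-and-session style that was current when the whitepaper was written (linear regression by gradient descent); treat it as a sketch rather than part of the linked tutorials.

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x API

# Synthetic data: y = 3x + 2 plus noise.
x_data = np.random.rand(100).astype(np.float32)
y_data = 3.0 * x_data + 2.0 + np.random.normal(0, 0.05, 100).astype(np.float32)

# Build the graph: a single weight and bias.
w = tf.Variable(0.0)
b = tf.Variable(0.0)
y_hat = w * x_data + b
loss = tf.reduce_mean(tf.square(y_hat - y_data))
train_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(200):
        sess.run(train_op)
    print(sess.run([w, b]))  # should be close to [3.0, 2.0]
```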

3.1 OpenAI Gym

The OpenAI Gym is a toolkit for developing and comparing reinforcement learning algorithms: https://gym.openai.com

[1] Greg Brockman and Vicki Cheung and Ludwig Pettersson and Jonas Schneider and John Schulman and Jie Tang and Wojciech Zaremba. "OpenAI Gym WhitePaper"

   CODE Installation of the gym open-source library

   CODE How to create new environments
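
The basic interaction loop looks roughly like this (classic Gym API of that era, where step returns a 4-tuple; CartPole-v0 is just a convenient example environment):

```python
import gym

env = gym.make('CartPole-v0')
for episode in range(5):
    observation = env.reset()
    total_reward, done = 0.0, False
    while not done:
        action = env.action_space.sample()                  # random agent
        observation, reward, done, info = env.step(action)  # 4-tuple in classic Gym
        total_reward += reward
    print('episode', episode, 'reward', total_reward)
env.close()
```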

3.2 Universe

Universe: a software platform for measuring and training an AI’s general intelligence across the world’s supply of games, websites and other applications (see the Universe blog post).

   CODE Installation

   CODE Universe Starter Agent
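
The starter usage from the Universe README looked roughly like the sketch below; the flashgames.DuskDrive-v0 environment id and the configure(remotes=1) call follow that README and may differ across versions.

```python
import gym
import universe  # importing universe registers its environments with gym

# 'flashgames.DuskDrive-v0' is the example environment from the Universe README.
env = gym.make('flashgames.DuskDrive-v0')
env.configure(remotes=1)   # launches a local Docker container running the game
observation_n = env.reset()

while True:
    # One action per (vectorized) sub-environment: hold the Up arrow.
    action_n = [[('KeyEvent', 'ArrowUp', True)] for _ in observation_n]
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()
```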

3.3 DyNet: The Dynamic Neural Network Toolkit

DyNet is a neural network library designed to be efficient whether run on CPU or GPU. DyNet has been used to build state-of-the-art systems for syntactic parsing, machine translation, and morphological inflection, among other tasks.

[2] Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, Pengcheng Yin. "DyNet: The Dynamic Neural Network Toolkit"

   CODE Installation
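
A minimal sketch in the style of DyNet's XOR tutorial is shown below; the ParameterCollection / renew_cg / SimpleSGDTrainer names follow the Python API of that period and may differ slightly between versions.

```python
import dynet as dy

m = dy.ParameterCollection()           # holds the model parameters
trainer = dy.SimpleSGDTrainer(m)
pW = m.add_parameters((8, 2))
pb = m.add_parameters(8)
pV = m.add_parameters((1, 8))

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]   # XOR

for epoch in range(200):
    for x, y in data:
        dy.renew_cg()                  # DyNet builds a fresh computation graph per example
        W, b, V = dy.parameter(pW), dy.parameter(pb), dy.parameter(pV)
        h = dy.tanh(W * dy.inputVector(x) + b)
        y_pred = dy.logistic(V * h)
        loss = dy.binary_log_loss(y_pred, dy.scalarInput(y))
        loss.value()                   # run the forward pass
        loss.backward()
        trainer.update()
```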

3.4 Edward: A Python library for Probabilistic Modeling, Inference and Criticism

Edward is a Python library for probabilistic modeling, inference, and criticism, fusing three fields: Bayesian statistics and machine learning, deep learning, and probabilistic programming. It runs on TensorFlow.

[3] Dustin Tran, Matthew D. Hoffman, Rif A. Saurous, Eugene Brevdo, Kevin Murphy, David M. Blei. "Deep Probabilistic Programming"

   CODE Installation
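
A minimal sketch of Bayesian linear regression in Edward's style (following the example in its documentation; edward.models.Normal, ed.dot and ed.KLqp are the names used there, and exact signatures may vary by version):

```python
import numpy as np
import tensorflow as tf
import edward as ed
from edward.models import Normal

# Synthetic data: y = X.w + noise.
N, D = 40, 3
X_train = np.random.randn(N, D).astype(np.float32)
w_true = np.random.randn(D).astype(np.float32)
y_train = X_train.dot(w_true) + np.random.normal(0, 0.1, N).astype(np.float32)

# Model: a Gaussian prior over the weights and a Gaussian likelihood.
X = tf.placeholder(tf.float32, [N, D])
w = Normal(loc=tf.zeros(D), scale=tf.ones(D))
y = Normal(loc=ed.dot(X, w), scale=0.1 * tf.ones(N))

# Variational approximation to the posterior over w.
qw = Normal(loc=tf.Variable(tf.zeros(D)),
            scale=tf.nn.softplus(tf.Variable(tf.zeros(D))))

# Fit with variational inference (KLqp).
inference = ed.KLqp({w: qw}, data={X: X_train, y: y_train})
inference.run(n_iter=500)
```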

3.5 DeepMind Lab: A customisable 3D platform for agent-based AI research

DeepMind Lab provides a suite of challenging 3D navigation and puzzle-solving tasks for learning agents. Its primary purpose is to act as a testbed for research in artificial intelligence, especially deep reinforcement learning.

[4] Charles Beattie, Joel Z. Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, Julian Schrittwieser, Keith Anderson, Sarah York, Max Cant, Adam Cain, Adrian Bolton, Stephen Gaffney, Helen King, Demis Hassabis, Shane Legg, Stig Petersen. "DeepMind Lab"

   CODE Installation of the DeepMind Lab
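
For a feel of the Python API, here is a sketch modeled on the repository's random-agent example; the level name, observation name, 7-component action vector and step(action, num_steps=...) call follow that example and should be treated as assumptions about this particular version.

```python
import numpy as np
import deepmind_lab  # built and installed via the repo's Bazel instructions

# Level, observation names and config follow the repo's random-agent example.
env = deepmind_lab.Lab('seekavoid_arena_01', ['RGB_INTERLACED'],
                       config={'width': '84', 'height': '84'})
env.reset()

total_reward = 0.0
for _ in range(100):
    if not env.is_running():                        # episode finished; start a new one
        env.reset()
    obs = env.observations()['RGB_INTERLACED']      # 84x84x3 uint8 image
    action = np.zeros(7, dtype=np.intc)             # no-op over the 7 action dimensions
    total_reward += env.step(action, num_steps=4)   # repeat the action for 4 frames
print('reward:', total_reward)
```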

Breakthrough AI Papers and CODE for Any Industry - WORK IN PROGRESS

The following table is constructed in accordance with three guiding principles:

  1. Focus on state-of-the-art;
  2. From generic to specific areas; and
  3. Clarity, efficiency and transparency.

Being able to deploy with the least possible delay is key.

| Industry | What AI | Papers | CODE |
| --- | --- | --- | --- |
| Robotics | Deep Reinforcement Learning | "Extending the OpenAI Gym for robotics" | "Gym Gazebo" |
| Translation | Multilingual Neural Machine Translation (NMT) | "Google’s Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation" | "OpenNMT" |
| Word Embeddings | Approximate Factorization of the Point-Wise Mutual Information Matrix via Stochastic Gradient Descent | "Swivel: Improving Embeddings by Noticing What’s Missing" | "Swivel" |
| Chemistry and Drug Discovery | Deep Neural Network and Monte Carlo Tree Search (MCTS) | "Towards “AlphaChem”: Chemical Synthesis Planning with Tree Search and Deep Neural Network Policies" | "DeepChem" |
| Art | | | |
| NLP (Natural Language Processing) | | | |
| Audio | Deep Neural Network | "WaveNet: A Generative Model for Raw Audio" | "Tensorflow-Wavenet" |
| Image Caption | | | |
| Image Recognition | Very Deep Convolutional Networks | "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning" | "Keras-InceptionV4" |
| Full Resolution Image Compression | Recurrent Neural Networks | "Full Resolution Image Compression with Recurrent Neural Networks" | "Compression" |
| Visual Tracking | | | |
| No-Limit Poker | Blend of Deep Learning and Classical AI | "DeepStack: Expert-Level Artificial Intelligence in No-Limit Poker" [arXiv] | |
| Recommender Systems | | | |
| Bioinformatics | | | |
| Neural Network Chip | | | |
| Game | | | |