May 20, 2020

989 words 5 mins read

DLR-RM/stable-baselines3

PyTorch version of Stable Baselines, improved implementations of reinforcement learning algorithms.

repo name DLR-RM/stable-baselines3
repo link https://github.com/DLR-RM/stable-baselines3
homepage https://stable-baselines3.readthedocs.io
language Python
size (curr.) 1116 kB
stars (curr.) 339
created 2020-05-05
license MIT License

WARNING: Stable Baselines3 is currently in beta; breaking changes may occur before v1.0 is released.

Note: most of the Stable Baselines documentation should still be valid, though.

Stable Baselines3

Stable Baselines3 is a set of improved implementations of reinforcement learning algorithms in PyTorch. It is the next major version of Stable Baselines.

You can read a detailed presentation of Stable Baselines in the Medium article.

These algorithms will make it easier for the research community and industry to replicate, refine, and identify new ideas, and will create good baselines to build projects on top of. We expect these tools will be used as a base around which new ideas can be added, and as a tool for comparing a new approach against existing ones. We also hope that the simplicity of these tools will allow beginners to experiment with a more advanced toolset, without being buried in implementation details.

Note: despite its simplicity of use, Stable Baselines3 (SB3) assumes you have some knowledge about Reinforcement Learning (RL). We do not recommend using this library without some prior practice; to that end, we provide good resources in the documentation to get started with RL.

Main Features

| Features                     | Stable-Baselines3  |
| ---------------------------- | ------------------ |
| State of the art RL methods  | :heavy_check_mark: |
| Documentation                | :heavy_check_mark: |
| Custom environments          | :heavy_check_mark: |
| Custom policies              | :heavy_check_mark: |
| Common interface             | :heavy_check_mark: |
| Ipython / Notebook friendly  | :heavy_check_mark: |
| PEP8 code style              | :heavy_check_mark: |
| Custom callback              | :heavy_check_mark: |
| High code coverage           | :heavy_check_mark: |
| Type hints                   | :heavy_check_mark: |
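
Custom environments, for instance, only need to follow the standard gym.Env interface. Below is a minimal sketch (the ToyEnv class is hypothetical and purely illustrative, not part of the library) of an environment that any SB3 algorithm supporting Discrete actions could train on:

import gym
import numpy as np
from gym import spaces

from stable_baselines3 import PPO


class ToyEnv(gym.Env):
    # Hypothetical toy environment, for illustration only
    def __init__(self):
        super().__init__()
        # SB3 reads these two attributes to build the policy network
        self.action_space = spaces.Discrete(2)
        self.observation_space = spaces.Box(low=-1.0, high=1.0, shape=(3,), dtype=np.float32)

    def reset(self):
        self.n_steps = 0
        return np.zeros(3, dtype=np.float32)

    def step(self, action):
        self.n_steps += 1
        obs = self.observation_space.sample()
        reward = 1.0 if action == 1 else 0.0
        done = self.n_steps >= 10
        return obs, reward, done, {}


model = PPO('MlpPolicy', ToyEnv(), verbose=0)
model.learn(total_timesteps=1000)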

Roadmap to V1.0

Please look at the issue for more details. Planned features:

  • DQN (almost ready, currently in testing phase)
  • DDPG (you can use its successor TD3 for now)
  • HER
  • Support for MultiDiscrete and MultiBinary action spaces

Planned features (v1.1+)

  • Full Tensorboard support
  • DQN extensions (prioritized replay, double q-learning, …)
  • Support for Tuple and Dict observation spaces
  • Recurrent Policies
  • TRPO

Migration guide

TODO: migration guide from Stable-Baselines in the documentation

Documentation

Documentation is available online: https://stable-baselines3.readthedocs.io/

RL Baselines3 Zoo: A Collection of Trained RL Agents

RL Baselines3 Zoo is a collection of pre-trained Reinforcement Learning agents using Stable-Baselines3.

It also provides basic scripts for training, evaluating agents, tuning hyperparameters, plotting results and recording videos.

Goals of this repository:

  1. Provide a simple interface to train and enjoy RL agents
  2. Benchmark the different Reinforcement Learning algorithms
  3. Provide tuned hyperparameters for each environment and RL algorithm
  4. Have fun with the trained agents!

Github repo: https://github.com/DLR-RM/rl-baselines3-zoo

Documentation: https://stable-baselines3.readthedocs.io/en/master/guide/rl_zoo.html

Installation

Note: Stable-Baselines3 supports PyTorch 1.4+.

Prerequisites

Stable Baselines3 requires Python 3.6+.

Windows 10

To install Stable-Baselines3 on Windows, please look at the documentation.

Install using pip

Install the Stable Baselines3 package:

pip install stable-baselines3[extra]

This includes optional dependencies such as OpenCV or atari-py, needed to train on Atari games. If you do not need those, you can use:

pip install stable-baselines3

Please read the documentation for more details and alternatives (from source, using docker).
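
As a quick sanity check after installing, you can import the package and train an agent for a few steps. This is a minimal sketch; it assumes the package exposes its version as stable_baselines3.__version__:

import stable_baselines3
from stable_baselines3 import A2C

# Print the installed version (assumed attribute, see note above)
print(stable_baselines3.__version__)

# Tiny smoke test: train A2C on CartPole for a handful of steps
A2C('MlpPolicy', 'CartPole-v1').learn(100)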

Example

Most of the library tries to follow a sklearn-like syntax for the Reinforcement Learning algorithms.

Here is a quick example of how to train and run PPO on a cartpole environment:

import gym

from stable_baselines3 import PPO

env = gym.make('CartPole-v1')

# Create the agent with a multi-layer perceptron policy and train it
model = PPO('MlpPolicy', env, verbose=1)
model.learn(total_timesteps=10000)

# Run the trained agent
obs = env.reset()
for i in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    env.render()
    if done:
        obs = env.reset()

env.close()

Or just train a model with a one-liner if the environment is registered in Gym and the policy is registered:

from stable_baselines3 import PPO

model = PPO('MlpPolicy', 'CartPole-v1').learn(10000)

Please read the documentation for more examples.
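
Trained models can also be saved to disk, reloaded and evaluated. The following is a small sketch; it assumes the evaluate_policy helper from stable_baselines3.common.evaluation, as described in the documentation:

import gym

from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make('CartPole-v1')

model = PPO('MlpPolicy', env, verbose=0)
model.learn(total_timesteps=10000)

# Save the agent and reload it (the .zip extension is added automatically)
model.save('ppo_cartpole')
model = PPO.load('ppo_cartpole', env=env)

# Average episodic return over a few evaluation episodes
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f'mean reward: {mean_reward:.1f} +/- {std_reward:.1f}')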

Try it online with Colab Notebooks!

All the following examples can be executed online using Google Colab notebooks:

Implemented Algorithms

| Name | Recurrent | Box                | Discrete           | MultiDiscrete      | MultiBinary        | Multi Processing   |
| ---- | --------- | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ |
| A2C  | :x:       | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| PPO  | :x:       | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| SAC  | :x:       | :heavy_check_mark: | :x:                | :x:                | :x:                | :x:                |
| TD3  | :x:       | :heavy_check_mark: | :x:                | :x:                | :x:                | :x:                |
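
The Multi Processing column refers to training on several copies of the environment in parallel through vectorized environments. A minimal sketch (assuming the SubprocVecEnv wrapper described in the documentation) could look like this:

import gym

from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import SubprocVecEnv


def make_env():
    # Each worker process creates its own copy of the environment
    return gym.make('CartPole-v1')


if __name__ == '__main__':
    # Four environments stepped in parallel, each in its own process
    env = SubprocVecEnv([make_env for _ in range(4)])
    model = PPO('MlpPolicy', env, verbose=1)
    model.learn(total_timesteps=10000)
    env.close()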

Actions gym.spaces:

  • Box: An N-dimensional box that contains every point in the action space.
  • Discrete: A list of possible actions, where only one of the actions can be used at each timestep.
  • MultiDiscrete: A list of possible actions, where only one action from each discrete set can be used at each timestep.
  • MultiBinary: A list of possible actions, where any of the actions can be used, in any combination, at each timestep.
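
For reference, these spaces are constructed with gym as follows (a small illustration, not specific to SB3):

import numpy as np
from gym import spaces

# Box: continuous actions, e.g. two torques in [-1, 1]
box = spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32)

# Discrete: one of n actions per timestep
discrete = spaces.Discrete(4)

# MultiDiscrete: one choice per discrete set, e.g. sets of sizes 2, 3 and 2
multi_discrete = spaces.MultiDiscrete([2, 3, 2])

# MultiBinary: n independent on/off actions per timestep
multi_binary = spaces.MultiBinary(5)

print(box.sample(), discrete.sample(), multi_discrete.sample(), multi_binary.sample())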

Testing the installation

All unit tests in Stable-Baselines3 can be run using the pytest runner:

pip install pytest pytest-cov
make pytest

You can also do a static type check using pytype:

pip install pytype
make type

Codestyle check with flake8:

pip install flake8
make lint

Projects Using Stable-Baselines3

We try to maintain a list of projects using Stable-Baselines3 in the documentation. Please tell us if you want your project to appear on this page ;)

Citing the Project

To cite this repository in publications:

@misc{stable-baselines3,
  author = {Raffin, Antonin and Hill, Ashley and Ernestus, Maximilian and Gleave, Adam and Kanervisto, Anssi and Dormann, Noah},
  title = {Stable Baselines3},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/DLR-RM/stable-baselines3}},
}

Maintainers

Stable-Baselines3 is currently maintained by Ashley Hill (aka @hill-a), Antonin Raffin (aka @araffin), Maximilian Ernestus (aka @erniejunior), Adam Gleave (@AdamGleave) and Anssi Kanervisto (@Miffyli).

Important Note: We do not provide technical support or consulting, and we do not answer personal questions via email.

How To Contribute

To anyone interested in making the baselines better: there is still some documentation that needs to be done. If you want to contribute, please read the CONTRIBUTING.md guide first.

Acknowledgments

The initial work to develop Stable Baselines3 was partially funded by the project Reduced Complexity Models from the Helmholtz-Gemeinschaft Deutscher Forschungszentren.

The original version, Stable Baselines, was created in the robotics lab U2IS (INRIA Flowers team) at ENSTA ParisTech.

Logo credits: L.M. Tenkes
