Check out the new game server:
- size (curr.): 27403 kB
- license: Apache License 2.0
Google Research Football
This repository contains an RL environment based on the open-source game Gameplay Football. It was created by the Google Brain team for research purposes.
- (NEW!) GRF Tournament - take part in the Tournament and become the new GRF Champion! Starting April 2020.
- GRF Game Server - challenge other researchers!
- Run in Colab - start training in less than 2 minutes.
- Google Research Football Paper
- GoogleAI blog post
- Google Research Football on Cloud
- Mailing List - please use it for communication with us (comments / suggestions / feature ideas)
For non-public matters that you’d like to discuss directly with the GRF team, please use firstname.lastname@example.org.
We’d like to thank Bastiaan Konings Schuiling, who authored and open-sourced the original version of this game.
Open our example Colab, which will allow you to start training your model in less than 2 minutes.
This method doesn’t support game rendering on screen - if you want to see the game running, please use the method below.
On your computer
Install required apt packages with:
sudo apt-get install git cmake build-essential libgl1-mesa-dev libsdl2-dev \
  libsdl2-image-dev libsdl2-ttf-dev libsdl2-gfx-dev libboost-all-dev \
  libdirectfb-dev libst-dev mesa-utils xvfb x11vnc libsdl-sge-dev python3-pip
Then install the game from GitHub master:
git clone https://github.com/google-research/football.git
cd football
pip3 install .
This command can run for a couple of minutes, as it compiles the C++ environment in the background. Now, it’s time to play!
python3 -m gfootball.play_game --action_set=full
- Running experiments
- Playing the game
- Environment API
- Multi-agent support
- Running in docker
- Saving replays, logs, traces
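The Environment API mentioned above follows the standard Gym-style reset/step loop. The sketch below illustrates that interface against a stand-in environment — StubEnv, its observation size, and its episode length are illustrative assumptions, not gfootball code; with the package installed, you would create a real environment via gfootball.env.create_environment instead:

```python
import random

class StubEnv:
    """Stand-in with the Gym-style interface the environment exposes."""

    def __init__(self, num_actions=19, episode_len=10):
        # num_actions / episode_len are illustrative; the real action set
        # and episode length depend on the chosen scenario and action_set.
        self.num_actions = num_actions
        self.episode_len = episode_len
        self._t = 0

    def reset(self):
        """Start a new episode and return the first observation."""
        self._t = 0
        return [0.0] * 115  # e.g. a flat observation vector

    def step(self, action):
        """Advance one step; return (observation, reward, done, info)."""
        assert 0 <= action < self.num_actions
        self._t += 1
        done = self._t >= self.episode_len
        reward = 1.0 if done else 0.0
        return [0.0] * 115, reward, done, {}

env = StubEnv()
obs = env.reset()
total = 0.0
done = False
while not done:
    action = random.randrange(env.num_actions)  # random-policy agent
    obs, reward, done, info = env.step(action)
    total += reward
print(total)  # 1.0
```

The same loop works unchanged once StubEnv is swapped for a real gfootball environment, since both expose reset() and step(action).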
Training agents to play GRF
In order to run TF training, install additional dependencies:
- Update PIP, so that tensorflow 1.15rc2 is available:
python3 -m pip install --upgrade pip
- Install TensorFlow: pip3 install "tensorflow==1.15rc2" or
pip3 install "tensorflow-gpu==1.15rc2", depending on whether you want the CPU or GPU version;
- Install Sonnet: pip3 install dm-sonnet;
- OpenAI Baselines:
pip3 install git+https://github.com/openai/baselines.git@master.
- To run an example PPO experiment on the academy_empty_goal_close scenario, run:
python3 -m gfootball.examples.run_ppo2 --level=academy_empty_goal_close
- To run on the academy_pass_and_shoot_with_keeper scenario, run:
python3 -m gfootball.examples.run_ppo2 --level=academy_pass_and_shoot_with_keeper
In order to train with nice replays being saved, run
python3 -m gfootball.examples.run_ppo2 --dump_full_episodes=True --render=True
In order to reproduce PPO results from the paper, please refer to:
Playing the game
Please note that playing the game is implemented through an environment, so human-controlled players use the same interface as the agents. One important implication is that there is a single action per 100 ms reported to the environment, which might cause a lag effect when playing.
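Concretely, one action per 100 ms caps the rate at which human input reaches the environment:

```python
# One action per 100 ms of game time means the environment accepts at most
# 10 inputs per second, which is why keyboard play can feel laggy.
STEP_MS = 100  # milliseconds of game time per reported action
actions_per_second = 1000 // STEP_MS
print(actions_per_second)  # 10
```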
The game defines the following keyboard mapping (for the keyboard player type):
- ARROW UP - run to the top.
- ARROW DOWN - run to the bottom.
- ARROW LEFT - run to the left.
- ARROW RIGHT - run to the right.
- S - short pass in the attack mode, pressure in the defense mode.
- A - high pass in the attack mode, sliding in the defense mode.
- D - shot in the attack mode, team pressure in the defense mode.
- W - long pass in the attack mode, goalkeeper pressure in the defense mode.
- Q - switch the active player in the defense mode.
- C - dribble in the attack mode.
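For reference, the mapping above can be expressed as a lookup table — an illustrative snippet only, not part of gfootball's keyboard player implementation:

```python
# Attack-mode / defense-mode meaning of each action key, as listed above.
# The arrow keys simply move the active player and are omitted here.
KEY_ACTIONS = {
    "S": {"attack": "short pass", "defense": "pressure"},
    "A": {"attack": "high pass", "defense": "sliding"},
    "D": {"attack": "shot", "defense": "team pressure"},
    "W": {"attack": "long pass", "defense": "goalkeeper pressure"},
    "Q": {"defense": "switch the active player"},
    "C": {"attack": "dribble"},
}

for key, modes in sorted(KEY_ACTIONS.items()):
    print(key, modes)
```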
Play vs built-in AI
Run python3 -m gfootball.play_game --action_set=full. By default, it starts
the base scenario and the left player is controlled by the keyboard. Different
types of players are supported (gamepad, external bots, agents…). For possible
options run python3 -m gfootball.play_game -helpfull.
Play vs pre-trained agent
In particular, one can play against an agent trained with the run_ppo2 script
using the following command (notice there is no action_set flag, as the PPO
agent uses the default action set):
python3 -m gfootball.play_game --players "keyboard:left_players=1;ppo2_cnn:right_players=1,checkpoint=$YOUR_PATH"
We provide trained PPO checkpoints for the following scenarios:
In order to see the checkpoints playing, run
python3 -m gfootball.play_game --players "ppo2_cnn:left_players=1,policy=gfootball_impala_cnn,checkpoint=$CHECKPOINT" --level=$LEVEL
where $CHECKPOINT is the path to the downloaded checkpoint.
In order to train against a checkpoint, you can pass the extra_players argument to the create_environment function. For example: extra_players='ppo2_cnn:right_players=1,policy=gfootball_impala_cnn,checkpoint=$CHECKPOINT'.
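The player-configuration strings used by --players and extra_players share one "name:key=value,..." format, as seen in the commands above. The helper below sketches how to assemble them — build_player_spec is a hypothetical convenience function, not part of gfootball:

```python
def build_player_spec(name, **kwargs):
    """Render one player entry, e.g. 'ppo2_cnn:right_players=1,checkpoint=...'."""
    opts = ",".join(f"{k}={v}" for k, v in kwargs.items())
    return f"{name}:{opts}" if opts else name

# Human on the left vs. a PPO checkpoint on the right, as in the
# play_game --players command shown earlier ($YOUR_PATH is a placeholder):
players = ";".join([
    build_player_spec("keyboard", left_players=1),
    build_player_spec("ppo2_cnn", right_players=1, checkpoint="$YOUR_PATH"),
])
print(players)

# A single entry in the same format could also feed create_environment's
# extra_players argument when training against a checkpoint:
extra = build_player_spec("ppo2_cnn", right_players=1,
                          policy="gfootball_impala_cnn",
                          checkpoint="$CHECKPOINT")
print(extra)
```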
Frequent Problems & Solutions
Rendering not working / “OpenGL version not equal to or higher than 3.2”
Solution: set environment variables for MESA driver, like this:
MESA_GL_VERSION_OVERRIDE=3.2 MESA_GLSL_VERSION_OVERRIDE=150 python3 -m gfootball.play_game
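The same overrides can be set from Python before launching the game — a sketch, with values copied from the command above:

```python
import os
import shlex

# Export the MESA overrides so the renderer sees them; any subprocess
# started afterwards inherits these variables.
os.environ["MESA_GL_VERSION_OVERRIDE"] = "3.2"
os.environ["MESA_GLSL_VERSION_OVERRIDE"] = "150"

cmd = shlex.split("python3 -m gfootball.play_game")
print(os.environ["MESA_GL_VERSION_OVERRIDE"])  # 3.2
# subprocess.run(cmd) would then launch the game with the overrides applied;
# left commented out here.
```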