PySlowFast: video understanding codebase from FAIR for reproducing state-of-the-art video models.
| | |
|---|---|
| size (curr.) | 4975 kB |
| license | Apache License 2.0 |
PySlowFast is an open source video understanding codebase from FAIR that provides state-of-the-art video classification models with efficient training. This repository includes implementations of the following methods:
- SlowFast Networks for Video Recognition
- Non-local Neural Networks
- A Multigrid Method for Efficiently Training Video Models
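The core idea of the SlowFast method above is to process the same clip along two pathways at different temporal rates. The sketch below is a hypothetical NumPy illustration of that sampling scheme (not the library's API); the speed ratio `alpha` and frame counts are illustrative values.

```python
import numpy as np

# Illustrative only: the SlowFast design feeds one clip to two pathways.
# The fast pathway sees many frames; the slow pathway sees alpha times fewer.
alpha = 8                             # assumed speed ratio between pathways
clip = np.zeros((64, 224, 224, 3))    # T x H x W x C raw frames (dummy data)

fast_input = clip[::2]                # fast pathway: high frame rate
slow_input = clip[::2 * alpha]        # slow pathway: alpha x fewer frames

print(fast_input.shape[0], slow_input.shape[0])  # 32 and 4 frames
```

In the actual networks, the fast pathway also uses far fewer channels, keeping its cost low despite the higher frame rate.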
The goal of PySlowFast is to provide a high-performance, light-weight PyTorch codebase with state-of-the-art video backbones for video understanding research across different tasks (classification, detection, etc.). It is designed to support rapid implementation and evaluation of novel video research ideas. PySlowFast includes implementations of the following backbone network architectures:
- Non-local Network
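The Non-local Network builds on a non-local operation that relates every spatio-temporal position to every other one via pairwise affinities. Below is a minimal NumPy sketch of the embedded-Gaussian form for intuition only; the real blocks in the codebase are PyTorch modules, and the shapes and weight names here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(a, axis=-1):
    # Numerically stable softmax over the given axis.
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def non_local(x, w_theta, w_phi, w_g):
    # x: (N, C) — N flattened spatio-temporal positions, C channels.
    theta, phi, g = x @ w_theta, x @ w_phi, x @ w_g
    attn = softmax(theta @ phi.T)   # affinities between all position pairs
    return x + attn @ g             # aggregate values, residual connection

C = 16
x = rng.standard_normal((8 * 7 * 7, C))   # e.g. T=8, H=W=7 feature map
weights = [rng.standard_normal((C, C)) / np.sqrt(C) for _ in range(3)]
y = non_local(x, *weights)                # same shape as the input
```

Because the affinity matrix spans all T x H x W positions, a single block captures long-range dependencies in both space and time.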
- We now support Multigrid Training for efficiently training video models. See `projects/multigrid` for more information.
- PySlowFast is released in conjunction with our ICCV 2019 Tutorial.
PySlowFast is released under the Apache 2.0 license.
Model Zoo and Baselines
We provide a large set of baseline results and trained models available for download in the PySlowFast Model Zoo.
Follow the example in GETTING_STARTED.md to start playing with video models using PySlowFast.
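Experiments in the codebase are driven by YAML configuration files. The fragment below is only a sketch of what an override might look like; the key names follow the repository's config convention, but the values and dataset path are placeholders, so consult the configs shipped in the repo for the authoritative options.

```yaml
# Illustrative config overrides (values are placeholders)
TRAIN:
  ENABLE: True
  BATCH_SIZE: 16
DATA:
  PATH_TO_DATA_DIR: /path/to/your/dataset
NUM_GPUS: 2
```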