facebookresearch/pythia
A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
| repo name | facebookresearch/pythia |
| repo link | https://github.com/facebookresearch/pythia |
| homepage | https://learnpythia.readthedocs.io/ |
| language | Python |
| size (curr.) | 6761 kB |
| stars (curr.) | 3074 |
| created | 2018-06-27 |
| license | Other |
Pythia is a modular framework for vision and language multimodal research. Built on top of PyTorch, it features:
- Model Zoo: Reference implementations for state-of-the-art vision and language models, including LoRRA (SoTA on VQA and TextVQA), the Pythia model (VQA 2018 challenge winner), BAN and BUTD.
- Multi-Tasking: Support for multi-tasking, which allows training on multiple datasets together.
- Datasets: Built-in support for various datasets, including VQA, VizWiz, TextVQA, VisualDialog and COCO Captioning.
- Modules: Implementations of many layers commonly used in the vision and language domain.
- Distributed: Support for distributed training based on DataParallel as well as DistributedDataParallel.
- Unopinionated: Unopinionated about the dataset and model implementations built on top of it.
- Customization: Custom losses, metrics, scheduling, optimizers and TensorBoard logging to suit all your custom needs.
You can use Pythia to bootstrap your next vision and language multimodal research project. Pythia can also act as a starter codebase for challenges built around vision and language datasets (TextVQA Challenge, VQA Challenge); a sketch of how a custom model plugs in follows below.
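As an illustration of the customization points above, here is a minimal, hypothetical sketch of registering a custom model so a config can select it by name. The module paths (`pythia.common.registry`, `pythia.models.base_model`) and the `build`/`forward` pattern follow Pythia's documentation; the `sample_list` field names, layer sizes and output key used here are illustrative assumptions, not a definitive API reference.

```python
# Minimal sketch: registering a custom model with Pythia's registry.
# Field names on sample_list and all layer sizes are assumptions for illustration.
import torch

from pythia.common.registry import registry
from pythia.models.base_model import BaseModel


@registry.register_model("my_vqa_model")  # referenced by this name from a config
class MyVQAModel(BaseModel):
    def __init__(self, config):
        super().__init__(config)

    def build(self):
        # Submodule sizes would normally be read from self.config rather than hard-coded.
        self.fusion = torch.nn.Linear(2048 + 300, 512)
        self.classifier = torch.nn.Linear(512, 3129)  # e.g. a VQA-style answer space

    def forward(self, sample_list):
        # sample_list bundles one batch of features; these field names are assumed.
        image = sample_list.image_feature_0.mean(dim=1)  # (batch, 2048)
        text = sample_list.text_embedding                # (batch, 300), assumed field
        fused = torch.relu(self.fusion(torch.cat([image, text], dim=-1)))
        return {"scores": self.classifier(fused)}
```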
Demo
Documentation
Learn more about Pythia in the documentation at https://learnpythia.readthedocs.io/.
Citation
If you use Pythia in your work, please cite:
@inproceedings{singh2019TowardsVM,
  title={Towards VQA Models That Can Read},
  author={Singh, Amanpreet and Natarajan, Vivek and Shah, Meet and Jiang, Yu and Chen, Xinlei and Batra, Dhruv and Parikh, Devi and Rohrbach, Marcus},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2019}
}
and
@inproceedings{singh2018pythia,
  title={Pythia-a platform for vision \& language research},
  author={Singh, Amanpreet and Goswami, Vedanuj and Natarajan, Vivek and Jiang, Yu and Chen, Xinlei and Shah, Meet and Rohrbach, Marcus and Batra, Dhruv and Parikh, Devi},
  booktitle={SysML Workshop, NeurIPS},
  volume={2018},
  year={2018}
}
License
Pythia is licensed under the BSD license, available in the LICENSE file.