aim-uofa/adet

AdelaiDet is an open source toolbox for multiple instance-level detection and recognition tasks.

| repo name | aim-uofa/adet |
| --- | --- |
| repo link | https://github.com/aim-uofa/adet |
| homepage | |
| language | Python |
| size (curr.) | 66 kB |
| stars (curr.) | 214 |
| created | 2020-01-23 |
| license | Other |
# AdelaiDet

AdelaiDet is an open source toolbox for multiple instance-level recognition tasks, built on top of Detectron2. All instance-level recognition work from our group is open-sourced here.
To date, AdelaiDet implements the following algorithms:
- FCOS
- BlendMask (to be released)
- ABCNet (to be released; demo available)
- SOLO (to be released)
- DirectPose (to be released)
- CondInst (to be released)
## Models

More models will be released soon. Stay tuned.

### COCO Object Detection Baselines with FCOS
| Name | box AP | download |
| --- | --- | --- |
| FCOS_R_50_1x | 38.7 | [model](https://cloudstor.aarnet.edu.au/plus/s/glqFc13cCoEyHYy/download) |
## Installation

First install Detectron2 following the official guide (INSTALL.md). Then build AdelaiDet with:

```shell
git clone https://github.com/aim-uofa/adet.git
cd adet
python setup.py build develop
```
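After building, you can sanity-check the environment with a small helper. This is a hypothetical convenience script, not part of the toolbox; it only probes whether the expected packages (`torch`, `detectron2`, and the `adet` package this build installs) are importable.

```python
import importlib.util

def check_install(packages=("torch", "detectron2", "adet")):
    """Return a dict mapping each package name to True if it is importable."""
    return {name: importlib.util.find_spec(name) is not None for name in packages}

if __name__ == "__main__":
    for name, ok in check_install().items():
        print(f"{name}: {'found' if ok else 'MISSING'}")
```

If any entry reports `MISSING`, revisit the corresponding installation step before moving on.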
## Quick Start

### Inference with Pre-trained Models
1. Pick a model and its config file, for example `fcos_R_50_1x.yaml`.
2. Download the model:

   ```shell
   wget https://cloudstor.aarnet.edu.au/plus/s/glqFc13cCoEyHYy/download -O fcos_R_50_1x.pth
   ```

3. Run the demo with:

   ```shell
   python demo/demo.py \
       --config-file configs/FCOS-Detection/R_50_1x.yaml \
       --input input1.jpg input2.jpg \
       --opts MODEL.WEIGHTS fcos_R_50_1x.pth
   ```
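The demo can also be driven programmatically. The sketch below is a hypothetical equivalent assuming Detectron2's `DefaultPredictor` API; it also assumes `adet.config.get_cfg` extends the base Detectron2 config with AdelaiDet's extra keys. The config and weight paths are the same ones used in the demo command above.

```python
def run_inference(config_path, weights_path, image_path):
    """Run a single-image inference sketch; returns detected instances or None."""
    try:
        import cv2
        from adet.config import get_cfg                # assumed AdelaiDet config entry point
        from detectron2.engine import DefaultPredictor
    except ImportError as exc:
        print(f"required libraries not available: {exc}")
        return None

    cfg = get_cfg()
    cfg.merge_from_file(config_path)    # e.g. configs/FCOS-Detection/R_50_1x.yaml
    cfg.MODEL.WEIGHTS = weights_path    # e.g. fcos_R_50_1x.pth
    cfg.freeze()

    predictor = DefaultPredictor(cfg)
    outputs = predictor(cv2.imread(image_path))  # BGR image, as demo.py reads it
    return outputs["instances"]                  # predicted boxes, scores, classes
```

For anything beyond a quick experiment, prefer `demo/demo.py`, which also handles visualization.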
### Train Your Own Models

To train a model with `train_net.py`, first set up the corresponding datasets following datasets/README.md, then run:

```shell
python tools/train_net.py \
    --config-file configs/FCOS-Detection/R_50_1x.yaml \
    --num-gpus 8 \
    OUTPUT_DIR training_dir/fcos_R_50_1x
```
The configs are made for 8-GPU training. To train on a different number of GPUs, change `--num-gpus` accordingly.
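When changing the GPU count, Detectron2-style configs are usually adjusted with the linear scaling rule: scale the batch size and learning rate by the same factor and stretch the schedule inversely. The helper below is a hypothetical sketch; the reference values (16 images per batch, base LR 0.01, 90k iterations for a 1x schedule) are typical Detectron2 defaults, not values confirmed from this repo's configs.

```python
def scale_schedule(num_gpus, ref_gpus=8, ref_batch=16, ref_lr=0.01, ref_iters=90000):
    """Apply the linear scaling rule relative to an 8-GPU reference schedule."""
    factor = num_gpus / ref_gpus
    return {
        "IMS_PER_BATCH": int(ref_batch * factor),  # images per batch scales with GPUs
        "BASE_LR": ref_lr * factor,                # learning rate scales linearly
        "MAX_ITER": int(ref_iters / factor),       # iterations stretch inversely
    }

# e.g. scale_schedule(4) -> batch 8, LR 0.005, 180000 iterations
print(scale_schedule(4))
```

The resulting values can be passed as config overrides on the `train_net.py` command line, e.g. `SOLVER.IMS_PER_BATCH 8 SOLVER.BASE_LR 0.005`.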
## Citing AdelaiDet
If you use this toolbox in your research or wish to refer to the baseline results, please use the following BibTeX entries.
```BibTeX
@inproceedings{tian2019fcos,
  title     = {{FCOS}: Fully Convolutional One-Stage Object Detection},
  author    = {Tian, Zhi and Shen, Chunhua and Chen, Hao and He, Tong},
  booktitle = {Proc. Int. Conf. Computer Vision (ICCV)},
  year      = {2019}
}

@inproceedings{chen2020blendmask,
  title     = {{BlendMask}: Top-Down Meets Bottom-Up for Instance Segmentation},
  author    = {Chen, Hao and Sun, Kunyang and Tian, Zhi and Shen, Chunhua and Huang, Yongming and Yan, Youliang},
  booktitle = {Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR)},
  year      = {2020}
}

@inproceedings{liu2020abcnet,
  title     = {{ABCNet}: Real-time Scene Text Spotting with Adaptive Bezier-Curve Network},
  author    = {Liu, Yuliang and Chen, Hao and Shen, Chunhua and He, Tong and Jin, Lianwen and Wang, Liangwei},
  booktitle = {Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR)},
  year      = {2020}
}

@article{wang2019solo,
  title   = {{SOLO}: Segmenting Objects by Locations},
  author  = {Wang, Xinlong and Kong, Tao and Shen, Chunhua and Jiang, Yuning and Li, Lei},
  journal = {arXiv preprint arXiv:1912.04488},
  year    = {2019}
}

@article{tian2019directpose,
  title   = {{DirectPose}: Direct End-to-End Multi-Person Pose Estimation},
  author  = {Tian, Zhi and Chen, Hao and Shen, Chunhua},
  journal = {arXiv preprint arXiv:1911.07451},
  year    = {2019}
}
```
## License
For academic use, this project is licensed under the 2-clause BSD License - see the LICENSE file for details. For commercial use, please contact Chunhua Shen.