# cvg/SOLD2

Joint deep network for feature line detection and description

| | |
| --- | --- |
| repo name | cvg/SOLD2 |
| repo link | https://github.com/cvg/SOLD2 |
| homepage | |
| language | Jupyter Notebook |
| size (curr.) | 37100 kB |
| stars (curr.) | 106 |
| created | 2020-12-07 |
| license | MIT License |
# SOLD² - Self-supervised Occlusion-aware Line Description and Detection
This repository contains the implementation of the paper: SOLD²: Self-supervised Occlusion-aware Line Description and Detection, J.-T. Lin*, R. Pautrat*, V. Larsson, M. Oswald and M. Pollefeys (Oral at CVPR 2021).
SOLD² is a deep line segment detector and descriptor that can be trained without hand-labelled line segments and that can robustly match lines even in the presence of occlusion.
## Demos
Matching in the presence of occlusion:
Matching with a moving camera:
## Usage

### Installation
We recommend using this code in a Python environment (e.g. venv or conda). The following script installs the necessary requirements with pip:

```bash
pip install -r requirements.txt
```
Set your dataset and experiment paths (where your datasets and experiment checkpoints will be stored) by modifying the file `config/project_config.py`. Both variables `DATASET_ROOT` and `EXP_PATH` have to be set.
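As a sketch, the edited `config/project_config.py` might simply define the two variables as absolute paths; the variable names come from the instructions above, but the paths below are placeholders and the exact layout of the file may differ:

```python
import os

# Placeholder paths; point these at your own storage locations.
DATASET_ROOT = os.path.expanduser("~/data/sold2/datasets")   # downloaded datasets go here
EXP_PATH = os.path.expanduser("~/data/sold2/experiments")    # checkpoints and logs go here
```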
You can download the version of the Wireframe dataset that we used during our training and testing here. This repository also includes some files to train on the Holicity dataset to add more outdoor images, but note that we did not extensively test this dataset and the original paper was based on the Wireframe dataset only.
### Training your own model
All training parameters are located in the configuration files in the folder `config`. Training SOLD² from scratch requires several steps, some of which can take several days depending on the size of your dataset.
The following command will create the synthetic dataset and start training the model on it:

```bash
python experiment.py --mode train --dataset_config config/synthetic_dataset.yaml --model_config config/train_detector.yaml --exp_name sold2_synth
```
Note that this step can take one to several days depending on your machine and on the size of the dataset. You can set the batch size to the maximum capacity that your GPU can handle.
You can then use the trained synthetic model to export line detections on the training set of the Wireframe dataset:

```bash
python experiment.py --exp_name wireframe_train --mode export --resume_path <path to your previously trained sold2_synth> --model_config config/train_detector.yaml --dataset_config config/wireframe_dataset.yaml --checkpoint_name <name of the best checkpoint> --export_dataset_mode train --export_batch_size 4
```
You can similarly perform the same export for the test set:

```bash
python experiment.py --exp_name wireframe_test --mode export --resume_path <path to your previously trained sold2_synth> --model_config config/train_detector.yaml --dataset_config config/wireframe_dataset.yaml --checkpoint_name <name of the best checkpoint> --export_dataset_mode test --export_batch_size 4
```
The exported detections can then be converted into line segments:

```bash
cd postprocess
python convert_homography_results.py <name of the previously exported file (e.g. "wireframe_train.h5")> <name of the new data with extracted line segments (e.g. "wireframe_train_gt.h5")> ../config/export_line_features.yaml
cd ..
```
We recommend testing the results on a few samples of your dataset to check the quality of the output, and modifying the hyperparameters if need be. For example, `detect_thresh=0.5` and `inlier_thresh=0.99` proved successful for the Wireframe dataset in our case.
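For such a spot check, you could overlay the exported segments on their source image. Below is a minimal numpy-only sketch, assuming the segments come as an (N, 2, 2) array of (row, col) endpoints; the actual layout of the exported h5 file may differ:

```python
import numpy as np

def draw_segments(image, segments, value=255):
    """Overlay line segments on a grayscale image for visual inspection.

    `segments` is assumed to be an (N, 2, 2) array of (row, col) endpoints.
    Returns a copy of the image with the segments burned in.
    """
    out = image.copy()
    for (r0, c0), (r1, c1) in segments:
        # Sample enough points along the segment to leave no gaps.
        n = int(max(abs(r1 - r0), abs(c1 - c0))) + 1
        rows = np.linspace(r0, r1, n).round().astype(int)
        cols = np.linspace(c0, c1, n).round().astype(int)
        # Discard samples that fall outside the image bounds.
        valid = (rows >= 0) & (rows < out.shape[0]) & (cols >= 0) & (cols < out.shape[1])
        out[rows[valid], cols[valid]] = value
    return out
```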
We found it easier to pretrain the detector alone first, before fine-tuning it with the descriptor part.
Uncomment the lines `gt_source_train` and `gt_source_test` in `config/wireframe_dataset.yaml` and fill them with the path to the h5 file generated in the previous step.
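Once uncommented, these two entries might look like the following sketch; the placeholder paths stand for the files produced by the conversion step, and the exact key layout may differ:

```yaml
gt_source_train: <path to wireframe_train_gt.h5>
gt_source_test: <path to wireframe_test_gt.h5>
```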
You can then train the detector on the Wireframe dataset:

```bash
python experiment.py --mode train --dataset_config config/wireframe_dataset.yaml --model_config config/train_detector.yaml --exp_name sold2_wireframe
```
Alternatively, you can fine-tune the already trained synthetic model:

```bash
python experiment.py --mode train --dataset_config config/wireframe_dataset.yaml --model_config config/train_detector.yaml --exp_name sold2_wireframe --pretrained --pretrained_path <path to the pre-trained sold2_synth> --checkpoint_name <name of the best checkpoint>
```
Lastly, you can resume a training that was stopped:

```bash
python experiment.py --mode train --dataset_config config/wireframe_dataset.yaml --model_config config/train_detector.yaml --exp_name sold2_wireframe --resume --resume_path <path to the model to resume> --checkpoint_name <name of the last checkpoint>
```
You first need to modify the field `return_type` in `config/wireframe_dataset.yaml` to `paired_desc`. The following command will then train the full model (detector + descriptor) on the Wireframe dataset:

```bash
python experiment.py --mode train --dataset_config config/wireframe_dataset.yaml --model_config config/train_full_pipeline.yaml --exp_name sold2_full_wireframe --pretrained --pretrained_path <path to the pre-trained sold2_wireframe> --checkpoint_name <name of the best checkpoint>
```
### Pretrained models
We provide the checkpoints of two pretrained models:
- `sold2_synthetic.tar`: SOLD² detector trained on the synthetic dataset only.
- `sold2_wireframe.tar`: full version of SOLD² trained on the Wireframe dataset.
### How to use it
We provide a notebook showing how to use the trained model of SOLD². Additionally, you can use the model to export line features (segments and descriptor maps) as follows:
```bash
python export_line_features.py --img_list <path to a txt file listing the paths to all the images> --output_folder <path to the output folder> --checkpoint_path <path to your best checkpoint>
```
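SOLD²'s own matcher compares descriptor samples taken along each line. As a simplified illustration of how exported line descriptors could be compared, the sketch below assumes each line's samples have been pooled into a single L2-normalized vector; this is an assumption for illustration, not the repository's actual matching procedure:

```python
import numpy as np

def mutual_nearest_matches(desc1, desc2):
    """Match two sets of L2-normalized line descriptors of shapes (N1, D)
    and (N2, D) by mutual nearest neighbour on cosine similarity.
    Returns an (M, 2) array of (index in set 1, index in set 2) pairs.
    """
    sim = desc1 @ desc2.T            # cosine similarity matrix (N1, N2)
    nn12 = sim.argmax(axis=1)        # best match in set 2 for each line in set 1
    nn21 = sim.argmax(axis=0)        # best match in set 1 for each line in set 2
    idx1 = np.arange(desc1.shape[0])
    mutual = nn21[nn12] == idx1      # keep only mutually consistent pairs
    return np.stack([idx1[mutual], nn12[mutual]], axis=1)
```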
You can tune some of the line detection parameters in `config/export_line_features.yaml`, in particular `detect_thresh` and `inlier_thresh`, to adapt them to your trained model and type of images. As the line detection can be sensitive to the image resolution, we recommend using images in the range of 300 to 800 pixels per side.
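If your images fall outside that range, you could rescale them before running the export script. The following nearest-neighbour sketch uses the 300 and 800 px bounds from the recommendation above; any proper image library's resize would do equally well:

```python
import numpy as np

def rescale_for_detection(image, min_side=300, max_side=800):
    """Rescale an image array so its sides fall in the recommended
    [min_side, max_side] range where possible, keeping the aspect ratio.
    Uses nearest-neighbour sampling for simplicity.
    """
    h, w = image.shape[:2]
    scale = 1.0
    if max(h, w) > max_side:
        scale = max_side / max(h, w)   # shrink large images
    elif min(h, w) < min_side:
        scale = min_side / min(h, w)   # enlarge small images
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    rows = np.clip((np.arange(new_h) / scale).astype(int), 0, h - 1)
    cols = np.clip((np.arange(new_w) / scale).astype(int), 0, w - 1)
    return image[rows][:, cols]
```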
## Results
Comparison of repeatability and localization error to the state of the art on the Wireframe dataset for an error threshold of 5 pixels in structural and orthogonal distances:
Matching precision-recall curves on the Wireframe and ETH3D datasets:
## Bibtex
If you use this code in your project, please consider citing the following paper:

```bibtex
@InProceedings{Pautrat_Lin_2021_CVPR,
    author = {Pautrat, Rémi* and Lin, Juan-Ting* and Larsson, Viktor and Oswald, Martin R. and Pollefeys, Marc},
    title = {SOLD²: Self-supervised Occlusion-aware Line Description and Detection},
    booktitle = {Computer Vision and Pattern Recognition (CVPR)},
    year = {2021},
}
```