ialhashim/DenseDepth
High Quality Monocular Depth Estimation via Transfer Learning
| repo name | ialhashim/DenseDepth |
| --- | --- |
| repo link | https://github.com/ialhashim/DenseDepth |
| homepage | https://arxiv.org/abs/1812.11941 |
| language | Jupyter Notebook |
| size (curr.) | 12078 kB |
| stars (curr.) | 568 |
| created | 2018-12-28 |
| license | GNU General Public License v3.0 |
High Quality Monocular Depth Estimation via Transfer Learning (arXiv 2018)
Ibraheem Alhashim and Peter Wonka
Official Keras (TensorFlow) implementation. If you have any questions or need more help with the code, contact the first author.
[Update] Added a Colab notebook to try the method on the fly.
[Update] Experimental TensorFlow 2.0 implementation added.
[Update] Experimental PyTorch code added.
Results
- KITTI
- NYU Depth V2
Requirements
- This code is tested with Keras 2.2.4, TensorFlow 1.13, and CUDA 10.0 on a machine with an NVIDIA Titan V and 16GB+ RAM, running Windows 10 or Ubuntu 16.
- Other packages needed: `keras pillow matplotlib scikit-learn scikit-image opencv-python pydot`, plus `GraphViz` for the model graph visualization and `PyGLM PySide2 pyopengl` for the GUI demo.
- Minimum hardware tested on for inference: NVIDIA GeForce 940MX (laptop) / NVIDIA GeForce GTX 950 (desktop).
- Training takes about 24 hours on a single NVIDIA TITAN RTX with batch size 8.
Pre-trained Models
- NYU Depth V2 (165 MB)
- KITTI (165 MB)
Demos
- After downloading the pre-trained model (nyu.h5), run `python test.py`. You should see a montage of images with their estimated depth maps (a minimal loading sketch follows this list).
- [Update] A Qt demo showing 3D point clouds from the webcam or an image. Simply run `python demo.py`. It requires the packages `PyGLM PySide2 pyopengl`.
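For reference, inference with the pre-trained model amounts to loading the Keras model and calling `predict` on a normalized RGB batch. A minimal sketch, assuming the repo ships a `layers.BilinearUpSampling2D` custom layer and using the paper's 640×480 NYU input size (neither is spelled out in this README):

```python
import numpy as np
from PIL import Image
from keras.models import load_model

from layers import BilinearUpSampling2D  # custom layer assumed to ship with this repo

# The saved model uses custom layers and a custom loss, so they must be
# registered when loading; compile=False skips restoring the training loss.
custom_objects = {'BilinearUpSampling2D': BilinearUpSampling2D,
                  'depth_loss_function': None}
model = load_model('nyu.h5', custom_objects=custom_objects, compile=False)

# Load an RGB image, scale it to [0, 1], and add a batch dimension.
img = np.asarray(Image.open('example.png').resize((640, 480)), dtype=np.float32) / 255.0
batch = np.expand_dims(img, axis=0)       # shape (1, 480, 640, 3)

depth = model.predict(batch)[0, :, :, 0]  # estimated depth map
```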
Data
- NYU Depth V2 (50K) (4.1 GB): You don't need to extract the dataset since the code loads the entire zip file into memory when training (see the sketch after this list).
- KITTI: copy the raw data to a folder with the path `../kitti`. Our method expects dense input depth maps, so you need to run a depth inpainting method on the LiDAR data. For our experiments, we used our Python re-implementation of the MATLAB code provided with the NYU Depth V2 toolbox. Inpainting the entire 80K images took 2 hours on an 80-node cluster. For our training, we used the subset defined here.
- Unreal-1k: coming soon.
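Reading training samples straight out of the zip archive follows a simple pattern; a rough sketch is below (the archive and file names are hypothetical, the actual layout is defined by the training code):

```python
import io
import zipfile
import numpy as np
from PIL import Image

# Read the whole archive into RAM once, then decode individual samples on demand.
with open('nyu_data.zip', 'rb') as f:
    archive = zipfile.ZipFile(io.BytesIO(f.read()))

def load_sample(rgb_name, depth_name):
    # File names are hypothetical; list archive.namelist() to see the real layout.
    rgb = np.asarray(Image.open(io.BytesIO(archive.read(rgb_name))))
    depth = np.asarray(Image.open(io.BytesIO(archive.read(depth_name))))
    return rgb, depth
```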
Training
- Run `python train.py --data nyu --gpus 4 --bs 8` (a multi-GPU sketch follows).
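The `--gpus` flag suggests Keras data-parallel training; in Keras 2.2.4 that is usually done with `keras.utils.multi_gpu_model`, which splits each batch across replicas. A toy sketch under that assumption, not the repo's actual train.py (the one-layer network, random data, and `mae` loss are stand-ins; the paper's loss combines L1, SSIM, and gradient terms):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D
from keras.utils import multi_gpu_model

# Toy stand-in network; the real model is a DenseNet-based encoder-decoder.
model = Sequential([Conv2D(1, 3, padding='same', input_shape=(480, 640, 3))])

# Replicate across 4 GPUs: each batch of 8 is split into sub-batches of 2.
parallel = multi_gpu_model(model, gpus=4)
parallel.compile(optimizer='adam', loss='mae')

x = np.random.rand(8, 480, 640, 3).astype('float32')
y = np.random.rand(8, 480, 640, 1).astype('float32')
parallel.fit(x, y, batch_size=8, epochs=1)
```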
Evaluation
- Download, but don't extract, the ground truth test data from here (1.4 GB). Then simply run `python evaluate.py` (the metrics it reports are sketched below).
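The paper reports the standard monocular depth metrics: threshold accuracies δ < 1.25^k, absolute relative error, RMSE, and log10 error. A minimal NumPy sketch of those formulas (the repo's evaluate.py may differ in detail):

```python
import numpy as np

def compute_errors(gt, pred):
    """Standard depth metrics over valid pixels (gt and pred are positive arrays)."""
    ratio = np.maximum(gt / pred, pred / gt)
    d1 = (ratio < 1.25).mean()         # delta < 1.25
    d2 = (ratio < 1.25 ** 2).mean()    # delta < 1.25^2
    d3 = (ratio < 1.25 ** 3).mean()    # delta < 1.25^3
    abs_rel = np.mean(np.abs(gt - pred) / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    log10 = np.mean(np.abs(np.log10(gt) - np.log10(pred)))
    return d1, d2, d3, abs_rel, rmse, log10
```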
Reference
Corresponding paper to cite:
@article{Alhashim2018,
author = {Ibraheem Alhashim and Peter Wonka},
title = {High Quality Monocular Depth Estimation via Transfer Learning},
journal = {arXiv e-prints},
volume = {abs/1812.11941},
year = {2018},
url = {https://arxiv.org/abs/1812.11941},
eid = {arXiv:1812.11941},
eprint = {1812.11941}
}