October 26, 2020


avanetten/simrdwn


Rapid satellite imagery object detection

repo name avanetten/simrdwn
repo link https://github.com/avanetten/simrdwn
homepage
language C
size (curr.) 141839 kB
stars (curr.) 152
created 2018-10-24
license

SIMRDWN


The Satellite Imagery Multiscale Rapid Detection with Windowed Networks (SIMRDWN) codebase combines some of the leading object detection algorithms into a unified framework designed to detect objects both large and small in overhead imagery. This work seeks to extend the YOLT modification of YOLO to include the TensorFlow Object Detection API. Therefore, one can train models and test on arbitrary image sizes with YOLO (versions 2 and 3), Faster R-CNN, SSD, or R-FCN.

For more information, see:

  1. Our arXiv paper: Satellite Imagery Multiscale Rapid Detection with Windowed Networks

  2. Our blog (e.g. 1, 2)

  3. Our original YOLT paper

  4. The original YOLT repository (now deprecated)


Running SIMRDWN


0. Installation

SIMRDWN is built to execute within a Docker container on a GPU-enabled machine. The Docker build creates an Ubuntu 16.04 image with CUDA 9.0, Python 3.6, and tensorflow-gpu version 1.13.1.

  1. Clone this repository (e.g. to /simrdwn)

  2. Install nvidia-docker

  3. Build the Docker image.

     cd /simrdwn/docker
     nvidia-docker build --no-cache -t simrdwn .
    
  4. Spin up the Docker container (see the Docker docs for options)

     nvidia-docker run -it -v /simrdwn:/simrdwn --name simrdwn_container0 simrdwn
    
  5. Compile the Darknet C program for both YOLT2 and YOLT3.

     cd /simrdwn/yolt2
     make
     cd /simrdwn/yolt3
     make
    
  6. Get help on SIMRDWN options

     python /simrdwn/simrdwn/core/simrdwn.py --help
    

1. Prepare Training Data

1A. Create YOLT Format

Training data needs to be transformed to the YOLO format: training images in an “images” folder and bounding box labels in a corresponding “labels” folder. For example, an image “images/ex0.png” has a corresponding label “labels/ex0.txt”. Labels are bounding boxes of the form

<object-class> <x> <y> <width> <height>

where x, y, width, and height are expressed as fractions of the image’s width and height. Running a script such as /simrdwn/data_prep/parse_cowc.py extracts training windows of reasonable size (usually 416 or 544 pixels in extent) from large labeled images of the COWC dataset. The script then transforms the labels corresponding to these windows into the correct format and creates a list of all training input images in /data/train_data/training_list.txt. We also need to define the object classes with a .pbtxt file, such as /data/training_data/class_labels_car.pbtxt. Class integers should be 1-indexed in the .pbtxt file.
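
As a concrete illustration of the label format, the short sketch below converts a pixel-space bounding box into a relative YOLO label line. The function name and example coordinates are hypothetical and are not part of the SIMRDWN data-prep scripts.

    # Hypothetical helper: convert a pixel-space box (xmin, ymin, xmax, ymax)
    # into a relative "<object-class> <x> <y> <width> <height>" label line.
    def to_yolo_label(class_id, xmin, ymin, xmax, ymax, img_w, img_h):
        x_center = (xmin + xmax) / 2.0 / img_w
        y_center = (ymin + ymax) / 2.0 / img_h
        box_w = (xmax - xmin) / float(img_w)
        box_h = (ymax - ymin) / float(img_h)
        return "{} {:.6f} {:.6f} {:.6f} {:.6f}".format(
            class_id, x_center, y_center, box_w, box_h)

    # Example: a car spanning pixels (200, 150)-(230, 170) in a 416x416 window.
    # Darknet-style label files are 0-indexed, unlike the 1-indexed .pbtxt file.
    print(to_yolo_label(0, 200, 150, 230, 170, 416, 416))
    # -> 0 0.516827 0.384615 0.072115 0.048077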

1B. Create .tfrecord (optional)

If TensorFlow Object Detection API models are being run, we must transform the training data into the .tfrecord format. This is accomplished via the simrdwn/core/preprocess_tfrecords.py script.

python /simrdwn/core/preprocess_tfrecords.py \
    --image_list_file /simrdwn/data/cowc_labels_car_list.txt \
    --pbtxt_filename /simrdwn/data/class_labels_car.pbtxt \
    --outfile /simrdwn/data/cowc_labels_car_train.tfrecord \
    --outfile_val /simrdwn/data/cowc_labels_car_val.tfrecord \
    --val_frac 0.1
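
For reference, the TensorFlow Object Detection API consumes tf.train.Example records whose features look roughly like the sketch below. This is a generic illustration of the record layout, not the actual output of preprocess_tfrecords.py; the file paths and box values are made up.

    import tensorflow as tf

    def _bytes(v): return tf.train.Feature(bytes_list=tf.train.BytesList(value=v))
    def _floats(v): return tf.train.Feature(float_list=tf.train.FloatList(value=v))
    def _ints(v): return tf.train.Feature(int64_list=tf.train.Int64List(value=v))

    # One (hypothetical) 416x416 training window containing a single car,
    # with box corners expressed as fractions of the image dimensions.
    example = tf.train.Example(features=tf.train.Features(feature={
        'image/height': _ints([416]),
        'image/width': _ints([416]),
        'image/filename': _bytes([b'ex0.png']),
        'image/source_id': _bytes([b'ex0']),
        'image/encoded': _bytes([open('/simrdwn/data/images/ex0.png', 'rb').read()]),
        'image/format': _bytes([b'png']),
        'image/object/bbox/xmin': _floats([0.4808]),
        'image/object/bbox/xmax': _floats([0.5529]),
        'image/object/bbox/ymin': _floats([0.3606]),
        'image/object/bbox/ymax': _floats([0.4087]),
        'image/object/class/text': _bytes([b'car']),
        'image/object/class/label': _ints([1]),  # 1-indexed, matching the .pbtxt
    }))

    with tf.io.TFRecordWriter('/simrdwn/data/example.tfrecord') as writer:
        writer.write(example.SerializeToString())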

2. Train

We can train either YOLT models or TensorFlow Object Detection API models. If we are using the TensorFlow Object Detection API, the config file in the /simrdwn/configs directory may need to be updated (further example config files reside here). Training can be run with commands such as:

# SSD vehicle search
python /simrdwn/core/simrdwn.py \
	--framework ssd \
	--mode train \
	--outname inception_v2_cowc \
	--label_map_path /simrdwn/data/class_labels_car.pbtxt \
	--tf_cfg_train_file _altered_v0/ssd_inception_v2_simrdwn.config \
	--train_tf_record cowc/cowc_train.tfrecord \
	--max_batches 30000 \
	--batch_size 16 

# YOLT vehicle search
python /simrdwn/core/simrdwn.py \
	--framework yolt2 \
	--mode train \
	--outname dense_cars \
	--yolt_cfg_file ave_dense.cfg  \
	--weight_file yolo.weights \
	--yolt_train_images_list_file cowc_yolt_train_list.txt \
	--label_map_path class_labels_car.pbtxt \
	--max_batches 30000 \
	--batch_size 64 \
	--subdivisions 16

3. Test

During the test phase, input images of arbitrary size are processed.

  1. Slice test images into the window size used in training (a minimal slicing sketch follows this list).

  2. Run inference on the windows with the desired model.

  3. Stitch the windows back together to recreate the original test image.

  4. Run non-max suppression on overlapping predictions (see the sketch at the end of this section).

  5. Make plots of predictions (optional).
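
A minimal, standalone sketch of the slicing step is shown below, assuming numpy arrays and the 416-pixel window with 0.2 overlap used in the example commands that follow. It is not the actual SIMRDWN slicing code, which also encodes the window offsets in the slice filenames so detections can be stitched back together.

    import numpy as np

    def slice_image(image, window=416, overlap=0.2):
        """Yield (crop, x0, y0) windows that cover the image with overlap."""
        stride = int(window * (1.0 - overlap))
        h, w = image.shape[:2]
        ys = list(range(0, max(h - window, 0) + 1, stride))
        xs = list(range(0, max(w - window, 0) + 1, stride))
        # Make sure the right and bottom edges are fully covered.
        if ys[-1] + window < h:
            ys.append(h - window)
        if xs[-1] + window < w:
            xs.append(w - window)
        for y0 in ys:
            for x0 in xs:
                yield image[y0:y0 + window, x0:x0 + window], x0, y0

    # Example: the (x0, y0) offsets are added back to window-level detections
    # when the windows are stitched into the original image frame.
    test_image = np.zeros((1000, 1500, 3), dtype=np.uint8)
    offsets = [(x0, y0) for _, x0, y0 in slice_image(test_image)]
    print(len(offsets))  # 15 windows for this image size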

    # SSD vehicle search
    python /raid/local/src/simrdwn/src/simrdwn.py \
    	--framework ssd \
    	--mode test \
    	--outname inception_v2_cowc \
    	--label_map_path class_labels_car.pbtxt \
    	--train_model_path [ssd_train_path] \
    	--tf_cfg_train_file ssd_inception_v2_simrdwn.config \
    	--use_tfrecords=0 \
    	--testims_dir cowc/Utah_AGRC  \
    	--keep_test_slices 0 \
    	--test_slice_sep __ \
    	--test_make_legend_and_title 0 \
    	--edge_buffer_test 1 \
    	--test_box_rescale_frac 1 \
    	--plot_thresh_str 0.2 \
    	--slice_sizes_str 416 \
    	--slice_overlap 0.2 \
    	--alpha_scaling 1 \
    	--show_labels 0
    
    # YOLT vehicle search
    python /raid/local/src/simrdwn/core/simrdwn.py \
    	--framework yolt2 \
    	--mode test \
    	--outname dense_cowc \
    	--label_map_path class_labels_car.pbtxt \
    	--train_model_path [yolt2_train_path] \
    	--weight_file ave_dense_final.weights \
    	--yolt_cfg_file ave_dense.cfg \
    	--testims_dir cowc/Utah_AGRC  \
    	--keep_test_slices 0 \
    	--test_slice_sep __ \
    	--test_make_legend_and_title 0 \
    	--edge_buffer_test 1 \
    	--test_box_rescale_frac 1 \
    	--plot_thresh_str 0.2 \
    	--slice_sizes_str 416 \
    	--slice_overlap 0.2 \
    	--alpha_scaling 1 \
    	--show_labels 1
    

    Outputs will be something akin to the example images in the repository. The alpha_scaling flag makes the bounding box opacity proportional to prediction confidence, and the show_labels flag prints the object class at the top of the bounding box.
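
Step 4 of the test phase merges overlapping window predictions. A minimal, framework-agnostic sketch of greedy non-max suppression (not the SIMRDWN implementation) might look like this, where boxes are (xmin, ymin, xmax, ymax) in the stitched image frame:

    import numpy as np

    def nms(boxes, scores, iou_thresh=0.5):
        """Greedy non-max suppression: keep the highest-scoring box, then
        drop any remaining box whose IoU with it exceeds iou_thresh."""
        boxes = np.asarray(boxes, dtype=float)
        order = np.argsort(scores)[::-1]
        keep = []
        while order.size > 0:
            i = order[0]
            keep.append(int(i))
            # Intersection of the top box with all remaining boxes.
            xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
            yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
            xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
            yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
            inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
            area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
            areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                    (boxes[order[1:], 3] - boxes[order[1:], 1])
            iou = inter / (area_i + areas - inter)
            order = order[1:][iou <= iou_thresh]
        return keep

    # Two overlapping detections of the same car from adjacent windows,
    # plus one distinct detection: the duplicate is suppressed.
    boxes = [[10, 10, 40, 30], [12, 11, 42, 31], [200, 200, 230, 220]]
    scores = [0.9, 0.8, 0.7]
    print(nms(boxes, scores))  # -> [0, 2]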

If you plan on using SIMRDWN in your work, please consider citing YOLO, the TensorFlow Object Detection API, YOLT, and SIMRDWN.
