June 8, 2019

poloclub/interactive-classification

Interactive Classification for Deep Learning Interpretation

repo name: poloclub/interactive-classification
repo link: https://github.com/poloclub/interactive-classification
homepage: https://cabreraalex.com/#/paper/interactive-classification
language: JavaScript
size (curr.): 9637 kB
stars (curr.): 50
created: 2018-02-13
license: MIT License

Interactive Classification for Deep Learning Interpretation

We have designed and developed an interactive system that allows users to experiment with deep learning image classifiers and explore their robustness and sensitivity. Selected areas of an image can be removed in real time with classical computer vision inpainting algorithms, allowing users to ask a variety of “what if” questions by experimentally modifying images and seeing how the deep learning model reacts. The system also computes class activation maps for any selected class, which highlight the semantic regions of an image the model relies on for classification. The system runs entirely in the browser using TensorFlow.js, React, and SqueezeNet. An advanced inpainting version is also available, using a server that runs the PatchMatch algorithm from the GIMP Resynthesizer plugin.
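As a concrete illustration of the “what if” interaction, the following is a minimal sketch (not the repository's code) that blanks out a user-selected region of an image and re-runs a TensorFlow.js classifier on the result. The model URL, the 227×227 input resolution, and the box format are placeholder assumptions, and zeroing out pixels is only a crude stand-in for the inpainting the system actually performs.

// Minimal sketch (not the repository's code): remove a selected region and
// re-classify. Zeroing out pixels stands in for inpainting; the model URL,
// input size, and box format are placeholder assumptions.
import * as tf from '@tensorflow/tfjs';

async function classifyWithRegionRemoved(imgElement, box, modelUrl) {
  const model = await tf.loadGraphModel(modelUrl); // placeholder model URL

  return tf.tidy(() => {
    // Image as a float tensor scaled to [0, 1].
    const img = tf.browser.fromPixels(imgElement).toFloat().div(255);
    const [h, w] = img.shape;

    // Mask that is 0 inside the selected box and 1 elsewhere.
    const maskBuf = tf.buffer([h, w, 1]);
    for (let y = 0; y < h; y++) {
      for (let x = 0; x < w; x++) {
        const inside = y >= box.top && y < box.top + box.height &&
                       x >= box.left && x < box.left + box.width;
        maskBuf.set(inside ? 0 : 1, y, x, 0);
      }
    }

    // "Remove" the region, resize to the network's (assumed) input
    // resolution, and return the class scores.
    const masked = img.mul(maskBuf.toTensor());
    const input = tf.image.resizeBilinear(masked, [227, 227]).expandDims(0);
    return model.predict(input);
  });
}

Comparing the scores returned for the original and the masked image gives the kind of before/after view shown in the example scenario below.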

YouTube video demo

This is the code repository for the accepted CVPR 2018 demo, Interactive Classification for Deep Learning Interpretation. Visit our research group's homepage, the Polo Club of Data Science at Georgia Tech, for more related research!

Example Scenario: Interpreting “Failed” Classification

The modified image (left), originally classified as dock, is misclassified as ocean liner when the masts of a couple of boats are removed from the original image (right). The top five classification scores are tabulated underneath each image.

Failed classification

Installation

Download or clone this repository:

git clone https://github.com/poloclub/interactive-classification.git

Within the cloned repo, install the required packages with yarn:

yarn

Usage

To run, type:

yarn start

Advanced Inpainting

The following steps are needed to set up PatchMatch inpainting, which currently only works on Linux:

  1. Clone the Resynthesizer repository and follow the instructions for building the project (stop after running make)
  2. Find the libresynthesizer.a static library in the generated lib folder and copy it to the inpaint folder in this repository
  3. Run gcc resynth.c -L. -lresynthesizer -lm -lglib-2.0 -o prog (you may need to install glib-2.0 first) to generate the prog executable
  4. You can now run python3 inpaint_server.py, and PatchMatch will be used as the inpainting algorithm when running the React application with yarn start (a sketch of a possible client request follows below).
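For orientation, here is a hedged sketch (not taken from the repository) of how the browser client might send a selected region to the running inpainting server. The /inpaint endpoint, port 5000, and JSON payload fields are assumptions for illustration only; see inpaint_server.py for the actual interface.

// Hypothetical client call to the PatchMatch inpainting server; the endpoint
// path, port, and payload/response fields are assumptions, not the repo's API.
async function requestInpainting(imageDataUrl, maskDataUrl) {
  const response = await fetch('http://localhost:5000/inpaint', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ image: imageDataUrl, mask: maskDataUrl }),
  });
  if (!response.ok) {
    throw new Error(`Inpainting server returned ${response.status}`);
  }
  // Assume the server responds with the inpainted image as a data URL.
  const { inpainted } = await response.json();
  return inpainted;
}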

Citation

Interactive Classification for Deep Learning Interpretation
Angel Cabrera, Fred Hohman, Jason Lin, Duen Horng (Polo) Chau
Demo, Conference on Computer Vision and Pattern Recognition (CVPR). June 18, 2018. Salt Lake City, USA.

@article{cabrera2018interactive,
  title={Interactive Classification for Deep Learning Interpretation},
  author={Cabrera, Angel and Hohman, Fred and Lin, Jason and Chau, Duen Horng},
  journal={Demo, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2018},
  organization={IEEE}
}

Researchers

Name | Affiliation
Angel Cabrera | Georgia Tech
Fred Hohman | Georgia Tech
Jason Lin | Georgia Tech
Duen Horng (Polo) Chau | Georgia Tech

License

MIT License. See LICENSE.md.

Contact

For questions or support, please open an issue.
