November 6, 2019


SeldonIO/alibi

Algorithms for monitoring and explaining machine learning models

repo name: SeldonIO/alibi
repo link: https://github.com/SeldonIO/alibi
homepage: https://docs.seldon.io/projects/alibi/en/latest/
language: Python
size (curr.): 2685 kB
stars (curr.): 447
created: 2019-02-26
license: Apache License 2.0


Alibi is an open source Python library aimed at machine learning model inspection and interpretation. The initial focus of the library is on black-box, instance-based model explanations.

If you’re interested in outlier detection, concept drift or adversarial instance detection, check out our sister project alibi-detect.

Goals

  • Provide high quality reference implementations of black-box ML model explanation and interpretation algorithms
  • Define a consistent API for interpretable ML methods
  • Support multiple use cases (e.g. tabular, text and image data classification, regression)

Installation

Alibi can be installed from PyPI:

pip install alibi

This will install alibi with all its dependencies:

  beautifulsoup4
  numpy
  Pillow
  pandas
  requests
  scikit-learn
  spacy
  scikit-image
  tensorflow

To run all the example notebooks, you may additionally run pip install alibi[examples], which will install the following:

  seaborn
  Keras
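
Note that in some shells (e.g. zsh) the square brackets must be quoted:

pip install "alibi[examples]"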

Supported algorithms

Model explanations

These algorithms provide instance-specific (sometimes also called local) explanations of ML model predictions. Given a single instance and a model prediction, they aim to answer the question “Why did my model make this prediction?” The following algorithms all work with black-box models, meaning that the only requirement is access to a prediction function (which could be an API endpoint for a model in production).
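
To make the black-box requirement concrete, here is a minimal sketch of a prediction function (scikit-learn is used purely for illustration and is not required by the explainers; any callable mapping a batch of instances to prediction probabilities will do):

  from sklearn.datasets import load_iris
  from sklearn.linear_model import LogisticRegression

  X, y = load_iris(return_X_y=True)
  clf = LogisticRegression(max_iter=1000).fit(X, y)

  # The explainers never see the model internals, only this callable;
  # it could equally wrap an HTTP call to a model served in production.
  predict_fn = clf.predict_proba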

The following table summarizes the capabilities of the current algorithms:

Explainer                  Model types          Classification  Categorical data  Tabular  Text  Images  Needs training set
Anchors                    black-box            ✔               ✔                 ✔        ✔     ✔       For Tabular
CEM                        black-box, TF/Keras  ✔               ✘                 ✔        ✘     ✔       Optional
Counterfactual Instances   black-box, TF/Keras  ✔               ✘                 ✔        ✘     ✔       No
Prototype Counterfactuals  black-box, TF/Keras  ✔               ✘                 ✔        ✘     ✔       Optional
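
As a usage sketch, the Anchors explainer on tabular data might look like the following (a hedged example: the constructor and fit/explain pattern follow the alibi docs, but the exact fields of the returned explanation have varied across releases; in the 0.3.x series explain returned a dictionary):

  from sklearn.datasets import load_iris
  from sklearn.linear_model import LogisticRegression
  from alibi.explainers import AnchorTabular

  X, y = load_iris(return_X_y=True)
  feature_names = ['sepal length', 'sepal width', 'petal length', 'petal width']
  clf = LogisticRegression(max_iter=1000).fit(X, y)

  # AnchorTabular only needs the black-box prediction function and feature names.
  explainer = AnchorTabular(lambda x: clf.predict_proba(x), feature_names)
  explainer.fit(X)                       # "For Tabular": anchors are sampled from a training set
  explanation = explainer.explain(X[0])
  print(explanation['names'])            # the anchor: a list of feature predicates
  print(explanation['precision'])        # fraction of perturbed samples keeping the prediction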

Model confidence metrics

These algorithms provide instance-specific scores measuring the model's confidence in a particular prediction.

Algorithm          Model types  Classification  Regression  Categorical data  Tabular  Text  Images  Needs training set
Trust Scores       black-box    ✔               ✘           ✘                 ✔        ✔(1)  ✔(2)    Yes
Linearity Measure  black-box    ✔               ✔           ✘                 ✔        ✘     ✔       Optional

(1) Depending on model

(2) May require dimensionality reduction
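
A trust score is the ratio between the distance of a test instance to the nearest class other than the predicted one and the distance to the predicted class, so higher scores indicate more trustworthy predictions. A usage sketch follows (hedged: argument names follow the alibi documentation of this era and may differ between releases; the dataset and model are illustrative only):

  from sklearn.datasets import load_digits
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import train_test_split
  from alibi.confidence import TrustScore

  X, y = load_digits(return_X_y=True)
  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
  clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)

  ts = TrustScore()
  ts.fit(X_train, y_train, classes=10)   # "Yes": a training set is required
  # score returns the trust scores and the closest class other than the predicted one
  score, closest_class = ts.score(X_test, clf.predict(X_test))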

Example outputs

Anchor method applied to the InceptionV3 model trained on ImageNet:

(Images: “Prediction: Persian Cat” and its anchor explanation)

Contrastive Explanation method applied to a CNN trained on MNIST:

(Images: original instance “Prediction: 4”, pertinent negative “9”, pertinent positive “4”)

Trust scores applied to a softmax classifier trained on MNIST:

(Image: trust score results on MNIST)

Citations

If you use alibi in your research, please consider citing it.

BibTeX entry:

@software{alibi,
  title = {Alibi: Algorithms for monitoring and explaining machine learning models},
  author = {Klaise, Janis and Van Looveren, Arnaud and Vacanti, Giovanni and Coca, Alexandru},
  url = {https://github.com/SeldonIO/alibi},
  version = {0.3.2},
  date = {2020-02-17},
}