September 10, 2019




Lightweight framework for fast prototyping and training deep neural networks with PyTorch and TensorFlow

repo name delira-dev/delira
repo link
language Python
size (curr.) 8176 kB
stars (curr.) 220
created 2018-12-11
license GNU Affero General Public License v3.0



delira - A Backend Agnostic High Level Deep Learning Library

Authors: Justus Schock, Michael Baumgartner, Oliver Rippel, Christoph Haarburger

Copyright (C) 2020 by RWTH Aachen University

This software is dual-licensed under:
• Commercial license (please contact:
• AGPL (GNU Affero General Public License) open source license


delira is designed to work as a backend agnostic high level deep learning library. You can choose among several computation backends. It allows you to compare different models written for different backends without rewriting them.

To this end, delira encapsulates the entire training and prediction logic in backend-agnostic modules, so training behaves identically regardless of the backend chosen.

delira is designed in a very modular way so that almost everything is easily exchangeable or customizable.
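The backend-agnostic idea can be illustrated with a minimal, self-contained sketch (this is conceptual only, not delira's actual API): the training loop lives in one generic class, and everything backend-specific is pushed into a single step callable that can be swapped out.

```python
# Conceptual sketch of a backend-agnostic trainer (NOT delira's real API):
# the loop is generic; only the `train_step` callable is backend-specific.
from typing import Any, Callable, Iterable, List


class GenericTrainer:
    def __init__(self, train_step: Callable[[Any], float]):
        # train_step encapsulates forward pass, loss and backward pass
        # for one batch and returns the loss value.
        self.train_step = train_step

    def train(self, batches: Iterable[Any], epochs: int = 1) -> List[float]:
        losses = []
        for _ in range(epochs):
            for batch in batches:
                losses.append(self.train_step(batch))
        return losses


# A "backend" here is just a step function; swapping backends means
# swapping this callable while the loop above stays identical.
def dummy_step(batch):
    return float(sum(batch)) / len(batch)


trainer = GenericTrainer(dummy_step)
history = trainer.train([[1, 2], [3, 4]], epochs=2)
```

In delira itself this separation is of course richer (callbacks, checkpointing, monitoring), but the principle of isolating backend-specific code behind a uniform interface is the same.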

A (non-comprehensive) list of the features included in delira:

  • Dataset loading
  • Dataset sampling
  • Augmentation (multi-threaded) including 3D images with any number of channels (based on batchgenerators)
  • A generic trainer class that implements the training process for all backends
  • Training monitoring using Visdom or Tensorboard
  • Model save and load functions
  • Already implemented datasets
  • Many operations and utilities for medical imaging
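To make the dataset loading and sampling features above concrete, here is a small illustrative sketch in plain Python (again, not delira's actual classes): an index-based dataset plus a shuffling batch sampler, which is the general pattern such features build on.

```python
# Illustrative sketch (NOT delira's real API): an index-based dataset
# and a simple random batch sampler.
import random


class ListDataset:
    """Minimal dataset: holds samples and exposes len() and indexing."""

    def __init__(self, samples):
        self._samples = list(samples)

    def __len__(self):
        return len(self._samples)

    def __getitem__(self, idx):
        return self._samples[idx]


def random_batch_sampler(dataset, batch_size, seed=0):
    """Yield shuffled batches covering the whole dataset exactly once."""
    rng = random.Random(seed)
    indices = list(range(len(dataset)))
    rng.shuffle(indices)
    for start in range(0, len(indices), batch_size):
        yield [dataset[i] for i in indices[start:start + batch_size]]


ds = ListDataset(range(10))
batches = list(random_batch_sampler(ds, batch_size=4))
```

delira layers multi-threaded augmentation (via batchgenerators) on top of this kind of pipeline, but the dataset/sampler split shown here is the underlying abstraction.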

What about the name?

delira started as a library to enable deep learning research and fast prototyping in medical imaging (especially in radiology). That’s also where the name comes from: delira was an acronym for DEep Learning In RAdiology. To accommodate many other use cases, we have since broadened the framework’s focus, although many medical-imaging utilities remain and we are steadily factoring them out.


Choose Backend

You may choose a backend from the list below. If your desired backend is not listed and you want to add it, please open an issue (it should not be hard at all) and we will guide you during the process of doing so.

| Backend | Binary Installation | Source Installation | Notes |
|---|---|---|---|
| None | `pip install delira` | `pip install git+` | Training is not possible unless a backend is installed separately |
| torch | `pip install delira[torch]` | `git clone && cd delira && pip install .[torch]` | The torch backend supports mixed-precision training via NVIDIA/apex (must be installed separately) |
| torchscript | `pip install delira[torchscript]` | `git clone && cd delira && pip install .[torchscript]` | The torchscript backend currently supports single-GPU training only |
| tensorflow eager | `pip install delira[tensorflow]` | `git clone && cd delira && pip install .[tensorflow]` | The tensorflow backend is still very experimental and lacks some features |
| tensorflow graph | `pip install delira[tensorflow]` | `git clone && cd delira && pip install .[tensorflow]` | The tensorflow backend is still very experimental and lacks some features |
| scikit-learn | `pip install delira` | `pip install git+` | / |
| chainer | `pip install delira[chainer]` | `git clone && cd delira && pip install .[chainer]` | / |
| Full | `pip install delira[full]` | `git clone && cd delira && pip install .[full]` | All backends will be installed |
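As a worked example, installing with the torch backend might look like this (the repository URL is assumed from the repo name `delira-dev/delira` given above; quoting the extra avoids shell globbing issues, e.g. in zsh):

```shell
# Binary installation with the torch backend
pip install "delira[torch]"

# Source installation: clone the repository (URL assumed from the
# repo name delira-dev/delira), then install with the torch extra
git clone https://github.com/delira-dev/delira.git
cd delira
pip install ".[torch]"
```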


The easiest way to use delira is via Docker (with the NVIDIA runtime for GPU support), either building from the Dockerfile or using the prebuilt images.


We have a community chat on Slack. If you need an invitation, just follow this link.

Getting Started

The best way to learn how to use delira is to have a look at the tutorial notebook. Example implementations for classification problems, segmentation approaches and GANs are also provided in the notebooks folder.


The docs are hosted on ReadTheDocs/Delira. The documentation of the latest master branch can always be found at the project’s github page.


If you find a bug or have an idea for an improvement, please have a look at our contribution guideline.
