December 13, 2019

825 words 4 mins read

Bisonai/awesome-edge-machine-learning



  • repo name: Bisonai/awesome-edge-machine-learning
  • repo link: https://github.com/Bisonai/awesome-edge-machine-learning
  • language: Python
  • size (curr.): 276 kB
  • stars (curr.): 86
  • created: 2019-06-27
  • license: Other

Awesome Edge Machine Learning


A curated list of awesome edge machine learning resources, including research papers, inference engines, challenges, books, meetups and others.


Papers

Applications

There are countless possible edge machine learning applications. Here we collect papers that describe specific solutions.

AutoML

Automated machine learning (AutoML) is the process of automating the end-to-end application of machine learning to real-world problems (Wikipedia). AutoML is used, for example, to design new efficient neural architectures under a constraint on the computational budget (defined either as a number of FLOPs or as an inference time measured on a real device) or on the size of the architecture.
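As a purely illustrative sketch of searching under such a budget (the candidate space, cost model and function names below are all hypothetical, not taken from any paper in this list), a budget-constrained random search could look like this:

```python
import random

# Hypothetical search space: each candidate is (depth, width multiplier, kernel size).
SEARCH_SPACE = {
    "depth": [8, 12, 16, 20],
    "width": [0.5, 0.75, 1.0, 1.25],
    "kernel": [3, 5],
}

def estimate_flops(cfg):
    # Stand-in cost model; a real AutoML system would profile the architecture
    # or measure latency directly on the target device.
    return cfg["depth"] * cfg["width"] ** 2 * cfg["kernel"] ** 2 * 1e6

def train_and_evaluate(cfg):
    # Placeholder for the expensive step: build, train and validate the model.
    return random.random()

def random_search(budget_flops, trials=20):
    best_cfg, best_acc = None, -1.0
    for _ in range(trials):
        cfg = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        if estimate_flops(cfg) > budget_flops:
            continue  # reject candidates that exceed the computational budget
        acc = train_and_evaluate(cfg)
        if acc > best_acc:
            best_cfg, best_acc = cfg, acc
    return best_cfg, best_acc

print(random_search(budget_flops=300e6))
```

Published AutoML methods replace the random sampling with reinforcement learning, evolutionary search or differentiable relaxations, but the budget check plays the same role.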

Efficient Architectures

Efficient architectures are neural networks designed to have a small memory footprint and fast inference time when measured on edge devices.
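One widely used building block in this family (mentioned here only as an illustration, not tied to a specific paper in the list) is the depthwise separable convolution popularized by MobileNet-style models. A back-of-the-envelope parameter count shows where the savings come from:

```python
# Parameter counts for one layer mapping c_in=128 to c_out=256 channels
# with a 3x3 kernel (biases ignored). Numbers are illustrative only.
c_in, c_out, k = 128, 256, 3

standard_conv = k * k * c_in * c_out                # 294,912 parameters
depthwise_separable = k * k * c_in + c_in * c_out   # 1,152 + 32,768 = 33,920

print(standard_conv, depthwise_separable, standard_conv / depthwise_separable)
# ~8.7x fewer parameters (and a similar reduction in multiply-adds)
```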

Federated Learning

Federated Learning enables mobile phones to collaboratively learn a shared prediction model while keeping all the training data on device, decoupling the ability to do machine learning from the need to store the data in the cloud (Google AI blog: Federated Learning).
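A minimal toy sketch of the federated averaging idea (not the protocol of any specific paper here; the linear model and all names are made up for illustration): the server only ever sees model weights, never the raw client data.

```python
import numpy as np

def local_update(weights, client_data, lr=0.1, epochs=1):
    # Each client trains on its own data; only the updated weights leave the device.
    x, y = client_data
    w = weights.copy()
    for _ in range(epochs):
        grad = x.T @ (x @ w - y) / len(y)  # gradient of a linear least-squares loss
        w -= lr * grad
    return w

def federated_average(weights, clients):
    # The server aggregates client updates weighted by the number of local samples.
    updates = [local_update(weights, data) for data in clients]
    sizes = np.array([len(data[1]) for data in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(n, 3)), rng.normal(size=n)) for n in (50, 80, 120)]
w = np.zeros(3)
for _ in range(10):  # communication rounds
    w = federated_average(w, clients)
print(w)
```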

ML Algorithms For Edge

Standard machine learning algorithms are not always able to run on edge devices due to their large computational and memory requirements. This section introduces machine learning algorithms optimized for the edge.

Network Pruning

Pruning is a common method to derive a compact network: after training, some structural portion of the parameters is removed, along with its associated computations (Importance Estimation for Neural Network Pruning).
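The quoted description leaves the selection criterion open; a common baseline, shown here as a small numpy sketch rather than the method of the cited paper, is magnitude pruning, where the weights with the smallest absolute values are zeroed out:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned = magnitude_prune(w, sparsity=0.75)
print((pruned == 0).mean())  # roughly 0.75 of the entries are now zero
```

Structured variants remove whole filters or channels instead of individual weights, which maps better to speed-ups on real hardware.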

Others

This section contains papers that are related to edge machine learning but do not fall into any of the major groups above. These papers often deal with deployment issues (e.g., optimizing inference on the target platform).

Quantization

Quantization is the process of reducing the precision of weights and/or activations in a neural network, from 32-bit floating point to lower-bit representations. The advantages of this method are a smaller model size and faster inference on hardware that supports lower-precision arithmetic.
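As a concrete illustration, here is a minimal sketch of symmetric per-tensor 8-bit weight quantization (real inference engines such as TensorFlow Lite use more elaborate schemes, e.g. per-channel scales and zero points): floats are mapped to int8 with a single scale factor.

```python
import numpy as np

def quantize_int8(weights):
    # Symmetric per-tensor quantization: map floats to int8 with one scale factor.
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
error = np.abs(w - dequantize(q, scale)).max()
print(w.nbytes, q.nbytes, error)  # 16384 bytes -> 4096 bytes, small reconstruction error
```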

Datasets

Visual Wake Words Dataset

Visual Wake Words represents a common microcontroller vision use case, identifying whether a person is present in an image or not, and provides a realistic benchmark for tiny vision models. Within a limited memory footprint of 250 kB, several state-of-the-art mobile models achieve an accuracy of 85-90% on the Visual Wake Words dataset.

Inference Engines

List of machine learning inference engines and APIs that are optimized for execution and/or training on edge devices.

Arm Compute Library

Bender

Caffe 2

CoreML

Deeplearning4j

Embedded Learning Library

Feather CNN

MACE

MNN

MXNet

NCNN

Neural Networks API

Paddle Mobile

Qualcomm Neural Processing SDK for AI

Tengine

TensorFlow Lite

dabnn

Books

List of books with a focus on on-device (e.g., edge or mobile) machine learning.

TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers

  • Authors: Pete Warden, Daniel Situnayake
  • Published: 2020

Machine Learning by Tutorials: Beginning machine learning for Apple and iOS

  • Author: Matthijs Hollemans
  • Published: 2019

Core ML Survival Guide

  • Author: Matthijs Hollemans
  • Published: 2018

Building Mobile Applications with TensorFlow

  • Author: Pete Warden
  • Published: 2017

Challenges

Low Power Recognition Challenge (LPIRC)

Competition focused on vision solutions that simultaneously achieve high accuracy and energy efficiency. LPIRC has been held regularly at computer vision conferences (CVPR, ICCV and others) since 2015, and the winning solutions have already improved 24-fold in the ratio of accuracy to energy consumption.

Other Resources

Awesome EMDL

Embedded and mobile deep learning research resources

Awesome Mobile Machine Learning

A curated list of awesome mobile machine learning resources for iOS, Android, and edge devices

Awesome Pruning

A curated list of neural network pruning resources

Efficient DNNs

Collection of recent methods on DNN compression and acceleration

Machine Think

Machine learning tutorials targeted for iOS devices

Pete Warden’s blog

Contribute

Unlike other awesome lists, the data is stored in YAML format and the markdown files are generated with the awesome.py script.

Every directory contains a data.yaml file, which stores the data we want to display, and a config.yaml file, which stores its metadata (e.g., how the data is sorted). How the data is presented is defined in renderer.py.
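The exact schema of data.yaml is not shown here, so the following is only a guess at what the generation step might look like; the file path, field names and markdown layout are assumptions, not the actual renderer.py logic.

```python
import yaml

# Hypothetical entry layout; the real data.yaml schema may differ.
with open("Papers/Quantization/data.yaml") as f:
    entries = yaml.safe_load(f)

lines = []
for entry in entries:
    lines.append(f"- [{entry['title']}]({entry['url']}), {entry['year']}")
print("\n".join(lines))
```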

License

CC0

To the extent possible under law, Bisonai has waived all copyright and related or neighboring rights to this work.
