onnx/tutorials
Tutorials for creating and using ONNX models
| repo name | onnx/tutorials |
| --- | --- |
| repo link | https://github.com/onnx/tutorials |
| homepage | |
| language | Jupyter Notebook |
| size (curr.) | 36103 kB |
| stars (curr.) | 1538 |
| created | 2017-11-15 |
| license | MIT License |
ONNX Tutorials
Open Neural Network Exchange (ONNX) is an open standard format for representing machine learning models. ONNX is supported by a community of partners who have implemented it in many frameworks and tools.
These Docker images are available for convenience, to help you get started with ONNX and the tutorials on this page:
- Docker image for ONNX and Caffe2/PyTorch
- Docker image for ONNX, ONNX Runtime, and various converters
Getting ONNX models
- Pre-trained models: Many pre-trained ONNX models are provided for common scenarios in the ONNX Model Zoo.
- Services: Customized ONNX models are generated for your data by cloud-based services (see below)
- Convert models from various frameworks (see below)
Services
Below is a list of services that can output ONNX models customized for your data.
Converting to ONNX format
Scoring ONNX Models
Once you have an ONNX model, it can be scored with a variety of tools.
| Framework / Tool | Installation | Tutorial |
| --- | --- | --- |
| Caffe2 | Caffe2 | Example |
| Cognitive Toolkit (CNTK) | built-in | Example |
| CoreML (Apple) | onnx/onnx-coreml | Example |
| MATLAB | Deep Learning Toolbox Converter | Documentation and Examples |
| Menoh | GitHub Packages or from NuGet | Example |
| ML.NET | Microsoft.ML NuGet package | Example |
| MXNet (Apache) - GitHub | MXNet | API, Example |
| ONNX Runtime | Python (PyPI) - onnxruntime and onnxruntime-gpu; C/C# (NuGet) - Microsoft.ML.OnnxRuntime and Microsoft.ML.OnnxRuntime.Gpu | APIs: Python, C#, C, C++; Examples: Python, C#, C |
| SINGA (Apache) - GitHub [experimental] | built-in | Example |
| TensorFlow | onnx-tensorflow | Example |
| TensorRT | onnx-tensorrt | Example |
| Windows ML | Pre-installed on Windows 10 | API; Tutorials - C++ Desktop App, C# UWP App; Examples |
End-to-End Tutorials
Conversion to deployment
- Converting SuperResolution model from PyTorch to Caffe2 with ONNX and deploying on mobile device
- Transferring SqueezeNet from PyTorch to Caffe2 with ONNX and to Android app
- Converting Style Transfer model from PyTorch to CoreML with ONNX and deploying to an iPhone
- Serving PyTorch Models on AWS Lambda with Caffe2 & ONNX
- MXNet to ONNX to ML.NET with SageMaker, ECS and ECR - external link
- Convert CoreML YOLO model to ONNX, score with ONNX Runtime, and deploy in Azure
- Inference PyTorch Bert Model for High Performance in ONNX Runtime
- Inference TensorFlow Bert Model for High Performance in ONNX Runtime
- Inference Bert Model for High Performance with ONNX Runtime on AzureML
- Various Samples: Inferencing ONNX models using ONNX Runtime (Python, C#, C, Java, etc)
Serving
- Serving ONNX models with Cortex
- Serving ONNX models with MXNet Model Server
- Serving ONNX models with ONNX Runtime Server
- ONNX model hosting with AWS SageMaker and MXNet
- Serving ONNX models with ONNX Runtime on Azure ML
ONNX as an intermediary format
ONNX Custom Operators
Other ONNX tools
- Verifying correctness and comparing performance
- Visualizing an ONNX model (useful for debugging)
- Netron: a viewer for ONNX models
- Example of operating on ONNX protobuf
- Float16 <-> Float32 converter
- Version conversion
Contributing
We welcome improvements to the converter tools and contributions of new ONNX bindings. Check out the contributor guide to get started.
Use ONNX for something cool? Send the tutorial to this repo by submitting a PR.