Machine Learning Documentation Overview
Introduction
This section of the documentation provides resources and guides for developing Machine Learning applications on Toradex System-on-Modules (SoMs). It also includes information on the principal Machine Learning frameworks and tools supported by Torizon OS, as well as best practices for optimizing Machine Learning models for edge devices.
The most efficient way to execute Machine Learning applications on Toradex SoMs is by leveraging Neural Processing Units (NPUs). NPUs are specialized hardware accelerators designed to efficiently execute the core operations used in neural networks, such as matrix multiplications and convolutions. By offloading these compute-intensive tasks to the NPU, you can significantly reduce inference time, improve energy efficiency, and achieve better overall performance.
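As a concrete illustration, the sketch below shows how a TensorFlow Lite application can offload inference to the NPU through an external delegate. This is a minimal, hypothetical example: the delegate path `/usr/lib/libvx_delegate.so` (the VX delegate used on NXP i.MX 8M Plus BSPs) and the model path are assumptions and may differ on your image; when no delegate is found, inference falls back to the CPU.

```python
# Hypothetical sketch of NPU-accelerated inference with TensorFlow Lite.
# The delegate library path below is an assumption for i.MX 8M Plus images;
# adjust it to match your Torizon OS container.
import os

NPU_DELEGATE = "/usr/lib/libvx_delegate.so"  # assumed VX delegate location


def pick_delegate(path=NPU_DELEGATE):
    """Return the delegate path if it exists, otherwise None (CPU fallback)."""
    return path if os.path.exists(path) else None


def run_inference(model_path, input_data):
    """Run one inference pass, offloading to the NPU when a delegate is available."""
    import numpy as np
    from tflite_runtime.interpreter import Interpreter, load_delegate

    delegates = []
    delegate_path = pick_delegate()
    if delegate_path:
        delegates.append(load_delegate(delegate_path))

    interpreter = Interpreter(model_path=model_path,
                              experimental_delegates=delegates)
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()[0]
    interpreter.set_tensor(input_details["index"],
                           np.asarray(input_data, dtype=input_details["dtype"]))
    interpreter.invoke()

    output_details = interpreter.get_output_details()[0]
    return interpreter.get_tensor(output_details["index"])
```

In practice, quantized (e.g. uint8) models benefit most from the NPU, since the accelerator operates on integer arithmetic; floating-point operations may still be executed on the CPU.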
NPU Support on Toradex SoMs
Currently, the NPU is supported on the following Toradex SoMs:
- Aquila AM69 - Coming Soon
- Aquila iMX95 - Coming Soon
- Verdin iMX8M Plus
- Verdin iMX95 - Coming Soon
- SMARC iMX8M Plus
- SMARC iMX95 - Coming Soon
- Luna SL1680 SBC
Torizon-Supported Frameworks
The table below summarizes the current status of Machine Learning frameworks on Torizon OS and the availability of containers for each framework runtime. Keep in mind that the availability of a given framework depends on the SoC vendor, may be limited by hardware capabilities, or may be under active development or planned for a future release.
Table Legend:
- ✅: Framework is supported
- ❌: Framework is not supported
- ➖: Framework cannot be supported, usually because it requires hardware capabilities the SoM does not provide
- ⌛: Framework currently under development
- 📅: Framework planned for a future release
| Platform → Framework ↓ | AM69 | iMX95 | iMX8M Plus | SL1680 |
|---|---|---|---|---|
| ONNX Runtime | 📅 | 📅 | 📅 | 📅 |
| OpenCV | 📅 | 📅 | 📅 | 📅 |
| NNStreamer | 📅 | 📅 | 📅 | 📅 |
| TensorFlow Lite | 📅 | 📅 | ✅ | ⌛ |
| LiteRT | 📅 | 📅 | 📅 | 📅 |
| Torq Runtime* | ➖ | ➖ | ➖ | 📅 |
*Torq Runtime is a proprietary runtime developed by Synaptics for its NPUs. It cannot be supported on other platforms due to hardware limitations.
If you are interested in a framework that is not listed in the table above or not yet supported on Torizon OS, contact us.
Use Cases
Build TensorFlow Lite Applications With NPU Support on i.MX 8M Plus-Based Modules on Torizon OS
Learn how to build TensorFlow Lite applications that leverage the NPU on i.MX 8M Plus-based modules running Torizon OS.
Build TensorFlow Lite Applications With NPU Support on SL1680-Based Modules on Torizon OS
Learn how to build TensorFlow Lite applications that leverage the NPU on SL1680-based modules running Torizon OS.
Run the TensorFlow Lite Demo Application With NPU Support on Torizon OS
Learn how to run a Toradex-provided TensorFlow Lite demo application that leverages the NPU on Torizon OS.
Legacy Documentation
The Developer Website provides Machine Learning guides for older versions of Torizon OS. Although these guides may still serve as useful examples, they have not been updated to reflect the latest changes in the software stack. These articles include:
- AI at the Edge, Pasta Detection Demo with AWS
- Build and Run the Amazon SageMaker Edge Manager Demo on Torizon
- Build the AWS AI at the Edge Demo Image
- How to Execute Models Tuned by SageMaker Neo using DLR Runtime, Gstreamer and OpenCV on TorizonCore
- How to use OpenCL 1.2 in iMX8 on Torizon
- Torizon Sample: Image Classification with Tensorflow Lite
- Torizon Sample: Real Time Object Detection with Tensorflow Lite
- Torizon Sample: Using OpenCV for Computer Vision
- Train a Neural Network for Object Detection algorithm (SSD) for iMX8 boards using SageMaker Neo