Version: Torizon OS 7.x.y

Run the TensorFlow Lite Demo Application With NPU Support on Torizon OS

Introduction

In this article, you will learn how to run a Toradex-provided TensorFlow Lite demo application that leverages the Neural Processing Unit (NPU) on Torizon OS.

The demo is based on an object detection model that detects Toradex modules in a given image. Refer to the Torizon Object Detection AI Training and Inference for Toradex Modules blog post for more details.

Prerequisites

  • Hardware Prerequisites:

    • A Toradex System-on-Module with an NPU

  • Software Requirements:

    • Torizon OS installed on the module

Demonstration

Follow the instructions below to run the TensorFlow Lite demo application on Torizon OS. The demo application is provided by Toradex and is designed to showcase the performance benefits of using the NPU for Machine Learning inference tasks.

The application is available as a pre-built Docker image that you can run on your target device:

# docker run --rm \
    --name "tflite-example" \
    -v /dev:/dev \
    -v /tmp:/tmp \
    --device-cgroup-rule "c 199:0 rmw" \
    torizon/tensorflow-lite-imx8:4

In the command above, we run a Docker container named tflite-example from the torizon/tensorflow-lite-imx8:4 image (the platform part of the image name matches your module family). The bind mounts of /dev and /tmp, together with the device cgroup rule for the character device with major number 199, give the container access to the host device files the TensorFlow Lite demo application needs to execute inference on the NPU.
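Inside the container, NPU execution with TensorFlow Lite is typically enabled through an external delegate rather than by the interpreter alone. The following is a minimal sketch of that pattern, assuming NXP's VX delegate library at /usr/lib/libvx_delegate.so; the delegate path and model file name are assumptions about a typical i.MX setup, not the demo's actual code:

```python
import os

# Assumed location of NXP's VX delegate on an i.MX-based image
DELEGATE_PATH = "/usr/lib/libvx_delegate.so"

def make_interpreter(model_path):
    # tflite_runtime ships in TensorFlow Lite container images
    import tflite_runtime.interpreter as tflite

    delegates = []
    if os.path.exists(DELEGATE_PATH):
        # Offload supported operations to the NPU via the external delegate
        delegates.append(tflite.load_delegate(DELEGATE_PATH))
    # With no delegate available, inference falls back to the CPU
    return tflite.Interpreter(model_path=model_path,
                              experimental_delegates=delegates)
```

When the delegate library is missing, the sketch silently falls back to CPU execution, which is why comparing inference times (as the demo output below does) is a useful sanity check that the NPU is actually being used.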

The output of the application will show the inference times for processing a set of images using the NPU:

Images processed: 17
Mean inference time: 0.030311079586253446
Images/s: 32.99123665834442
Std deviation: 0.0001578828954934214
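For reference, statistics like those reported above can be computed from the per-image inference times. A minimal sketch using the Python standard library (the timing values below are hypothetical, not the demo's measurements):

```python
import statistics

# Hypothetical per-image inference times in seconds
times = [0.0301, 0.0304, 0.0302, 0.0305, 0.0303]

mean_time = statistics.mean(times)

print(f"Images processed: {len(times)}")
print(f"Mean inference time: {mean_time}")
print(f"Images/s: {1.0 / mean_time}")
print(f"Std deviation: {statistics.stdev(times)}")
```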
