Version: Torizon OS 6.x.y

Torizon Sample: Image Classification with TensorFlow Lite

Introduction

TensorFlow is a popular open-source platform for machine learning. TensorFlow Lite is a set of tools for converting and running TensorFlow models on embedded devices.

Through Torizon, Toradex provides Debian Docker images and Debian packages that greatly ease the development process for several embedded computing applications. In this article, we show how you can quickly build an application with TensorFlow Lite using Python on several NXP i.MX SoCs, such as the i.MX8, i.MX8X, i.MX8MM, i.MX7, and i.MX6, with Torizon.

caution

In this particular TensorFlow Lite implementation, the sample runs on the CPU only. To use TensorFlow Lite with GPU/NPU acceleration, check the dedicated developer site page.

This article complies with the Typographic Conventions for Torizon Documentation.

Prerequisites

About this Sample Project

This example uses the TensorFlow Lite libraries with Python. It executes a slightly modified version of the sample from the official TensorFlow Lite tutorial to perform inference with an image classification model. You can quickly adapt this implementation to other machine learning models.

This example takes an image as input, resizes it, feeds it to the model, and prints the model's output (see the condensed sketch after the sample result below).

The TensorFlow Lite Image Classification example

Result:

image.jpg : Maltese dog
Inference time: 0.1774742603302002 s
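
The flow just described boils down to a few lines of Python. Here is a condensed, hypothetical sketch (the actual code is walked through in Implementation Details below; the model and label file names are taken from the sample):

import numpy as np
from PIL import Image
import tflite_runtime.interpreter as tflite

# Load the quantized MobileNet model and its labels
interpreter = tflite.Interpreter(model_path="mobilenet_v1_1.0_224_quant.tflite")
interpreter.allocate_tensors()
with open("labels_mobilenet_quant_v1_224.txt") as f:
    labels = f.readlines()

# Resize the input image to the 224x224 input expected by the model
img = Image.open("image.jpg").convert("RGB").resize((224, 224))
input_data = np.expand_dims(np.array(img), axis=0)  # shape (1, 224, 224, 3), uint8

# Run the inference and print the best-scoring label
interpreter.set_tensor(interpreter.get_input_details()[0]['index'], input_data)
interpreter.invoke()
scores = interpreter.get_tensor(interpreter.get_output_details()[0]['index'])[0]
print(labels[int(np.argmax(scores))])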

For the Impatient: Running the Sample Project in Torizon Without Building It

If you only want to see the sample project in action, open a terminal on your board, download the docker-compose file targeting your architecture, and run the containers:

# wget https://raw.githubusercontent.com/toradex/torizon-samples/bookworm/tflite/tflite-basic/docker-compose.yaml
# docker-compose up

Modifying and Building the Project from Source

Getting the Source Code of the Torizon Samples

In this article, we will explore the demonstration example available on the Toradex samples repository.

To obtain the files, clone the torizon-samples repository to your computer:

$ cd ~
$ git clone https://github.com/toradex/torizon-samples.git

Build the Sample Project


First, in your PC terminal, build the sample project:

$ cd torizon-samples/tflite/tflite-basic
$ docker build -t <your-dockerhub-username>/tflite_example .

After the build, push the image to your Dockerhub account:

$ docker push <your-dockerhub-username>/tflite_example

Remember that if you have already built the image for one architecture, you must pass the --pull flag when building for another architecture. According to the Docker documentation, --pull always attempts to pull a newer version of the base image.

Example:

$ docker build --pull .

Modify the Docker-compose

After building the container image above and pushing it to your Docker Hub account, you need to edit the docker-compose file.

Edit the image field of the example with your image repository:

docker-compose
version: "2.4"
services:
  tflite-example:
    image: your-username/tflite_example
    volumes:
      - /tmp:/tmp
      - /sys:/sys
      - /dev:/dev

After editing it, save the file and copy it to your module using scp:

$ scp <your-docker-compose-file> torizon@<your-ip>:/home/torizon

Run the Sample Project

Now enter your module's terminal using SSH:

$ ssh torizon@<target-ip>
info

For more information about SSH, please refer to SSH on Linux.

Now you can launch the sample application by using the command:

# docker-compose -f <your-docker-compose-file> up

Implementation Details

The main.py file

In our project, we implemented the code in the main.py file.

First, our script imports the TensorFlow Lite, NumPy, PIL, and time libraries:

import tflite_runtime.interpreter as tf
import numpy as np
from PIL import Image
from time import time

In the main function, it loads the image classification model:

# Load the TFLite model and allocate tensors.
interpreter = tf.Interpreter(model_path="mobilenet_v1_1.0_224_quant.tflite")
interpreter.allocate_tensors()

# Load object labels
with open('labels_mobilenet_quant_v1_224.txt') as f:
    labels = f.readlines()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
nn_input_size = input_details[0]['shape'][1]

We resize the image to the input size of the network (224x224 in this example), padding it to a square first if necessary:

# Resize the image to the input size of the model, adding padding if necessary
width, height = img.size
if width > height:
    img_resized = Image.new("RGB", (width, width))
elif width < height:
    img_resized = Image.new("RGB", (height, height))
else:
    img_resized = img.copy()  # already square, no padding needed
img_resized.paste(img)
img_resized = img_resized.resize((nn_input_size, nn_input_size))
np_img = np.array(img_resized)
input_data = np.expand_dims(np_img, axis=0)  # add the batch dimension
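
Note that no float normalization is applied: this is a quantized model, and the uint8 pixels from PIL can be fed in directly. If you adapt the sample to a different model, a quick sanity check along these lines (not part of the original sample) can catch a dtype mismatch:

# A quantized model such as this MobileNet expects uint8 input, which is what
# np.array() yields for an RGB PIL image; a float model would instead expect
# normalized float32 data.
assert input_data.dtype == input_details[0]['dtype']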

Finally, we set the input tensor, execute the inference, and print the result:

# Set the input tensor
interpreter.set_tensor(input_details[0]['index'], input_data)

# Execute the inference
t1 = time()
interpreter.invoke()
t2 = time()

# Find the highest score in the result array and print the corresponding label
output_data = interpreter.get_tensor(output_details[0]['index'])
print(filename, ':', labels[np.where(output_data[0] == np.amax(output_data[0]))[0][0]], flush=True)
print('Inference time:', t2 - t1, 's')
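
The np.where() expression above returns the index of the first occurrence of the maximum score. An equivalent, arguably more readable variant (not in the sample) uses np.argmax:

# np.argmax also returns the index of the first occurrence of the maximum,
# so this is equivalent to the np.where(...)[0][0] expression above
top = int(np.argmax(output_data[0]))
print(filename, ':', labels[top], flush=True)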

The Docker Compose (yaml) file

This file configures the application's services. It tells the Docker runtime which containers the system will run, sets privileges, and configures other options.

This example uses a very simple Docker Compose file to start the demo application.

The Dockerfile

In this section, we go through some relevant snippets of the Dockerfile.

Toradex provides several Debian Containers for Torizon. For this demonstration, we use the base image torizon/debian, which is available for both 32-bit and 64-bit architectures.

If you are using an i.MX8 (arm64v8 CPU) computer-on-module (COM), use --platform=linux/arm64 and set the variable PKG_ARCH=aarch64. Otherwise, use --platform=linux/arm and set PKG_ARCH=armv7l.

For 64-bit (arm64) modules, the beginning of the Dockerfile looks like this:

ARG ARCH_ARG=linux/arm64
ARG PKG_ARCH=aarch64
FROM --platform=$ARCH_ARG torizon/debian:3-bookworm
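
For 32-bit (arm) modules, following the values given above, the same lines read:

ARG ARCH_ARG=linux/arm
ARG PKG_ARCH=armv7l
FROM --platform=$ARCH_ARG torizon/debian:3-bookworm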

TensorFlow Lite Libraries

As recommended in the TensorFlow Lite guide for Python, we add the TensorFlow Lite Debian package feed and install TensorFlow Lite with apt-get.

The same document explains how to install TensorFlow Lite with pip instead; our Dockerfile contains a commented-out alternative for that, in case you have a strong reason to use it.

We also install Python in the example Dockerfile.
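
As a rough sketch of what those Dockerfile steps can look like, assuming the Coral Debian feed described in the TensorFlow Lite Python quickstart (the actual sample Dockerfile may differ in its details):

# Add the TensorFlow Lite Debian feed and install the Python runtime,
# plus the NumPy and PIL dependencies used by main.py
RUN apt-get update && apt-get install -y --no-install-recommends \
        curl gnupg python3 \
    && curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - \
    && echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" \
        > /etc/apt/sources.list.d/coral-edgetpu.list \
    && apt-get update \
    && apt-get install -y --no-install-recommends python3-tflite-runtime python3-numpy python3-pil \
    && rm -rf /var/lib/apt/lists/*

# Alternative: install the runtime from PyPI instead of the Debian feed
# RUN pip3 install tflite-runtime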

Additional Considerations

Additional Resources

Getting Started With Machine Learning Using Toradex and NXP® eIQ

This workshop demonstrates how NXP's eIQ framework can be used to load a model onto the system and run inference on a sample dataset, using a Toradex i.MX8-based SoM running Torizon.


