
Torizon Sample: Image Classification with TensorFlow Lite

Article updated on 30 Sep 2020

Select the version of your OS from the tabs below. If you don't know the version you are using, run the command cat /etc/os-release or cat /etc/issue on the board.
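For example, on the board's terminal:

# cat /etc/os-release
# cat /etc/issue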

Torizon 5.0.0

Introduction

TensorFlow is a popular open-source platform for machine learning. TensorFlow Lite is a set of tools to convert and run TensorFlow models on embedded devices.

Through Torizon, Toradex provides Debian Docker images and Debian packages that greatly ease the development process for several embedded computing applications. In this article, we show how you can quickly build an application with TensorFlow Lite using Python on distinct NXP i.MX SoCs, such as the i.MX8, i.MX8X, i.MX8MM, i.MX7, and i.MX6, with Torizon.

Attention: This TensorFlow Lite implementation currently executes on the CPU only. We plan to provide TensorFlow Lite with GPU/NPU acceleration. Contact us if you need more information.

This article complies with the Typographic Conventions for Torizon Documentation.

Prerequisites

About this Sample Project

This example uses the TensorFlow Lite libraries with Python. It executes a slightly modified version of the sample from the official TensorFlow Lite tutorial to perform inference using an image classification model. You can quickly adapt this sample implementation to other machine learning models.

This example takes an image as input, resizes it, feeds it to the model, and prints the model's output.


The TensorFlow Lite Image Classification example
Result:

image.jpg : Maltese dog
Inference time: 0.1774742603302002 s

For the Impatient: Running the Sample Project in Torizon Without Building It

If you only want to see the sample project in action, download the docker-compose file targeting your architecture and run the containers from your board's terminal:

# wget https://raw.githubusercontent.com/toradex/torizon-samples/bullseye/tflite/docker-compose.yaml
# docker-compose up

Modifying and Building the Project from Source

Getting the Source Code of the Torizon Samples

In this article, we will explore the demonstration example available in the Toradex samples repository.

To obtain the files, clone the torizon-samples repository to your computer:

$ cd ~
$ git clone https://github.com/toradex/torizon-samples.git

Build the Sample Project

Select your SoM from the boxes below:

First, in your PC terminal, build the sample project:

$ cd torizon-samples/tflite
$ docker build -t <your-dockerhub-username>/tflite_example .

After the build, push the image to your Dockerhub account:

$ docker push <your-dockerhub-username>/tflite_example

First, in your PC terminal, build the sample project:

$ cd torizon-samples/tflite
$ docker build --build-arg ARCH_ARG=linux/arm --build-arg PKG_ARCH=armv7l -t <your-dockerhub-username>/tflite_example .

After the build, push the image to your Dockerhub account:

$ docker push <your-dockerhub-username>/tflite_example

Please remember that if you have already built your image for one architecture, you need to pass the --pull flag to build for another architecture. According to the Docker documentation, --pull always attempts to pull a newer version of the image.

Example:

$ docker build --pull .

Modify the Docker-compose

After building the container image from the Dockerfile above and pushing it to your Dockerhub account, you need to edit the docker-compose file.

Edit the image field of the example with your image repository:

docker-compose.yaml
version: "2.4"
services:
  tflite-example:
    image: your-username/tflite_example
    volumes:
      - /tmp:/tmp
      - /sys:/sys
      - /dev:/dev

After editing it, save and send this file to your module using scp:

$ scp <your-docker-compose-file> torizon@<your-ip>:/home/torizon

Run the Sample Project

Now enter your module's terminal using SSH:

$ ssh torizon@<target-ip>

Note: For more information about SSH, please refer to SSH on Linux.

Now you can launch the sample application by using the command:

# docker-compose -f <your-docker-compose-file> up

Implementation Details

The main.py file

In our project, we implemented the code in the main.py file.

First, our script imports the TensorFlow Lite, NumPy, and PIL libraries, as well as the time() function, which is used later to measure the inference time:

import tflite_runtime.interpreter as tf
import numpy as np
from PIL import Image
from time import time  # used below to measure the inference time

In the main function, it loads the Image Classification model:

# Load the TFLite model and allocate tensors.
interpreter = tf.Interpreter(model_path="mobilenet_v1_1.0_224_quant.tflite")
interpreter.allocate_tensors()

# Load the object labels, one label per line
with open('labels_mobilenet_quant_v1_224.txt') as f:
    labels = f.readlines()

# Get the input and output tensor details.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
nn_input_size = input_details[0]['shape'][1]
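If you want to check what the model expects before feeding it data, the dictionaries returned by get_input_details() can be printed directly. Below is a minimal standalone sketch; the values in the comments are typical for this quantized MobileNet model and are illustrative, not output captured from the sample:

import tflite_runtime.interpreter as tf

interpreter = tf.Interpreter(model_path="mobilenet_v1_1.0_224_quant.tflite")
interpreter.allocate_tensors()

details = interpreter.get_input_details()[0]
print(details['shape'])  # typically [1, 224, 224, 3] for this model (illustrative)
print(details['dtype'])  # typically numpy.uint8 for the quantized model (illustrative)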

We resize the image to the input size of the network (224x224 in this example):

# Resize the image to the input size of the model, padding it to a square
# first (if necessary) so that the aspect ratio is preserved
width, height = img.size
if width > height:
    img_resized = Image.new("RGB", (width, width))
elif width < height:
    img_resized = Image.new("RGB", (height, height))
else:
    img_resized = Image.new("RGB", (width, height))  # already square
img_resized.paste(img)
img_resized = img_resized.resize((nn_input_size, nn_input_size))
np_img = np.array(img_resized)
input_data = [np_img]

And finally, we set the input tensor, execute the inference, and print the result:

# Set the input tensor
interpreter.set_tensor(input_details[0]['index'], input_data)

# Execute the inference, measuring its duration
t1 = time()
interpreter.invoke()
t2 = time()

# Find the highest score in the result array and print the corresponding label
output_data = interpreter.get_tensor(output_details[0]['index'])
print(filename, ':', labels[np.where(output_data[0] == np.amax(output_data[0]))[0][0]], flush=True)
print('Inference time:', t2 - t1, 's')
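As a side note, the highest-scoring index can also be obtained with np.argmax, which is more compact than the np.where lookup above. Below is a minimal equivalent sketch reusing the variable names from the snippet above (this is not part of the original main.py):

import numpy as np

# Equivalent lookup of the top-scoring class with np.argmax.
# output_data, labels, and filename are the variables from the snippet above.
top_index = int(np.argmax(output_data[0]))
print(filename, ':', labels[top_index].strip(), flush=True)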

The Docker Compose (yaml) file

This file configures the application's services. It tells the Docker runtime which containers the system will run, sets privileges, and configures other options.

This example uses a very simple Docker Compose file to start the demo application.

The Dockerfile

In this section, we go through some relevant snippets of the Dockerfile.

Toradex provides several Debian containers for Torizon. For this demonstration, we use the base image torizon/debian, which is available for both 32-bit and 64-bit architectures.

If you are using an i.MX8 (arm64v8 CPU) computer-on-module (COM), set the build arguments ARCH_ARG=linux/arm64 and PKG_ARCH=aarch64. Otherwise, set ARCH_ARG=linux/arm and PKG_ARCH=armv7l.

Choose from the tabs:

ARG ARCH_ARG=linux/arm64
ARG PKG_ARCH=aarch64
FROM --platform=$ARCH_ARG torizon/debian:2-bullseye

TensorFlow Lite libraries

As recommended in the TensorFlow Lite guide for Python, we add the TensorFlow Debian feed and install TensorFlow Lite with apt-get.

The same document also explains how to install TensorFlow Lite with pip instead; our Dockerfile contains a commented-out alternative that you can use if you have a strong reason to do so.

ARG ARCH_ARG=linux/arm
ARG PKG_ARCH=armv7l
FROM --platform=$ARCH_ARG torizon/debian:2-bullseye

TensorFlow Lite libraries

As recommended in the TensorFlow Lite guide for Python, we add the TensorFlow Debian feed and install TensorFlow Lite with apt-get.

The same document also explains how to install TensorFlow Lite with pip instead; our Dockerfile contains a commented-out alternative that you can use if you have a strong reason to do so.
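For reference, such an apt-based installation step may look like the sketch below. The feed URL and the package name are assumptions based on the TensorFlow Lite Python guide at the time of writing, not lines copied from the sample's Dockerfile:

# Sketch only: the feed URL and package name follow the TensorFlow Lite
# Python guide and may differ from the sample's actual Dockerfile.
RUN wget -qO- https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - \
    && echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" \
       > /etc/apt/sources.list.d/coral-edgetpu.list \
    && apt-get update \
    && apt-get install -y python3-tflite-runtime \
    && rm -rf /var/lib/apt/lists/*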

We also install Python in the example Dockerfile.

Additional Considerations

Torizon 4.0.0

Introduction

TensorFlow is a popular open-source platform for machine learning. TensorFlow Lite is a set of tools to convert and run TensorFlow models on embedded devices.

Through Torizon, Toradex provides Debian Docker images and Debian packages that greatly ease the development process for several embedded computing applications. In this article, we show how you can quickly build an application with TensorFlow Lite using Python on distinct NXP i.MX SoCs, such as the i.MX8, i.MX8X, i.MX8MM, i.MX7, and i.MX6, with Torizon.

This article complies with the Typographic Conventions for Torizon Documentation.

Prerequisites

About this Sample Project

This example uses the TensorFlow Lite libraries with Python. It executes a slightly modified version of the sample from the official TensorFlow Lite tutorial to perform inference using an image classification model. You can quickly adapt this sample implementation to other machine learning models.

This example takes an image as input, resizes it, feeds it to the model, and prints the model's output.


The TensorFlow Lite Image Classification example

Result:

image.jpg : Maltese dog
Inference time: 0.1774742603302002 s

For the Impatient: Running the Sample Project in Torizon Without Building It

If you only want to see the sample project in action, download the docker-compose file targeting your architecture and run the containers from your board's terminal:

# wget https://github.com/toradex/torizon-samples/raw/master/tflite/docker-compose.yaml
# docker-compose up

Modifying and Building the Project from Source

Getting the Source Code of the Torizon Samples

In this article, we will explore the demonstration example available in the Toradex samples repository.

To obtain the files, clone the torizon-samples repository to your computer:

$ cd ~
$ git clone https://github.com/toradex/torizon-samples.git

Build the Sample Project

Select your SoM from the boxes below:

First, in your PC terminal, build the sample project:

$ cd torizon-samples/tflite
$ docker build -t <your-dockerhub-username>/tflite_example .

After the build, push the image to your Dockerhub account:

$ docker push <your-dockerhub-username>/tflite_example

First, in your PC terminal, build the sample project:

$ cd torizon-samples/tflite
$ docker build --build-arg ARCH_ARG=linux/arm --build-arg PKG_ARCH=armv7l -t <your-dockerhub-username>/tflite_example .

After the build, push the image to your Dockerhub account:

$ docker push <your-dockerhub-username>/tflite_example

Please remember that if you have already built your image for one architecture, you need to pass the --pull flag to build for another architecture. According to the Docker documentation, --pull always attempts to pull a newer version of the image.

Example:

$ docker build --pull .

Modify the Docker-compose

After building the container image from the Dockerfile above and pushing it to your Dockerhub account, you need to edit the docker-compose file.

Edit the image field of the example with your image repository:

docker-compose.yaml
version: "2.4"
services:
  tflite-example:
    image: your-username/tflite_example
    volumes:
      - /tmp:/tmp
      - /sys:/sys
      - /dev:/dev

After editing it, save and send this file to your module using scp:

$ scp <your-docker-compose-file> torizon@<your-ip>:/home/torizon

Run the Sample Project

Now enter your module's terminal using SSH:

$ ssh torizon@<target-ip>

Note: For more information about SSH, please refer to SSH on Linux.

Now you can launch the sample application by using the command:

# docker-compose -f <your-docker-compose-file> up

Implementation Details

The main.py file

In our project, we implemented the code in the main.py file.

First, our script imports the TensorFlow Lite, NumPy, and PIL libraries, as well as the time() function, which is used later to measure the inference time:

import tflite_runtime.interpreter as tf
import numpy as np
from PIL import Image
from time import time  # used below to measure the inference time

In the main function, it loads the Image Classification model:

# Load the TFLite model and allocate tensors.
interpreter = tf.Interpreter(model_path="mobilenet_v1_1.0_224_quant.tflite")
interpreter.allocate_tensors()

# Load the object labels, one label per line
with open('labels_mobilenet_quant_v1_224.txt') as f:
    labels = f.readlines()

# Get the input and output tensor details.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
nn_input_size = input_details[0]['shape'][1]
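If you want to check what the model expects before feeding it data, the dictionaries returned by get_input_details() can be printed directly. Below is a minimal standalone sketch; the values in the comments are typical for this quantized MobileNet model and are illustrative, not output captured from the sample:

import tflite_runtime.interpreter as tf

interpreter = tf.Interpreter(model_path="mobilenet_v1_1.0_224_quant.tflite")
interpreter.allocate_tensors()

details = interpreter.get_input_details()[0]
print(details['shape'])  # typically [1, 224, 224, 3] for this model (illustrative)
print(details['dtype'])  # typically numpy.uint8 for the quantized model (illustrative)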

We resize the image to the input size of the network (224x224 in this example):

# Resize the image to the input size of the model, padding it to a square
# first (if necessary) so that the aspect ratio is preserved
width, height = img.size
if width > height:
    img_resized = Image.new("RGB", (width, width))
elif width < height:
    img_resized = Image.new("RGB", (height, height))
else:
    img_resized = Image.new("RGB", (width, height))  # already square
img_resized.paste(img)
img_resized = img_resized.resize((nn_input_size, nn_input_size))
np_img = np.array(img_resized)
input_data = [np_img]

And finally, we set the input tensor, execute the inference, and print the result:

# Set the input tensor
interpreter.set_tensor(input_details[0]['index'], input_data)

# Execute the inference, measuring its duration
t1 = time()
interpreter.invoke()
t2 = time()

# Find the highest score in the result array and print the corresponding label
output_data = interpreter.get_tensor(output_details[0]['index'])
print(filename, ':', labels[np.where(output_data[0] == np.amax(output_data[0]))[0][0]], flush=True)
print('Inference time:', t2 - t1, 's')
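As a side note, the highest-scoring index can also be obtained with np.argmax, which is more compact than the np.where lookup above. Below is a minimal equivalent sketch reusing the variable names from the snippet above (this is not part of the original main.py):

import numpy as np

# Equivalent lookup of the top-scoring class with np.argmax.
# output_data, labels, and filename are the variables from the snippet above.
top_index = int(np.argmax(output_data[0]))
print(filename, ':', labels[top_index].strip(), flush=True)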

The Docker Compose (yaml) file

This file configures the application's services. It tells the Docker runtime which containers the system will run, sets privileges, and configures other options.

This example uses a very simple Docker Compose file to start the demo application.

The Dockerfile

In this section, we go through some relevant snippets of the Dockerfile.

Toradex provides a basic Debian image on its Dockerhub page. If you are using an i.MX8 (arm64v8 CPU) computer-on-module (COM), base your image on torizon/arm64v8-debian-base. Otherwise, use torizon/arm32v7-debian-base. These base images already contain the Toradex package feed.

Choose from the tabs:

FROM torizon/arm64v8-debian-base

TensorFlow Lite libraries

The following Dockerfile lines of the example take the .whl file from the TensorFlow Lite repository and use pip3 to install the required packages:

ENV TFLITE_LINK https://dl.google.com/coral/python/tflite_runtime-2.1.0.post1-cp37-cp37m-linux_aarch64.whl
RUN wget $TFLITE_LINK && pip3 install tflite_runtime-* && rm tflite_runtime-*
FROM torizon/arm32v7-debian-base

TensorFlow Lite libraries

The following Dockerfile lines of the example take the .whl file from the TensorFlow Lite repository and use pip3 to install the required packages:

ENV TFLITE_LINK https://dl.google.com/coral/python/tflite_runtime-2.1.0.post1-cp37-cp37m-linux_armv7l.whl
RUN wget $TFLITE_LINK && pip3 install tflite_runtime-* && rm tflite_runtime-*

We also install Python in the example Dockerfile.

Additional Considerations