Select the version of your OS from the tabs below. If you don't know which version you are using, run the command cat /etc/os-release or cat /etc/issue on the board.
Remember that you can always refer to the Torizon Documentation, where you can find many relevant articles that may help you with application development.
In this article, we will show how to set up one or more CAN networks to be accessed from a containerized application on TorizonCore.
This article complies with the Typographic Conventions for Torizon Documentation.
Starting with TorizonCore 5, the following kernel modules are ready to use:
The first step in preparing our environment is to check whether our module has the desired CAN interface enabled; if not, we must enable it.
As TorizonCore 5 follows the same structure as BSP 5, the best way to check whether your module has its CAN interfaces enabled is by checking the Kernel Support section of the CAN (Linux) article. Please also take into account that your module must be compatible with Torizon.
If your module does not have CAN enabled, you'll have to enable it by using Device Tree Overlays on TorizonCore.
Assuming you have a CAN interface enabled, you can check whether the interface is present by executing the command:
# ip link show can0
4: can0: <NOARP,ECHO> mtu 16 qdisc noop state DOWN mode DEFAULT group default qlen 10
link/can
Now that we have a TorizonCore-compatible module with a CAN interface enabled, we can start the procedures for a container to make use of it.
The first thing we need to do is to create our container for interfacing with CAN. To be able to build your own containers, please follow the article Configure Build Environment for Torizon Containers.
Note: Always build your Docker images for TorizonCore on a host development computer.
For our tutorial, we'll take the Dockerfile shown below as a template for a container that interacts with CAN.
Dockerfile
ARG IMAGE_ARCH=arm64v8
# Use the parameter below for Arm 32 bits (like iMX6 and iMX7)
# ARG IMAGE_ARCH=arm32v7
FROM torizon/$IMAGE_ARCH-debian-shell:1.0

WORKDIR /home/torizon

RUN apt-get -y update && apt-get install -y \
    nano \
    python3 \
    python3-pip \
    python3-setuptools \
    git \
    iproute2 \
    can-utils \
    python3-can \
    && apt-get clean && apt-get autoremove && rm -rf /var/lib/apt/lists/*
As you may have observed, our custom container image definition installs the following packages:
From those packages, the most important ones are iproute2, because without it we are unable to configure the CAN interface, and can-utils, which allows us to send and receive CAN messages using the terminal.
With the Dockerfile for our sample CAN container image ready, you can build it on your host computer with the following command:
$ docker build -t can-torizon-sample .
When the build is complete, you can save the container image to a tar file so you can send it to your target development board:
$ docker save -o can-torizon-sample.tar can-torizon-sample
As mentioned, you can now send the saved image to your target board:
$ scp can-torizon-sample.tar torizon@X.X.X.X:/home/torizon/
In your target board with TorizonCore 5, you can load your container directly from the tar file:
# docker load -i can-torizon-sample.tar
After loading, you can check that the image is present on the system:
verdin-imx8mm-06612136:~$ docker image ls
REPOSITORY           TAG      IMAGE ID       CREATED             SIZE
can-torizon-sample   latest   5ea7a31bf664   About an hour ago   203MB
Now that we have our customized container image for CAN in our target system, we can execute it.
$ docker run -it --rm --name=can-test --net=host --cap-add="NET_ADMIN" \
-v /dev:/dev -v /tmp:/tmp -v /run/udev/:/run/udev/ \
can-torizon-sample
The main secret behind setting up and using CAN in containers on TorizonCore is the use of the flags shown in the command above: --net=host shares the host's network namespace with the container (CAN interfaces are network interfaces), --cap-add="NET_ADMIN" grants the capability needed to configure them, and the -v bind mounts expose the host's device and udev information.
Once inside the container console, you'll have to configure the CAN network. This process is very similar to setting up CAN on Linux.
We'll set up only one CAN interface, but depending on your CoM, you may have more CAN interfaces available.
First, let's configure the can0 interface with a bitrate of 500000 bps:
# ip link set can0 type can bitrate 500000
If everything went fine (no complaints!), we can bring that interface up:
# ip link set can0 up
To make sure the interface is now set up and ready, check it with the following command:
# ip link show can0
This will output something similar to this:
root@verdin-imx8mm-06612136:/home/torizon# ip link show can0
3: can0: <NOARP,UP,LOWER_UP,ECHO> mtu 16 qdisc pfifo_fast state UP mode DEFAULT group default qlen 10
link/can
Now you can start using applications that may connect to the CAN Network, or simply use the can-utils for testing.
For programming a Linux application in C/C++, you can use the SocketCAN API.
There is also a wrapper for it in Python.
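To illustrate what the SocketCAN API handles for you, the sketch below packs and unpacks raw CAN frames using only Python's standard library. The 16-byte struct can_frame layout (from linux/can.h) matches the MTU of 16 reported by ip link show can0. All function names here are our own, for illustration; send_frame needs a configured CAN interface and the NET_ADMIN-capable container described above, so treat this as a sketch, not a drop-in tool.

```python
import socket
import struct

# Layout of struct can_frame from <linux/can.h>: a 32-bit CAN ID,
# a 1-byte data length code (DLC), 3 padding bytes, and 8 data bytes.
# Total: 16 bytes, matching the MTU shown by `ip link show can0`.
CAN_FRAME_FMT = "<IB3x8s"

def pack_can_frame(can_id: int, data: bytes) -> bytes:
    """Pack a classic CAN frame (at most 8 data bytes)."""
    if len(data) > 8:
        raise ValueError("classic CAN carries at most 8 data bytes")
    return struct.pack(CAN_FRAME_FMT, can_id, len(data), data)

def unpack_can_frame(frame: bytes):
    """Return (can_id, data) from a raw 16-byte CAN frame."""
    can_id, dlc, data = struct.unpack(CAN_FRAME_FMT, frame)
    return can_id, data[:dlc]

def send_frame(interface: str, can_id: int, data: bytes) -> None:
    """Send one frame on a SocketCAN interface (needs CAN hardware and NET_ADMIN)."""
    with socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as sock:
        sock.bind((interface,))
        sock.send(pack_can_frame(can_id, data))

# On the target, inside the container, after the interface is up:
#   send_frame("can0", 0x123, b"\xde\xad\xbe\xef")  # like `cansend can0 123#deadbeef`
```

The python3-can package installed in our Dockerfile wraps this same raw socket interface behind a higher-level API.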
To test whether your CAN communication is working, use the cansend <interface> <message> command to send CAN messages on a given interface. For example:
# cansend can0 123#deadbeef
If everything is OK, you'll see a CAN message with data DEADBEEF and ID 0x123 on the bus.
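The <id>#<data> notation used by cansend can be illustrated with a small helper (our own, purely for illustration): the part before the # is the hexadecimal CAN ID, and the part after it is the hexadecimal payload.

```python
def parse_cansend_frame(frame_str: str):
    """Split a cansend-style string like "123#deadbeef" into (can_id, data)."""
    id_part, _, data_part = frame_str.partition("#")
    return int(id_part, 16), bytes.fromhex(data_part)

# parse_cansend_frame("123#deadbeef") yields CAN ID 0x123 and payload DE AD BE EF
```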
Similarly, use the candump <interface> command to start listening for incoming CAN messages on a given interface. For example:
root@verdin-imx8mm-06612136:/home/torizon# candump can0
can0 123 [4] DE AD BE EF
If everything is OK, you'll see messages appearing as other devices start sending frames on the CAN network your device is connected to.
Note: You can read more about can-utils on its project page.
As you observed in the process of getting your container for CAN, you had to manually create your Dockerfile, build the image, upload it, and load it on the target device, not to mention the subsequent steps to load your application as well.
To make this whole process easier, we have the Torizon Extension, available for both Visual Studio and Visual Studio Code.
In this particular example, let's see how to set up a Python application that interacts with CAN, using our customized container, with everything configured from within Visual Studio Code and the Torizon Extension.
Please make sure you have followed all the steps for configuring your environment for Visual Studio Code and the Torizon Extension.
With your environment all set, please create a new Python3 project, according to your target architecture.
Then, go to the Torizon Extension menu and configure your project with the following parameters:
Warning: NetworkManager does not support configuring a CAN interface. For that reason, to use CAN the application must be executed as root, so that it can use the iproute2 utility to configure the interface accordingly. Please avoid running this container in privileged mode, to prevent security breaches.
See the figure below as an example of how your Torizon Extension setup should look:
Basically, we added the required packages, as we did in the Dockerfile, plus the required flags and environment settings we used in the docker run command.
Now, go to the Explorer menu in the Visual Studio Code, open the main.py code and paste the following code:
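The article's original main.py listing is not reproduced in this excerpt. As a placeholder, here is a minimal sketch of what such an application could look like. It configures can0 through iproute2 (which is why the application must run as root, per the warning above) and then prints incoming frames in a candump-like format. All function names are our own, and the receive loop assumes a live CAN interface on the target.

```python
import socket
import struct
import subprocess

CAN_FRAME_FMT = "<IB3x8s"  # struct can_frame: id, dlc, padding, 8 data bytes

def setup_interface(interface: str = "can0", bitrate: int = 500000) -> None:
    """Configure and bring up a CAN interface via iproute2 (requires root/NET_ADMIN)."""
    subprocess.run(["ip", "link", "set", interface, "type", "can",
                    "bitrate", str(bitrate)], check=True)
    subprocess.run(["ip", "link", "set", interface, "up"], check=True)

def format_frame(interface: str, can_id: int, data: bytes) -> str:
    """Render a frame roughly the way candump does."""
    return f"{interface}  {can_id:03X}  [{len(data)}]  {data.hex(' ').upper()}"

def receive_loop(interface: str = "can0") -> None:
    """Print every frame received on the interface."""
    with socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as sock:
        sock.bind((interface,))
        while True:
            can_id, dlc, data = struct.unpack(CAN_FRAME_FMT, sock.recv(16))
            print(format_frame(interface, can_id & 0x1FFFFFFF, data[:dlc]))

# On the target you would call:
#   setup_interface("can0", 500000)
#   receive_loop("can0")
```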
To load this application container onto your target (for which the Torizon Extension asked for credentials earlier: hostname/IP, user, and password), just press F5 on your keyboard. The extension will build the container image, load it into the device, and start execution in debug mode, a process you can follow in the Output section of Visual Studio Code.
You can also add breakpoints to track parts of interest in your program.
Due to product changes on Verdin i.MX8M Mini 0055 and 0059, the CAN clock source changed from 20 MHz to 40 MHz. As a result, projects using the CAN protocol may stop working properly as of TorizonCore 5.7.0_devel_202205. A workaround is to change the CAN clock from 40 MHz back to 20 MHz using TorizonCore Builder Tool - Customizing TorizonCore Images and Device Tree Overlays on Torizon.
The following device tree overlay should work to change the clock-frequency of the clk40m node, which is the node that describes the CAN clock source. Copy the following .dts, and then use TorizonCore Builder Tool to apply and deploy it to the board.
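The original overlay source belongs at this point but is not reproduced in this excerpt. The fragment below is a sketch of what such an overlay could look like, assuming the clk40m label mentioned above and the standard clock-frequency property of a fixed-rate clock node. The compatible string is an assumption for the Verdin i.MX8M Mini; verify both against your module's device tree before applying.

```dts
/dts-v1/;
/plugin/;

/* Assumption: compatible string for the Verdin i.MX8M Mini family. */
/ {
    compatible = "toradex,verdin-imx8mm";
};

/* Describe the CAN clock source as 20 MHz instead of 40 MHz. */
&clk40m {
    clock-frequency = <20000000>;
};
```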