
Torizon Best Practices Guide

 

Article updated at 04 Dec 2020

Select the version of your OS from the tabs below. If you don't know the version you are using, run the command cat /etc/os-release or cat /etc/issue on the board.



Remember that you can always refer to the Torizon Documentation, where you can find many relevant articles that may help you during application development.

Torizon 5.0.0

Introduction

Torizon, as explained in the TorizonCore Technical Overview, provides the container runtime and Debian Containers for Torizon, among others listed on the List of Container Images for Torizon, to simplify the developer's life and keep the development model close to that of desktop or server applications. Even though we follow desktop and server application standards as much as possible, some requirements inherent to embedded systems development diverge from them. One typical example is hardware access.

Torizon also has architecture-specific aspects, like graphics and UI, that you should consider when developing a new application or porting an existing one. In this chapter, we will discuss some of those potential issues and how to handle them.

Development Environment

Developing applications for Torizon can be done using command-line tools or, as we recommend, the Visual Studio Extension for Torizon and the Visual Studio Code Extension for Torizon. This article contains best practices that apply to any situation in which you use Torizon. Nevertheless, we focus on explaining how to do things with the Visual Studio Code Extension for Torizon.

This article complies with the Typographic Conventions for Torizon Documentation.

Running in a Container

Running your application in a container means that it will run in a "sandboxed" environment.

This fact may limit its interaction with other components in the system. See below some examples of things that may not work when your application is running inside a container:

  • Changing configuration settings
  • Storing data permanently
  • Accessing hardware devices directly

There are solutions for many of those scenarios and, usually, those aren't too hard to apply. In this article, we'll discuss some of them, keeping in mind best practices.

These best practices help you leverage the extra layer of security provided by containers. Sometimes we may refer to approaches that seem less secure but are not in fact. Keep in mind that, in a regular non-containerized embedded system, the application would most likely have access to the entire root filesystem. On the other hand, if you use a container from the community whose software you don't fully trust, you may consider splitting critical logic into a separate, smaller container that you have more control over and trust.

Prerequisites

Storing Data Permanently

Containers are transient by nature. Since a container can be destroyed and re-created, storing data inside the container's filesystem is not a good idea: the data disappears when the container is removed, it cannot be shared with other containers, and it is hard to manage. See how to overcome this in the next sub-sections.

Bind Mounts

You can mount a directory from the host filesystem inside the container to permanently store data on the SoM's flash memory. This technique is known as a bind mount.

You may need to change or configure your application to store data in that location (or mount the folder where the application expects it).
Also, consider that the files' UID/GID will be shared with the underlying host OS, so using coherent IDs may help. Debian Containers for Torizon use the same users/UIDs and groups/GIDs as the base OS, keeping file permissions consistent.
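
As a reference, a bind mount passed on the command line looks like the sketch below. The host directory /home/torizon/app-data and the container path /appdata are placeholders for this example; adapt them to where your application expects its data:

# docker run -it --rm -v /home/torizon/app-data:/appdata torizon/debian:2-bullseye
## echo "this file survives container removal" > /appdata/settings.txt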

Volumes

You can also use docker volumes instead of providing an explicit path on the local filesystem. The runtime manages volumes created via the docker volume command or using docker-compose.
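
A minimal command-line sketch, assuming a hypothetical volume named app-data; Docker keeps the volume's contents in its own data directory on the host, so you don't have to pick a path yourself:

# docker volume create app-data
# docker run -it --rm -v app-data:/appdata torizon/debian:2-bullseye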

Bind Mounts and Volumes on VS Code

In the Visual Studio Code extension, you can add additional bind mounts and volumes:

  • Click the + icon next to the volumes list in the configurations view.
  • Insert the path of either:
    • The folder on the host filesystem.
    • Or the name of your docker volume.
  • Insert the path where you want it to be mounted inside the container.
    • By default, volumes are mounted as read-write. If you want to mount one read-only, add ,ro after the container's path.
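
If you manage your containers with a docker-compose file instead, the same configuration can be expressed roughly as in the sketch below. The service name, image name, paths, named volume and compose file format version (2.4) are assumptions for this example:

version: "2.4"
services:
  myapp:
    image: myregistry/myapp
    volumes:
      - /home/torizon/app-data:/appdata
      - app-data:/var/lib/myapp:ro
volumes:
  app-data: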

Connectivity

By default, containers connect to Docker's "bridge" network. This configuration allows containers to communicate with the outside world with no restrictions. However, it prevents them from being accessible from the outside. We cover three possible networking configurations, each recommended for specific situations:

  • Expose ports: used when your application needs to receive an inbound connection from outside the container on a specific port or a set of ports. Attackers cannot access ports that are not explicitly exposed, and you can use the same port number on several containers, even if the host uses that port number for another application.
  • Private Networks: used when you need to communicate between containers while preventing the external world from accessing those communication endpoints. For example, if you have a container exposing a REST API (backend) to a container that implements a web UI (frontend).
  • Host Network: Used when you need to access the network with the same IP and configuration used by processes running natively on the host OS. This method is the least recommended since you expose the entire container networking to the outside, and you should only choose it if it is really required.

Exposed Ports - Inbound Communication

It's possible to enable communication on specific ports using the ports setting. To expose a port, you have to:

  • Click on the + sign next to ports.
  • Add the port information in the format: <port number>/<tcp/udp>, for example 8080/tcp.
  • Add a matching port number that should be used on the host or leave it empty to let the runtime assign a free port number.
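
On the command line, the equivalent of this setting is the -p flag of docker run. The sketch below uses hypothetical container and image names:

# docker run -d --name myapp -p 8080:8080/tcp myregistry/myapp

Or leave the host port out so the runtime assigns a free one, and query it afterwards:

# docker run -d --name myapp -p 8080/tcp myregistry/myapp
# docker port myapp 8080/tcp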

Private Networks - Inter-container Communication

There are scenarios where you may want your containers communicating only to each other on the same device. You can do it by creating a private docker network.
Containers on the same private network can access each other with no restrictions and without needing to explicitly expose ports. This remark is important because you can create as many networks as you want and have one container on more than one private network.

For example, you may have a container exposing a REST API (backend) to a container that implements a web UI (frontend). The backend and frontend will be on the same private network. The frontend will also be on the bridge network, exposing the port used to serve webpages.

To use a private network, you have to create it on the device using the docker network create command or by defining it in your docker-compose file. Then you can add your container to that network:

  • Press the + sign next to networks.
  • Type your network name:
    • If using docker network create <network>, type the <network> name
    • If using a docker-compose file in your settings, then you have to prepend #%application.id%#_ to the network name you set in the compose file.
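
As a command-line reference, the sketch below creates a hypothetical private network and attaches a backend and a frontend container to it; only the frontend publishes a port to the outside. Containers on the same user-defined network can reach each other by container name:

# docker network create my-private-net
# docker run -d --name backend --network my-private-net myregistry/backend
# docker run -d --name frontend --network my-private-net -p 80:8080/tcp myregistry/frontend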

Host Network: Using the Host Network Inside a Container

For some kinds of applications, typically those that need to use low-level UDP-based protocols, you may need to access the network using the same IP and configuration used by processes running natively on the host OS.
In this case, you should enable host network mode.

  • Select the + sign next to extraparms.
  • Input network_mode as the key and host as the value.

In host mode, all the ports used by applications running in your container are exposed directly on the host's network interfaces. This also means that you won't be able to expose services on ports already used by the host (for example, port 22 for SSH).
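
For reference, the same effect on the command line is obtained with the --network host flag (or network_mode: host in a docker-compose file):

# docker run -it --rm --network host torizon/debian:2-bullseye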

Hardware Access

Container technology is very popular in domains where direct access to the hardware is usually forbidden, like servers and cloud-based solutions. On the other hand, software running on an embedded device will probably need to access hardware devices, for example, to collect data from sensors or to drive external machinery.

Docker provides ways to access specific hardware devices from inside a container, and often one needs to grant access from both a permission and a namespace perspective.

From a permission perspective, there are two ways to grant access to a container. You will hardly ever worry about those since they are either well documented or abstracted by our IDE Extensions. They are presented below:

  • Privileged: running a container in privileged mode is almost always unnecessary, and when you do it, you lose the protection layer inherent to using containers: the entire host system becomes accessible from the container. You can most likely work around it using more granular capabilities and control group rules.
  • Capabilities: using the --cap-add and --cap-drop flags, you can add or drop capabilities such as the ability to modify network interfaces. Learn more about it in the Docker run reference - Runtime privilege and Linux capabilities.
  • Control group (cgroup) rules: those rules give more granular access to some hardware components, solving the permission issue. We make use of those when it is required, but the VS Code extension abstracts them. If you've read our articles Debian Containers for Torizon or Using Multiple Containers with TorizonCore that are focused on the command-line, you might have already come across those.

Warning: It's strongly advised to avoid running your container in privileged mode at all costs, since it can lead to security flaws. Please look for ways to expose the required resource through cgroup rules, bind mounts or devices. If you are facing issues setting up your resource from a container, feel free to contact us at the Toradex Community.
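
As a reference for the capabilities approach, the sketch below grants a single, specific capability instead of full privileges; NET_ADMIN (the ability to configure network interfaces) is just an illustrative choice:

# docker run -it --rm --cap-add NET_ADMIN torizon/debian:2-bullseye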

From a namespace perspective, there are also two ways to grant access to a container. You must give them access on a per-peripheral basis.

  • Bind Mounts: you can share a file or a directory from the host to the container. Since devices are abstracted as files on Linux, bind mounts can expose devices inside the containers. When using pluggable devices, you might not know the exact mount point in advance and thus bind mount an entire directory. You can learn more about bind mounts in a previous section of this article about data storage.
  • Devices: this is a more granular method for adding devices to containers, and it is preferred over bind mounts. It is better for security since you avoid exposing, for instance, a storage device that may be erased by an attacker.

Torizon uses coherent naming for the most commonly used hardware interfaces. For instance, a family-specific interface will have a name corresponding to the family name used in the datasheet and in tables across our other articles. This helps you write containers and applications that are pin-compatible in software if you switch the computer on module for another.

Inside the Torizon base containers, there is a user called torizon mapped to several groups associated with hardware devices, including dialout, audio, video, gpio, i2cdev, spidev, pwm and input. That means that, when using the torizon user, it's not necessary to be root to access hardware interfaces like sound cards, displays, serial ports, GPIO controllers, etc. So when developing your application to run inside a container, run it as the torizon user so that access to most hardware interfaces works without requiring any additional privileges.

Sharing a Device Between Host and Container on VS Code

To share a device from the host OS into a container, you can:

  • Press the + icon next to devices.
  • Provide the full path of the device (e.g. /dev/colibri-i2c1).
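
The command-line equivalent is the --device flag of docker run, sketched below reusing the example device path from the steps above:

# docker run -it --rm --device /dev/colibri-i2c1 torizon/debian:2-bullseye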

TorizonCore User Groups

The device will be mapped to the same path inside the container and use the same access rights specified for the host. Since the default user on the Toradex containers is torizon, and you should avoid using root as much as possible to limit potential security issues, you may have to add your user to specific groups to enable access for different kinds of devices. Those groups are mirrored between the host OS and our Debian-based container images, making things more intuitive.

The groups that are currently supported are listed in the table below:

group   | description
gpio    | allow access to the GPIO character device (/dev/gpiochip*), used by libgpiod
pwm     | allow access to PWM interfaces
dialout | allow access to UART interfaces
i2cdev  | allow access to I2C interfaces
spidev  | allow access to SPI interfaces
audio   | allow access to audio devices
video   | allow access to graphics and backlight interfaces
input   | allow access to input devices

Adding User to Groups on VS Code

To add the torizon user - or any other user - to groups, you can:

  • Expand the Custom Properties
  • Press the Edit button on the property buildcommands
  • Add the command RUN usermod -a -G <group 1>,<group 2>,... torizon
    • For example, to add access to GPIO and UART: RUN usermod -a -G gpio,dialout torizon
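
For reference, a buildcommands entry like that ends up as a plain Dockerfile instruction. A minimal Dockerfile sketch of the same idea, combined with running the application as the torizon user as recommended earlier (the base image tag is an assumption; keep the one your project already uses):

FROM torizon/debian:2-bullseye
RUN usermod -a -G gpio,dialout torizon
USER torizon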

Sharing a Pluggable Device

Suppose your application needs to access devices that may be plugged/unplugged at runtime. In this situation, the static mapping will not work. There is no way, at the moment, to map a device into a running container. If you need to access this kind of device, the only solution is to mount the /dev folder as a volume.
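
A minimal sketch of that approach, assuming USB devices are the ones being hot-plugged (major number 189 corresponds to USB device nodes in devices.txt); the cgroup rule part is explained in the next section:

# docker run -it --rm -v /dev:/dev --device-cgroup-rule='c 189:* rmw' torizon/debian:2-bullseye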

Hardware Access through Control Group Rules (cgroup)

For devices that are not exposed through user groups, you can grant access through cgroup by adding device_cgroup_rules to the extraparms in the Torizon Extension.

Each device that is handled through cgroup is referenced (or whitelisted) by the following fields:

  • Type: a (all), c (char), or b (block). 'all' means it applies to all types and all major and minor numbers
  • Major and Minor: Major and minor are either an integer or * for all. They reference the code for the device being whitelisted. The code for each device can be verified in the devices list at devices.txt in Kernel Documentation.
  • Access: a composition of r (read), w (write), and m (mknod).

You can check more details about Device Whitelist Controller in the Kernel Documentation.

The extraparms setting takes key-value pairs whose syntax must match the Python API, as specified in the Docker SDK for Python.

See for example the following addition for extraparms for GPU access:

  • key: device_cgroup_rules
  • value: [ "c 199:* rmw" , "c 226:* rmw" ]

In case you want to check the major and minor numbers of a given device at /dev, you can do that by typing ls -l /dev/<device>.

See below a few examples:

# ls -l /dev/tty0
crw--w---- 1 root tty 4, 0 Oct  6 09:32 /dev/tty0

# ls -l /dev/tty7
crw--w---- 1 root root 4, 7 Feb 23 19:43 /dev/tty7

# ls -l /dev/input/
total 0
drwxr-xr-x 2 root root      80 Oct  6 09:32 by-path
crw-rw---- 1 root input 13, 64 Oct  6 09:32 event0
crw-rw---- 1 root input 13, 65 Oct  6 09:32 event1

Taking for example the devices presented above, you could register them in the extraparms at device_cgroup_rules as:

  • For tty0: [ "c 4:0 rmw" ]
  • For tty7: [ "c 4:7 rmw" ]
  • For /dev/input devices: [ "c 13:* rmw" ]
  • Supposing you want to add them all together: [ "c 4:0 rmw", "c 4:7 rmw", "c 13:* rmw"]
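
If you are using a docker-compose file instead of extraparms, the same rules map to the device_cgroup_rules key. A sketch with a placeholder service and image, reusing the rules above (the compose file format version 2.4 is assumed):

version: "2.4"
services:
  myapp:
    image: myregistry/myapp
    device_cgroup_rules:
      - 'c 4:0 rmw'
      - 'c 4:7 rmw'
      - 'c 13:* rmw'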

Exceptions: When You Must Run as Root Inside the Container

There may be situations that cannot be easily worked around, and you need to run the application inside the container as root. However, notice that you should still avoid running the container as privileged, especially in this scenario. Here are some examples:

  • How to Use CAN on TorizonCore: the CAN interface is abstracted as a network interface. Since NetworkManager does not support configuring CAN interfaces, you must use iproute2 and therefore run as root.

Graphical User Interface (GUI)

The default choice for the Graphical User Interface (GUI) in Torizon is the Wayland protocol. It requires a compositor that manages the screen and input devices and allows clients to access "surfaces" on the screen.

Torizon provides a container with the Weston compositor and clear instructions about how to run it on Debian Containers for Torizon. There is also a dedicated article for Working with Weston on TorizonCore that covers common customization options used by our customers.

Starting Weston Automatically on Your VS Code Project

Some platforms (Qt QML/Widgets, UNO, debian-wayland) are already configured to start Weston before starting your application and also provide the right environment for the different UI toolkits if needed. If you are starting from one of the "console" platforms, you may need to add some settings to support Wayland.

Suppose you want to run it automatically when debugging your application. In that case, you can create a docker-compose file inside your appconfig folder and add its name to the dockercomposefile configuration property.
You can find a sample of such a file on Debian Containers for Torizon.

Your container must communicate with the Weston compositor. This is done using a socket that, by default, is created under /tmp. Mounting /tmp as a volume in your container will let your application interact with Weston.
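
A minimal docker-compose sketch focusing only on sharing /tmp between the Weston container and your application container. The application service and image are placeholders, and the Weston image name/tag as well as the additional options it needs (devices, cgroup rules, environment) should be taken from Debian Containers for Torizon:

version: "2.4"
services:
  weston:
    # Image tag and the extra options Weston requires are documented on
    # Debian Containers for Torizon; the values here are assumptions.
    image: torizon/weston:2
    volumes:
      - /tmp:/tmp
  myapp:
    image: myregistry/myapp
    volumes:
      - /tmp:/tmp
    depends_on:
      - weston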

GUI Frameworks/Toolkits and Wayland

Many commonly used GUI toolkits support Wayland as a rendering back end. Usually, you need to set an environment variable that selects it.
In the following table, you can find configuration settings for some popular toolkits:

Toolkit | Env setting
GTK3    | GDK_BACKEND="wayland"
Qt5     | QT_QPA_PLATFORM="wayland"
SDL2    | SDL_VIDEODRIVER="wayland"

To set an environment variable:

  • Click on env under Custom Properties in the configuration view
  • Add the ENV statement as you would do in a Dockerfile: ENV GDK_BACKEND="wayland"

Remote Access Using VNC or RDP

By using the Weston-based Debian Containers for Torizon, you can enable VNC or RDP remote access by simply adding an environment variable. Learn more on Remote Access the TorizonCore GUI Using VNC or RDP.

Reboot and Suspend/Resume

Altering the power state usually requires root permissions. Since running containers with root (privileged) permissions is against best practices, this section shows how these tasks can be performed without root permissions inside a container. The key to success is using a bind mount for the required files - a technique that can be applied in many scenarios, not only this one.

From a security perspective, bind mounting /proc/sysrq-trigger, which allows issuing some low-level commands, or /var/run/dbus, which provides access to system services, may pose a vulnerability, even though this is probably still more secure than a regular non-containerized embedded use case.

Rebooting via D-Bus is the more proper way to do it, since it gently asks all daemons and containers to stop before the reboot, while sysrq does not. Learn more about the sysrq-trigger options on the Linux kernel documentation page Linux Magic System Request Key Hacks to decide whether bind mounting it is acceptable in your case.

Reboot

The example below reboots the device by writing to /proc/sysrq-trigger:

# docker run -it --rm -v /proc/sysrq-trigger:/procw/sysrq-trigger torizon/debian:2-bullseye
## echo "b" > /procw/sysrq-trigger

Or reboot using D-Bus:

# docker run -it --rm -v /var/run/dbus:/var/run/dbus torizon/debian:2-bullseye
## apt-get update && apt-get install -y dbus
## dbus-send --system --print-reply --dest=org.freedesktop.login1 /org/freedesktop/login1 "org.freedesktop.login1.Manager.Reboot" boolean:true

It is also possible to use a D-Bus library for your preferred language. If you use Python, you can use our D-Bus Python sample as a starting point.

Suspend and Wake-Up

You can put the system into supported low-power modes without root permissions inside a container. For a general list of the power modes supported by our BSP, see the article Suspend/Resume (Linux).

The examples below suspend and resume the device by writing to the relevant files:

Suspend and wakeup using RTC

# docker run -it --rm -v /sys/class/rtc/rtc1/wakealarm:/sys/class/rtc/rtc1/wakealarm -v /sys/power/state:/sys/power/state torizon/debian:2-bullseye
## echo +5 > /sys/class/rtc/rtc1/wakealarm; echo mem > /sys/power/state

Suspend and wakeup over UART

# docker run -it --rm -v /sys/class/tty/ttymxc0/power/wakeup:/sys/class/tty/ttymxc0/power/wakeup -v /sys/power/state:/sys/power/state torizon/debian:2-bullseye
## echo enabled > /sys/class/tty/ttymxc0/power/wakeup
## echo mem > /sys/power/state
Press any button to wakeup from suspend...

Some SoMs require using a different UART (e.g. ttyLP3 on the Colibri iMX8X). Check UART (Linux) to learn more about the corresponding UART names for your SoM.