
How to use Cameras on Torizon

 

Article updated at 13 Jul 2022

Introduction

This article provides helpful information for your first steps with cameras on TorizonCore, guiding you through the specific details and considerations that surround the usage of a video capture device with containers.

The main goal of using Torizon and containers is to simplify the developer's life and keep the development model close to that of desktop or server applications. However, some requirements inherent to embedded systems development must still be considered in the process. Running a video capture application inside a container involves aspects such as hardware access and the use of multiple containers, among others. You can read more about these requirements in the section Camera usage with containers.

If you want to learn more about the Torizon best practices and container development workflow for embedded systems applications, you can refer to Torizon Best Practices Guide. Developing applications for Torizon can be done using command-line tools or, as we recommend, the Visual Studio Code Extension for Torizon.

This article's workflow is valid for USB cameras with the USB Video Class (UVC) standard driver and for some CSI cameras. To learn more about USB cameras, refer to Webcam (Linux). We recommend using one of our supported CSI cameras, which you can find at Camera on Toradex Computer on Modules. The process described here for a MIPI CSI-2 camera was done with the CSI Camera Set 5MP AR0521 Color. To use the AR0521 camera, you have to enable a device tree overlay; see more at First Steps with CSI Camera Set 5MP AR0521 Color (Torizon).

Note: This article does not cover the usage of IP cameras. You can find information about this topic at Audio/Video over RTP With GStreamer (Linux).

This article complies with the Typographic Conventions for the Toradex Documentation.

Prerequisites

Camera usage with containers

Running your application in a container means that it runs in a "sandboxed" environment, which may limit its interaction with other components in the system. Hardware access and the use of multiple containers, mainly for running a Graphical User Interface (GUI), are the most important topics to understand before proceeding with camera usage. You can read more about multiple containers at Using Multiple Containers with TorizonCore.

Hardware Access and Shared Resources

Docker provides ways to access specific hardware devices from inside a container, and often one needs to grant access from both a permission and a namespace perspective.

  • Permissions: To grant GPU access to the container, as explained in Torizon Best Practices Guide, you have to enable the cgroup rule c 199:* rmw.
  • Namespace: To add the camera device to the container you can use bind mounts, but passing individual devices (using --device) is better for security, since you avoid exposing a storage device that could be erased by an attacker.

To use camera devices, you must bind mount specific directories of the host machine, guaranteeing the container access to the resources they expose. Four bind mounts are required:

  • /dev: Mounting /dev allows access to the devices attached to the local system.
  • /tmp: Mounting /tmp as a volume in the container allows your application to interact with Weston (GUI).
  • /sys: Grants access to kernel subsystems.
  • /var/run/dbus: Provides access to system services via D-Bus.

Displaying the camera output is another useful resource, especially during the setup and debugging phases of application development, even if it is not necessary for the final application. To do so, you can use a display by configuring a Weston-based container, or use VNC. To read more about GUIs on TorizonCore, refer to the section Graphical User Interface (GUI) in the Torizon Best Practices Guide.
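In practice, this means the Weston compositor and your camera application typically run as two separate containers. The following docker-compose sketch shows how the mounts, devices, and cgroup rules from this section fit together; it is only a minimal example, and the service names, image tags, and the /dev/video2 device node are illustrative assumptions that may differ on your setup:

version: "2.4"
services:
  weston:
    image: torizon/weston:2                        # Compositor image; tag is illustrative
    network_mode: host
    cap_add:
      - CAP_SYS_TTY_CONFIG
    volumes:
      - /dev:/dev
      - /tmp:/tmp
      - /run/udev/:/run/udev/
    device_cgroup_rules:
      - 'c 4:* rmw'
      - 'c 13:* rmw'
      - 'c 199:* rmw'
      - 'c 226:* rmw'
    command: --developer weston-launch --tty=/dev/tty7 --user=torizon
  camera-app:
    image: <your-dockerhub-username>/gst_example   # Your application image
    depends_on:
      - weston
    volumes:
      - /dev:/dev
      - /tmp:/tmp
      - /sys:/sys
      - /var/run/dbus:/var/run/dbus
    devices:
      - /dev/video2:/dev/video2                    # Adjust to your capture device
    device_cgroup_rules:
      - 'c 199:* rmw'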

Dockerfile

The implementation details are explained in this section. See the Quickstart Guide for instructions on how to build the image on a host PC and pull it onto the board. You can also scp the Dockerfile to the board and build it locally.

To build

Attention: Make sure you have configured your Build Environment for Torizon Containers

Now it's a good time to use the torizon-samples repository:

$ cd ~/torizon-samples/gstreamer/bash/simple-pipeline

For i.MX 8-based modules (Vivante graphics):

$ docker build --build-arg BASE_NAME=wayland-base-vivante --build-arg IMAGE_ARCH=linux/arm64/v8 -t <your-dockerhub-username>/gst_example .

For other modules:

$ docker build -t <your-dockerhub-username>/gst_example .

After the build, push the image to your Docker Hub account:

$ docker push <your-dockerhub-username>/gst_example

For more details about the Dockerfile, refer to How to use Gstreamer on TorizonCore.
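For reference, a minimal sketch of what such a Dockerfile may look like, assuming a Torizon wayland-base image and the GStreamer packages used throughout this article (the actual Dockerfile in torizon-samples may differ):

ARG IMAGE_ARCH=linux/arm64/v8
ARG BASE_NAME=wayland-base
ARG IMAGE_TAG=2
# BASE_NAME and IMAGE_ARCH match the build arguments shown above; IMAGE_TAG is illustrative
FROM --platform=$IMAGE_ARCH torizon/$BASE_NAME:$IMAGE_TAG

# Install GStreamer, its plugins, and the V4L2 utilities
RUN apt-get update && apt-get install -y --no-install-recommends \
    v4l-utils \
    gstreamer1.0-tools \
    gstreamer1.0-plugins-base \
    gstreamer1.0-plugins-good \
    gstreamer1.0-plugins-bad \
    && rm -rf /var/lib/apt/lists/*

# Start an interactive shell so you can experiment with pipelines
CMD ["/bin/bash"]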

Hello world with a GStreamer pipeline

Before running a GStreamer pipeline inside a container, make sure to stop all other containers that might be running on your device, to avoid possible problems when launching the containers required to start the video:

# docker stop $(docker ps -a -q)

Launching Weston Container

Considering that a streaming video application requires a GUI, you have to pull a Debian Bullseye based container featuring the Weston Wayland compositor and start it on the module. Choose the variant below that matches your module's graphics support.

Standard Weston image

(Optional) pull the torizon/weston container image:

# docker pull torizon/weston:$CT_TAG_WESTON

Start the weston compositor:

# docker run -d --rm --name=weston --net=host --cap-add CAP_SYS_TTY_CONFIG \
             -v /dev:/dev -v /tmp:/tmp -v /run/udev/:/run/udev/ \
             --device-cgroup-rule='c 4:* rmw' --device-cgroup-rule='c 13:* rmw' \
             --device-cgroup-rule='c 199:* rmw' --device-cgroup-rule='c 226:* rmw' \
             torizon/weston:$CT_TAG_WESTON --developer weston-launch --tty=/dev/tty7 --user=torizon

Weston with software rendering (Pixman), for modules without GPU acceleration

(Optional) pull the torizon/weston container image:

# docker pull torizon/weston:$CT_TAG_WESTON

Start the weston compositor:

# docker run -d --rm --ipc=host --name=weston --net=host --cap-add CAP_SYS_TTY_CONFIG \
             -v /dev:/dev -v /tmp:/tmp -v /run/udev/:/run/udev/ \
             --device-cgroup-rule='c 4:* rmw' --device-cgroup-rule='c 13:* rmw' \
             --device-cgroup-rule='c 199:* rmw' --device-cgroup-rule='c 226:* rmw' \
             torizon/weston:$CT_TAG_WESTON --developer weston-launch \
             --tty=/dev/tty7 --user=torizon -- --use-pixman

Weston with Vivante GPU support, for i.MX 8-based modules

(Optional) pull the torizon/weston-vivante container image:

# docker pull torizon/weston-vivante:$CT_TAG_WESTON_VIVANTE

Start the weston compositor:

Attention: Please note that by executing the following command you are accepting the terms and conditions of NXP's End-User License Agreement (EULA).

# docker run -e ACCEPT_FSL_EULA=1 -d --rm --name=weston --net=host --cap-add CAP_SYS_TTY_CONFIG \
             -v /dev:/dev -v /tmp:/tmp -v /run/udev/:/run/udev/ \
             --device-cgroup-rule='c 4:* rmw' --device-cgroup-rule='c 13:* rmw' \
             --device-cgroup-rule='c 199:* rmw' --device-cgroup-rule='c 226:* rmw' \
             torizon/weston-vivante:$CT_TAG_WESTON_VIVANTE --developer weston-launch \
             --tty=/dev/tty7 --user=torizon

Discovering the Video Capture Device

If this is the first time you are dealing with a specific video device and you are not sure which device is the capture one, follow the next steps to discover the right video capture device:

  1. Still outside the container, list the video devices by using ls /dev/video*.
# ls /dev/video*
/dev/video0  /dev/video1  /dev/video2  /dev/video3  /dev/video12  /dev/video13
  2. Start the previously built container, passing all the video devices with the --device flag.
# docker run --rm -it -v /tmp:/tmp -v /var/run/dbus:/var/run/dbus -v /dev:/dev -v /sys:/sys \
    --device /dev/video0 --device /dev/video1 --device /dev/video2 --device /dev/video3 --device /dev/video12 --device /dev/video13 \
    --device-cgroup-rule='c 199:* rmw' \
    <your-dockerhub-username>/<Dockerfile-name>
  3. Once inside the container, use the command v4l2-ctl --list-devices to see which devices are video capture ones.
## v4l2-ctl --list-devices
vpu B0 (platform:):
    /dev/video12
    /dev/video13

mxc-jpeg decoder (platform:58400000.jpegdec):
    /dev/video0

mxc-jpeg decoder (platform:58450000.jpegenc):
    /dev/video1

Video Capture 5 (usb-xhci-cdns3-1.2):
    /dev/video2
    /dev/video3
    /dev/media0

  4. Use v4l2-ctl -D to find information about the Video Capture devices listed before, in this case /dev/video2 and /dev/video3, and check the Device Caps field: the node that reports Video Capture is the camera's capture device. In the example output below, /dev/video3 reports only Metadata Capture, so the actual capture device is /dev/video2.
## v4l2-ctl --device /dev/video3 -D
Driver Info:
    Driver name      : uvcvideo
    Card type        : Metadata 5
    Bus info         : usb-xhci-cdns3-1.2
    Driver version   : 5.4.161
    Capabilities     : 0x84a00001
        Video Capture
        Metadata Capture
        Streaming
        Extended Pix Format
        Device Capabilities
    Device Caps      : 0x04a00000
        Metadata Capture
        Streaming
        Extended Pix Format
Media Driver Info:
    Driver name      : uvcvideo
    Model            : USB 2.0 Camera: USB Camera
    Serial           : 01.00.00
    Bus info         : usb-xhci-cdns3-1.2
    Media version    : 5.4.161
    Hardware revision: 0x00000003 (3)
    Driver version   : 5.4.161
Interface Info:
    ID               : 0x03000005
    Type             : V4L Video
Entity Info:
    ID               : 0x00000004 (4)
    Name             : Metadata 5
    Function         : V4L2 I/O
  5. With the right video device identified, from now on you just need to enable access to that specific camera device (in this case /dev/video2) when you start the container.

Launching the GStreamer and Video4Linux2 Container

The next step is to launch the container with GStreamer and Video4Linux2, making sure to pass the right video device /dev/video* with the --device parameter.

# docker run --rm -it -v /tmp:/tmp -v /var/run/dbus:/var/run/dbus -v /dev:/dev -v /sys:/sys \
    --device /dev/<video-device> \
    --device-cgroup-rule='c 199:* rmw' \
    <your-dockerhub-username>/<Dockerfile-name>

Creating a GStreamer pipeline

Once inside the container, you are able to use Video4Linux2 and GStreamer resources. The basic structure of the pipeline relies on a data source (in this case a Video4Linux2 source), a filter, and a data sink, in this case the Wayland video sink.


[Figure: GStreamer pipeline diagram]

One of the simplest pipelines to show video can be built the following way:

## gst-launch-1.0 <videosrc> ! <capsfilter> ! <videosink>

Or more specifically

## gst-launch-1.0 v4l2src device=</dev/video*> ! <capsfilter> ! fpsdisplaysink video-sink=waylandsink

If you want to learn more about how to elaborate a more complex GStreamer pipeline, refer to How to use Gstreamer on TorizonCore.

To discover more properties of v4l2src to configure the pads, you can use gst-inspect-1.0:

## gst-inspect-1.0 v4l2src

As you are dealing with raw video, you have to use the video/x-raw caps filter and configure its properties: format, width, height, and framerate.

## gst-launch-1.0 v4l2src device='/dev/<video-device>'  ! "video/x-raw, format=<video-format>, framerate=<framerate>, width=<supported-width>, height=<supported-height>" ! fpsdisplaysink video-sink=waylandsink

You can also check information about the sink using gst-inspect-1.0:

## gst-inspect-1.0 fpsdisplaysink

Note: Keep in mind that different cameras may support different video formats, resolutions (height and width), and framerates. Therefore, even the simplest pipeline can vary from camera to camera.
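To check which formats, resolutions, and framerates your camera actually supports, you can query it from inside the container with v4l2-ctl, for example:

## v4l2-ctl --device /dev/<video-device> --list-formats-ext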

So, the final structure of the pipeline should be similar to the following one:

## gst-launch-1.0 v4l2src device='/dev/<video-device>'  ! "video/x-raw, format=<video-format>, framerate=<framerate>, width=<supported-width>, height=<supported-height>" ! fpsdisplaysink video-sink=waylandsink text-overlay=<true-or-false> sync=<true-or-false>

The following subsections cover the specific considerations of the described process to use MIPI CSI-2 cameras and Webcams.

USB Camera

The USB 2.0 host interface is available on all modules to connect USB cameras. However, USB 3.0 is only available on specific modules from the Apalis SoM Family and the Verdin SoM Family; check the datasheets for details. Our article Webcam (Linux) is a starting point on how to set up a USB camera in Linux, as mentioned in Camera on Toradex Computer on Modules.

The following pipeline was used to show video from a webcam:

## gst-launch-1.0 v4l2src device='/dev/video2' ! "video/x-raw, format=YUY2, framerate=5/1, width=640, height=480" ! fpsdisplaysink video-sink=waylandsink text-overlay=false sync=false

MIPI CSI-2 Camera

To use a MIPI CSI-2 camera, you have to make sure that your module and carrier board have a MIPI CSI-2 connection. It's important to highlight the following considerations from Camera on Toradex Computer on Modules:

  • The Apalis module family provides MIPI CSI-2 interfaces on type-specific pins.
  • The Colibri module family does not provide MIPI CSI-2 interfaces on the standard module pins. Some SoMs, such as the Colibri iMX8X, expose a MIPI CSI-2 interface on an additional FFC connector.
  • The Verdin module family provides one quad-lane MIPI CSI-2 interface on reserved pins, according to the Verdin Family Specification. That means that, as long as the SoC used in a Verdin SoM has MIPI CSI-2, we make sure it's always exposed on fixed pins.

Depending on the MIPI CSI-2 camera you are using, you may need to enable a specific device tree overlay. Since this example uses the CSI Camera Set 5MP AR0521 Color, the overlay process using TorizonCore Builder is described at First Steps with CSI Camera Set 5MP AR0521 Color (Torizon).

With the overlay enabled, the next step is to launch a video stream using a pipeline. To achieve this, you should follow a process similar to the one described in the article First Steps with CSI Camera Set 5MP AR0521 Color (Torizon).

The following pipeline was used to show video with Camera Set 5MP AR0521 Color:

## gst-launch-1.0 v4l2src device='/dev/video0'  ! "video/x-raw, format=RGB16, framerate=30/1, width=1920, height=1080" ! fpsdisplaysink video-sink=waylandsink text-overlay=false sync=false

Hello world with Python and OpenCV

This hello-world application, like the GStreamer usage directly on the terminal, is just a video stream. For this application, you are going to use OpenCV together with V4L2 and GStreamer, using the Visual Studio Code Extension for Torizon. If you want to learn more about OpenCV, refer to Torizon Sample: Using OpenCV for Computer Vision.

So, the first step is to create a Python application with QML, as described at Python development on TorizonCore. We are not going to use QML itself; we just need the GUI application template, which launches Weston.

Then, go to Torizon Extension and proceed with the described modifications. Add the following volumes under the parameter volumes:

  • key = /dev, value = /dev
  • key = /tmp, value = /tmp
  • key = /sys, value = /sys
  • key = /var/run/dbus, value = /var/run/dbus

Add access to the camera device (/dev/video2) under the devices parameter. Then, add a cgroup rule under the extraparms parameter as follows: key = device_cgroup_rules and value = [ "c 199:* rmw" ].

Finally, modify the extrapackages field, including all the packages and plugins explained before, and add the OpenCV package for Python 3, which is python3-opencv.

python3-opencv v4l-utils gstreamer1.0-qt5 libgstreamer1.0-0 gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav gstreamer1.0-doc gstreamer1.0-tools gstreamer1.0-x gstreamer1.0-alsa gstreamer1.0-gl gstreamer1.0-gtk3 gstreamer1.0-pulseaudio

The code must be written inside main.py and begins with the import of the OpenCV library, called cv2:

import cv2

OpenCV provides the VideoCapture class used to capture the video. A pipeline similar to the one created before is going to be the input parameter of the class. The difference between the pipelines is the use of appsink instead of waylandsink, as the pipeline will send data to your program, not to a display.

cap = cv2.VideoCapture("v4l2src device=/dev/video2 ! video/x-raw, width=640, height=480, format=YUY2 ! videoconvert ! video/x-raw, format=BGR ! appsink")
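If OpenCV does not interpret the string as a GStreamer pipeline on your image, you can explicitly request the GStreamer backend, assuming your OpenCV build includes GStreamer support:

cap = cv2.VideoCapture("v4l2src device=/dev/video2 ! video/x-raw, width=640, height=480, format=YUY2 ! videoconvert ! video/x-raw, format=BGR ! appsink", cv2.CAP_GSTREAMER)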

To capture and read the frames, we're going to use read:

ret, frame = cap.read() # Capture frame

And to display the frames we can use imshow:

cv2.imshow('frame', frame) # Display the frames

Then, release the capture and destroy the window:

cap.release() # Release the capture object
cv2.destroyAllWindows() # Destroy all the windows 

Tip: To read frames and display them continuously, you can put the capture and display lines inside a loop.
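Putting the pieces together, a minimal sketch of such a loop could look like the following, assuming the /dev/video2 device and the caps used above (press q in the video window to quit):

import cv2

# GStreamer pipeline: capture YUY2 frames and convert them to BGR for OpenCV
pipeline = ("v4l2src device=/dev/video2 ! "
            "video/x-raw, width=640, height=480, format=YUY2 ! "
            "videoconvert ! video/x-raw, format=BGR ! appsink")

cap = cv2.VideoCapture(pipeline)

if not cap.isOpened():
    raise RuntimeError("Could not open the video capture device")

while True:
    ret, frame = cap.read()                 # Capture a frame
    if not ret:                             # Stop if no frame could be read
        break
    cv2.imshow('frame', frame)              # Display the frame
    if cv2.waitKey(1) & 0xFF == ord('q'):   # Quit on 'q'
        break

cap.release()                               # Release the capture object
cv2.destroyAllWindows()                     # Destroy all the windows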