Networking with TorizonCore

 

Article updated at 18 Sep 2020



Select the version of your OS from the tabs below. If you don't know the version you are using, run the command cat /etc/os-release or cat /etc/issue on the board.

Torizon 5.0.0

Introduction

Networking with TorizonCore can refer to different topics:

  • Configuration of the host network, not directly related to containers.
  • Configuration of networking on a container, and the relationship between the container and the host networks.
  • Configuration of inter-container networking, often with the purpose of multi-process communication using the network stack (e.g. REST API).

The first part of this article explains about host network configuration: the TorizonCore image currently provides NetworkManager, a program that provides detection and configuration for the system to automatically connect to networks.

The second part of this article explains about container network configuration and how to share a network between containers using docker-compose.

Ethernet Interface Naming on TorizonCore

TorizonCore Ethernet interfaces are always named ethernetX, where X is a number starting from 0 - for instance, ethernet0, ethernet1 and so on.

This article complies with the Typographic Conventions for Torizon Documentation.

Prerequisites

To take full advantage of this article, the following reading is recommended:

Host Configuration: NetworkManager

nmcli is a command-line client for NetworkManager. To show the status of the network devices detected by NetworkManager:

# nmcli device

To show the available connections and the devices to which each active connection applies:

# nmcli connection show

To disconnect from a network:

# nmcli con down id '<Connection_name>'

To delete a connection:

# nmcli con delete '<Connection_name>'

Static Network Configuration

If you are looking for a way to configure a static network, nmcli provides the following commands:

# nmcli con mod '<Connection_name>'  ipv4.addresses "<desired IP/mask>"
# nmcli con mod '<Connection_name>'  ipv4.gateway "<desired gateway>"
# nmcli con mod '<Connection_name>'  ipv4.dns "<DNS server 1>,<DNS server 2>"
# nmcli con mod '<Connection_name>'  ipv4.method "manual"
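As a concrete illustration, a static setup might look as follows. The connection name and all addresses below are hypothetical example values - adapt them to your own network:

```shell
# Hypothetical example values - adapt the connection name,
# addresses, gateway and DNS servers to your own network
nmcli con mod 'Wired connection 1' ipv4.addresses "192.168.0.100/24"
nmcli con mod 'Wired connection 1' ipv4.gateway "192.168.0.1"
nmcli con mod 'Wired connection 1' ipv4.dns "192.168.0.1,8.8.8.8"
nmcli con mod 'Wired connection 1' ipv4.method "manual"
```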

After running the commands above, you can view the resulting network configuration by opening the <connection-name>.nmconnection file:

# cd /etc/NetworkManager/system-connections/
# sudo cat <connection-name>.nmconnection

Expected file output:

connection_name.nmconnection
[connection]
id=<connection-name>
uuid=a690e7e8-a413-331d-830d-d0df5bad3983
type=ethernet
autoconnect-priority=-999
permissions=
timestamp=1581530428
 
[ethernet]
mac-address=00:14:2D:63:47:64
mac-address-blacklist=
 
[ipv4]
address1=<board-ip>,10.0.0.1
dns-search=
method=manual
 
[ipv6]
addr-gen-mode=stable-privacy
dns-search=
method=auto

After making the changes, do not forget to reload the configuration files:

# sudo nmcli connection reload
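To make the reloaded settings take effect immediately, you can also bring the connection up again:

```shell
# Re-activate the connection so the new settings are applied
nmcli connection up '<Connection_name>'
```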

Dynamic Network Configuration

Along with static network configuration, nmcli provides a way to configure a dynamic (DHCP) connection:

# nmcli con mod '<Connection_name>'  ipv4.method "auto"

Other nmcli Commands

For more details, consult the nmcli man page, either by running man nmcli on a computer with nmcli installed or by searching for it online. For quick reference, nmcli --help is also useful.

Wi-Fi

TorizonCore supports two Wi-Fi modes: client mode and access point (AP) mode.

Wi-Fi client mode

This mode is used when you want TorizonCore to connect to a Wi-Fi access point.

To see a list of available Wi-Fi access points:

# nmcli device wifi list

To connect to a Wi-Fi access point:

# nmcli -a device wifi connect <WIFI_NAME>
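If the access point is password-protected, nmcli also accepts the password directly on the command line (keep in mind that it will then be stored in the shell history):

```shell
# Connect to a password-protected access point in one command
nmcli device wifi connect <WIFI_SSID> password <WIFI_PASSWORD>
```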

Wi-Fi access point mode

This mode is used when you want TorizonCore to act as a Wi-Fi access point.

Run the following commands to configure TorizonCore as a Wi-Fi access point, substituting <WIFI_AP_NAME>, <WIFI_SSID>, <WIFI_PASSWORD> and <IPV4_ADDR> accordingly:

# nmcli con add type wifi ifname uap0 mode ap con-name <WIFI_AP_NAME> ssid <WIFI_SSID>
# nmcli con modify <WIFI_AP_NAME> 802-11-wireless-security.key-mgmt wpa-psk
# nmcli con modify <WIFI_AP_NAME> 802-11-wireless-security.proto rsn
# nmcli con modify <WIFI_AP_NAME> 802-11-wireless-security.group ccmp
# nmcli con modify <WIFI_AP_NAME> 802-11-wireless-security.pairwise ccmp
# nmcli con modify <WIFI_AP_NAME> 802-11-wireless-security.psk <WIFI_PASSWORD>
# nmcli con modify <WIFI_AP_NAME> ipv4.addresses <IPV4_ADDR>
# nmcli con modify <WIFI_AP_NAME> ipv4.method manual
# nmcli con up <WIFI_AP_NAME>

Besides a Wi-Fi access point, you also need to activate a DHCP server in TorizonCore. To do that, you can leverage systemd's built-in DHCP server support, creating the file /etc/systemd/network/80-wifi-ap.network with the following content (substitute <IPV4_ADDR>, <IPV4_ADDR_NETMASK>, <DHCPD_POOL_OFFSET> and <DHCPD_POOL_SIZE> accordingly):

/etc/systemd/network/80-wifi-ap.network
[Match]
Name=uap0
Type=wlan
WLANInterfaceType=ap
 
[Network]
Address=<IPV4_ADDR>/<IPV4_ADDR_NETMASK>
DHCPServer=yes
 
[DHCPServer]
PoolOffset=<DHCPD_POOL_OFFSET>
PoolSize=<DHCPD_POOL_SIZE>

Now just restart the systemd-networkd service:

$ sudo systemctl restart systemd-networkd
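To verify that the access point interface is up and the configuration was picked up, you can check both systemd-networkd's and NetworkManager's view of it (networkctl is part of systemd, so it is assumed to be available on the image):

```shell
# Check link state and DHCP server status of the AP interface
networkctl status uap0
# Confirm the access point connection is active in NetworkManager
nmcli con show --active
```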

VPN

It's possible to configure a VPN tunnel in TorizonCore using WireGuard. In order to do this, please follow the instructions described in How to Use VPN on TorizonCore.

ifupdown plugin

TorizonCore 5.5.0 and later versions support the NetworkManager's ifupdown plugin. This plugin makes it possible to configure the network using a /etc/network/interfaces file. For more information on how to use this plugin, please check the official NetworkManager documentation and the NetworkManager.conf manpage.

Production Release

After you make the changes to the board, you can use the isolate command from the TorizonCore Builder Tool to generate a custom TorizonCore image for the Toradex Easy Installer. To learn how to do it, please refer to the article Capturing Changes in the Configuration of a Board on TorizonCore.

Networking Inside Docker container

This section is a brief introduction on how to use different network configurations inside a Docker container. You should also refer to the Docker Networking documentation, which is a comprehensive source of information.

Show the list of networks:

# docker network ls

Inspect network to see what containers are connected to it:

# docker network inspect <NETWORK_NAME>
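Since docker network inspect prints a JSON document, the --format flag (a Go template) can extract specific fields - for example, only the names of the attached containers:

```shell
# Print only the names of containers attached to the network
docker network inspect --format '{{range .Containers}}{{.Name}} {{end}}' <NETWORK_NAME>
```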

Network drivers:

  • Bridge (containers communicate on the same Docker host)

  • Host (uses the host's networking directly)

  • Overlay (enables containers running on different Docker hosts to communicate)

  • Macvlan (when you need your containers to look like physical hosts)

  • None

  • Third-party network plugins

Bridge

When you run a new container, it automatically connects to the default bridge network, a private network internal to the host that provides communication between containers.

Create a user-defined bridge network:

# docker network create --subnet=<172.18.0.0/16> <NETWORK_NAME>

Create a container connected to our user-defined network:

# docker run --name <CONTAINER_NAME> -d --net <NETWORK_NAME>  <IMAGE_NAME>

Assign a specific IP to a container and publish container port 80 to host port 8080, allowing connections from other machines on the network:

# docker run --name <CONTAINER_NAME> -d --net <NETWORK_NAME> --ip <172.18.0.5>  --publish <8080>:<80> <IMAGE_NAME>

Connect a running container to a network:

# docker network connect <NETWORK_NAME> <CONTAINER_NAME>
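The reverse operation is also available:

```shell
# Detach a running container from the network
docker network disconnect <NETWORK_NAME> <CONTAINER_NAME>
```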

Macvlan

The macvlan driver can be configured in different ways. Its advantage is being a lightweight, built-in driver that allows containers to connect directly to host physical interfaces.

Create a macvlan network:

# docker network create -d macvlan --subnet=<172.16.86.0/24>  \
  --gateway=<172.16.86.1> -o parent=<ETHERNET_INTERFACE>  \
  <NETWORK_NAME> 

Attach the container to the macvlan network:

# docker run -dit --network <NETWORK_NAME> \
  --name <CONTAINER_NAME>  <IMAGE_NAME> /bin/bash
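To check whether the container actually received an address on the macvlan subnet, you can query Docker's view of the attachment:

```shell
# Print the container's IP address on each attached network
docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' <CONTAINER_NAME>
```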

Docker Networking Drivers Use Cases

To understand more about Docker networking drivers and which one is best suited to your application, please take a look at Understanding Docker Networking Driver Use Cases (archived).

Docker Network Using Docker-compose

When you start your application, Docker Compose sets up a single bridge network by default. Each service connects to that network, making the services reachable from one another.

You can create your own networks to provide isolation and more options:

docker-compose.yml
services:
  app1:
    image: app
    networks:
          - frontend
  app2:
    image: app
    networks:
          - frontend
          - backend
  app3:
    image: app
    networks:
         - backend
networks:
  backend:
    # here you can configure your network 
  frontend:

App2 is connected to both the frontend and backend networks, so it can communicate with app1 and app3. App1 and app3 cannot communicate with each other, because they are on separate networks.
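You can verify this isolation from inside the running containers. Assuming the images ship a ping utility, service names resolve across shared networks:

```shell
# app2 shares a network with both services, so both should answer
docker-compose exec app2 ping -c 1 app1
docker-compose exec app2 ping -c 1 app3
# app1 and app3 share no network, so this is expected to fail
docker-compose exec app1 ping -c 1 app3
```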

Connect to the external network:


networks:
  default:
    external:
      name: <pre-existing-network>

Instead of creating a network, Docker Compose looks up the pre-existing network and connects the services to it.

For more information, please take a look at the Docker Compose Documentation.

Torizon 4.0.0

Introduction

The TorizonCore image currently provides NetworkManager, a program that provides detection and configuration for the system to automatically connect to networks. This article will show you instructions on how to use it.

This article complies with the Typographic Conventions for Torizon Documentation.

Prerequisites

To take full advantage of this article, the following reading is recommended:

Network Manager

nmcli is a command-line client for NetworkManager. To show the status of the network devices detected by NetworkManager:

# nmcli device

To show the available connections and the devices to which each active connection applies:

# nmcli connection show

Static Network Configuration

If you are looking for a way to configure a static network, nmcli provides the following commands:

# nmcli con mod '<Connection_name>'  ipv4.addresses "<desired IP/mask>"
# nmcli con mod '<Connection_name>'  ipv4.gateway "<desired gateway>"
# nmcli con mod '<Connection_name>'  ipv4.dns "<DNS server 1>,<DNS server 2>"
# nmcli con mod '<Connection_name>'  ipv4.method "manual"
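As a concrete illustration, a static setup might look as follows. The connection name and all addresses below are hypothetical example values - adapt them to your own network:

```shell
# Hypothetical example values - adapt the connection name,
# addresses, gateway and DNS servers to your own network
nmcli con mod 'Wired connection 1' ipv4.addresses "192.168.0.100/24"
nmcli con mod 'Wired connection 1' ipv4.gateway "192.168.0.1"
nmcli con mod 'Wired connection 1' ipv4.dns "192.168.0.1,8.8.8.8"
nmcli con mod 'Wired connection 1' ipv4.method "manual"
```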

After running the commands above, you can view the resulting network configuration by opening the <connection-name>.nmconnection file:

# cd /etc/NetworkManager/system-connections/
# sudo cat <connection-name>.nmconnection

Expected file output:

connection_name.nmconnection
[connection]
id=<connection-name>
uuid=a690e7e8-a413-331d-830d-d0df5bad3983
type=ethernet
autoconnect-priority=-999
permissions=
timestamp=1581530428
 
[ethernet]
mac-address=00:14:2D:63:47:64
mac-address-blacklist=
 
[ipv4]
address1=<board-ip>,10.0.0.1
dns-search=
method=manual
 
[ipv6]
addr-gen-mode=stable-privacy
dns-search=
method=auto

After making the changes, do not forget to reload the configuration files:

# sudo nmcli connection reload

Dynamic Network Configuration

Along with static network configuration, nmcli provides a way to configure a dynamic (DHCP) connection:

# nmcli con mod '<Connection_name>'  ipv4.method "auto"

Wi-Fi

TorizonCore supports two Wi-Fi modes: client mode and access point (AP) mode.

Wi-Fi client mode

This mode is used when you want TorizonCore to connect to a Wi-Fi access point.

To see a list of available Wi-Fi access points:

# nmcli device wifi list

To connect to a Wi-Fi access point:

# nmcli -a device wifi connect <WIFI_NAME>

Wi-Fi access point mode

This mode is used when you want TorizonCore to act as a Wi-Fi access point.

Run the following commands to configure TorizonCore as a Wi-Fi access point, substituting <WIFI_AP_NAME>, <WIFI_SSID>, <WIFI_PASSWORD> and <IPV4_ADDR> accordingly:

# nmcli con add type wifi ifname uap0 mode ap con-name <WIFI_AP_NAME> ssid <WIFI_SSID>
# nmcli con modify <WIFI_AP_NAME> 802-11-wireless-security.key-mgmt wpa-psk
# nmcli con modify <WIFI_AP_NAME> 802-11-wireless-security.proto rsn
# nmcli con modify <WIFI_AP_NAME> 802-11-wireless-security.group ccmp
# nmcli con modify <WIFI_AP_NAME> 802-11-wireless-security.pairwise ccmp
# nmcli con modify <WIFI_AP_NAME> 802-11-wireless-security.psk <WIFI_PASSWORD>
# nmcli con modify <WIFI_AP_NAME> ipv4.addresses <IPV4_ADDR>
# nmcli con modify <WIFI_AP_NAME> ipv4.method manual
# nmcli con up <WIFI_AP_NAME>

Besides a Wi-Fi access point, you also need to activate a DHCP server in TorizonCore. To do that, you can leverage systemd's built-in DHCP server support, creating the file /etc/systemd/network/80-wifi-ap.network with the following content (substitute <IPV4_ADDR>, <IPV4_ADDR_NETMASK>, <DHCPD_POOL_OFFSET> and <DHCPD_POOL_SIZE> accordingly):

/etc/systemd/network/80-wifi-ap.network
[Match]
Name=uap0
Type=wlan
WLANInterfaceType=ap
 
[Network]
Address=<IPV4_ADDR>/<IPV4_ADDR_NETMASK>
DHCPServer=yes
 
[DHCPServer]
PoolOffset=<DHCPD_POOL_OFFSET>
PoolSize=<DHCPD_POOL_SIZE>

Now just restart the systemd-networkd service:

$ sudo systemctl restart systemd-networkd

Networking Inside Docker container

This section briefly presents the available network drivers and ways to use networking inside a Docker container.

Show the list of networks:

# docker network ls

Inspect network to see what containers are connected to it:

# docker network inspect <NETWORK_NAME>

Network drivers:

  • Bridge (containers communicate on the same Docker host)

  • Host (uses the host's networking directly)

  • Overlay (enables containers running on different Docker hosts to communicate)

  • Macvlan (when you need your containers to look like physical hosts)

  • None

  • Third-party network plugins

Bridge

When you run a new container, it automatically connects to the default bridge network, a private network internal to the host that provides communication between containers.

Create a user-defined bridge network:

# docker network create --subnet=<172.18.0.0/16> <NETWORK_NAME>

Create a container connected to our user-defined network:

# docker run --name <CONTAINER_NAME> -d --net <NETWORK_NAME>  <IMAGE_NAME>

Assign a specific IP to a container and publish container port 80 to host port 8080, allowing connections from other machines on the network:

# docker run --name <CONTAINER_NAME> -d --net <NETWORK_NAME> --ip <172.18.0.5>  --publish <8080>:<80> <IMAGE_NAME>

Connect a running container to a network:

# docker network connect <NETWORK_NAME> <CONTAINER_NAME>

Macvlan

The macvlan driver can be configured in different ways. Its advantage is being a lightweight, built-in driver that allows containers to connect directly to host physical interfaces.

Create a macvlan network:

# docker network create -d macvlan --subnet=<172.16.86.0/24>  \
  --gateway=<172.16.86.1> -o parent=<ETHERNET_INTERFACE>  \
  <NETWORK_NAME> 

Attach the container to the macvlan network:

# docker run -dit --network <NETWORK_NAME> \
  --name <CONTAINER_NAME>  <IMAGE_NAME> /bin/bash

Docker Networking Drivers Use Cases

To understand more about Docker networking drivers and which one is best suited to your application, please take a look at Understanding Docker Networking Driver Use Cases.

Docker Network Using Docker-compose

When you start your application, Docker Compose sets up a single bridge network by default. Each service connects to that network, making the services reachable from one another.

You can create your own networks to provide isolation and more options:

docker-compose.yml
services:
  app1:
    image: app
    networks:
          - frontend
  app2:
    image: app
    networks:
          - frontend
          - backend
  app3:
    image: app
    networks:
         - backend
networks:
  backend:
    # here you can configure your network 
  frontend:

App2 is connected to both the frontend and backend networks, so it can communicate with app1 and app3. App1 and app3 cannot communicate with each other, because they are on separate networks.
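You can verify this isolation from inside the running containers. Assuming the images ship a ping utility, service names resolve across shared networks:

```shell
# app2 shares a network with both services, so both should answer
docker-compose exec app2 ping -c 1 app1
docker-compose exec app2 ping -c 1 app3
# app1 and app3 share no network, so this is expected to fail
docker-compose exec app1 ping -c 1 app3
```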

Connect to the external network:


networks:
  default:
    external:
      name: <pre-existing-network>

Instead of creating a network, Docker Compose looks up the pre-existing network and connects the services to it.

For more information, please take a look at the Docker Compose Documentation.

Next Steps