Networking with TorizonCore

 

Select the version of Torizon from the tabs below. If you don't know the version you are using, run the command cat /etc/os-release on the board.

Torizon 5.0.0

Introduction

Networking with TorizonCore can refer to different topics:

  • Configuration of the host network, not directly related to containers.
  • Configuration of networking on a container, and the relationship between the container and the host networks.
  • Configuration of inter-container networking, often with the purpose of multi-process communication using the network stack (e.g. REST API).

The first part of this article covers host network configuration: the TorizonCore image currently provides NetworkManager, a program that detects networks and configures the system to connect to them automatically.

The second part of this article covers container network configuration and how to share a network between containers using docker-compose.

This article complies with the Typographic Conventions for Torizon Documentation.

Prerequisites

To take full advantage of this article, the following reading is recommended:

Network Manager

nmcli is a command-line client for NetworkManager. Show the status of the network devices detected by NetworkManager:

# nmcli device

Show the available connections and the devices to which each active connection is applied:

# nmcli connection show
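
To see the full set of properties of a single connection, including the IPv4 settings modified in the next section, pass its name to the same command:

# nmcli connection show '<Connection_name>'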

Static Network Configuration

If you are looking for a way to set up a static network configuration, nmcli provides the following commands:

# nmcli con mod '<Connection_name>'  ipv4.addresses "<10.0.0.164/23>"
# nmcli con mod '<Connection_name>'  ipv4.gateway "<10.0.0.1>"
# nmcli con mod '<Connection_name>'  ipv4.dns "<10.0.0.2>,<8.8.8.8>"
# nmcli con mod '<Connection_name>'  ipv4.method "manual"
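
As a hypothetical example, assuming a wired connection named 'Wired connection 1' and the addresses shown above, the full sequence would look like the following; re-activating the connection at the end makes the new settings take effect:

# nmcli con mod 'Wired connection 1' ipv4.addresses "10.0.0.164/23"
# nmcli con mod 'Wired connection 1' ipv4.gateway "10.0.0.1"
# nmcli con mod 'Wired connection 1' ipv4.dns "10.0.0.2,8.8.8.8"
# nmcli con mod 'Wired connection 1' ipv4.method "manual"
# nmcli con up 'Wired connection 1'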

After running the commands above, you can review the entire network configuration by opening the <connection-name>.nmconnection file:

# cd /etc/NetworkManager/system-connections/
# sudo cat <connection-name>.nmconnection

Expected file output:

connection_name.nmconnection
[connection]
id=
uuid=a690e7e8-a413-331d-830d-d0df5bad3983
type=ethernet
autoconnect-priority=-999
permissions=
timestamp=1581530428

[ethernet]
mac-address=00:14:2D:63:47:64
mac-address-blacklist=

[ipv4]
address1=,10.0.0.1
dns-search=
method=manual

[ipv6]
addr-gen-mode=stable-privacy
dns-search=
method=auto

After making the changes, do not forget to reload the configuration file:

# sudo nmcli connection reload

Dynamic Network Configuration

Along with static network configuration, nmcli provides a way to configure a dynamic connection:

# nmcli con mod '<Connection_name>'  ipv4.method "auto"
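
If the connection was previously configured with static settings, you may also want to clear them when switching back to DHCP. A minimal sketch, using the same placeholder connection name, clears the manual values and re-activates the connection so the new settings take effect:

# nmcli con mod '<Connection_name>' ipv4.method "auto" ipv4.addresses "" ipv4.gateway "" ipv4.dns ""
# nmcli con up '<Connection_name>'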

Wi-Fi

To see a list of available Wi-Fi hotspots:

# nmcli device wifi list

Connect to a Wi-Fi network:

# nmcli -a device wifi connect <WIFI_NAME>
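
The command above asks interactively for any missing parameters, such as the password (-a). If you prefer to pass the password directly, for example in a provisioning script, nmcli also accepts it as an argument (both values below are placeholders):

# nmcli device wifi connect <WIFI_NAME> password <WIFI_PASSWORD>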

Networking Inside a Docker Container

This section presents the available network drivers and ways to use networking inside a Docker container.

Show the list of networks:

# docker network ls

Inspect a network to see which containers are connected to it:

# docker network inspect <NETWORK_NAME>

Network drivers:

  • Bridge (containers communicate on the same Docker host)

  • Host (the container uses the host's networking directly; see the sketch after this list)

  • Overlay (containers running on different Docker hosts need to communicate)

  • Macvlan (your containers need to look like physical hosts on the network)

  • None (networking is disabled for the container)

  • Third-party network plugins
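
As a quick illustration of the host driver, the following sketch (with a placeholder image name) runs a container that shares the host's network stack, so any service inside it binds directly to the host's interfaces:

# docker run --rm -it --network host <IMAGE_NAME>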

Bridge

When you run a new container, it automatically connects to the default bridge network. This is a private network, internal to the host, created to provide communication between containers.

Create a user-defined bridge network:

# docker network create --subnet=<172.18.0.0/16> <NETWORK_NAME>

Create a container connected to the user-defined network:

# docker run --name <CONTAINER_NAME> -d --net <NETWORK_NAME>  <IMAGE_NAME>

Assign an IP address to the container and publish port 80 in the container to port 8080 on the host, allowing connections from other machines on the network:

# docker run --name <CONTAINER_NAME> -d --net <NETWORK_NAME> --ip <172.18.0.5>  --publish <8080>:<80> <IMAGE_NAME>

Connect a running container to a network:

# docker network connect <NETWORK_NAME> <CONTAINER_NAME>
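
Putting the commands above together, the following sketch creates a user-defined bridge and verifies that two containers on it can reach each other by name (user-defined bridges provide name resolution between containers). It assumes a small image such as alpine, which includes the ping utility, and uses illustrative container and network names:

# docker network create --subnet=172.18.0.0/16 my-bridge
# docker run -dit --name app-a --net my-bridge alpine
# docker run -dit --name app-b --net my-bridge alpine
# docker exec app-a ping -c 3 app-b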

Macvlan

The macvlan driver can be configured in different ways. Its advantage is being a newer, lightweight built-in driver that allows the container to connect directly to host interfaces.

Create a macvlan network:

# docker network create -d macvlan --subnet=<172.16.86.0/24>  \
  --gateway=<172.16.86.1> -o parent=<ETHERNET_INTERFACE>  \
  <NETWORK_NAME> 

Attach the container to the macvlan network:

# docker run -dit --network <NETWORK_NAME> \
  --name <CONTAINER_NAME>  <IMAGE_NAME> /bin/bash
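
As a hypothetical example, assuming the host's Ethernet interface is named eth0 (check the actual name with nmcli device) and that the LAN uses the 172.16.86.0/24 subnet shown above, you could create the network, attach a container, and confirm the address it received; the network and container names below are illustrative only:

# docker network create -d macvlan --subnet=172.16.86.0/24 \
  --gateway=172.16.86.1 -o parent=eth0 my-macvlan
# docker run -dit --network my-macvlan --name macvlan-test <IMAGE_NAME> /bin/bash
# docker network inspect my-macvlan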

Docker Networking Drivers Use Cases

To learn more about Docker networking drivers and which one is best suited to your application, please take a look at Understanding Docker Networking Driver Use Cases.

Docker Networks Using Docker Compose

When you start your application, Docker Compose sets up a single bridge network by default. Each service connects to this network, making the services reachable by one another.

You can create your own networks to provide isolation and more options:

docker-compose.yml
services:
  app1:
    image: app
    networks:
      - frontend
  app2:
    image: app
    networks:
      - frontend
      - backend
  app3:
    image: app
    networks:
      - backend

networks:
  backend:
    # here you can configure your network
  frontend:

app2 is connected to both the frontend and backend networks, so it can communicate with app1 and app3. app1 and app3 cannot communicate with each other, because they are on separate networks.
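
On these user-defined Compose networks, services can reach each other by service name. A minimal sketch, assuming the docker-compose.yml above is in the current directory and the app image provides the ping utility:

# docker-compose up -d
# docker-compose exec app1 ping -c 3 app2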

Connect to a pre-existing external network:

networks:
  default:
    external:
      name: <pre-existing-network>

Docker Compose then looks for the pre-existing network and connects the services to it.
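
Since Docker Compose does not create external networks, the network must exist before you start the services; otherwise Compose reports an error. A minimal sketch, using the placeholder name from the snippet above:

# docker network create <pre-existing-network>
# docker-compose up -d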

For more information, please take a look at the Docker Compose Documentation.

Next Steps
