Systemd network management and container handling
Startup Aids
Most of the new features in the current version of systemd relate to the systemd-networkd and systemd-nspawn components, which have been part of systemd for a long time. However, the current release adds functions that can significantly simplify working with these tools.
Setting Up Bridge Devices
The systemd-networkd network manager can now handle a variety of different network devices; its functionality has been upgraded to better support smooth operation in container environments. The following example shows how easily the daemon can be used to set up, say, a bridge device.
Networkd is not designed to replace the established Gnome NetworkManager. Instead, the daemon is intended for special environments, such as container hosts or embedded applications. Listing 1 shows the preparatory work needed to use the new network manager. For name resolution, it relies on the LLMNR (link-local multicast name resolution)-capable stub resolver, which also belongs to the systemd package.
Listing 1: Basic Setup
systemctl enable systemd-networkd
systemctl disable NetworkManager
systemctl enable systemd-resolved
cp /etc/resolv.conf /etc/resolv.conf.bak
ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
mkdir /etc/systemd/network
Listings 2 and 3 show the two configuration files for the bridge device (Listing 2) and the regular Ethernet card (Listing 3). You specify which device these files configure in the [Match] section, where you enter, for example, the device name or MAC address.
Listing 2: Network Configuration (1)
[Match]
Name=docker0

[Network]
DNS=192.168.100.1
Address=192.168.100.42/24
Gateway=192.168.100.1
Listing 3: Network Configuration (2)
[Match]
Name=enp2s25

[Network]
Bridge=docker0
The device identified in this way is then configured in the [Network] section. Listing 3 only specifies the bridge to which the network interface card is to be bound; in Listing 2, the bridge device itself is then configured with an IP address and other parameters.
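Besides Name=, the [Match] section accepts other selectors, such as the hardware address. As a sketch (the MAC address below is a placeholder, not taken from the example setup), the Ethernet card could be matched like this instead:

```ini
; Hypothetical variant of Listing 3: match the NIC by MAC address
; instead of by interface name; 00:11:22:33:44:55 is a placeholder
[Match]
MACAddress=00:11:22:33:44:55

[Network]
Bridge=docker0
```

Matching by MAC address is useful when interface names are not stable across reboots or hardware swaps.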
Before the bridge device can be configured, you first need to ensure that it exists. For this purpose, you can pass configuration files with the .netdev extension to systemd. These are read directly after the service starts, and the virtual network devices defined in them are created.
Listing 4 shows a configuration for a bridge device. The systemd documentation, which maintains the familiar high standards, describes all possible configuration options for these types of files in the two help pages for systemd.network and systemd.netdev.
Listing 4: Bridge Configuration
[NetDev]
Name=docker0
Kind=bridge
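The three files end up in /etc/systemd/network. The following sketch stages them in a scratch directory first, so the layout can be reviewed before copying; the file names docker0.netdev, docker0.network, and enp2s25.network are choices for this example, since systemd only cares about the extensions:

```shell
#!/bin/sh
# Stage the unit files in a scratch directory instead of writing to
# /etc/systemd/network directly, so the result can be reviewed first.
stage=$(mktemp -d)

# Virtual device definition: creates the bridge itself (Listing 4)
cat > "$stage/docker0.netdev" <<'EOF'
[NetDev]
Name=docker0
Kind=bridge
EOF

# Bridge configuration: IP settings for docker0 (Listing 2)
cat > "$stage/docker0.network" <<'EOF'
[Match]
Name=docker0

[Network]
DNS=192.168.100.1
Address=192.168.100.42/24
Gateway=192.168.100.1
EOF

# Bind the physical NIC to the bridge (Listing 3)
cat > "$stage/enp2s25.network" <<'EOF'
[Match]
Name=enp2s25

[Network]
Bridge=docker0
EOF

ls "$stage"
```

After reviewing, copy the files into /etc/systemd/network and restart systemd-networkd so they are picked up.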
Next, you can call the networkctl tool to see a status overview of the existing network devices from the perspective of the systemd network manager (Listing 5).
Listing 5: The Newly Set Up Network Device
IDX LINK    TYPE     OPERATIONAL SETUP
  1 lo      loopback carrier     unmanaged
  2 enp2s25 ether    off         unmanaged
  3 enp3s25 ether    off         unmanaged
  4 enp4s25 ether    degraded    configured
  5 enp5s25 ether    off         unmanaged
  6 docker0 ether    routable    configured
  7 virbr0  ether    no-carrier  unmanaged
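Because networkctl output is columnar, it is also easy to post-process in scripts. A small sketch that extracts the links networkd has actually configured, working on the captured table from Listing 5 rather than on a live system:

```shell
#!/bin/sh
# Captured `networkctl` output from Listing 5; on a live system you
# would pipe the output of `networkctl` itself instead of this snapshot
table='1 lo loopback carrier unmanaged
2 enp2s25 ether off unmanaged
3 enp3s25 ether off unmanaged
4 enp4s25 ether degraded configured
5 enp5s25 ether off unmanaged
6 docker0 ether routable configured
7 virbr0 ether no-carrier unmanaged'

# Column 5 is SETUP; print the LINK name (column 2) when configured
configured=$(echo "$table" | awk '$5 == "configured" { print $2 }')
echo "$configured"
```

For the captured table, this prints enp4s25 and docker0, the two links networkd manages itself.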
The previously set up DNS stub resolver now uses the DNS server defined in the docker0.network configuration file. Systemd keeps this information in the temporary file /run/systemd/resolve/resolv.conf. Because direct access to this file is not recommended, a soft link was set up for the /etc/resolv.conf file.
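The link target can be checked at any time with readlink. A minimal sketch of the same indirection, built in a scratch directory so it runs anywhere (on the real system you would simply inspect /etc/resolv.conf):

```shell
#!/bin/sh
# Recreate the resolv.conf indirection under a scratch root
root=$(mktemp -d)
mkdir -p "$root/run/systemd/resolve" "$root/etc"

# systemd-resolved maintains this file on a real system; faked here
echo "nameserver 192.168.100.1" > "$root/run/systemd/resolve/resolv.conf"

# The soft link from Listing 1, pointed at the scratch copy
ln -sf "$root/run/systemd/resolve/resolv.conf" "$root/etc/resolv.conf"

# readlink shows where the link points; cat transparently follows it
readlink "$root/etc/resolv.conf"
cat "$root/etc/resolv.conf"
```

Applications keep reading /etc/resolv.conf as usual, while systemd-resolved rewrites the target file as the network configuration changes.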
Container Manager
The systemd init framework has long offered its own container service under the name systemd-nspawn. People often refer to it as "chroot on steroids," which illustrates its proximity to the well-known chroot command. The service is based on the container interface specification at freedesktop.org [1] and can manage standalone namespace containers.
Unlike Docker, the focus is not on application containers; instead, systemd-nspawn aims to let users quickly and easily boot a standalone container, which can then be used for debugging and testing system components. Security features that recent versions of Docker provide, for example, are not implemented by systemd-nspawn.
The container registration manager systemd-machined.service manages both containers and other virtual systems, providing easy access through the machinectl tool. To create a new container, you can download an image of the desired distribution in RAW, TAR, or Docker flavor and then start a container on the basis of that image. Listing 6 shows how to download a Fedora 22 image and start a new container based on it.
Listing 6: Download and Start the Container
# machinectl pull-raw --verify=no http://ftp.halifax.rwth-aachen.de/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-22-20150521.x86_64.raw.xz
# systemd-nspawn -M Fedora-Cloud-Base-22-20150521.x86_64
Spawning container Fedora-Cloud-Base-22-20150521.x86_64 on /var/lib/machines/Fedora-Cloud-Base-22-20150521.x86_64.raw.
Press ^] three times within 1s to kill container.
[root@Fedora-Cloud-Base-22-20150521 ~]#
The fact that the generated container actually does use its own namespaces can easily be confirmed by taking a look at /proc/self/ns/ inside the container and on the host. With the exception of the user namespace and the network namespace (systemd-nspawn shares the host's network unless a private one is requested), the host and container use independent namespaces. For example, in the PID namespace, systemd runs as PID 1 on the host, whereas inside the container the Bash shell has PID 1 (Listing 7).
Listing 7: Separate Container and Host Namespaces
[root@Fedora-Cloud-Base-22-20150521 ~]# readlink /proc/self/ns/*
ipc:[4026532776]
mnt:[4026532771]
net:[4026531969]
pid:[4026532777]
user:[4026531837]
uts:[4026532775]
# ps -p 1
  PID TTY          TIME CMD
    1 ?        00:00:00 bash
# readlink /proc/self/ns/*
ipc:[4026531839]
mnt:[4026531840]
net:[4026531969]
pid:[4026531836]
user:[4026531837]
uts:[4026531838]
# ps -p 1
  PID TTY          TIME CMD
    1 ?        00:02:39 systemd
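Rather than comparing the two readlink listings by eye, the shared namespaces can be computed: a namespace is shared exactly when both sides report the same inode. A sketch using the values captured in Listing 7:

```shell
#!/bin/sh
# Namespace inodes reported inside the container (from Listing 7)
container='ipc:[4026532776]
mnt:[4026532771]
net:[4026531969]
pid:[4026532777]
user:[4026531837]
uts:[4026532775]'

# ... and the inodes reported on the host
host='ipc:[4026531839]
mnt:[4026531840]
net:[4026531969]
pid:[4026531836]
user:[4026531837]
uts:[4026531838]'

# Entries appearing in both lists are the shared namespaces
shared=$(printf '%s\n%s\n' "$container" "$host" | sort | uniq -d)
echo "$shared"
```

For the captured values, only the net and user entries appear in both lists, matching the observation that nspawn shares the host network by default.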
You can define a container generated in this way as a system service and then log in to the container with:
machinectl start Fedora-Cloud-Base-22-20150521.x86_64
machinectl login
This action creates a new instance of the systemd-nspawn service; it is equivalent to starting the matching systemd-nspawn@.service instance with systemctl. The machinectl list command shows an overview of all active virtual machines and containers (Listing 8).
Listing 8: All Active VMs
# machinectl list
MACHINE                              CLASS     SERVICE
Fedora-Cloud-Base-22-20150521.x86_64 container nspawn
qemu-rhel-standalone_vagrant_rhel    vm        libvirt-qemu
Again, the help pages for systemd-nspawn, systemd-machined, and machinectl offer much useful information about these tools.
Conclusions
The systemd developers have added some very useful features to the current release, once again expanding the systemd universe. The two features presented here in particular will certainly be welcomed with open arms by a container-savvy audience.