networkd and nspawn in systemd
Startup Aid
Most new features in the current systemd version relate to the networkd and nspawn components. Both have been part of systemd for some time, but they have been given added features in the new version to make working with them far easier in some cases.
Setting Up Bridge Devices
The systemd-networkd network manager can now handle a variety of different network devices, and its feature set has been extended to provide better support for container environments. The following example shows how easy it is to use the daemon to set up, say, a bridge device. Networkd is not designed to replace the established Gnome Network Manager; instead, the daemon targets special environments, such as container hosts or embedded applications.
Listing 1 shows the preparations required to use the new network manager. For name resolution, it relies on the LLMNR (Link-Local Multicast Name Resolution)-capable stub resolver, which is also part of the systemd package.
Listing 1: Basic Setup
systemctl enable systemd-networkd
systemctl disable NetworkManager
systemctl enable systemd-resolved
cp /etc/resolv.conf /etc/resolv.conf.bak
ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
mkdir /etc/systemd/network
Listings 2 and 3 show the two configuration files for the normal Ethernet card (Listing 2) and the bridge device (Listing 3). The [Match] section decides which device a file configures; you can select it by device name or MAC address. The [Network] section then configures the device identified in this way. Listing 2 simply defines which network card to bind the bridge to; in Listing 3, the bridge device itself is then configured with its IP address and other parameters.
Listing 2: Network Configuration (1)
[Match]
Name=enp2s25

[Network]
Bridge=docker0
Listing 3: Network Configuration (2)
[Match]
Name=docker0

[Network]
DNS=192.168.100.1
Address=192.168.100.42/24
Gateway=192.168.100.1
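Matching by MAC address instead of the device name works the same way. As a sketch, Listing 2 could be rewritten as follows (the hardware address is a made-up example):

```
[Match]
MACAddress=00:16:3e:12:34:56

[Network]
Bridge=docker0
```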
Before the bridge device can be configured, you first need to make sure that it actually exists. To do so, you pass configuration files with the .netdev suffix to systemd. They are parsed as soon as the service starts, and the virtual network devices defined in them are created. Listing 4 shows the configuration for a bridge device. The typically good systemd documentation describes all the configuration options supported for these file types on the systemd.network and systemd.netdev help pages. If you then launch the networkctl tool, you will see a status overview of the network devices from the systemd network manager's point of view (Listing 5).
Listing 4: Bridge Configuration
[NetDev]
Name=docker0
Kind=bridge
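The .netdev format covers other virtual device kinds as well. As a hedged illustration (device name and VLAN ID are made up), a VLAN device would be defined like this; the parent interface additionally needs a matching VLAN=vlan10 entry in its .network file:

```
[NetDev]
Name=vlan10
Kind=vlan

[VLAN]
Id=10
```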
Listing 5: Newly Created Network Device
IDX LINK     TYPE     OPERATIONAL SETUP
  1 lo       loopback carrier     unmanaged
  2 enp2s25  ether    off         unmanaged
  3 enp3s25  ether    off         unmanaged
  4 enp4s25  ether    degraded    configured
  5 enp5s25  ether    off         unmanaged
  6 docker0  ether    routable    configured
  7 virbr0   ether    no-carrier  unmanaged
Remember that the DNS stub resolver set up previously now relies on the DNS server defined in the docker0.network configuration file. Systemd keeps this information in a temporary file named /run/systemd/resolve/resolv.conf. Because direct access to this file is not recommended, a matching symlink for the /etc/resolv.conf file was set up in Listing 1.
Container Manager
For some time, the systemd init framework has offered its own container service under the name systemd-nspawn. It is often jokingly referred to as "chroot on steroids," which hints at its kinship with the well-known chroot tool. The service is based on the container interface specification [1] and is capable of managing independent namespace containers. In contrast to Docker, the focus is not on application containers; instead, systemd-nspawn aims to let users boot a separate container quickly and without obstacles for debugging and testing system components. Security features such as those offered by current versions of Docker are not implemented by systemd-nspawn.
The systemd-machined.service container registration manager keeps track of containers and other virtual systems; the machinectl tool provides convenient access to it. To create a new container, you download a suitable image of the desired distribution in RAW, TAR, or Docker format and then launch a container based on it. Listing 6 shows how to download a Fedora 22 image and start a container from it.
Listing 6: Launch a Container
# machinectl pull-raw --verify=no http://ftp.halifax.rwth-aachen.de/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-22-20150521.x86_64.raw.xz
# systemd-nspawn -M Fedora-Cloud-Base-22-20150521.x86_64
Spawning container Fedora-Cloud-Base-22-20150521.x86_64 on /var/lib/machines/Fedora-Cloud-Base-22-20150521.x86_64.raw.
Press ^] three times within 1s to kill container.
[root@Fedora-Cloud-Base-22-20150521 ~]#
You can easily confirm that a container launched in this way uses its own namespaces by looking at the /proc/self/ns/ directory inside the container and on the host. With the exception of the user namespace, and the network namespace that systemd-nspawn shares with the host by default, the container really does use independent namespaces. For example, on the host, the systemd process hides behind PID 1 in the PID namespace, whereas PID 1 in the container is the Bash shell launched there (Listing 7).
Listing 7: Separate Namespaces for Container and Host
[root@Fedora-Cloud-Base-22-20150521 ~]# readlink /proc/self/ns/*
ipc:[4026532776]
mnt:[4026532771]
net:[4026531969]
pid:[4026532777]
user:[4026531837]
uts:[4026532775]
# ps -p 1
  PID TTY          TIME CMD
    1 ?        00:00:00 bash
# readlink /proc/self/ns/*
ipc:[4026531839]
mnt:[4026531840]
net:[4026531969]
pid:[4026531836]
user:[4026531837]
uts:[4026531838]
# ps -p 1
  PID TTY          TIME CMD
    1 ?        00:02:39 systemd
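The namespace comparison in Listing 7 can also be scripted. The following minimal Python sketch (assuming a Linux /proc filesystem) collects the namespace identifiers of a process the same way readlink does:

```python
import os

def ns_ids(pid="self"):
    """Map each namespace name (pid, net, ipc, ...) of a process
    to its identifier string, e.g. 'pid:[4026531836]'."""
    ns_dir = "/proc/{}/ns".format(pid)
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in os.listdir(ns_dir)}

# Running this once on the host and once inside the container shows
# different identifiers everywhere except the shared user namespace
# (and, with nspawn's defaults, the network namespace).
print(ns_ids())
```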
You can also run a container launched in this way as a system service: machinectl start Fedora-Cloud-Base-22-20150521.x86_64 generates a new instance of the systemd-nspawn service for the container, which is equivalent to issuing a systemctl start command for the matching service unit. You can then log in to the container using machinectl login. The machinectl list command shows an overview of all active containers and virtual machines (Listing 8).
Listing 8: Active Virtual Machines
# machinectl list
MACHINE                              CLASS     SERVICE
Fedora-Cloud-Base-22-20150521.x86_64 container nspawn
qemu-rhel-standalone_vagrant_rhel    vm        libvirt-qemu
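If the container should come up at every boot, machinectl can also enable it like any other service. A sketch of the workflow, using the image name from Listing 6:

```shell
# Start the container as a service and attach a login shell to it
machinectl start Fedora-Cloud-Base-22-20150521.x86_64
machinectl login Fedora-Cloud-Base-22-20150521.x86_64

# Have systemd start the container automatically at boot
machinectl enable Fedora-Cloud-Base-22-20150521.x86_64
```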
Conclusions
The systemd developers have added some very useful features to the current release and have thus continued to expand the systemd universe. The features introduced here will be particularly welcome to fans of containers. The help pages for systemd-nspawn, systemd-machined, and machinectl offer much additional useful information on this topic.