 

Automate your VMware configuration with Puppet

Steer the Sphere

In ESXi environments, the powerful Puppet configuration management tool can ply its trade and roll out VMs automatically. A test environment shows the benefits of Puppet in conjunction with vSphere. By Tam Hanna

In addition to supporting public cloud platforms such as Amazon EC2 and Google GCE, Puppet lends itself to use with VMware vSphere. A private cloud created with Puppet and vSphere is superior to public clouds in two respects: In addition to a far lower total cost of ownership given 24/7 use, you also benefit from the increased data security of local hosting. In this article, I target IT managers with Puppet experience who thus far have not looked into the use of the product's cloud service modules.

Test Infrastructure

Establishing a complex VMware infrastructure requires huge hardware investments, so instead, I used VMware Workstation as the basis of my setup. To follow the process, though, you need a workstation with at least eight cores, 16GB of RAM, and 100GB of free disk space. The host operating system is Windows 8 for simplicity's sake – a 64-bit version is mandatory – and VMware Workstation should be installed as usual.

Puppet networks can be controlled by a master, which must be a Linux system. The first step is thus to set up a virtual machine with Ubuntu. Be frugal with the hardware resources; ESX just loves burning up the remaining computing power for virtual machines and the vCenter appliance.

In the next step, log in to your Puppet master system and download the tarball with Puppet Enterprise [1]. After starting the installation wizard, you can accept all the defaults, because you are generating a test installation only.
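
If you work on the command line, the steps up to launching the wizard look roughly as follows – a sketch, assuming the tarball name matches the package version referenced later in this article:

# Extract the Puppet Enterprise tarball (name assumed to match the version used below)
tar -xzf puppet-enterprise-3.8.1-ubuntu-14.04-amd64.tar.gz
cd puppet-enterprise-3.8.1-ubuntu-14.04-amd64
# Start the interactive installer and accept the defaults
sudo ./puppet-enterprise-installer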

The installation process keeps the Ubuntu host busy for quite a while: Even though Puppet Enterprise only requires 3GB of RAM and two processor cores, the installation takes two to three hours on a machine equipped that frugally. If you see a warning regarding the MCollective package after the installation, you have to install that package manually. Because of a known bug, you also must install the STOMP library before the actual deployment can take place:

sudo apt-get install ruby-stomp
sudo apt-get install mcollective
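
If you want to make sure both packages really landed on the system before proceeding, dpkg can confirm their status (a quick, optional check):

# Optional check: confirm that both packages are installed
dpkg -s ruby-stomp mcollective | grep -E '^(Package|Status)'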

Installing the master does not automatically enable Puppet Enterprise to interact with the cloud: For some time now, the setup routine has not installed the necessary packages. To resolve this issue, open a terminal and install the missing components manually:

cd /puppet-enterprise-3.8.1-ubuntu-14.04-amd64/packages/ubuntu-14.04-amd64
sudo dpkg -i pe-cloud-provisioner-libs_0.3.2-1puppet1_amd64.deb
sudo dpkg -i pe-cloud-provisioner_1.2.0-1puppet1_all.deb
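
A quick way to confirm that the cloud provisioner registered correctly is to check whether its subcommands now show up in Puppet's help output (an optional sanity check):

# The node_VMware face should now be listed among the available subcommands
puppet help | grep -i node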

vSphere Setup

With the Puppet master in place, it's time to let ESX out of its cage. Download the VMware vSphere image [2] and configure a new virtual machine. Successful use of ESX requires pass-through virtualization (i.e., the host's VT-x/AMD-V extensions exposed to the guest), at least two processor cores, and at least 2GB of RAM. For the vCenter server, which handles communication between Puppet and ESX, you need 10GB of RAM; this can be reduced later if necessary.

The details of the ESX installation are not critical; simply follow the instructions on the screen. The only difficulty is the somewhat unusual keyboard handling: Some dialogs can only be cleared off the screen by pressing a function key.

Once the two basic components are on your system, you need to wire the Puppet master to the ESX host that is responsible for managing the virtual machines. Communication between Puppet and ESX runs through a vCenter server. Because installing a standalone vCenter server requires a Windows Server operating system, you deploy the administration instance to the ESX server instead. To do this, download the VMware-VCSA-all ISO image with the vCenter Server Appliance [3], which you can mount on the Windows host by double-clicking.

In the next step, you simply navigate to the VCSA subfolder and install the VMware Client Integration Plugin you find there. Then, open the vcsa-setup.html file in a browser of your choice. I used Chrome, which proved very cooperative in this scenario. If you see security prompts, select the Launch Application option and press Install on the web page that then appears.

After you agree to the EULA, the installation wizard prompts you, among other things, for the IP address and the password of the ESXi server intended to act as the host. Although P@ssw0rd met all the conditions imposed by the validator, the installation wizard grumbled about it but accepted it anyway. The Embedded Platform Services Controller serves as the deployment type, and the new SSO domain goes by the name vsphere.local. Use P@ssw0rd for the administrator account, too.

Because the installation wizard requires 8GB of RAM on the ESX host even if you choose an appliance of the Tiny type, you should stop all the VMs and expand the ESX instance's RAM to 10GB at this point. During the actual deployment, make sure you also select the Enable Thin Disk Mode option, because the virtual vSphere instance otherwise allocates all of its disk space right from the outset. Use the ESXi host for time synchronization, and click Finish to start the deployment.

Once the deployment completes successfully, the VM is available at the URL shown on the Installation Complete page; in my case, this was https://192.168.121.136/vsphere-client. To log in, use the username administrator@vsphere.local.

Unfortunately, your instance knows nothing about its host; therefore, you need a new data center as the first step. To do this, go to the vCenter Inventory Lists | Datacenters section and create a new entry by clicking the plus icon. In the documentation, the Puppet developers explicitly point out that the data center must reside at the top level of the hierarchy, which means that the vCenter server must be the parent object.

The host can move in during the next step. Click on vCenter Inventory Lists | Hosts to open the dialog box where you can enter a new host by clicking the plus icon. Enter the IP address of the ESXi host and add it to the newly created data center. After a few seconds of computing time, the VM generated by the installation wizard appears in the vSphere web client.

Puppet Configuration

The Puppet Cloud Provisioner module, as of version 3.8, is based on the fog library [4]. Its configuration resides in the home directory of the user responsible for provisioning; to create it, you need to type touch ~/.fog. Fog accepts a variety of different parameters. For VMware, it is sufficient to add the following five lines to the .fog file using Gedit or another editor:

:default:
  :vsphere_server: 192.168.121.136
  :vsphere_username: administrator@vsphere.local
  :vsphere_password: P@ssw0rd
  :vsphere_expected_pubkey_hash: XXX

The statement starting with :default: stipulates that the subsequent configuration is used as the default. If you want to connect your Puppet master instance to multiple cloud servers, you can create several named configurations. To switch between them, prefix the command with the FOG_CREDENTIAL variable:

FOG_CREDENTIAL=default puppet node_VMware <optional commands>
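
For illustration, a second credential set appended below the :default: block in ~/.fog might look like this (the lab2 name and the server address are hypothetical):

:lab2:
  :vsphere_server: 192.168.121.200    # hypothetical second vCenter server
  :vsphere_username: administrator@vsphere.local
  :vsphere_password: P@ssw0rd
  :vsphere_expected_pubkey_hash: XXX

A command such as FOG_CREDENTIAL=lab2 puppet node_VMware list would then address the second vCenter server instead of the default.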

Hashes are used for identification. Unfortunately, there is no convenient way to talk a vCenter server into divulging its hash up front. For this reason, I went for a workaround: I set XXX as the hash and ran an idempotent command against the server (Listing 1). Puppet responded with an error message indicating that the hashes did not match. Copying the hash supplied by the server into the .fog file then completed the wiring between Puppet and ESX. For more information on the Warning message in Listing 1, see the "Deprecated" box.

Listing 1: Getting Hash to FOG File

puppet node_VMware list
Warning: Cloud Provisioner is deprecated in PE 3.8.
         For more information and recommendations,
         see the release notes documentation here:
         https://docs.puppetlabs.com/pe/3.8/release_notes.html
Notice: Connecting ...
Error: The remote system presented a public key with hash
       31452e1f896f71542b6b9198188de1b5e59f5af62ffcefdc261df324636c90c7
       but we're expecting a hash of XXX. If you are sure
       the remote system is authentic set
       vsphere_expected_pubkey_hash: <the hash printed
       in this message> in ~/.fog
Error: Try 'puppet help node_VMware list' for usage

Using Templates with Puppet

Templates let you manage your virtual machines. Working with dedicated virtualization systems differs from desktop virtualization in that parameterization of the individual VMs relies on templates. Creating your own templates for VMware is a science in itself, which is beyond the scope of this article; to make the task a little easier for newcomers, I will touch briefly on the possible approaches. Option 1, and the easiest way, is to create and configure a new VM in ESXi, which you then convert into a template in vCenter. Conversion destroys the original VM, whereas cloning preserves it. Option 2 is to use a virtual machine created in VMware Workstation; a converter [6] can help here, but it is a little quirky to use. Option 3, which I employ here, is to use a prebuilt appliance.

VMware offers various prebuilt virtual machines [7] on the Solution Exchange website. The comparatively small Ubuntu JeOS [8] (80MB) is fine for this example. To begin, extract the archive to a folder of your choice, and in the vSphere web client, go to the VMs and Templates section.

Then click Actions | Deploy OVF Template to start the wizard for uploading the OVF container. Select the OVF file and click your way through the settings. As with the vCenter appliance, make sure here that the disk uses the thin provision format.

The deployment then shows up in the task list, and the vSphere web client uploads the disk image from your computer automatically. Unfortunately, the construct created from the OVF file is still a classic virtual machine at this point, so you need to fix this by selecting Actions | Template | Convert to Template.

After completing this step, check that the template was created successfully. You can do this directly in Puppet – Listing 2 shows sample output that lists the vCenter VM, as well as the newly created template. Templates are lifeless blueprints of virtual machines that need to be deployed for use, which the following Puppet command does:

puppet node_VMware create --template \
  /Datacenters/Datacenter/vm/ub1404lts --vmname NewVM --wait-for-boot

Listing 2: Check Template Conversion

puppet node_VMware list
Warning: Cloud Provisioner is deprecated in PE 3.8.
         For more information and recommendations,
         see the release notes documentation here:
         https://docs.puppetlabs.com/pe/3.8/release_notes.html
Notice: Connecting ...
Notice: Connected to 192.168.121.136 as administrator@vsphere.local (API version 4.1)
Notice: Finding all Virtual Machines ... (Started at 09:52:09 AM)
Notice: Control will be returned to you in 10 minutes
        at 10:02 AM if locating is unfinished.
Locating: 100%     |oooooooooooooooooooooooooooooooooooooo|    Time: 00:00:00
Notice: Complete
/Datacenters/Datacenter/vm/MyVCenter
    powerstate: poweredOn
    name: MyVCenter
    hostname: localhost
    InstanceID: 527fb2f6-5877-6ea8-d04b-054660d97131
    ipaddress: 192.168.121.136
    template: false
/Datacenters/Datacenter/vm/ub1404lts
    powerstate: poweredOff
    name: ub1404lts
    hostname: --------
    InstanceID: 503c08ca-e63e-60de-f04f-5fe543d07e0a
    ipaddress: ---.---.---.---
    template: true

Keep in mind that the progress bar displayed while the command is processed is pure guesswork: Puppet does not receive any feedback from ESXi and thus assumes a worst-case running time. Puppet also fails to detect the boot process of the extremely lean Ubuntu, because the VM does not advertise its presence adequately on the network. This is irrelevant here; if the boot detection fails, you can use the puppet node_VMware list command again to make sure the new VM has appeared. Alternatively, you can log in to the new VM in the vSphere web console as root/root.

Virtual machines can be started, switched off, and terminated using Puppet. The commands expect the path of the VM in question:

puppet node_VMware start /Datacenters/Datacenter/vm/NewVM
puppet node_VMware stop /Datacenters/Datacenter/vm/NewVM
puppet node_VMware terminate /Datacenters/Datacenter/vm/NewVM

Note that terminate kills the virtual machine and wipes its image from the ESXi instance's datastore. This operation is – obviously – irreversible.

Final Adjustments

At this point, your work is done: The virtual machine created by Puppet is a normal Linux system, which is waiting for Puppet clients to be installed and configuration options to be assigned.

Smart administrators rely on shell scripts or management applications written in high-level languages from here on out. The new VM is easiest to identify if you assign it a randomly generated name. Using the IP address output by list, you can then roll out the Puppet agent to the VM.
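
As a minimal sketch of this approach – using only the commands shown above, with the template path from earlier and a naming scheme of my own invention – the script could look like this:

# Generate a pseudo-random VM name so the new machine is easy to spot
# (the "puppetnode-" prefix is just an example)
VMNAME="puppetnode-$RANDOM"

# Deploy the template from earlier in this article under that name
puppet node_VMware create --template \
  /Datacenters/Datacenter/vm/ub1404lts --vmname "$VMNAME" --wait-for-boot

# Read the IP address off the list output (field layout as in Listing 2)
puppet node_VMware list | grep -A 6 "$VMNAME"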

For the sake of completeness, note that there is a risk of overcommitment here. If you run ESXi on a machine with 10GB of RAM, you cannot offer your flock of VMs a total of 12GB or even 16GB of RAM without expecting problems: In practice, the overcommitted VMs force the host to start swapping, resulting in a loss of performance.

Conclusions

The ESX solution presented here has its appeal; unfortunately, an ESX installation is limited to the resources available on the ESX host. For eCommerce and similar applications, this behavior is risky: Sudden traffic spikes can bring huge increases in orders that the fixed local resources cannot handle, which causes major damage to your reputation if not managed prudently. Fortunately, Puppet can also link up with classical cloud providers that offer their customers almost unlimited computing resources – if you are willing and able to pay the price. Managing Puppet operations with Google Compute Engine (GCE) or AWS is, on the whole, very similar to VMware, but be sure to set an upper cost threshold: A Puppet script that runs wild can cause immense financial damage on Amazon, GCE, or similar services, because you pay for resources by the minute.

Attempting to cover the base load with cloud-hosted virtual machines is an economic mistake: Cloud providers can never compete with local hardware, because the lease rates for their instances have to factor in the cost of keeping capacity in reserve. Amazon EC2 and GCE are, however, perfectly suited to covering short-term load peaks. VMware is certainly not cheap, and it only plays to its strengths if a company needs to support multiple clients at the same time. In that case, peak loads are covered without additional cost simply by assigning extra resources on the in-house server.