Author: antonym

Configuring bonding with systemd

With the release of systemd version 216, a number of bugs were fixed and configuration options were added related to bonding support in systemd-networkd.

Here's a basic example of setting up bonding with systemd-networkd:

/etc/systemd/network/bond0.network:

[Match]
Name=enp1*

[Network]
Bond=bond0

You can also specify the interfaces like enp1s0f[01] if you need to be more specific.

/etc/systemd/network/bond0.netdev:

[NetDev]
Name=bond0
Kind=bond

[Bond]
Mode=802.3ad
LACPTransmitRate=fast
MIIMonitorSec=1s
UpDelaySec=2s
DownDelaySec=8s

/etc/systemd/network/Management.network:

[Match]
Name=bond0

[Network]
Address=192.168.20.20/24
Gateway=192.168.20.1
DNS=8.8.8.8

Enable and start systemd-networkd:

systemctl enable systemd-networkd
systemctl start systemd-networkd
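
Once networkd is up, you can sanity-check that the bond assembled and enumerated its slaves by reading the kernel's bonding status file (this is the standard kernel interface, not anything specific to systemd):

cat /proc/net/bonding/bond0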

For more options, make sure to check out the systemd.network and systemd.netdev man pages.

Building and Booting Debian Live Over the Network

If you've ever downloaded a LiveCD, you know it's a self-contained distribution that can run in memory and usually serves some sort of purpose. It can act as the front end for an installer, run a series of tools like Kali Linux, or be used to access the internet anonymously. But let's face it, who uses CDs today? Today, I'm going to run through what Debian Live is, how to build it, and how you can potentially use it as an appliance you can boot over the network without having to worry about local storage.

You'll typically want to run the live-build utility on Debian. live-build is under active development, so there are newer builds in testing (jessie), but they can be unstable: the version in stable (wheezy) is currently 3.0.5-1, whereas the one in testing is 4.0~alpha36-1. You'll probably want to start out with the stable version.

What is Debian Live

Debian Live is a Debian operating system that does not require a classical installer to use it. It comes on various media, including CD-ROM, USB sticks, or via netboot. Because it's ephemeral, it configures itself on every boot. There are several phases it goes through at boot before init is started:

  • live-boot handles the initial startup and retrieval of the squashfs image. This runs within the initramfs.
  • live-config handles all the configuration of how the image will be booted, including the initial setup of networking. This runs from within the squashfs image.
  • Custom hooks are run near the end of live-config and allow you to manipulate the filesystem and take care of any custom setup. You can also key off custom kernel command line key/value pairs that you've put in place to trigger different actions (a small example hook appears after this list).

Once those phases are completed, init is started and the image boots as normal into the appropriate runlevel.
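
As a rough sketch of such a hook, keying off a made-up myrole= parameter on the kernel command line (the parameter name and the /lib/live/config/9999-custom install path are assumptions; adjust them for your live-config version):

#!/bin/sh
# Hypothetical boot-time hook, installed e.g. as /lib/live/config/9999-custom
# Pull a custom key=value pair off the kernel command line
for word in $(cat /proc/cmdline); do
    case "${word}" in
        myrole=*) ROLE="${word#myrole=}" ;;
    esac
done

# Act on the value, e.g. drop a marker file the rest of the system can read
echo "${ROLE:-default}" > /etc/myrole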

Building the Debian Live Image

To install live-build, run:

apt-get install live-build

This will install the lb binary used to generate the live image. Create and change into a directory that will hold your image.

mkdir debian-live
cd debian-live
lb init
lb config \
--distribution wheezy \
--architectures amd64 \
--binary-images netboot \
--debconf-frontend dialog \
--chroot-filesystem squashfs \
--parent-mirror-bootstrap http://mirrors.kernel.org/debian/ \
--parent-mirror-chroot-security http://mirrors.kernel.org/debian-security/ \
--parent-mirror-binary http://mirrors.kernel.org/debian/ \
--parent-mirror-binary-security http://mirrors.kernel.org/debian-security/ \
--mirror-bootstrap http://mirrors.kernel.org/debian/ \
--mirror-chroot-security http://mirrors.kernel.org/debian-security/ \
--mirror-binary http://mirrors.kernel.org/debian/ \
--mirror-binary-security http://mirrors.kernel.org/debian-security/ \
--archive-areas "main non-free contrib" \
--apt-options "--force-yes --yes" \
--bootappend-live "keyboard-layouts=en"

This will create a default directory structure for generating your live build. Most of those flags are optional, but they give you a good head start. With that base configuration, you should be able to run the following to start the build:

lb build

This will take a while, but once completed you'll have generated some files that you can use for your netbooted image. The files you'll need are here:

debian-live/tftpboot/live/vmlinuz
debian-live/tftpboot/live/initrd.img
debian-live/binary/live/filesystem.squashfs

Those three files are all you need to netboot a Debian Live image. The vmlinuz and initrd.img are loaded first, and then filesystem.squashfs is retrieved during the initrd.img boot phase.
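
If you just need somewhere to host those files for testing, any static HTTP server will do. Here's a minimal sketch using Python's built-in server; the /var/www/live layout matching the URLs used in the iPXE script below is an assumption:

mkdir -p /var/www/live
cp debian-live/tftpboot/live/vmlinuz /var/www/live/
cp debian-live/tftpboot/live/initrd.img /var/www/live/
cp debian-live/binary/live/filesystem.squashfs /var/www/live/
cd /var/www && python -m SimpleHTTPServer 80   # needs root to bind port 80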

Regenerating a New Image

If you need to regenerate the image, you'll need to clean up the previous build first.

To reset the directory but leave the package cache:

lb clean

To reset the directory and clear the cache:

lb clean --cache

To clean everything and just leave the config directory:

lb clean --all

Setting up iPXE

Using iPXE is probably the easiest way to load the new image. With the files hosted via HTTP, you can set up an iPXE script like the one below and add it to your iPXE menu:

#!ipxe
kernel http://mywebserver.com/live/vmlinuz
module http://mywebserver.com/live/initrd.img
imgargs vmlinuz boot=live config hooks=filesystem ${conkern} username=live noeject fetch=http://mywebserver.com/live/filesystem.squashfs
boot

It will create a default user named live with the password live.

In a future article, I'll talk about how you can further customize the live distribution. For now, make sure to check out the Debian Live manual that gives a great run down on customization.

Developing boot.rackspace.com

When I started down the path of building osimag.es, I realized that it could be really useful for others, especially in a cloud environment. Since my main focus has been working on Rackspace Cloud Servers for a number of years, I decided to see how feasible it would be to put together a menu-driven installer for any operating system in an Infrastructure as a Service type of environment. I figured there are probably a number of power users who might not want to start out with the default images provided, but would want the opportunity to create their own custom image from scratch.

Will it even work?

I started testing out the XenServer boot from ISO code in Openstack to see if someone might have already gotten that working for another use case. To my delight, the boot from ISO code worked out pretty well. I was able to upload the 1MB iPXE ISO into Glance and boot from that image type.

The next problem to solve was that Rackspace Cloud Servers assigns static IP addresses and does not currently run a DHCP service to hand out networking. iPXE usually works best with DHCP, since the network stack gets set up automatically. Because of this, a customer launching a cloud server could boot the iPXE image but would have to configure the instance's networking manually in order to chainload boot.rackspace.com.

We started thinking about how to automate this and, with the help of a few developers, came up with a solution. On boot, it retrieves an iPXE image, brings it down to the hypervisor, extracts the iPXE kernel, and regenerates the ISO with a new iPXE startup script that contains the networking information of the instance. When the instance is started, iPXE is then able to get on the network and load up boot.rackspace.com automatically. Once iPXE has those values, they can also be passed on the kernel command line for distributions that support network options, so the user doesn't have to enter any networking details during installation.

Hosting the Menu

Because boot.rackspace.com is just a bunch of iPXE scripts, they are hosted in a Cloud Files container. The domain is a CNAME to the container's URL, which is served from the Akamai CDN. The source is deployed from GitHub to the Cloud Files container by a Jenkins job when new commits are checked in. This makes it very lightweight and very scalable to run. The next thing I'll probably look at is whether I can remove the Jenkins server completely and run the deploy straight out of GitHub. I was also able to enable CDN logs on the container, and I'm using a service called Qloudstat to parse those logs and provide metrics on the usage of the scripts.

Delete those old ISOs
Having a small 1MB image is really nice for those times when you need to deploy an OS onto a remote server, or just need to install something into VirtualBox or VMware. There's really no point in storing tons of ISOs on your machine if you can just stream the packages you need.

What's Next?
I have a few ideas for new features. I'd like to add a menu of experimental items, and I'd also like the ability to generate a new version of the menu from a pull request so that changes can be quickly validated before being merged into the main code base. If you haven't tried out boot.rackspace.com yet, I encourage you to check it out. You can get a quick overview from my Rackspace blog post.

Citrix XenServer 6.1 Automated Installer for Openstack

I've put my Openstack XenServer 6.1 (Tampa) installer on GitHub here: https://github.com/amesserl/xs-tampa-openstack.

It has all of the modifications to get it running with the XenAPI Openstack Nova code and also includes the latest hotfixes. All you need to do is snag the latest CD and drop it in. I'll continue to publish repos for new versions as they come out (Clearwater should be released soon). You can also boot it from osimag.es as well.

Deploying XenServer with Puppet Razor

If you've never heard of Puppet Razor, it's a useful tool for provisioning bare-metal servers. Servers netboot a microkernel that lets them be inventoried and remotely controlled by the Razor server, so they can be provisioned automatically. You can follow the instructions on how to install it here: https://github.com/puppetlabs/Razor/wiki/Installation

If you're looking to install XenServer via Razor, here's a quick guide on doing so. After setting up your Razor server, you'll want to snag the latest Citrix XenServer ISO. They have a free download that provides full functionality.

First we'll add the ISO to the image library:

[root@razor ~]# razor image add -t xenserver -p /root/XenServer-6.1-install-cd.iso -n xenserver_tampa -v 6.1.0
 
Attempting to add, please wait…
 
New image added successfully
 
Added Image:
UUID => 1xE9oYTH0Zc9FqgPBItVuO
Type => XenServer Hypervisor Install
ISO Filename => XenServer-6.1-install-cd.iso
Path => /opt/razor/image/xenserver/1xE9oYTH0Zc9FqgPBItVuO
Status => Valid
Version => 6.1.0

Then we'll create a model that holds all of the generic settings we want to apply to each node. It also lets us define the range of IPs to assign to the servers being provisioned.

[root@razor ~]# razor model add -t xenserver_tampa -l xenserver_tampa_example -i 1xE9oYTH0Zc9FqgPBItVuO
--- Building Model (xenserver_tampa):
 
Please enter IP Subnet (example: 255.255.255.0)
default: 255.255.255.0
(QUIT to cancel)
 > 255.255.255.0
Please enter NTP server for node (example: ntp.razor.example.local)
(QUIT to cancel)
 > pool.ntp.org
Please enter Gateway for node (example: 192.168.1.1)
(QUIT to cancel)
 > 192.168.1.1.
Value (192.168.1.1.) is invalid
Please enter Gateway for node (example: 192.168.1.1)
(QUIT to cancel)
 > 192.168.1.1
Please enter IP Network for hosts (example: 192.168.10)
(QUIT to cancel)
 > 192.168.1
Please enter root password (> 8 characters) (example: P@ssword!)
default: test1234
(QUIT to cancel)
 > test1234
Please enter Starting IP address (1-254) (example: 1)
(QUIT to cancel)
 > 5
Please enter Nameserver for node (example: 192.168.10.10)
(QUIT to cancel)
 > 4.4.4.4
Please enter Ending IP address (2-255) (example: 50)
(QUIT to cancel)
 > 20
Please enter Prefix for naming node (example: xs-node)
(QUIT to cancel)
 > xs-node
Model created
 Label =>  xenserver_tampa_example
 Template =>  xenserver_hypervisor
 Description =>  Citrix XenServer 6.1 (tampa) Deployment
 UUID =>  3BYDqVasC4g0GFikNJYCdA
 Image UUID =>  1xE9oYTH0Zc9FqgPBItVuO

Once the model has been created, now would be a good time to boot up your node into the microkernel and have it discover the hardware if you haven't already. If you type:

[root@razor ~]# razor node
Discovered Nodes
         UUID           Last Checkin  Status                 Tags
6z2AodeElhhYdd3yR7juE0  33 sec        A       [nics_3,HP,cpus_4,memsize_32GiB]

you can see the servers that have booted up and reported in to Razor. If you want to take a closer look at the attributes of a server, you can run:

razor node get 6z2AodeElhhYdd3yR7juE0 -f attrib

which will show you all of the Facter attributes of that node. Now you'll want to add that node to a policy so that Razor can start provisioning it. You can use the tags shown above to apply the policy to nodes of that type.

[root@razor ~]# razor policy add --template xenserver_hypervisor --label tampa --model-uuid 3BYDqVasC4g0GFikNJYCdA --tags nics_3,HP,cpus_4,memsize_32GiB --enabled true
Policy created
 UUID =>  5luw9q3cPYhruPgCILRFOe
 Line Number =>  3
 Label =>  tampa
 Enabled =>  true
 Template =>  xenserver_hypervisor
 Description =>  Policy for deploying a XenServer hypervisor.
 Tags =>  [nics_3, HP, cpus_4, memsize_32GiB]
 Model Label =>  xenserver_tampa_example
 Broker Target =>  none
 Currently Bound =>  0
 Maximum Bound =>  0
 Bound Counter =>  0

Once you've added the policy and enabled it, any server that's reported in should pick up the new policy and start the provisioning process. Razor will tell the microkernel to reboot and start provisioning on the next netboot. If you need to make additional changes to the installation, you can modify this file for the first phase of the install:

/opt/razor/lib/project_razor/model/xenserver/tampa/postinstall.erb

and this file for any changes you want to occur on the second phase of the installation, or firstboot:

/opt/razor/lib/project_razor/model/xenserver/tampa/os_boot.erb

Note: This applies to Citrix XenServer 6.1 (Tampa)

XenServer Auto Patcher

I put together a little script that might come in handy to get Citrix XenServer fully up to date after doing a factory install. You can find it here:

https://github.com/amesserl/xs_patcher

It will detect the version of XenServer you are running and install all of the available Citrix XenServer hotfixes in sequential order. It will also detect any previously applied patches and install anything that isn't present. If you do not have the hotfixes on the machine, it will retrieve them for you. After running the script, all you need to do is reboot so it picks up the latest kernel.

To install the hotfixes automatically during an install, you'll want to put the patcher script on the disk with its cache pre-populated with all of the patches, so the script doesn't have to retrieve them each time. It's usually best to put the script in place during the post-install phase, but you won't want to run it then, because XAPI, which the hotfixes require, isn't up and running yet. Instead, install a script into /etc/firstboot.d with a starting number higher than all the other processes that run during firstboot. Once the initial firstboot has run, which sets up XenServer and all of its storage repositories, it can kick off the xs_patcher.sh script to install all of the needed hotfixes. I usually have one more call to reboot occur after that.
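
As a rough sketch (the /etc/firstboot.d/99-patch filename, its ordering number, and the /root/xs_patcher path are assumptions; adjust them to wherever you staged the script and its cache):

#!/bin/sh
# Hypothetical /etc/firstboot.d/99-patch -- numbered to run after XenServer's own
# firstboot scripts, once XAPI and the storage repositories are up
/root/xs_patcher/xs_patcher.sh   # applies the hotfixes from the pre-populated cache
reboot                           # pick up the patched kernel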

I'll try and maintain the script going forward as new hotfixes are released by Citrix. Currently it supports Boston, Sanibel, and Tampa. I'll probably go back and grab earlier versions as well in the future as I have time.

Fixing the Built-In VPN Client in Snow Leopard

Occasionally I'll receive this error when trying to connect to a VPN using the Mac's built-in Cisco VPN client.

VPN Connection
A configuration error occurred. Verify your settings and try reconnecting.

To fix:

reaction:~ user$ ps -ef | grep racoon
    0   265     1   0   0:00.22 ??         0:00.34 /usr/sbin/racoon
  501   339   335   0   0:00.00 ttys001    0:00.00 grep racoon
reaction:~ user$ sudo kill -9 265
Password:
reaction:~ user$ ps -ef | grep racoon
  501   345   335   0   0:00.00 ttys001    0:00.00 grep racoon
reaction:~ user$ sudo /usr/sbin/racoon
reaction:~ user$ ps -ef | grep racoon
    0   347     1   0   0:00.00 ??         0:00.01 /usr/sbin/racoon -x
  501   349   335   0   0:00.00 ttys001    0:00.00 grep racoon

Then try reconnecting to the VPN. That'll save you from having to reboot just to fix it.
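
If you hit this regularly, the same fix condenses into a one-liner (a sketch, assuming racoon is running from the stock path as shown above):

sudo killall -9 racoon && sudo /usr/sbin/racoon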

The Dreaded Flipping of NICs

I recently had a problem with NICs flipping around after removing all traces of MAC address rules from a server. I did this because I wanted the flexibility to swap machines around at any point without having to track the MAC addresses of all the devices. The gear was identical in specification, and after doing some research I ran across a solution that has worked really well so far. It involves creating udev rules that don't contain any MAC addresses, but instead match on the device's PCI bus location. By keying on that, you can guarantee the correct Ethernet device is always assigned to the correct physical network, and the rules become much more generic. As an example, first identify the devices (this example is from an HP ProLiant DL385):

lspci | grep -i eth
04:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet (rev 12)
42:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet (rev 12)

We'll take the first line for this example and break it down. The first group of numbers is the bus number (04), device number (00), and function number (0). From here we can generate our udev rules file. Create /etc/udev/rules.d/70-persistent-net.rules and enter the following (adjusted for your own setup):

SUBSYSTEM=="net",ACTION=="add",BUS=="pci",KERNEL=="eth*",ID=="0000:04:00.0",NAME="eth0"
SUBSYSTEM=="net",ACTION=="add",BUS=="pci",KERNEL=="eth*",ID=="0000:42:00.0",NAME="eth1"

Once that's in place, you should be able to reboot without worrying about the NICs flipping around. If you're curious, you can also view more device information by looking in /sys:

ls -la /sys/bus/pci/devices/0000:04:00.0
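
To go the other direction and map an existing interface name back to its PCI address, either of these works (sketches; eth0 is just an example name):

readlink /sys/class/net/eth0/device
ethtool -i eth0 | grep bus-info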

I've had success with this in Citrix XenServer (dom0 is based on CentOS) and Debian.