Live migration in the cloud can be useful at times, as it minimizes downtime during maintenance and lets you move instances off overloaded compute nodes. A little while back I set up a devstack cluster with a shared NFS filesystem to perform live migration of OpenStack instances from one compute node to another using KVM hypervisors.
When using the Brocade VCS plugin for OpenStack Neutron, the tenant network VLAN configuration is automatically updated in the physical network when a new instance is created and also when it is moved to another compute node. This enables live migration without needing to make any changes to the network.
In this writeup I describe the process of reconfiguring an existing 2-node Icehouse devstack deployment to support shared storage-based live migration of OpenStack instances using an NFS server. If you don't already have a working devstack setup, take a look at this post using Ubuntu.
NFS Server Configuration
I built a simple NFS server using Ubuntu. Install the software package and prepare a directory to export.
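Something along these lines should do it on Ubuntu; the export path /srv/nova/instances is my choice here, so adjust it to taste.

```shell
# Install the NFS server and create a directory to export.
# The path /srv/nova/instances is an assumption -- use whatever you like.
sudo apt-get update
sudo apt-get install -y nfs-kernel-server
sudo mkdir -p /srv/nova/instances
# Loose permissions keep things simple in a lab; tighten for anything real.
sudo chmod 777 /srv/nova/instances
```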
Add an entry like the one below to /etc/exports and then export the directory via sudo exportfs -ra.
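For example (the export path and client subnet are assumptions; substitute your own):

```
/srv/nova/instances 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
```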
OpenStack Node Configuration
Each of the devstack nodes will be an NFS client. Set up a directory and mount the remote filesystem.
Optionally add the mount point to your /etc/fstab so it persists across reboots.
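On each devstack node, something like the following, where 192.168.1.10 stands in for the NFS server's address and /opt/stack/data/nova/instances is devstack's default instances path:

```shell
# 192.168.1.10 is a placeholder for your NFS server's IP
sudo mkdir -p /opt/stack/data/nova/instances
sudo mount -t nfs 192.168.1.10:/srv/nova/instances /opt/stack/data/nova/instances
# Optional /etc/fstab entry so the mount survives a reboot:
# 192.168.1.10:/srv/nova/instances /opt/stack/data/nova/instances nfs defaults 0 0
```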
Several changes to libvirt are needed to enable migration. Edit /etc/libvirt/libvirtd.conf to include the following:
– listen_tls = 0
– listen_tcp = 1
– auth_tcp = "none"
Edit the libvirtd options in /etc/default/libvirt-bin so the daemon listens over TCP:
– libvirtd_opts = "-d -l"
Restart libvirt
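On Icehouse-era Ubuntu the service is named libvirt-bin:

```shell
sudo service libvirt-bin restart
```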
The final step is to make sure the VNC server listens on all interfaces and that the path holding the Nova instance files points at the mounted NFS directory. This can be done by adding the following lines to devstack's local.conf.
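A sketch of the additions, assuming devstack's stock VNCSERVER_LISTEN variable and default instances path:

```
VNCSERVER_LISTEN=0.0.0.0

[[post-config|$NOVA_CONF]]
[DEFAULT]
instances_path = /opt/stack/data/nova/instances
```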
That should be it. Run stack.sh and make sure everything comes up properly.
Testing things out
Launch an instance in the cloud using Horizon or the CLI. You can check which compute node the instance lives on using nova commands (currently it resides on icehouse1)
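For example, where "demo-vm" is a placeholder name for the instance:

```shell
nova list
# the hypervisor_hostname field shows the hosting compute node
nova show demo-vm | grep hypervisor_hostname
```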
If you take a look at the VCS fabric, you can find the physical port for the compute node hosting this instance based on its MAC address (in my case port Gi 5/0/7).
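On the fabric, look the instance's MAC address (available from nova show or Horizon) up in the switch's MAC address table; the switch prompt below is illustrative:

```
sw0# show mac-address-table
```

The entry for the instance's MAC lists the interface it was learned on (here Gi 5/0/7) along with its VLAN.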
You’ll notice it belongs to openstack-profile-901. If you examine the configuration for this port-profile, you can see the VLAN association. Any instances in this particular tenant network will be carried on VLAN 901 as the traffic traverses the VCS fabric.
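An illustrative look at the profile's configuration (the exact command syntax may vary by NOS release; the VLAN association is the part to note):

```
sw0# show running-config port-profile openstack-profile-901
port-profile openstack-profile-901
 vlan-profile
  switchport
  switchport mode trunk
  switchport trunk allowed vlan add 901
```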
Run the nova live-migration command to move the VM to another compute node. I ran a continuous ping from the instance to another server to see if any packets were dropped during the migration.
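Here "demo-vm" and "icehouse2" are placeholders for the instance name and the destination compute node:

```shell
# trigger the migration; omit the host to let the scheduler pick one
nova live-migration demo-vm icehouse2
```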
Looking at the VCS fabric again after a few moments, you should see that the instance has moved to another OpenStack compute node (it's now on port Gi 6/0/7).
Running nova show confirms the migration has taken place. If you check the instance, you should see that no pings were lost during the move.
Congratulations, you have successfully performed a live migration of an OpenStack instance with zero downtime ;)