When a Linux VPS that was previously provisioned on our Xen infrastructure is migrated to KVM, three changes affect the server:
- The driver required for the virtual disk changes from xenblk to virtio_blk
- The driver required for the virtual network interface changes from xennet to virtio_net
- The disk device naming scheme changes from xvda to vda
The name of the network device - eth0 - and the assigned MAC address remain the same on both platforms.
During the migration process, Mammoth will automatically apply the configuration changes required for your Linux VPS to boot on the new KVM infrastructure. To simplify the migration, we have strived wherever possible to preserve the structure your Xen server is already using, even where this differs from what would be utilised for a fresh server.
This document details the precise changes that will be made during the migration process. Most customers will not need this information, but it may prove useful in understanding exactly what happens during the migration.
Disk layout
Over the history of Mammoth Cloud, three different drive configurations have been used:
- Our earliest servers (circa 2010-11) had a small swap disk at /dev/xvda1 and the root disk at /dev/xvda2. Notably, there was no /dev/xvda: despite the names, these devices were whole disks, not partitions.
- Later, the root disk was moved to /dev/xvda and the swap to /dev/xvds. The root disk was not partitioned, i.e. there was no /dev/xvda1.
- Since around 2013, all servers were provided with a disk /dev/xvda that contained a single partition /dev/xvda1 which held the root filesystem.
You can check which style your current VPS uses by running ls /dev/xvd*
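For illustration, the check can be sketched as a small POSIX shell function (hypothetical, not part of our migration tooling) that classifies the layout from the list of device names:

```shell
# Classify the historical Mammoth disk layout from a list of xvd* device
# names (illustrative sketch only; comparing the ls output by hand works too).
classify_layout() {
    devices=" $1 "   # e.g. the space-separated output of: echo /dev/xvd*
    case "$devices" in
        *" /dev/xvda2 "*) echo "style 1: swap on xvda1, root on xvda2" ;;
        *" /dev/xvda1 "*) echo "style 3: root on partition xvda1" ;;
        *" /dev/xvda "*)  echo "style 2: unpartitioned root on xvda" ;;
        *)                echo "unknown layout" ;;
    esac
}

classify_layout "/dev/xvda /dev/xvda1"   # prints: style 3: root on partition xvda1
```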
KVM does not support the arbitrary device naming of style #1, and does not allow skipping device letters as used in style #2. For this reason, following migration the layout will be as follows:
- For customers who currently have root on /dev/xvda2, this will become /dev/vda
- For customers who currently have root on /dev/xvda (unpartitioned), this will become /dev/vda and will remain unpartitioned.
- For customers who currently have root on /dev/xvda1 (only partition of /dev/xvda), this will become /dev/vda1 and will remain partitioned.
In line with our current recommendations, the dedicated swap disk will be removed.
Booting / Grub
Similarly to disk layout, three different styles of VPS booting have been employed over our history:
- Our earliest servers utilised Mammoth-maintained kernels that were pre-compiled with the Xen drivers. This style of server does not have grub installed and typically will have a completely empty /boot directory.
- Later, we started providing pv-grub, a mechanism to boot from a Grub1 configuration file. Amazon utilise this mechanism, so it was widely supported. mPanel refers to this as "distribution kernel".
- Since around 2013, servers have "real" Grub installed into the MBR of a partitioned disk and boot in the same way a physical computer would. mPanel refers to this as "full virtualisation".
You can determine which mechanism your VPS is currently using by looking at the Server Kernel displayed at the bottom of https://www.mammoth.com.au/mpanel/manage
All three mechanisms are still supported on our KVM infrastructure. During the migration we will make the following changes to facilitate this:
- For servers using Mammoth-maintained kernels, we have recompiled these kernels with the KVM drivers. The new kernel will automatically be used when booting on KVM.
- For servers using "distribution kernel", we provide an equivalent to Xen's pv-grub that will continue to boot from your /boot/grub/menu.lst file. During migration this configuration file will be modified to use the correct KVM root device - i.e. root=/dev/vda or root=/dev/vda1 as required by your disk layout.
- For servers using "full virtualisation", we will modify your Grub1 or Grub2 installation with the KVM root device name.
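For the "distribution kernel" and "full virtualisation" cases, the change amounts to rewriting the root= kernel parameter in the boot configuration. The sketch below shows the idea against a Grub1 menu.lst; the file contents and kernel versions are illustrative examples, and the migration performs this edit for you:

```shell
# Build an example Grub1 menu.lst in a temporary file (illustrative contents).
menu_lst=$(mktemp)
cat > "$menu_lst" <<'EOF'
title   Debian GNU/Linux
    kernel /boot/vmlinuz-3.2.0-4-amd64 root=/dev/xvda1 ro quiet
    initrd /boot/initrd.img-3.2.0-4-amd64
EOF

# xvda1 -> vda1; the same rule maps an unpartitioned xvda -> vda.
sed -i 's|root=/dev/xvda|root=/dev/vda|' "$menu_lst"
grep 'root=' "$menu_lst"   # the kernel line now reads root=/dev/vda1
```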
Kernel + initrd
The VPS kernels in use can be broken down into three generations:
- The earliest servers utilise Mammoth-maintained kernels.
- The second generation of servers have Xen-specific kernel packages, with names like kernel-xen or linux-image-xen. As the names suggest, these packages are built specifically for use on Xen infrastructure and do not boot on KVM.
- Later, the Xen drivers were added to the "mainline" kernel and are available within the standard kernel or linux-image distribution package.
You can determine which kernel you are using by looking at the output of the command uname -r . If the output contains "mammoth", then you are running a Mammoth-maintained kernel. If the output contains "xen", this indicates a Xen-specific kernel package. If you see neither "mammoth" nor "xen", you have a mainline kernel.
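That check can be written as a short shell function (a sketch only; the version strings used below are made-up examples):

```shell
# Classify the kernel generation from a `uname -r` string.
kernel_generation() {
    case "$1" in
        *mammoth*) echo "Mammoth-maintained kernel" ;;
        *xen*)     echo "Xen-specific kernel package" ;;
        *)         echo "mainline kernel" ;;
    esac
}

kernel_generation "$(uname -r)"
```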
During the migration we will make changes as follows:
- For customers on Mammoth-maintained kernels, we have recompiled these kernels with KVM drivers and no changes are necessary within the VPS itself.
- For customers using a Xen-specific kernel package, we will install the latest package (.rpm or .deb) provided by your distribution for the specific version you are running.
- For customers using a mainline kernel package, we will rebuild your initrd / initramfs files to include the virtio drivers required to boot.
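For the mainline-kernel case, the rebuild is done with the distribution's own initramfs tooling. As a hedged sketch, the helper below merely prints the command that would be run for each tooling family; exact flags can vary between distribution releases, so treat these as illustrative rather than definitive:

```shell
# Print an initramfs rebuild command that includes the virtio drivers,
# by tooling family (sketch only; the migration runs the real rebuild).
initrd_rebuild_cmd() {
    tool="$1" kver="$2"
    case "$tool" in
        update-initramfs)  # Debian/Ubuntu
            echo "update-initramfs -u -k $kver" ;;
        dracut)            # modern RHEL/CentOS
            echo "dracut --force --add-drivers 'virtio_blk virtio_net' /boot/initramfs-$kver.img $kver" ;;
        mkinitrd)          # older RHEL/CentOS
            echo "mkinitrd --with=virtio_blk --with=virtio_net /boot/initrd-$kver.img $kver" ;;
    esac
}

initrd_rebuild_cmd dracut "2.6.32-754.el6.x86_64"
```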
/etc/fstab
The /etc/fstab file tells your operating system where to find the root partition and swap disk.
As detailed earlier in the "Disk layout" section, your root partition will become either /dev/vda or /dev/vda1 and the swap disk will be removed.
During migration we will update your /etc/fstab file to contain the correct device name and remove the swap disk line.
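As an illustration of that edit for a style-3 (partitioned) layout, the rewrite can be sketched with sed; the fstab contents below are an example, not your actual file:

```shell
# Build an example fstab in a temporary file (illustrative contents).
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/xvda1  /     ext4  errors=remount-ro  0  1
/dev/xvds   none  swap  sw                 0  0
EOF

# Rename the root device xvda1 -> vda1 and drop the dedicated swap line.
sed -i -e 's|^/dev/xvda1|/dev/vda1|' -e '\|^/dev/xvds|d' "$fstab"
cat "$fstab"   # only the root line remains, now on /dev/vda1
```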
Shutdown / ACPI
Xen provides a special mechanism (a VPS driver plus a host-specific mechanism) to request a server shutdown, which is used when you click the "Restart" button in mPanel.
KVM does not have an equivalent, and instead relies on the standard ACPI power mechanism utilised on physical servers. Under Linux this requires the ACPI daemon software to be installed; however, this software was not typically included in our operating system images as Xen did not require it.
During migration we will automatically install the latest package (.rpm or .deb) provided by your distribution for the specific version you are running.
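For reference, the install command by package family can be sketched as follows; the helper function is hypothetical, though the package name acpid is standard on both families:

```shell
# Print the acpid install command for a given package type (illustrative;
# the migration issues the appropriate command automatically).
acpid_install_cmd() {
    case "$1" in
        deb) echo "apt-get install -y acpid" ;;  # Debian/Ubuntu
        rpm) echo "yum install -y acpid" ;;      # RHEL/CentOS
    esac
}

acpid_install_cmd deb   # prints: apt-get install -y acpid
```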