
VirtualizationWorkshop09 < GridkaSchool09 < TWiki http://www-ekp.physik.uni-karlsruhe.de/~twiki/bin/view/GridkaSchool...


Hands On Virtualization using XEN

Contents:
  General Setup
    The Machine Hardware
    Host Preparation (Standard XEN Host)
      Installation of the XEN Packages
      Modification of the Bootloader GRUB
      Reboot the Host System
    Explore Your New XEN dom0 Hardware Host
  Virtual Machine Preparation
    Start Your Virtual Machine
  Working with the Virtual Machines
    Network Setup on the Host System
    Start/Stop the Virtual Machines
    Change the Memory Allocated to the VM
  High Availability Shared Network Storage Solution
    Host Preparation Phase
    Configure the DRBD Device
    Startup the DRBD Device
    Setup the Filesystem on the Device
    Test the DRBD Raid Device
    Migration of the VMs
  Advanced tutorial (if you have time left): libvirt usage with XEN
    Installation of libvirt and tools
    VM libvirt configuration
    virsh usage
    libvirt GUI example "virt-manager"
  Additional Information

General Setup

The Machine Hardware

The host systems are running Ubuntu 9.04 (Jaunty Jackalope). The following procedures will be possible on most common distributions, with distribution-specific changes to the software installation steps. For Ubuntu we will use the Advanced Packaging Tool (apt); RedHat or SuSE use rpm or a GUI (Graphical User Interface) installation tool.

Each workshop group has access to two hardware hosts:

gks-<1/2>-X.fzk.de
gks-<1/2>-Y.fzk.de

Replace <1/2>, X and Y with the numbers given on the workshop handout.

Host Preparation (Standard UBUNTU XEN Host)


Installation of the XEN Packages

The following procedure has to be done on both hardware hosts:

At first logon to both hosts as user root (take the password and X, Y from the handout):

ssh -p24 root@gks-<1/2>-X.fzk.de and ssh -p24 root@gks-<1/2>-Y.fzk.de

In order to install the needed packages you have to execute the following commands to update the package repository in the first place and then install the XEN package:

(Please do not upgrade the machines: a new kernel would bring a new boot loader configuration, which will not work here, and the reboot of the machine would fail.)

$> aptitude update

and

$> aptitude install ubuntu-xen-server

(You can also use the -y switch to automatically answer yes/no questions during the installation with 'yes': aptitude install -y ubuntu-xen-server)

The package ubuntu-xen-server is the Ubuntu XEN meta-package. Aptitude will resolve all package dependencies and install all other needed ones. Usually all Linux distributions deliver a meta-package for XEN in a similar way. In this Ubuntu release there is no Xen kernel delivered within the standard Ubuntu package repository. Therefore we have to download the XEN patched kernel manually from the Debian repository:

The kernel modules:

$> wget http://security.debian.org/debian-security/pool/updates/main/l/linux-2.6/linux-modules-2.6.26-2-xen-686_2.6.26-17lenny2_i386.deb
$> dpkg -i linux-modules-2.6.26-2-xen-686_2.6.26-17lenny2_i386.deb

The kernel:

$> wget http://security.debian.org/debian-security/pool/updates/main/l/linux-2.6/linux-image-2.6.26-2-xen-686_2.6.26-17lenny2_i386.deb
$> dpkg -i linux-image-2.6.26-2-xen-686_2.6.26-17lenny2_i386.deb

Typically the bootloader of the Linux distribution is configured correctly during the installation of the XEN software tools. In our case the bootloader of the default machines was modified to enable the cloned installation of all workshop machines, so a dialogue will ask you which GRUB configuration you want to install.

The following dialogue should show up after the ubuntu-xen-server installation. Choose the first option and press 'OK'.


Wait for the installation process to finish.

Modification of the Bootloader GRUB

Now we have to tell the bootloader of the Linux system to use the newly installed XEN kernel for system startup. Therefore you have to manually edit the configuration file of the bootloader, in our case GRUB.

To modify files, use your favourite editor; nano, pico, vi, vim and ed are already installed. (In case you prefer another editor, feel free to install it via aptitude.) Now open the GRUB configuration file:

/boot/grub/menu.lst

and ensure that the installation process has added the following needed configuration lines:

[...]
## ## End Default Options ##

title Xen 3.3 / Ubuntu 9.04, kernel 2.6.26-2-xen-686
root (hd0,0)
kernel /boot/xen-3.3.gz
module /boot/vmlinuz-2.6.26-2-xen-686 root=/dev/hda1 ro console=tty0 quiet
module /boot/initrd.img-2.6.26-2-xen-686
[...]

Search for the


## ## End Default Options ##

part and verify that the first block looks like the above one. There you have to change the line:

root /dev/hda1(hd0,0)

to

root (hd0,0)

and

module /boot/vmlinuz-2.6.26-2-xen-686 root=HERE_IS_SOMETHING_LONG ro console=tty0

to

module /boot/vmlinuz-2.6.26-2-xen-686 root=/dev/hda1 ro console=tty0

(Just change root=HERE_IS_SOMETHING_LONG to root=/dev/hda1 .)
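The two edits above can also be scripted. This is a minimal sketch using GNU sed on a sample file; the path /tmp/menu.lst and the sample content are only illustrative, and you should back up /boot/grub/menu.lst before trying anything like this on a real host:

```shell
# Sample mirroring the two lines that need changing (illustrative only).
cat > /tmp/menu.lst <<'EOF'
title Xen 3.3 / Ubuntu 9.04, kernel 2.6.26-2-xen-686
root /dev/hda1(hd0,0)
kernel /boot/xen-3.3.gz
module /boot/vmlinuz-2.6.26-2-xen-686 root=HERE_IS_SOMETHING_LONG ro console=tty0
module /boot/initrd.img-2.6.26-2-xen-686
EOF

# Fix the root line, then rewrite the root= kernel parameter.
sed -i \
  -e 's|^root .*|root (hd0,0)|' \
  -e 's|root=[^ ]*|root=/dev/hda1|' \
  /tmp/menu.lst
```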

Reboot the Host System

All needed packages are installed, but the machines are still running the standard Ubuntu kernel, which lacks the ability to act as a XEN dom0 (hardware host). To replace the running kernel with the newly installed XEN-patched one, keep your fingers crossed and reboot the machines:

$> reboot; exit

After waiting some seconds we try to log in again via:

$> ssh -p24 root@gks-<1/2>-X/Y.fzk.de

In case the machines do not come back to business after the reboot, contact one of the workshop organisers to reset them.

Explore Your New XEN dom0 Hardware Host

In order to get the status (e.g. memory usage, etc.) of running virtual machines and the host, use the following command:

$> xm list

The XEN administration tool xm lists all your virtual instances including the host (dom0 / Domain-0) and manages the Xen domains (type xm --help for detailed documentation). For now it should return an entry for the dom0 (memory, etc. will vary):

Name                 ID  Mem VCPUs State   Time(s)
Domain-0              0 1895     2 r-----     71.1
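If you need one of these values in a script, the listing is easy to parse with awk. A small sketch with the sample output hard-coded; on a live dom0 you would pipe the output of `xm list` instead:

```shell
# Sample `xm list` output (values will differ on your host).
xm_list='Name                 ID  Mem VCPUs State   Time(s)
Domain-0              0 1895     2 r-----     71.1'

# Extract the memory (third column) of the Domain-0 entry.
mem=$(echo "$xm_list" | awk '$1 == "Domain-0" { print $3 }')
echo "$mem"
# prints: 1895
```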

Your host also has to bridge the networking of its virtual machines (VMs) to its own network interfaces. A detailed description of the XEN networking concept can be found in the XEN documentation.

XEN will create the needed network bridge (eth0, in some distributions called xenbr0) automatically and you can check this by executing

$> brctl show


In the list, an entry for the bridge eth0 should be present which is linked to the physical network interface peth0, e.g.:

bridge name  bridge id          STP enabled  interfaces
eth0         8000.00e0812a2eaf  no           peth0

Now the standard XEN setup procedure for UBUNTU is finished! Congratulations, you have just configured your first XEN host machine.

Please prepare both hosts before you move on to the next part of the tutorial!

Virtual Machine Preparation

Pick ONE host where you set up the VM. The other host should be left untouched for now!

Your XEN installation also includes the xen-tools, a very useful script package for creating VMs easily. We will use them to create a first VM on our host. Have a look at /etc/xen-tools/xen-tools.conf . This is the configuration file for the various scripts, e.g. the automatic setup tool xen-create-image . The following guest systems are supported and tested: Ubuntu (edgy, feisty, dapper), Debian (sid, sarge, etch, lenny), CentOS (4, 5) and fedora-core (4, 5, 6, 7). We leave the standard configuration (which is set up to create a Debian VM) as it is, except for one line which we have to uncomment to get a serial console:

serial_device = hvc0 #default
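Uncommenting that line can be scripted as well. A sketch on a temporary copy; the exact commented form of the line in /etc/xen-tools/xen-tools.conf may differ on your host, so treat the sample content as an assumption:

```shell
# Scratch copy standing in for /etc/xen-tools/xen-tools.conf.
xtconf=/tmp/xen-tools.conf
echo '# serial_device = hvc0 #default' > "$xtconf"

# Strip the leading comment marker from the serial_device line.
sed -i 's/^# *\(serial_device *= *hvc0\)/\1/' "$xtconf"
```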

Then use the following command line to start the creation/installation process:

$> xen-create-image --dist lenny --hostname=testvm1 \
   --ip \
   --gateway 141.52.174.1 --netmask 255.255.255.0 \
   --dir /xenhome --size 400Mb --passwd --fs ext3 \
   --role=    # important for newer distributions; will install udev

(Tip: If you need to re-create the VM for some reason, you can use the --force option to overwrite your old VM, or you have to delete the VM manually; it can be found in /xenhome/domains/testvm1 together with the config file /etc/xen/testvm1.cfg .)

This will configure a XEN domU virtual machine and start the installation, which can take a while... get some coffee ;-). Please do this only on one host and for one VM, to reduce our network load!!! During the installation you will be asked for the root password, which you can choose yourself (please not too simple, and you have to remember it!!). More information about the installation process can be found in /var/log/xen-tools/testvm1.log

After the installation has finished please have a look at the generated VM configuration file /etc/xen/testvm1.cfg .

# Configuration file for the Xen instance testvm1, created
# by xen-tools 3.9 on Thu Aug 27 07:56:10 2009.
#

#
# Kernel + memory size
#
kernel  = '/boot/vmlinuz-2.6.26-2-xen-686'
ramdisk = '/boot/initrd.img-2.6.26-2-xen-686'
memory  = '128'


#
# Disk device(s).
#
root = '/dev/sda2 ro'
disk = [
    'file:/xenhome/domains/testvm1/swap.img,sda1,w',
    'file:/xenhome/domains/testvm1/disk.img,sda2,w',
]

Here you can, e.g., change the default values for the kernel, memory and so on before starting the VM.

Start Your Virtual Machine

Now you can start your virtual machine by executing

$> xm create testvm1.cfg

afterwards with the command

$> xm list

you can see your VM running:

Name                 ID  Mem VCPUs State   Time(s)
Domain-0              0 1893     2 r-----   1343.7
testvm1               1  128     1 -b----      1.8

In order to test if your VM is working, you can ping it and logon to it via:

ping and ssh

Now you have a basic Debian (lenny) Virtual Machine! Log out of the VM by typing exit and play around a bit with XEN.

Working with the Virtual Machines

Network Setup on the Host System

By using brctl show you should now see (as your VM is running) the additional entry vif10.0 in the interfaces column, which indicates that the virtual interface is connected to the physical network card via the XEN bridge eth0:

bridge name  bridge id          STP enabled  interfaces
eth0         8000.00e0812a2e93  no           peth0
                                             vif10.0

Start/Stop the Virtual Machines

Shutdown (here the argument testvm1 is the identification string of the running machine as shown by xm list ):

$> xm shutdown testvm1


Immediately destroy the VM in case that a normal shutdown is not working:

$> xm destroy testvm1

Creation (as before, here you need the suffix .cfg in order to refer to the configuration file and not to the VM identification string):

$> xm create testvm1.cfg

You can also use xm create -c testvm1.cfg in order to directly attach the console output of the machine.

Attach the console (a virtual screen, just as if you sit in front of a physical machine):

$> xm console testvm1

In order to detach the console, press Ctrl + 5 on your keyboard. Depending on the keyboard layout or the keyboard emulation of your terminal/SSH client, this can also be e.g. Ctrl + * or Ctrl + 9 .

Change the Memory Allocated to the VM

To change the memory of your VMs just use:

$> xm mem-set <VM ID> <VM MEMORY>

Note: You can only assign at most as much memory as the VM was booted with; e.g. you can start the VM with 1 GB RAM, then reduce the allocation and increase it again later if needed. The start value for the memory is set in the configuration file, e.g. /etc/xen/testvm1.cfg
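Before raising the allocation it can help to check the boot-time value in the config file. A sketch parsing a sample line; the file path and content are illustrative copies, and on the host you would read /etc/xen/testvm1.cfg instead:

```shell
# Sample config line as generated by xen-tools (illustrative copy).
cfgfile=/tmp/testvm1.cfg
echo "memory  = '128'" > "$cfgfile"

# Extract the configured start memory in MB.
sed -n "s/^memory *= *'\([0-9]*\)'.*/\1/p" "$cfgfile"
# prints: 128
```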

High Availability Shared Network Storage Solution

Host Preparation Phase

Before you proceed shutdown the running VM!

For our small high availability solution we need to install the tools for the Distributed Replicated Block Device (DRBD). DRBD is used to set up a RAID system between two storage devices connected via Ethernet; e.g. the local disks of two machines can be mirrored over a local area network connection.

This has to be done on both host machines!

The packages will be installed with:

$> aptitude install drbd8-utils

and

$> wget http://ftp.de.debian.org/debian/pool/main/l/linux-modules-extra-2.6/drbd8-modules-2.6.26-2-xen-686_2.6.26+8.0.14-6+lenny1_i386.deb
$> dpkg -i drbd8-modules-2.6.26-2-xen-686_2.6.26+8.0.14-6+lenny1_i386.deb

We also want to load this module at each startup of the host machine. Therefore, just add drbd to the end of the /etc/modules file, e.g.:

# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
loop
lp
drbd
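Appending the module by hand works, but an idempotent one-liner avoids duplicate entries. A sketch on a temporary copy; on the host you would operate on /etc/modules itself:

```shell
# Work on a scratch copy of /etc/modules (illustrative path).
modfile=/tmp/modules.test
printf 'loop\nlp\n' > "$modfile"

# Append "drbd" only if it is not already listed; running this
# twice still leaves exactly one entry.
grep -qx 'drbd' "$modfile" || echo 'drbd' >> "$modfile"
grep -qx 'drbd' "$modfile" || echo 'drbd' >> "$modfile"
```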

The DRBD device will sync between two partitions (one on your first host, the other on the second), which we have to create now. For this purpose, use the standard Linux tool fdisk as follows:

$> fdisk /dev/hda

Output:

The number of cylinders for this disk is set to 9729.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help):

Press n for a new partition.

Output:

Command action
   e   extended
   p   primary partition (1-4)

Press p for a primary partition and then 3 for the partition number. Confirm the default value for the first cylinder by just pressing Enter . For the last cylinder of your new partition choose +1G to create a 1 GB partition (we keep the partition small so syncing the discs between the hosts will not take too long).

This will then create the device /dev/hda3 on your harddisk. Check by pressing p, if the new device was created properly. It should look like:

Disk /dev/hda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0000e413

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1        1946    15625000   83  Linux
/dev/hda2            1946        2432     3906250   82  Linux swap / Solaris
/dev/hda3            2432        9729    58616942   83  Linux

Command (m for help):

If everything is fine, press w to write the new partition table to the disk. You can safely ignore the following warning message if it occurs: WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
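The interactive session above can also be expressed as a keystroke sequence (n = new, p = primary, 3 = partition number, empty line = default first cylinder, +1G = size, w = write). Piping it into fdisk appears only as a comment here; treat it as a hypothetical sketch and never run it blindly against a real disk:

```shell
# The keystrokes of the fdisk session above, one per line.
keys='n
p
3

+1G
w
'
# On the host this could drive fdisk non-interactively:
#   printf '%s' "$keys" | fdisk /dev/hda
printf '%s' "$keys"
```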

Now reboot the machines:

$> reboot; exit


Wait a few seconds and then login again via ssh .

Configure the DRBD Device

You should have a configuration file /etc/drbd.conf , which we will replace with a new one. So move it away ( only on one host! ):

$> mv /etc/drbd.conf /etc/drbd.conf.old

Create a new one looking like:

#/etc/drbd.conf
global {
    usage-count yes;
}
common {
    syncer { rate 10M; }
}
resource r0 {
    protocol C;
    handlers {
        pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
        pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
        local-io-error    "echo o > /proc/sysrq-trigger ; halt -f";
        outdate-peer      "/usr/sbin/drbd-peer-outdater";
    }
    startup {
    }
    disk {
        on-io-error detach;
    }
    net {
        allow-two-primaries;
        after-sb-0pri disconnect;
        after-sb-1pri disconnect;
        after-sb-2pri disconnect;
        rr-conflict disconnect;
    }
    syncer {
        rate 1000M;
        al-extents 257;
    }
    on gks-<1/2>-X.fzk.de {
        device /dev/drbd0;
        disk /dev/hda3;
        address 141.52.174.X:7788;
        flexible-meta-disk internal;
    }
    on gks-<1/2>-Y.fzk.de {
        device /dev/drbd0;
        disk /dev/hda3;
        address 141.52.174.Y:7788;
        meta-disk internal;
    }
}

These are the definitions of the storage mirrors in the network, their corresponding Linux devices and the used space on the hard disk. Everything is preconfigured; just replace the X and Y with the values on your handout (be aware: you have to change all of the on gks-<1/2>-<X/Y> and 141.52.174.<X/Y> settings!). Then copy this file to the other host via scp, e.g. on host machine X:

$> scp /etc/drbd.conf gks-<1/2>-Y:/etc/drbd.conf
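Replacing the X/Y placeholders can also be done with sed. A sketch on a shortened sample template; the group number 1 and the last octets 101/102 are hypothetical example values, not taken from any handout:

```shell
# Shortened sample of the placeholder stanzas (illustrative only).
cat > /tmp/drbd.conf.tmpl <<'EOF'
on gks-1-X.fzk.de {
    address 141.52.174.X:7788;
}
on gks-1-Y.fzk.de {
    address 141.52.174.Y:7788;
}
EOF

# Substitute hypothetical host numbers for the X/Y placeholders.
sed -e 's/gks-1-X/gks-1-101/g' -e 's/174\.X/174.101/g' \
    -e 's/gks-1-Y/gks-1-102/g' -e 's/174\.Y/174.102/g' \
    /tmp/drbd.conf.tmpl > /tmp/drbd.conf
```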

Now we have to set some file permissions correctly, in order to get rid of some annoying warning messages during the following setup procedure. This is only essential if you would like to use the heartbeat functionality of DRBD (e.g. checking the current state of the DRBD devices/machines), which is not part of this tutorial. Execute the following commands on both machines:

$> addgroup haclient

$> chgrp haclient /sbin/drbdsetup
$> chmod o-x /sbin/drbdsetup
$> chmod u+s /sbin/drbdsetup

$> chgrp haclient /sbin/drbdmeta
$> chmod o-x /sbin/drbdmeta
$> chmod u+s /sbin/drbdmeta

On both hosts, initialize the DRBD device with:

$> drbdadm create-md r0

If you get an error as there is e.g. a swap signature on /dev/hda3 you have to zero out the first part of /dev/hda3 :

$> dd if=/dev/zero of=/dev/hda3 count=1000 bs=1024
$> drbdadm create-md r0

Startup the DRBD Device

The next step is to start the DRBD device on both hosts (the first one will wait for the connection of the second host; ignore the warnings):

$> /etc/init.d/drbd restart

Check whether DRBD is set up properly via

$> drbdadm role r0

It should give you:

Secondary/Secondary

That means both DRBD hosts are connected, but both are in "secondary state". Now check the synchronisation state of the two DRBD devices:

$> drbdadm dstate r0
Inconsistent/Inconsistent

This means the two devices are not synced yet, as DRBD does not know which host is the primary and which is the slave. Now we set ONE of the two hosts as primary and trigger the sync with:


$> drbdadm -- -o primary r0

on ONE of the hosts.

You can check the sync status with cat /proc/drbd :

version: 8.0.14 (api:86/proto:86)
GIT-hash: bb447522fc9a87d0069b7e14f0234911ebdab0f7 build by phil@fat-tyre, 2008-11-12 16:40:33
 0: cs:SyncSource st:Primary/Secondary ds:UpToDate/Inconsistent C r---
    ns:370688 nr:0 dw:0 dr:370688 al:0 bm:22 lo:32 pe:0 ua:32 ap:0
        [======>.............] sync'ed: 35.3% (685284/1055972)K
        finish: 0:01:03 speed: 10,744 (10,296) K/sec
        resync: used:1/61 hits:23177 misses:23 starving:0 dirty:0 changed:23
        act_log: used:0/257 hits:0 misses:0 starving:0 dirty:0 changed:0
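If you want to watch only the percentage, the progress line is easy to extract. A sketch on a hard-coded sample line; on the host you would read /proc/drbd instead:

```shell
# Sample progress line from /proc/drbd (illustrative only).
line="        [======>.............] sync'ed: 35.3% (685284/1055972)K"

# Pull out just the sync'ed percentage.
echo "$line" | grep -o "sync'ed: [0-9.]*%"
# prints: sync'ed: 35.3%
```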

When it is finished, drbdadm dstate r0 will give you UpToDate/UpToDate , and drbdadm role r0 gives you Primary/Secondary on the primary and Secondary/Primary on the secondary host.

For our purpose both have to be in "primary state" so that they will sync between each other. To change the state of the raid device, execute the following command on the secondary host:

$> drbdadm primary r0

Now on both hosts drbdadm role r0 should give:

Primary/Primary

which means both DRBD devices on the hosts are connected and marked as primary, so that both have write permission. REMARK: After a reboot of a host you have to check the DRBD status again. Both hosts have to be primary!

Setup the Filesystem on the Device

At this point we would run into problems with standard file systems on the DRBD device, as they cannot deal with synchronous writes from both hosts. For this reason we have to install a cluster file system. We choose Oracle Cluster File System 2 (OCFS2).

To install OCFS2 on both hosts (confirm the installation of the missing dependent packages):

$> aptitude install ocfs2-tools ocfs2console

Create the directory for the configuration file on both hosts :

$> mkdir -p /etc/ocfs2

Then create on one host the file /etc/ocfs2/cluster.conf with these entries (again, replace X and Y accordingly; change gks-<1/2>-X/Y and 141.52.174.X/Y):

#/etc/ocfs2/cluster.conf
node:
    ip_port = 7777
    ip_address = 141.52.174.X
    number = 0
    name = gks-<1/2>-X
    cluster = ocfs2

node:
    ip_port = 7777
    ip_address = 141.52.174.Y
    number = 1
    name = gks-<1/2>-Y
    cluster = ocfs2

cluster:
    node_count = 2
    name = ocfs2

and copy it to the other host via scp, e.g. on host machine X:

$> scp /etc/ocfs2/cluster.conf gks-<1/2>-Y:/etc/ocfs2/cluster.conf

You have to modify the file /etc/default/o2cb on both hosts :

Change the line

O2CB_ENABLED=false

to

O2CB_ENABLED=true

and start the OCFS2 services ( both hosts ):

$> /etc/init.d/o2cb start
$> /etc/init.d/ocfs2 restart
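The O2CB_ENABLED edit above is a one-line substitution, so it can be scripted too. A sketch on a temporary copy; on the hosts you would edit /etc/default/o2cb itself:

```shell
# Scratch copy standing in for /etc/default/o2cb (illustrative path).
o2cb=/tmp/o2cb.test
echo 'O2CB_ENABLED=false' > "$o2cb"

# Flip the flag in place.
sed -i 's/^O2CB_ENABLED=false/O2CB_ENABLED=true/' "$o2cb"
```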

Now create the OCFS2 file system on the DRBD device ( only on one host , it will be automatically synced to the other host!)

$> mkfs.ocfs2 /dev/drbd0

The setup of DRBD is finished and your small high availability cluster is ready!

Test the DRBD Raid Device

We will use the VM we created to test it. Since it is located in /xenhome , we just copy everything from there onto the DRBD partition ( only on the host where you created testvm1 ):

$> mkdir -p /mnt/tmp
$> mount.ocfs2 /dev/drbd0 /mnt/tmp
$> cp -r /xenhome/domains /mnt/tmp/
$> umount /mnt/tmp

Take a coffee break...

When finished, move /xenhome to a different directory, create a new /xenhome and mount the DRBD device there ( on both hosts ):

$> mv /xenhome /xenhome.old
$> mkdir /xenhome
$> mount.ocfs2 /dev/drbd0 /xenhome

We created our virtual test machine only on one host, so the other host also needs the XEN configuration for testvm1 . Therefore, just copy it via scp (Y/X should be the machine where you have not installed the VM):

$> scp /etc/xen/testvm1.cfg gks-<1/2>-Y/X:/etc/xen/testvm1.cfg

As a last change to the configuration, we have to tell both XEN daemons ( xend ) on the host machines to allow each other to migrate VMs between them. Therefore edit /etc/xen/xend-config.sxp on both machines.

Search for the entry:

(xend-relocation-hosts-allow '^localhost$ ^localhost\\.localdomain$')

and add the respective hostname. On host X add hostname Y and vice versa :

(xend-relocation-hosts-allow '^localhost$ ^localhost\\.localdomain$ ^gks-<1/2>-X/Y.fzk.de')
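This edit can be scripted by appending the peer to the allow list just before the closing quote. A sketch on a scratch copy; the hostname gks-1-102.fzk.de is a hypothetical example value:

```shell
# Scratch copy standing in for /etc/xen/xend-config.sxp.
cfg=/tmp/xend-config.sxp
cat > "$cfg" <<'EOF'
(xend-relocation-hosts-allow '^localhost$ ^localhost\.localdomain$')
EOF

# Insert the peer hostname just before the closing ') of the line.
sed -i "s/')\$/ ^gks-1-102.fzk.de\$')/" "$cfg"
```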

To apply the changes, restart the XEN daemons on both hosts :

$> /etc/init.d/xend restart

Migration of the VMs

Let's do some magic! Start testvm1, let's say on host X again:

$> xm create testvm1.cfg

Now live-migrate it from the host where the VM is running (host X) to the other host (Y). Type the command on host X:

$> xm migrate -l testvm1 gks-<1/2>-Y

A migration without -l will just shut down the VM and start it on the other host, whereas the live migration with -l only interrupts the running machine for a short time. You can test this by pinging the VM while migrating:

$> ping

Advanced tutorial (if you have time left): libvirt usage with XEN

As presented in the talk, there are several different open source virtualization solutions available. Xen and KVM, as well as QEMU and others, have their own setup, management and configuration tools. To provide a common API which eases the development of virtualization management tools, libvirt was created. It provides a common configuration system using XML files, a common management tool virsh , and programming interfaces for different programming and scripting languages (e.g. Python, C/C++, ...). Based on libvirt there are already many different tools available, like graphical user interfaces or automatic VM creation and cloning tools, which are independent of the underlying virtualization software.

In the following you will get some basic ideas how libvirt works.

Installation of libvirt and tools

Choose one of the hosts

$> aptitude install libvirt-bin python-libvirt virt-manager

If an error about failing KVM modules occurs, do not worry. This is due to the lack of hardware virtualisation capabilities on the host machines.


libvirt connects directly to the Xen hypervisor. For this there are different possibilities: either via the network (HTTP, TCP with Kerberos, or SASL authentication via X.509 certificates), or locally on the host via Unix sockets. We use the latter and activate it in xend by editing /etc/xen/xend-config.sxp :

Un-comment and change

# (xend-unix-server no)

to

(xend-unix-server yes)

and restart the xend with

$> /etc/init.d/xend restart

to make the changes active.

VM libvirt configuration

We just start our testvm1 via xm (if it is not running) to get a libvirt XML configuration automatically for this VM.

$> xm create testvm1.cfg

Let's check whether libvirt has a connection to Xen:

$> virsh -c xen:///system list
 Id Name                 State
----------------------------------
  0 Domain-0             running
  3 testvm1              idle

So libvirt has a connection to the hypervisor and recognizes your testvm1. libvirt stores VMs in its own database. The usual workflow is to create an XML file describing the VM, have a look into this XML file, and define the VM within libvirt. Finally you can manage and access the VM via its ID, name or unique ID (UUID).

virsh has a built-in export function to dump XML files from running VMs, which is exactly what we need. Find out which command it is from the virsh manpage. Copy the XML output to a file, e.g. /etc/libvirt/testvm1.xml .

Done? Fine. Now have a look at the XML output. It is quite self-explanatory and will almost work; we just have to add one line to tell libvirt which network bridge to use:

In the block:
