Enable NUMA on XenServer

For Citrix Hypervisor 8.2, it is a good idea to enable NUMA placement on each hypervisor for optimal performance. To enable NUMA placement, add "numa-placement=true" to the xenopsd configuration file /etc/xenopsd.conf and restart the Xen toolstack via XenCenter or the CLI.
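A minimal sketch of the CLI route, run as root in dom0 (assuming the key is not already present in the file; xe-toolstack-restart restarts the toolstack without rebooting the host):

echo 'numa-placement=true' >> /etc/xenopsd.conf    # append the setting to xenopsd's config
xe-toolstack-restart                               # restart so xenopsd re-reads its config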

After I enabled this feature, the VMs on the server got a "soft" affinity and were thus automatically "fixed" to the NUMA nodes. Interestingly, NUMA node 0 was not used in some cases.

You can figure out the NUMA characteristics of the box by running xl info -n.
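xl info -n prints the host topology including the node-to-CPU mapping, and xl vcpu-list then shows the hard and soft CPU affinity of each VM's vCPUs, which is a quick way to verify the placement (both are run in dom0):

xl info -n
xl vcpu-list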

CLI Live Migration

Different pool

xe vm-migrate vm=VM_name live=true remote-master=IP_Addr remote-username=root remote-password='password'

Same pool

xe vm-migrate vm=VM_name host=xenhostXXX live=true
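To find valid values for the host= parameter, you can first list the hosts in the pool (a sketch using standard xe output filtering):

xe host-list params=name-label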

Due to a bug on XS 6.5, XenMotion doesn't appear in the context menu.

xe vm-migrate continues to work using this CLI command:

Example: moving submit1 (on xenhost118) to xenhost112 (the pool master):

[root@xenhost118 ~]# xe vdi-list name-label=submit1-disk
uuid ( RO)                : 2ba4e7e3-1146-4113-978a-11199e5cd15f
          name-label ( RW): submit1-disk
    name-description ( RW): submit1-disk ( da condorcl1-disk )
             sr-uuid ( RO): 5a2b9532-cbfe-fe1f-116a-441f46793120
        virtual-size ( RO): 32212254720
            sharable ( RO): false
           read-only ( RO): false
xe vm-migrate vm=submit1  host=xenhost112 remote-master=10.100.200.112 remote-username=root remote-password='password' vdi:2ba4e7e3-1146-4113-978a-11199e5cd15f=5a2b9532-cbfe-fe1f-116a-441f46793120 live=true
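For reference, the general shape of this cross-pool storage migration command is roughly the following (a sketch; the vdi: argument maps a source VDI UUID to a destination SR UUID, and depending on your setup remote-network= and vif: mappings may also be needed, see the Citrix article below):

xe vm-migrate vm=<VM_name> remote-master=<dest_master_IP> remote-username=root remote-password=<password> vdi:<source_VDI_UUID>=<destination_SR_UUID> live=true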

https://support.citrix.com/article/CTX213656


Transfer VMs Between XenServer Pools

June 16, 2016, by Nicholas

When XenServer 7 came out I found myself unable to easily upgrade to it thanks to my custom RAID 1 build. If I wanted XenServer 7 I would have to blow the whole instance away and start from scratch. This posed a problem because I have a pool of two XenServer hosts. You cannot add a server with a higher XenServer version to a lower-versioned pool; the pool master must always have the highest version of XenServer installed. My decision to have an mdadm RAID 1 setup on my pool master ultimately turned into forced VM downtime for an upgrade, despite having a pool of other XenServer hosts.

After transferring VMs to my secondary host and promoting it to pool master, I wiped my primary XenServer and installed 7. When it was up and running, I essentially had two separate pools. To transfer my VMs back to my primary server I had to resort to the command line.

Offline VM transfer

The xe vm-export and vm-import commands work with stdin/stdout and piping, which is how I transferred my VMs directly between two pools. Simply pipe an xe vm-export command into an ssh xe vm-import command like so:
xe vm-export uuid=<VM_UUID> filename= | ssh <other_server> xe vm-import filename=/dev/stdin

Note the lack of a filename – this instructs XenServer to pipe to standard output instead. Also note that transferring the VM scrambles the MAC addresses of its interfaces. If you want to keep the MAC addresses you’ll have to manually re-assign them after the copy is complete.
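Since the MAC of an existing VIF cannot be changed in place, restoring the original address on the destination pool means recreating the VIF; a sketch, assuming the VM is halted and you noted the original MAC, device number, and target network beforehand:

xe vif-list vm-uuid=<vm_uuid>        # note the device number and network-uuid
xe vif-destroy uuid=<vif_uuid>
xe vif-create vm-uuid=<vm_uuid> network-uuid=<network_uuid> device=0 mac=<original_mac>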

Minimal downtime

For the method above you will have to turn the VM off in order to transfer it. I had some VMs that I didn’t want to stay down for the entire transfer. A way around this is to take a snapshot of the VM and then copy the snapshot to the other pool. Note that this method does not retain any changes made inside the VM that occurred after you took the snapshot. You will have to manually transfer any file changes that took place during the VM transfer (or be fine with losing them.)

In order to export a snapshot you must first convert it from a template to a VM (thanks to this site for outlining how). The full procedure is as follows:
  1. Take a snapshot of the VM you want to move
    xe vm-snapshot uuid=<VM_UUID> new-name-label=<snapshotname>
  2. Convert the snapshot template to a VM (the command xe snapshot-list is a handy way to obtain the UUIDs of your snapshots)
    xe template-param-set is-a-template=false ha-always-run=false uuid=<UUID of snapshot>
  3. Transfer the template to the new pool
    xe vm-export uuid=<UUID of snapshot> filename= | ssh <other_server> xe vm-import filename=/dev/stdin
  4. Rename VM and/or modify interface MAC addresses as needed on the new host. Stop the VM on the old host and start it on the new one.
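Once the import has finished and the new copy is verified, the converted snapshot left on the source pool is just a halted VM (after step 2), so it can be cleaned up with something like this (a sketch; double-check the UUID before running, as vm-uninstall also deletes the associated disks):

xe vm-uninstall uuid=<UUID of snapshot> force=true
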
I used both methods above to successfully move my VMs from my older 6.5 pool to the newer 7 pool. Success.

"The VDI is not available"

Under some unavoidable circumstances, rebooting a VM can leave the startup logs ending with a “The VDI is not available” error. Follow the steps below to fix this:

Determine the UUID of the Storage Repository and of the VDI that’s exhibiting the issue (you can get these either from XenCenter or by using xe sr-list name-label="name of the SR" and xe vdi-list sr-uuid=<UUID obtained from the previous command>).
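For example (a sketch; "Local storage" is a placeholder for your actual SR name):

xe sr-list name-label="Local storage"
xe vdi-list sr-uuid=<sr_uuid>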

Run xe vdi-forget uuid=<vdi_uuid>

The disk will no longer be visible in the corresponding SR.

Run a “Scan” on the SR again and you will see the VDI sitting on the SR.
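From the CLI, the rescan looks like this (a sketch, reusing the SR UUID found earlier):

xe sr-scan uuid=<sr_uuid>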

Reattach the VDI to your VM of choice and the VM will boot up fine.
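Reattaching can be done in XenCenter, or from the CLI by creating and plugging a new VBD; a sketch, assuming the VDI should become the VM’s bootable first disk (device=0):

xe vbd-create vm-uuid=<vm_uuid> vdi-uuid=<vdi_uuid> device=0 bootable=true mode=RW type=Disk
xe vbd-plug uuid=<vbd_uuid>    # only needed if the VM is already running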

With that, the “VDI is not available” error in XenServer is fixed.