Improved NUMA memory allocation
NUMA affine memory allocation (NAMA) aims to be similar in spirit to this libvirt NUMA allocation policy in RHEL:

```xml
<numatune>
  <memory mode='preferred' nodeset='0'/>
</numatune>
```
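In libvirt, `mode='preferred'` means memory is allocated from the listed node when possible, but falls back to other nodes under memory pressure rather than failing. A minimal sketch of that fallback semantics (the function and the node model are illustrative, not libvirt or Xen code):

```python
def alloc_pages_preferred(num_pages, preferred, free_pages):
    """Allocate num_pages, preferring one NUMA node.

    free_pages: dict mapping node id -> free page count (mutated).
    Returns a dict of node id -> pages taken from that node.
    """
    taken = {}
    # Take as much as possible from the preferred node first.
    from_pref = min(num_pages, free_pages[preferred])
    if from_pref:
        free_pages[preferred] -= from_pref
        taken[preferred] = from_pref
        num_pages -= from_pref
    # Spill the remainder to other nodes instead of failing.
    for node, free in free_pages.items():
        if num_pages == 0:
            break
        if node == preferred or free == 0:
            continue
        take = min(num_pages, free)
        free_pages[node] -= take
        taken[node] = take
        num_pages -= take
    return taken

free = {0: 1000, 1: 4000}
print(alloc_pages_preferred(2500, 0, free))  # {0: 1000, 1: 1500}
```

As described above, NAMA's intended behaviour is analogous: prefer a node for a domain's memory, but degrade gracefully when the node is full.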
While Xen has some NUMA-awareness, there are potential issues where memory allocation might not be NUMA-optimal.
Parallel boot issues
Booting multiple VMs in parallel can result in several of them being allocated on the same NUMA node:
This race condition can happen when multiple VMs are booted in parallel, because the XAPI toolstack does not wait until Xen has finished constructing one domain before it starts constructing the next. Waiting would serialize domain construction, which is currently parallel.
The current understanding of this parallel boot race is as follows. A domain's p2m is populated before the domain starts running. Without reservations, when two domains are constructed in parallel, the toolstack may conclude that each of them individually has enough free memory on a NUMA node. But while the two VMs populate their p2m concurrently, they may collectively not fit on that node, and Xen then has to populate part of their p2m from other NUMA nodes.
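In other words, the race is a check-then-act problem: free memory is checked at domain creation, but only consumed later, during p2m population. A hypothetical sketch of the race, and of how an up-front reservation would close the window (all names and numbers are illustrative, not actual Xen or xenopsd interfaces):

```python
import threading
import time

free_mib = {0: 4096}  # free memory (MiB) on NUMA node 0; illustrative numbers
lock = threading.Lock()

def build_racy(vm_mib, placed):
    # 1. Check: the toolstack sees enough free memory on node 0.
    if free_mib[0] >= vm_mib:
        time.sleep(0.01)  # domain construction proceeds in parallel...
        # 2. Act: the p2m is populated much later; a second builder may
        #    have passed the same check by now, so the node is
        #    oversubscribed and Xen must spill to other nodes.
        free_mib[0] -= vm_mib
        placed.append(vm_mib)

def build_reserved(vm_mib, placed):
    # Reserve atomically at check time, before p2m population starts,
    # so a second builder sees the node as full and picks another
    # node up front instead of spilling later.
    with lock:
        if free_mib[0] < vm_mib:
            return
        free_mib[0] -= vm_mib
    placed.append(vm_mib)

placed = []
threads = [threading.Thread(target=build_racy, args=(3072, placed))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Both builders pass the check: node 0 ends up at -2048 MiB, i.e.
# part of both VMs' p2m would have to come from another node.
print(free_mib[0], placed)
```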