Walkthrough: VM build using xenguest
Xenguest is used by xenopsd to start domains. See https://xapi-project.github.io/new-docs/xenopsd/walkthroughs/VM.start for more details on what xenopsd does at the start of the VM.start task and how it works in detail.
When Xenopsd receives a request to start a VM, it splits the task into a sequence of micro-ops. It then queues these micro-ops into the appropriate per-VM queue via the function queue_operation.
The queues are executed in parallel using a thread pool. This means that multiple VM builds, and in turn multiple xenguest invocations, may run in parallel; on NUMA systems, potentially even for VMs that target overlapping NUMA nodes.
This can exhaust the memory of some NUMA nodes, causing the Xen buddy allocator to fall back to other NUMA nodes; the resulting remote memory accesses can degrade performance.
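The following minimal C sketch models that pattern only for illustration: xenopsd itself is OCaml, and apart from the idea of per-VM queues drained by a pool of workers, every name below is invented.

```c
/* Illustrative model only: per-VM operation queues drained by a worker
 * pool, so micro-ops of different VMs run in parallel while micro-ops of
 * the same VM stay serialized.  Not xenopsd code (which is OCaml). */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NWORKERS 4
#define NVMS     8

typedef struct {
    int vm_id;
    int queued_ops;          /* e.g. VM_create, VM_build, VM_unpause */
    pthread_mutex_t lock;    /* serializes micro-ops of one VM */
} vm_queue_t;

static vm_queue_t queues[NVMS];

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        for (int i = 0; i < NVMS; i++) {
            pthread_mutex_lock(&queues[i].lock);
            if (queues[i].queued_ops > 0) {
                queues[i].queued_ops--;
                /* a VM_build here may shell out to xenguest, so several
                 * xenguest processes can run at the same time */
                printf("worker executes a micro-op of VM %d\n",
                       queues[i].vm_id);
            }
            pthread_mutex_unlock(&queues[i].lock);
        }
        usleep(1000);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[NWORKERS];

    for (int i = 0; i < NVMS; i++) {
        queues[i].vm_id = i;
        queues[i].queued_ops = 3;
        pthread_mutex_init(&queues[i].lock, NULL);
    }
    for (int i = 0; i < NWORKERS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    sleep(1);                /* let the pool drain the queues */
    return 0;
}
```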
This walkthrough primarily describes the context in which the VM build phase, executed by xenguest, takes place, with a special focus on allocating memory for the domains.
1. Xenopsd creates a Xen domain and prepares the “build” step
The VM_create micro-op calls the VM.create function in the backend. In the classic Xenopsd backend, the VM.create_exn function must:
- check if we’re creating a domain for a fresh VM or resuming an existing one: if it’s a resume then the domain configuration stored in the VmExtra database table must be used
- ask squeezed to create a memory “reservation” big enough to hold the VM memory. Unfortunately the domain cannot be created until the memory is free because domain create often fails in low-memory conditions. This means the “reservation” is associated with our “session” with squeezed; if Xenopsd crashes and restarts the reservation will be freed automatically.
- create the domain via the libxc hypercall Xenctrl.domain_create
- call generate_create_info() to store the platform data (vCPUs, etc) in the domain’s Xenstore tree; xenguest then uses this in the build phase (see below) to build the domain
- “transfer” the squeezed reservation to the domain such that squeezed will free the memory if the domain is destroyed later
- compute and set an initial balloon target depending on the amount of memory reserved (recall we ask for a range between dynamic_min and dynamic_max); a sketch of this clamping follows the list
- apply the “suppress spurious page faults” workaround if requested
- set the “machine address size”
- “hotplug” the vCPUs. This operates a lot like memory ballooning – Xen creates lots of vCPUs and then the guest is asked to only use some of them. Every VM therefore starts with the “VCPUs_max” setting and co-operative hotplug is used to reduce the number. Note there is no enforcement mechanism: a VM which cheats and uses too many vCPUs would have to be caught by looking at the performance statistics.
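As a rough illustration of the balloon-target computation mentioned above, the clamping could look like this sketch; the helper name and the KiB units are hypothetical, not xenopsd’s actual code:

```c
#include <stdint.h>

/* Hypothetical helper: derive the initial balloon target from the size of
 * the squeezed reservation, clamped into [dynamic_min, dynamic_max]. */
static uint64_t initial_balloon_target(uint64_t reserved_kib,
                                       uint64_t dynamic_min_kib,
                                       uint64_t dynamic_max_kib)
{
    if (reserved_kib < dynamic_min_kib)
        return dynamic_min_kib;
    if (reserved_kib > dynamic_max_kib)
        return dynamic_max_kib;
    return reserved_kib;   /* target exactly what was reserved */
}
```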
2. Build the domain
On Xen, Xenctrl.domain_create creates an empty domain and returns the domain ID (domid) of the new domain to xenopsd.
In the build phase, the xenguest program is called to create the system memory layout of the domain, set vCPU affinity and a lot more.
The function VM.build_domain_exn must:
- invoke the xenguest binary to interact with libxenguest.
- apply the cpuid configuration
- store the current domain configuration on disk – it’s important to know the difference between the configuration you started with and the configuration you would use after a reboot because some properties (such as maximum memory and vCPUs) are fixed on create.
3. xenguest, libxenguest and the Xen buddy allocator
xenguest was originally created as a separate program due to issues with libxenguest that have since been fixed, but we still shell out to xenguest:
- Wasn’t thread-safe: fixed, but it still uses a per-call global struct
- Incompatible licence, but now licensed under the LGPL.
The xenguest binary has evolved to build more of the initial domain state. xenopsd passes it:
- The domain type to build for (HVM, PVH or PV),
- The domid of the created empty domain,
- The amount of system memory of the domain,
- The platform data (vCPUs, vCPU affinity, etc) using the Xenstore:
- the vCPU affinity
- the vCPU credit2 weight/cap parameters
- whether the NX bit is exposed
- whether the viridian CPUID leaf is exposed
- whether the system has PAE or not
- whether the system has ACPI or not
- whether the system has nested HVM or not
- whether the system has an HPET or not
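For illustration, reading one of these flags from Xenstore with libxenstore could look like the sketch below; the path layout and the assumption that flags are stored as "0"/"1" strings are illustrative, not necessarily the exact keys xenguest reads:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <xenstore.h>

/* Sketch: read a single platform flag of a domain from Xenstore.
 * The /local/domain/<domid>/platform/<flag> layout is an assumption. */
static int read_platform_flag(uint32_t domid, const char *flag)
{
    struct xs_handle *xsh = xs_open(XS_OPEN_READONLY);
    char path[64];
    unsigned int len;
    char *val;
    int ret = -1;

    if (!xsh)
        return -1;
    snprintf(path, sizeof(path), "/local/domain/%u/platform/%s", domid, flag);
    val = xs_read(xsh, XBT_NULL, path, &len);  /* NULL if the key is absent */
    if (val) {
        ret = atoi(val);   /* assumes flags are stored as "0"/"1" */
        free(val);
    }
    xs_close(xsh);
    return ret;
}
```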
4. xenguest walkthrough for memory allocation from stub_xc_hvm_build()
Based on the given domain type, the xenguest program calls a dedicated function for the build process:
- For HVM, this is stub_xc_hvm_build() (in the XS8 xenguest.patch)
Among many things, these build functions call:
- get_flags() to get the platform data from the Xenstore
- configure_vcpus(), which uses the platform data from the Xenstore to configure vCPU affinity and the credit scheduler parameters vCPU weight and vCPU cap (max % of pCPU time, for throttling); a libxc-level sketch of this follows the list
- For HVM, hvm_build_setup_mem to:
  - Decide the e820 memory layout of the system memory of the domain, including memory holes, depending on PCI passthrough and vGPU flags
  - Load the BIOS/UEFI firmware images
  - Store the final MMIO hole parameters in the Xenstore
  - Call the libxenguest function xc_dom_boot_mem_init() to allocate and map the domain’s system memory. For HVM domains, it calls meminit_hvm() to loop over the vmemranges of the domain, mapping the system RAM of the guest from the Xen hypervisor heap. Its goals are (a condensed sketch of this fallback follows the list):
    - Attempt to allocate 1GB superpages when possible
    - Fall back to 2MB pages when 1GB allocation failed
    - Fall back to 4k pages when both failed
    It uses the hypercall XENMEM_populate_physmap() to perform memory allocation and to map the allocated memory to the system RAM ranges of the domain. The hypercall must:
    - convert the arguments for allocating a page to hypervisor structures
    - set flags and call functions according to the arguments
    - allocate the requested page at the most suitable place:
      - depending on passed flags, allocate on a specific NUMA node
      - else, if the domain has node affinity, on the affine nodes
      - also in the most suitable memory zone within the NUMA node
    - fall back to less desirable places if this fails
      - or fail for “exact” allocation requests
    - split superpages if pages of the requested size are not available
- Call construct_cpuid_policy() to apply the CPUID featureset policy
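To make the configure_vcpus() step above concrete, here is a hedged libxc-level sketch. The real xenguest code differs, and xc_vcpu_setaffinity()’s signature varies between Xen releases; the variant with separate hard and soft affinity maps is assumed here:

```c
#include <xenctrl.h>

/* Sketch: pin all vCPUs of a domain to one CPU mask and set the credit
 * scheduler parameters.  weight is the relative share of pCPU time, cap
 * the maximum % of one pCPU the domain may consume (0 means no cap). */
static int configure_vcpus_sketch(xc_interface *xch, uint32_t domid,
                                  int nr_vcpus, xc_cpumap_t mask,
                                  uint16_t weight, uint16_t cap)
{
    struct xen_domctl_sched_credit sdom = { .weight = weight, .cap = cap };

    for (int v = 0; v < nr_vcpus; v++)
        if (xc_vcpu_setaffinity(xch, domid, v, mask, NULL,
                                XEN_VCPUAFFINITY_HARD))
            return -1;

    return xc_sched_credit_domain_set(xch, domid, &sdom);
}
```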
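Finally, the superpage fallback that meminit_hvm() implements can be condensed into the sketch below. It uses the real libxc wrapper xc_domain_populate_physmap_exact() around XENMEM_populate_physmap(), but leaves out what the real code also handles: vmemranges, address alignment and NUMA placement flags such as XENMEMF_node():

```c
#include <xenctrl.h>

#define SUPERPAGE_1GB_SHIFT 18   /* 2^18 4k pages = 1 GB */
#define SUPERPAGE_2MB_SHIFT  9   /* 2^9  4k pages = 2 MB */

/* Sketch: populate guest frames [base_pfn, base_pfn + count) with 1GB
 * superpages where possible, falling back to 2MB and then to 4k pages. */
static int populate_with_fallback(xc_interface *xch, uint32_t domid,
                                  xen_pfn_t base_pfn, unsigned long count)
{
    static const unsigned int orders[] =
        { SUPERPAGE_1GB_SHIFT, SUPERPAGE_2MB_SHIFT, 0 };
    unsigned long done = 0;

    for (unsigned int i = 0; i < 3 && done < count; i++) {
        unsigned long extent_pages = 1UL << orders[i];

        while (count - done >= extent_pages) {
            xen_pfn_t extent = base_pfn + done;

            /* one extent of 2^order pages; on failure fall back to the
             * next smaller order instead of giving up */
            if (xc_domain_populate_physmap_exact(xch, domid, 1, orders[i],
                                                 0 /* mem_flags */, &extent))
                break;
            done += extent_pages;
        }
    }
    return done == count ? 0 : -1;
}
```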
See https://xapi-project.github.io/new-docs/xenopsd/walkthroughs/VM.start/index.html#4-mark-each-vbd-as-active for how xenopsd completes the VM.start task.