Walkthrough: VM build using xenguest


Xenguest is used by xenopsd to start domains. See https://xapi-project.github.io/new-docs/xenopsd/walkthroughs/VM.start for more details on what xenopsd does at the start of the VM.start task and how it works in detail.

When Xenopsd receives a request to start a VM, it splits the task into a sequence of micro-ops. It then queues these micro-ops onto the appropriate per-VM queue via the function queue_operation.

The queues are executed in parallel using a thread pool. This means that multiple VM builds, and in turn multiple xenguest invocations, may run in parallel; on NUMA systems, potentially even for VMs that target overlapping NUMA nodes.

This can exhaust the memory of some NUMA nodes, causing the Xen buddy allocator to fall back to other NUMA nodes. The resulting remote memory accesses can degrade performance.

This walkthrough primarily describes the context in which the VM build phase, executed by xenguest, takes place, with a special focus on allocating memory for the domains.

1. Xenopsd creates a Xen domain and prepares the "build" step

The VM_create micro-op calls the VM.create function in the backend. In the classic Xenopsd backend, the VM.create_exn function must

  1. check if we’re creating a domain for a fresh VM or resuming an existing one: if it’s a resume then the domain configuration stored in the VmExtra database table must be used
  2. ask squeezed to create a memory “reservation” big enough to hold the VM memory. Unfortunately the domain cannot be created until the memory is free because domain create often fails in low-memory conditions. This means the “reservation” is associated with our “session” with squeezed; if Xenopsd crashes and restarts the reservation will be freed automatically.
  3. create the domain via the libxc binding Xenctrl.domain_create (which issues the domain-create hypercall)
  4. call generate_create_info() to store the platform data (vCPUs, etc.) in the domain’s Xenstore tree; xenguest then uses this data in the build phase (see below) to build the domain
  5. “transfer” the squeezed reservation to the domain such that squeezed will free the memory if the domain is destroyed later
  6. compute and set an initial balloon target depending on the amount of memory reserved (recall we ask for a range between dynamic_min and dynamic_max); a sketch of setting the target via the Xenstore follows this list
  7. apply the “suppress spurious page faults” workaround if requested
  8. set the “machine address size”
  9. “hotplug” the vCPUs. This operates a lot like memory ballooning – Xen creates lots of vCPUs and then the guest is asked to only use some of them. Every VM therefore starts with the “VCPUs_max” setting and co-operative hotplug is used to reduce the number. Note there is no enforcement mechanism: a VM which cheats and uses too many vCPUs would have to be caught by looking at the performance statistics.
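
Step 6 above is implemented by writing the balloon target into the domain’s Xenstore tree, where the guest’s balloon driver picks it up. The following minimal C sketch, using libxenstore, shows the shape of such a write to the well-known memory/target key (a value in KiB); the domid and target value are placeholders, and on a XAPI host this write is performed by xenopsd/squeezed rather than by a standalone program like this.

    /* Minimal sketch: set a domain's balloon target via the Xenstore.
     * Assumes libxenstore (xenstore.h); domid and target are placeholders.
     * On a XAPI host this write is done by xenopsd, not by a separate tool. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <xenstore.h>

    int main(void)
    {
        unsigned int domid = 1;                       /* placeholder domid */
        unsigned long target_kib = 2UL * 1024 * 1024; /* 2 GiB, expressed in KiB */

        struct xs_handle *xsh = xs_open(0);
        if (!xsh) {
            perror("xs_open");
            return 1;
        }

        /* Build "/local/domain/<domid>/memory/target" from the domain path. */
        char *dompath = xs_get_domain_path(xsh, domid);
        char path[256], value[32];
        snprintf(path, sizeof(path), "%s/memory/target", dompath);
        snprintf(value, sizeof(value), "%lu", target_kib);

        /* The guest's balloon driver watches this key and inflates or deflates
         * its balloon until the domain's memory usage matches the target. */
        if (!xs_write(xsh, XBT_NULL, path, value, strlen(value)))
            fprintf(stderr, "failed to write %s\n", path);

        free(dompath);
        xs_close(xsh);
        return 0;
    }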

2. Build the domain

On Xen, Xenctrl.domain_create creates an empty domain and returns the domain ID (domid) of the new domain to xenopsd.

In the build phase, the xenguest program is called to create the system memory layout of the domain, set vCPU affinity and a lot more.

The function VM.build_domain_exn must

  1. invoke the xenguest binary to interact with libxenguest.
  2. apply the CPUID configuration
  3. store the current domain configuration on disk – it’s important to know the difference between the configuration you started with and the configuration you would use after a reboot because some properties (such as maximum memory and vCPUs) are fixed on create.

3. xenguest, libxenguest and the Xen buddy allocator

xenguest was originally created as a separate program due to issues with libxenguest that have since been fixed, but xenopsd still shells out to xenguest:

  • libxenguest wasn’t thread-safe: this has been fixed, but it still uses a per-call global struct
  • libxenguest had an incompatible licence, but it is now licensed under the LGPL.

The xenguest binary has evolved to build more of the initial domain state. xenopsd passes it:

  • The domain type to build for (HVM, PVH or PV),
  • The domid of the created empty domain,
  • The amount of system memory of the domain,
  • The platform data (vCPUs, vCPU affinity, etc.) via the Xenstore (a sketch of reading these keys back follows this list), including:
    • the vCPU affinity
    • the vCPU credit2 weight/cap parameters
    • whether the NX bit is exposed
    • whether the viridian CPUID leaf is exposed
    • whether the system has PAE or not
    • whether the system has ACPI or not
    • whether the system has nested HVM or not
    • whether the system has an HPET or not
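
On the xenguest side, get_flags() (described in the next section) reads this platform data back out of the Xenstore. The C sketch below shows the general shape of such a read using libxenstore; the key names used here (platform/nx, platform/viridian, platform/pae, platform/acpi) are illustrative assumptions, and the authoritative list is whatever xenopsd writes.

    /* Minimal sketch: read boolean platform flags for a domain from the Xenstore,
     * roughly the shape of what xenguest's get_flags() does.
     * Key names such as "platform/nx" are assumptions for illustration. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <xenstore.h>

    /* Read "<domain path>/<key>" and interpret "1" as true, anything else as false. */
    static int read_platform_flag(struct xs_handle *xsh, unsigned int domid,
                                  const char *key)
    {
        char path[256];
        unsigned int len = 0;
        char *dompath = xs_get_domain_path(xsh, domid);

        snprintf(path, sizeof(path), "%s/%s", dompath, key);
        free(dompath);

        char *val = xs_read(xsh, XBT_NULL, path, &len);
        int flag = (val && len > 0 && val[0] == '1');
        free(val);
        return flag;
    }

    int main(void)
    {
        unsigned int domid = 1;              /* placeholder domid */
        struct xs_handle *xsh = xs_open(XS_OPEN_READONLY);
        if (!xsh)
            return 1;

        printf("nx: %d viridian: %d pae: %d acpi: %d\n",
               read_platform_flag(xsh, domid, "platform/nx"),
               read_platform_flag(xsh, domid, "platform/viridian"),
               read_platform_flag(xsh, domid, "platform/pae"),
               read_platform_flag(xsh, domid, "platform/acpi"));

        xs_close(xsh);
        return 0;
    }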

4. xenguest walkthrough for memory allocation from stub_xc_hvm_build()

Based on the given domain type, the xenguest program calls dedicated functions for the build process of that domain type.

Among many other things, they call these functions:

  1. get_flags() to get the platform data from the Xenstore
  2. configure_vcpus() which uses the platform data from the Xenstore to configure vCPU affinity and the credit scheduler parameters vCPU weight and vCPU cap (max % pCPU time, for throttling); see the affinity sketch after this list
  3. For HVM, hvm_build_setup_mem to:
    1. Decide the e820 memory layout of the system memory of the domain including memory holes depending on PCI passthrough and vGPU flags.
    2. Load the BIOS/UEFI firmware images
    3. Store the final MMIO hole parameters in the Xenstore
    4. Call the libxenguest function xc_dom_boot_mem_init() to allocate and map the domain’s system memory. For HVM domains, it calls meminit_hvm() to loop over the vmemranges of the domain for mapping the system RAM of the guest from the Xen hypervisor heap. Its goals are:

      • Attempt to allocate 1GB superpages when possible
      • Fall back to 2MB pages when 1GB allocation failed
      • Fall back to 4k pages when both failed

      It uses the XENMEM_populate_physmap hypercall to perform memory allocation and to map the allocated memory to the system RAM ranges of the domain (see the allocation sketch after this list). The hypercall must:

      1. convert the arguments for allocating a page to hypervisor structures
      2. set flags and calls functions according to the arguments
      3. allocate the requested page at the most suitable place

        • depending on passed flags, allocate on a specific NUMA node
        • else, if the domain has node affinity, on the affine nodes
        • also in the most suitable memory zone within the NUMA node
      4. fall back to less desirable places if this fails

        • or fail for “exact” allocation requests
      5. split superpages if pages of the requested size are not available

    5. Call construct_cpuid_policy() to apply the CPUID featureset policy
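
The configure_vcpus() step above ultimately issues libxc calls for vCPU affinity and scheduler parameters. The sketch below shows roughly what that looks like, assuming the Xen >= 4.5 xc_vcpu_setaffinity() interface and the credit scheduler; the domid, vCPU count, pinning policy and weight/cap values are placeholders, not what xenopsd actually computes.

    /* Minimal sketch: pin every vCPU of a domain to one pCPU and set credit
     * scheduler weight/cap, roughly what configure_vcpus() does with the
     * platform data. Assumes the Xen >= 4.5 libxc interface; all values are
     * placeholders. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <xenctrl.h>

    int main(void)
    {
        uint32_t domid = 1;        /* placeholder domid */
        int nr_vcpus = 4;          /* placeholder vCPU count */

        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        if (!xch)
            return 1;

        int map_bytes = xc_get_cpumap_size(xch);
        xc_cpumap_t cpumap = xc_cpumap_alloc(xch);

        for (int vcpu = 0; vcpu < nr_vcpus; vcpu++) {
            /* Example policy: pin vCPU n to pCPU n (hard affinity only). */
            memset(cpumap, 0, map_bytes);
            cpumap[vcpu / 8] |= 1 << (vcpu % 8);
            if (xc_vcpu_setaffinity(xch, domid, vcpu, cpumap, NULL,
                                    XEN_VCPUAFFINITY_HARD))
                fprintf(stderr, "setaffinity failed for vCPU %d\n", vcpu);
        }

        /* Credit scheduler parameters: weight (relative share) and
         * cap (max % of one pCPU; 0 means uncapped). */
        struct xen_domctl_sched_credit sdom = { .weight = 256, .cap = 0 };
        if (xc_sched_credit_domain_set(xch, domid, &sdom))
            fprintf(stderr, "failed to set scheduler parameters\n");

        free(cpumap);
        xc_interface_close(xch);
        return 0;
    }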
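
The 1 GB -> 2 MB -> 4 kB fallback performed by meminit_hvm() can be pictured with the libxc wrapper around XENMEM_populate_physmap. The following simplified sketch is not the real libxenguest code: it populates a single contiguous guest-frame range for a placeholder domid, and the NUMA-node request flags appear only as a comment.

    /* Simplified sketch of the superpage fallback used when populating a
     * domain's physmap: try 1 GiB extents, then 2 MiB, then 4 KiB pages.
     * This is not the real meminit_hvm(); domid and the GFN range are
     * placeholders. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <xenctrl.h>

    #define ORDER_4K  0   /* 4 KiB pages                     */
    #define ORDER_2M  9   /* 2 MiB superpages (512 pages)    */
    #define ORDER_1G  18  /* 1 GiB superpages (262144 pages) */

    /* Populate [start_gfn, start_gfn + nr_pages) with extents of the given order.
     * Returns the number of 4 KiB pages actually populated. */
    static unsigned long populate(xc_interface *xch, uint32_t domid,
                                  xen_pfn_t start_gfn, unsigned long nr_pages,
                                  unsigned int order)
    {
        unsigned long pages_per_extent = 1UL << order;
        unsigned long nr_extents = nr_pages >> order;
        if (nr_extents == 0)
            return 0;

        xen_pfn_t *extents = malloc(nr_extents * sizeof(*extents));
        for (unsigned long i = 0; i < nr_extents; i++)
            extents[i] = start_gfn + i * pages_per_extent;

        /* mem_flags could also request a NUMA node, e.g.
         * XENMEMF_exact_node(node), to steer the allocation. */
        int done = xc_domain_populate_physmap(xch, domid, nr_extents,
                                              order, 0, extents);
        free(extents);
        return done < 0 ? 0 : (unsigned long)done << order;
    }

    int main(void)
    {
        uint32_t domid = 1;              /* placeholder domid */
        xen_pfn_t gfn = 0;               /* placeholder start of a RAM range */
        unsigned long todo = 4UL << 18;  /* 4 GiB worth of 4 KiB pages */

        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        if (!xch)
            return 1;

        /* Try the largest page size first; whatever is left over (because an
         * allocation failed or the remainder is not superpage-aligned) is
         * retried with the next smaller size. */
        unsigned int orders[] = { ORDER_1G, ORDER_2M, ORDER_4K };
        for (int i = 0; i < 3 && todo > 0; i++) {
            unsigned long done = populate(xch, domid, gfn, todo, orders[i]);
            gfn += done;
            todo -= done;
        }

        if (todo)
            fprintf(stderr, "failed to populate %lu pages\n", todo);
        xc_interface_close(xch);
        return 0;
    }

Each xc_domain_populate_physmap() call may populate fewer extents than requested; whatever remains is retried with the next smaller page order, which mirrors the fallback behaviour described in the list above.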

See https://xapi-project.github.io/new-docs/xenopsd/walkthroughs/VM.start/index.html#4-mark-each-vbd-as-active for how xenopsd completes the VM.start task.