NUMA node-specific memory allocation

== Entry point: <code>xenguest --mode hvm_build</code> ==

<code>xenguest --mode hvm_build</code> calls <code>do_hvm_build()</code>, which calls <code>stub_xc_hvm_build()</code>.

== <code>stub_xc_hvm_build()</code> ==

It starts the HVM/PVH domain creation by filling out the fields of <code>struct flags</code> and <code>struct xc_dom_image</code> and calls <code>hvm_build_setup_mem()</code>.

== <code>hvm_build_setup_mem()</code> ==

* Gets <code>struct xc_dom_image *dom</code>, <code>max_mem_mib</code>, and <code>max_start_mib</code>.
* Calculates the start and size of most parts of the domain's memory maps,
** taking memory holes for I/O into account, e.g. <code>mmio_size</code> and <code>mmio_start</code>.
* It then uses those to calculate <code>lowmem_end</code> and <code>highmem_end</code>.
* Finally, it calls <code>xc_dom_boot_mem_init()</code>.

== <code>xc_dom_boot_mem_init()</code> ==

In all cases, <code>xc_dom_boot_mem_init()</code> is called.

It calls the architecture-specific <code>meminit</code> hook for the domain type:

<syntaxhighlight lang="c">rc = dom->arch_hooks->meminit(dom);</syntaxhighlight>
<syntaxhighlight lang="c">rc = dom->arch_hooks->meminit(dom);</syntaxhighlight>

== <code>meminit_hvm()</code>, the x86 HVM <code>meminit()</code> hook that allocates HVM domain memory ==

It is the <code>meminit</code> hook for x86 HVM domains:
https://github.com/xen-project/xen/blob/master/tools/libs/guest/xg_dom_x86.c#L1348

=== Dynamic memory / Populate on Demand ===

When <code>dom-&gt;target_pages</code> is smaller than <code>dom-&gt;total_pages</code>,
the x86 <code>meminit_hvm()</code> function enables <code>XENMEMF_populate_on_demand</code>:
https://github.com/xen-project/xen/blob/master/tools/libs/guest/xg_dom_x86.c#L1368

In this case, the <code>meminit_hvm()</code> function calls <code>xc_domain_set_pod_target()</code>
to set the populate-on-demand target to <code>dom-&gt;target_pages</code>:
https://github.com/xen-project/xen/blob/master/tools/libs/guest/xg_dom_x86.c#L1454
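
The following is a minimal sketch of that logic, not the exact Xen source; it assumes the <code>xch</code> and <code>guest_domid</code> fields of <code>struct xc_dom_image</code> and the libxc call <code>xc_domain_set_pod_target()</code>:

<syntaxhighlight lang="c">/* Sketch: PoD is selected when the boot-time target is below the maximum. */
bool pod_mode = dom->target_pages < dom->total_pages;
unsigned int memflags = 0;
int rc = 0;

if ( pod_mode )
    /* Mark the physmap populations of this build as populate-on-demand. */
    memflags |= XENMEMF_populate_on_demand;

/* ... populate dom->total_pages of guest physmap using memflags ... */

if ( pod_mode )
    /* Limit the PoD target to the requested boot-time amount of memory. */
    rc = xc_domain_set_pod_target(dom->xch, dom->guest_domid,
                                  dom->target_pages, NULL, NULL, NULL);
</syntaxhighlight>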

=== <code>meminit_hvm</code>: <code>vmemranges</code> ===

The allocation of domain boot memory is defined using <code>vmemranges</code>.

<code>meminit_hvm()</code> creates a default <code>vmemranges</code> array based on the amount of
* <code>lowmem</code> (below 4G, up to <code>dom-&gt;lowmem_end</code>) and
* <code>highmem</code> (above 4G, up to <code>dom-&gt;highmem_end</code>).

The default <code>vmemranges</code> do not carry information about which NUMA node the memory shall be allocated from.

<code>meminit_hvm()</code> uses two vmemranges by default:
* <code>0</code> to <code>dom-&gt;lowmem_end</code>
* <code>4G</code> to <code>dom-&gt;highmem_end</code>

The NUMA node IDs (<code>nid</code>) are set to 0:
* A dummy <code>vnode_to_pnode[]</code> array maps <code>nid</code> 0 to <code>XC_NUMA_NO_NODE</code>.

Code: https://github.com/xen-project/xen/blob/master/tools/libs/guest/xg_dom_x86.c#L1371
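
As an illustration, a simplified sketch of that default layout, using the public <code>struct xen_vmemrange</code> type (the second range is only used when memory extends above 4G):

<syntaxhighlight lang="c">/* Default, non-NUMA layout: up to two ranges, both with nid 0. */
struct xen_vmemrange ranges[2] = {
    { .start = 0,          .end = dom->lowmem_end,  .flags = 0, .nid = 0 },
    { .start = 1ULL << 32, .end = dom->highmem_end, .flags = 0, .nid = 0 },
};
unsigned int nr_ranges = dom->highmem_end > (1ULL << 32) ? 2 : 1;

/* Dummy mapping: virtual node 0 is not pinned to any physical NUMA node. */
unsigned int vnode_to_pnode[1] = { XC_NUMA_NO_NODE };
</syntaxhighlight>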

Alternatively, callers of the <code>meminit</code> hook can pass
an array of memory ranges in <code>dom-&gt;vmemranges</code>,
which can contain a specific NUMA node for each <code>vmemrange</code> and the "<code>exact</code>" flag.

While <code>vmemranges</code> have been added for vNUMA, <code>meminit_hvm()</code> does not differentiate on that:
it is only concerned with memory initialization.
Hence, its code always acts on <code>vmemranges</code> to avoid code duplication
and creates default non-NUMA <code>vmemranges</code>.
If the caller does not pass specific <code>dom-&gt;vmemranges</code> for NUMA cases, the default <code>vmemranges</code> are used.

Caveat: When passing <code>dom-&gt;vmemranges</code>, Populate-On-Demand (<code>target_pages</code> &lt; <code>total_pages</code>) cannot be used,
because Populate-On-Demand does not support NUMA.
As a result, memory overcommitment with ballooning is currently impossible for NUMA node-specific memory allocations.

=== Allocation of the domain's memory ===

<code>meminit_hvm()</code> attempts to allocate 1GB pages, falls back to 2MB pages if a 1GB allocation fails,
and finally uses 4KB pages if both fail:

https://github.com/xen-project/xen/blob/master/tools/libs/guest/xg_dom_x86.c#L1475
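
A rough sketch of this fallback, not the exact Xen source: <code>populate_one_order()</code> is a hypothetical helper that wraps <code>xc_domain_populate_physmap()</code> for a given extent order, and the <code>SUPERPAGE_*_SHIFT</code> constants correspond to the page orders used in <code>xg_dom_x86.c</code>:

<syntaxhighlight lang="c">#define SUPERPAGE_1GB_SHIFT 18  /* a 1GB extent covers 2^18 pages of 4KB */
#define SUPERPAGE_2MB_SHIFT  9  /* a 2MB extent covers 2^9 pages of 4KB */

/* Hypothetical helper: populates up to nr_pages guest pages starting at gfn
 * with extents of the given order; returns the number of pages populated. */
unsigned long populate_one_order(struct xc_dom_image *dom, xen_pfn_t gfn,
                                 unsigned long nr_pages, unsigned int order,
                                 unsigned int memflags);

/* Try 1GB extents first, then 2MB, and finally 4KB pages for the remainder. */
unsigned long done = populate_one_order(dom, gfn, nr_pages,
                                        SUPERPAGE_1GB_SHIFT, new_memflags);
if ( done < nr_pages )
    done += populate_one_order(dom, gfn + done, nr_pages - done,
                               SUPERPAGE_2MB_SHIFT, new_memflags);
if ( done < nr_pages )
    done += populate_one_order(dom, gfn + done, nr_pages - done,
                               0 /* 4KB pages */, new_memflags);
</syntaxhighlight>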

For each <code>vmemrange</code>, <code>new_memflags</code> is composed from the base <code>memflags</code>:

<syntaxhighlight lang="c">unsigned int new_memflags = memflags;
unsigned int vnode = vmemranges[vmemid].nid;
// With custom vmemranges, vnode_to_pnode is custom too:
unsigned int pnode = vnode_to_pnode[vnode]

if ( pnode != XC_NUMA_NO_NODE ) // vnode maps to a physical NUMA node:
// XENMEMF_exact_node: (XENMEMF_node(n) | XENMEMF_exact_node_request)
new_memflags |= XENMEMF_exact_node(pnode);
</syntaxhighlight>

Thus, to allocate memory on a specific NUMA node, the following is needed (see the sketch after this list):
* Populate-On-Demand (POD) must not be used (<code>target_pages == total_pages</code>).
* The sum of the <code>vmemranges</code> must equal <code>dom-&gt;total_pages</code>.
* <code>dom-&gt;vmemranges</code> must be passed with a <code>nid</code> for each range.
* <code>dom-&gt;vnode_to_pnode</code> must be passed, and each <code>nid</code> must map to the physical NUMA node to allocate on.
* <code>dom-&gt;nr_vmemranges</code> and <code>dom-&gt;nr_vnodes</code> must be set accordingly.
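
For illustration, a caller could fill these fields as follows to allocate all of the domain's memory from physical NUMA node 1 (a hedged sketch; it assumes <code>lowmem_end</code>/<code>highmem_end</code> have already been calculated so that the ranges sum to <code>dom-&gt;total_pages</code>, and it omits error handling):

<syntaxhighlight lang="c">/* One virtual node (vnode 0), pinned to physical NUMA node 1. */
static struct xen_vmemrange ranges[2];
static unsigned int vnode_to_pnode[1] = { 1 };

ranges[0] = (struct xen_vmemrange){ .start = 0,          .end = dom->lowmem_end,  .nid = 0 };
ranges[1] = (struct xen_vmemrange){ .start = 1ULL << 32, .end = dom->highmem_end, .nid = 0 };

dom->vmemranges     = ranges;
dom->nr_vmemranges  = dom->highmem_end > (1ULL << 32) ? 2 : 1;
dom->vnode_to_pnode = vnode_to_pnode;
dom->nr_vnodes      = 1;

/* No Populate-On-Demand: the boot target must equal the full allocation. */
dom->target_pages   = dom->total_pages;
</syntaxhighlight>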

For the allocation of each <code>vmemrange</code>, <code>xc_domain_populate_physmap()</code> is used.

It calls the <code>XENMEM_populate_physmap</code> hypercall for each group of extents to allocate.
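
For illustration, populating one batch of 2MB extents could look like this (a minimal sketch; <code>base_gfn</code> and <code>nr_extents</code> are placeholder names, not the variables used in <code>meminit_hvm()</code>, and <code>SUPERPAGE_2MB_SHIFT</code> is the 2MB extent order from the sketch above):

<syntaxhighlight lang="c">xen_pfn_t extents[nr_extents];
unsigned long i;
int rc;

/* First GFN of each 2MB extent inside the vmemrange being populated. */
for ( i = 0; i < nr_extents; i++ )
    extents[i] = base_gfn + ((xen_pfn_t)i << SUPERPAGE_2MB_SHIFT);

/* On success, returns the number of extents that were actually populated. */
rc = xc_domain_populate_physmap(dom->xch, dom->guest_domid,
                                nr_extents, SUPERPAGE_2MB_SHIFT,
                                new_memflags, extents);
</syntaxhighlight>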

See [[populate_physmap.md]] for its implementation.

== Other callers of the <code>meminit()</code> hook ==

<code>libxl</code> calls <code>xc_dom_boot_mem_init()</code> from <code>libxl__build_dom()</code>; it is also called from [https://github.com/xenserver-next/xen/blob/xenguest/tools/helpers/init-xenstore-domain.c#L262 init-xenstore-domain.c/build()].


[[Category:NUMA]]
