CrossPoolMigration
This page describes current work to make
- Cross-pool migration
- Storage migration
work on XCP
The following designs were considered:
- CrossPoolMigrationv1: based on DRBD
- CrossPoolMigrationv2: based on tapdisk log-dirty
- CrossPoolMigrationv3: based on tapdisk mirroring
Choosing a design
The following properties are desired:
- the chosen design should exploit all available information about disk structure, to minimise the bandwidth used and complete quickly
- the chosen design should clearly separate the storage migration from the domain migration, avoiding the need to add additional hooks into the domain memory send/receive code (in libxenguest); see the sketch after this list
- this should also make it easier to use libxl functions in future
- the chosen design should be as "live" as possible, i.e. it should avoid extending the migration downtime
- the chosen design should be compatible with *dom0 disaggregation*, in particular where some storage elements are not in dom0
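The separation property can be pictured with a minimal sketch. All helpers here (mirror_disk, wait_until_synchronised, cancel_mirror, migrate_domain) are hypothetical stand-ins, not real xapi or libxenguest calls: the point is only that storage migration runs to completion first, and the domain memory migration that follows is the ordinary, unmodified path.

```python
# Sketch only: every helper below is a hypothetical placeholder,
# not a real xapi or libxenguest call.

def mirror_disk(disk, destination):
    print("mirroring %s to %s" % (disk, destination))
    return (disk, destination)

def wait_until_synchronised(mirror):
    print("mirror of %s is in sync" % mirror[0])

def cancel_mirror(mirror):
    print("cancelling mirror of %s" % mirror[0])

def migrate_domain(vm, destination):
    print("migrating domain memory of %s to %s" % (vm, destination))

def migrate_vm(vm, disks, destination):
    """Phase 1 (storage) is kept completely separate from phase 2
    (domain memory), so no storage hooks are needed in libxenguest."""
    mirrors = []
    try:
        # Phase 1: storage migration -- mirror every disk to the
        # destination SR while the VM keeps running.
        for disk in disks:
            mirrors.append(mirror_disk(disk, destination))
        for mirror in mirrors:
            wait_until_synchronised(mirror)
        # Phase 2: domain migration -- the unmodified memory
        # send/receive path.
        migrate_domain(vm, destination)
    except Exception:
        # On failure, tear the mirrors down; blocks already copied to
        # the destination may make a retry faster.
        for mirror in mirrors:
            cancel_mirror(mirror)
        raise

if __name__ == "__main__":
    migrate_vm("vm1", ["disk-a", "disk-b"], "dest-pool")
```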
Proposed choice
The design based on tapdisk mirroring (CrossPoolMigrationv3) is the proposed choice. It has the following benefits:
- it allows vhd-based storage backends to copy only those disk blocks that cannot already be found on the destination SR (see the sketch after this list)
- furthermore, if a migration fails, many of the blocks will already have been copied, so a subsequent attempt may be faster
- it separates "mirror creation" from "domain migration", so libxenguest is unmodified
- because the mirror is created in advance, the code that runs during the "migration downtime" is unmodified
- *unknown*: the ease of disaggregating dom0 will depend on the exact APIs used; these are still TBD
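The "copy only missing blocks" benefit can be illustrated with a small sketch. This is not vhd code: the source disk is modelled as a plain dictionary keyed by block number, dest_present stands for whatever the destination SR already holds (a shared base image, or blocks left behind by an earlier failed attempt), and the send callback is a hypothetical transfer function.

```python
def copy_missing_blocks(source_blocks, dest_present, send):
    """Copy to the destination only those blocks it does not already hold.

    source_blocks: {block_no: data} for blocks allocated in the source disk
    dest_present:  set of block numbers already on the destination SR,
                   e.g. from a shared base image or an earlier failed copy
    send:          callable used to transfer a single block
    Returns the number of blocks actually transferred.
    """
    copied = 0
    for block_no, data in sorted(source_blocks.items()):
        if block_no not in dest_present:
            send(block_no, data)
            dest_present.add(block_no)
            copied += 1
    return copied

if __name__ == "__main__":
    source = {0: b"boot", 1: b"root", 2: b"data"}
    already_there = {0}   # e.g. copied before a previous attempt failed
    sent = copy_missing_blocks(source, already_there,
                               lambda n, d: print("sending block", n))
    print("blocks sent:", sent)   # 2; running it again would send 0
```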