Access an LVM-based domU disk outside of the domU

If you've installed a domU using virt-install (e.g. on Fedora/CentOS/RHEL), which uses LVM by default, or have otherwise used LVM inside the domU, accessing the disk from your dom0 is difficult. If you need to get at the disk (to fix filesystem corruption, for example), this is really annoying. Fortunately, there are a few methods.

In this example, my domU is installed on a logical volume called 'helium'.

kpartx

This method has the benefit of working even if your Xen install is not working - for example, if the hardware died and you pulled the hard drives to extract the data.

Using kpartx looks simple: once it reveals the partition structure of the disk image, you'd expect to be able to mount the partitions normally with mount. But trying just gives you a mount error, because the partition holds an LVM physical volume rather than a filesystem, and mount can't do anything with that. Instead, you need the LVM tools to get access.
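
For example, once the partitions are mapped (Step 1 below), trying to mount the LVM one directly fails with something like this - the exact message varies with your mount version:

mount /dev/mapper/domU-helium2 /mnt
mount: unknown filesystem type 'LVM2_member'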

Step 1

kpartx -a /path/to/logical/volume

For me, this was kpartx -a /dev/domU/helium. This created two block devices in /dev/mapper: domU-helium1 and domU-helium2. domU-helium1 was my /boot; domU-helium2 was the LVM physical volume holding the root and swap space.
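
If you pass -v as well, kpartx prints the mappings as it creates them; the output looks roughly like this (device numbers and sector counts here are illustrative):

kpartx -av /dev/domU/helium
add map domU-helium1 (253:5): 0 1024000 linear 253:4 2048
add map domU-helium2 (253:6): 0 39913472 linear 253:4 1026048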

Step 2

vgscan

This will give you a list of the volume groups on your system, hopefully including the volume group from your domU.
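
The output looks something like this (vg_helium is the guest's VG; domU is the dom0 VG holding the 'helium' logical volume in this example):

vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "domU" using metadata type lvm2
  Found volume group "vg_helium" using metadata type lvm2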

Step 3

vgchange -a y volume_group_from_VM

This will activate the logical volumes in the VG. Block devices should appear in /dev/mapper. For me, what appeared was vg_helium-lv_root and vg_helium-lv_swap.
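
You can confirm the LVs are active with lvs - the 'a' in the Attr column means active (sizes here are illustrative, and some columns are trimmed):

lvs vg_helium
  LV      VG        Attr       LSize
  lv_root vg_helium -wi-a----- 17.54g
  lv_swap vg_helium -wi-a-----  1.97g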

Step 4

Do whatever you want with the disk image. For the purposes of this walkthrough, run a disk check with

fsck /dev/mapper/vg_helium-lv_root

Replace vg_helium-lv_root with your volume group and logical volume name. Wait for the check to finish before starting to clean up after yourself.
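
If you'd rather inspect files than run a disk check, you can mount the root LV instead. Mounting read-only is a sensible default while poking at a guest's disk from dom0 (the mount point is arbitrary, and the domU must not be running while you do any of this):

mkdir -p /mnt/helium
mount -o ro /dev/mapper/vg_helium-lv_root /mnt/helium
# ... inspect files ...
umount /mnt/helium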

Step 5

vgchange -a n volume_group_from_VM

This will deactivate the volume group so LVM won't complain when you destroy the block devices that represent the domU LVM.
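
LVM confirms the deactivation with a line like this:

vgchange -a n vg_helium
  0 logical volume(s) in volume group "vg_helium" now active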

Step 6

kpartx -d /path/to/logical/volume

Finally, to clean up, destroy the block devices that kpartx created, so Xen doesn't complain that the logical volume is already being accessed.
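
Putting the whole kpartx method together (names are from this walkthrough; substitute your own logical volume and VG):

# map the partitions and activate the guest's VG
kpartx -a /dev/domU/helium
vgscan
vgchange -a y vg_helium

# do whatever you need to do
fsck /dev/mapper/vg_helium-lv_root

# tear everything down again, in reverse order
vgchange -a n vg_helium
kpartx -d /dev/domU/helium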

block-attach

The alternative method depends on Xen being functional: you'll use Xen's ability to attach block devices to domains, except that instead of attaching a disk to a domU, you'll attach it to your dom0.

Step 1

xl block-attach 0 phy:/dev/domU/helium xvda w

A short explanation of the arguments:

xl [-v] block-attach <Domain> <BackDev> <FrontDev> [<Mode>] [BackDomain]
  • domain should be the domain ID; get it from xl list (dom0 is domain 0)
  • backdev needs the 'phy:' prefix if you use LVM/raw partitions, or 'file:' if you use disk images
  • frontdev doesn't appear to need the /dev prefix, just like in the domU config file
  • mode should be 'w' or 'r'

So in this case, I attached the disk /dev/domU/helium to my dom0, where it shows up as /dev/xvda, with writes allowed.

This created two entries in /dev: /dev/xvda1 and /dev/xvda2.
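
You can verify the attachment with lsblk; the output will be along these lines (sizes are illustrative):

lsblk /dev/xvda
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   20G  0 disk
├─xvda1 202:1    0  500M  0 part
└─xvda2 202:2    0 19.5G  0 part

If your guest lives in a disk image instead of a logical volume, the same attach works with the file: prefix, e.g. xl block-attach 0 file:/path/to/helium.img xvda w (the path here is hypothetical).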

Step 2

The volume group of the domU, vg_helium, was automatically detected, and the LVs appeared in /dev/vg_helium/ as /dev/vg_helium/lv_root and /dev/vg_helium/lv_swap.
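
Depending on your distribution's udev/LVM configuration, this automatic activation may not happen. If the LVs don't show up on their own, the same activation steps from the kpartx method bring them up:

vgscan
vgchange -a y vg_helium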

It's then a simple matter to mount the LV with mount /dev/vg_helium/lv_root /mnt/helium.

Step 3

Once you're done with whatever you're doing, unmount the LV.
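
Since active LVs keep the backing device open, it's also a good idea to deactivate the guest's VG before detaching - same command as in the kpartx method:

umount /mnt/helium
vgchange -a n vg_helium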

Next, use xl block-detach to release the block device. Looking at the syntax - xl [-v] block-detach <Domain> <DevId> - you'd expect something like xl block-detach 0 xvda to work.

Except that's wrong. You get Error: Device xvda not connected when you try it.

It turns out DevId doesn't mean the frontend or backend device name. What it wants is a number from xl block-list. Running xl block-list 0 (since we want the block devices attached to dom0) gave me this output:

Vdev  BE  handle state evt-ch ring-ref BE-path
51712 0   0      1     -1     -1       /local/domain/0/backend/vbd/0/51712

Look at the Vdev number. That's what you want.

Substitute it into the command to get xl block-detach 0 51712, run it, and your block device is detached.

Note that it doesn't print any message on success, so don't be worried if it says nothing. You can double-check that the device was removed by running xl block-list 0 again.
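
For reference, the whole block-attach round trip looks like this (the mount point and the Vdev number 51712 are from this walkthrough; yours may differ):

xl block-attach 0 phy:/dev/domU/helium xvda w
mkdir -p /mnt/helium
mount /dev/vg_helium/lv_root /mnt/helium
# ... do your repairs ...
umount /mnt/helium
vgchange -a n vg_helium
xl block-list 0            # note the Vdev number
xl block-detach 0 51712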