= File Systems =

== Is there a way to have a shared root file system amongst a set of guest VMs? ==
Yes, the best way to achieve this is to install your guest with an LVM-backed block device. You can then create a snapshot of this filesystem with the command:

 lvcreate -L<size of snapshot> -s -n <snapshot name> <backend disk name>
You should create one snapshot per guest, and then put the snapshot into the guest's .cfg file.
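For example, assuming the guests were installed onto a base volume /dev/vg0/guest-root (the volume group and guest names here are illustrative), you might create one copy-on-write snapshot per guest:

 lvcreate -L2G -s -n guest1-root /dev/vg0/guest-root

and then point each guest's .cfg file at its own snapshot:

 disk = ['phy:/dev/vg0/guest1-root,xvda1,w']

Each guest then writes only to its own snapshot, and the shared base volume stays untouched.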
Using snapshots allows you to avoid having to make a read-only root filesystem. However, should you wish to use a read-only root fs, you can install the OS in an LVM partition and share it across all the Xen domUs by exporting the partition as 'r' instead of 'w' when defining the disk.
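A minimal sketch of such a shared read-only disk line (volume names are again assumptions), paired with a small per-guest writable volume for mutable state such as /var and /tmp:

 disk = ['phy:/dev/vg0/shared-root,xvda1,r',
         'phy:/dev/vg0/guest1-var,xvda2,w']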
== Will I get good I/O performance if I use a file-backed (.img) block device? ==
No. Using LVM to create a volume and referencing it with the

 disk = ['phy:/...']

method in your .cfg file will yield better performance. This is particularly important for I/O-intensive VMs, such as databases.
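For instance, to move a guest from a file-backed image onto LVM (the volume group, image path and size are illustrative, not from this FAQ):

 # create a 20 GB logical volume in volume group vg0
 lvcreate -L20G -n guest1-disk vg0
 # copy the existing image into it
 dd if=/var/lib/xen/images/guest1.img of=/dev/vg0/guest1-disk bs=1M

then replace the file: line in the guest's .cfg with:

 disk = ['phy:/dev/vg0/guest1-disk,xvda,w']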
== How can I make disk resizing work? ==
I tried to resize a disk of my data guest from 100 to 400 GB. I did an lvresize /dev/xendata/data-disk -L 400G and it worked. I started the guest and did a df -h to check the size, but it still shows 100 GB.
The container is bigger but the filesystem isn't. Resizing an LV doesn't make the FS any bigger.
Log into the DomU and do a resize2fs <device>. You can do this while it's mounted as long as the filesystem is getting bigger.
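Putting the two steps together (the LV path comes from the question above; the device name inside the domU is an assumption):

 # in dom0: grow the logical volume
 lvresize -L 400G /dev/xendata/data-disk
 # inside the domU: grow the filesystem to fill the enlarged device
 resize2fs /dev/xvdb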
Oh, and if you've partitioned the LV inside the guest, you'll also need to resize the partition (BEFORE you do a resize2fs, etc.). There are two ways to do this. The safest is to use parted, which works if you're using ext2/ext3 (and a couple of other popular filesystems - reiser, I think). The other method is to delete the partition and recreate it with the extended end point. This isn't quite as safe and requires that 1) the start point of the partition is exactly the same as it was before, and 2) the partition is the last (or only) one on the LV.
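With a reasonably recent parted, resizing the last partition on the enlarged LV might look like this (the device and partition number are assumptions):

 # inside the domU, after the LV has been grown in dom0
 parted /dev/xvdb resizepart 1 100%
 # then grow the filesystem in that partition
 resize2fs /dev/xvdb1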
== How do I overcommit storage? ==
This is a matter of what storage backend you use. If you use one of the following, you will be able to overcommit storage:
* sparse raw file (with file: or tap:aio:)
* qcow
* vmdk/vdisk (I think full support is only in newer Xen or OpenSolaris)
* zvol (on OpenSolaris)
If you use disk/partition/LVM for domU storage, you won't be able to.
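As an illustration, a sparse raw file advertises its full size but only consumes blocks as data is written (the path and size are illustrative):

 # create an 80 GB image that initially occupies almost no real disk space
 dd if=/dev/zero of=/var/lib/xen/images/guest1.img bs=1M count=0 seek=81920

 disk = ['tap:aio:/var/lib/xen/images/guest1.img,xvda,w']

ls -lh will report the full 80 GB while du -h shows the actual usage; the gap between the two is what lets you hand out more virtual disk than physically exists.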
= 32bit vs 64 bit =
== Is there any way to install a 64-bit Linux domU on a 32-bit Linux dom0? ==
The types of domU that can be run depend mostly on the hypervisor, not on dom0. So if you have a 64-bit hypervisor, you should be able to run both 32- and 64-bit PV and HVM domUs, regardless of whether dom0 is 32- or 64-bit.
If you have a 32-bit dom0 and a 32-bit hypervisor, you won't be able to run 64-bit domUs at all, whether PV or HVM: a 64-bit hypervisor is required for 64-bit guests.
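To see which guest ABIs the running hypervisor can host, you can check its capabilities from dom0 (shown here with the xl toolstack; older setups use xm info instead):

 xl info | grep xen_caps

A 64-bit hypervisor typically reports entries like xen-3.0-x86_64, xen-3.0-x86_32p and hvm-3.0-x86_64, covering both 32- and 64-bit PV and HVM guests.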