= XCP storage driver domains =
This page describes how to configure a storage driver domain on XCP 1.6. This is experimental software: please send feedback to xen-api@lists.xen.org.
== Create a suitable VM ==
Install a VM in the usual way. A storage driver domain can be PV or HVM and must have:
* the blkback kernel module
* the XCP block udev scripts (available in RPM and .deb form)
* access to the storage medium (via the network or via a PCI passthrough device)
=== The blkback kernel module ===
Check that you have the "xen-blkback" module (Linux) and "modprobe" it if it's missing.
On Ubuntu 12.10, try
 echo xen-blkback >> /etc/modules
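As a quick sketch (the commands below are standard Linux tools rather than anything XCP-specific), you can check whether the module is present and load it for the current boot without restarting:
 lsmod | grep blkback          # is the backend module already loaded?
 sudo modprobe xen-blkback     # load it now if it is missing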
=== The XCP block udev scripts ===
These are needed to signal to the toolstack when a disk has been "attached". In Ubuntu the scripts are normally installed via
 apt-get install xen-utils-common
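As a rough check (the exact file list may vary between releases), you can list the files shipped by the package and look for the udev rules and block hotplug scripts:
 dpkg -L xen-utils-common | grep -E 'udev|scripts'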
=== PCI passthrough ===
Use "lspci" in dom0 to identify the storage controller you want to pass through. Take a note of the "BBBB:DD.F" identification string. Note that unless a multi-function device supports "Function Level Reset" on individual Functions you must pass all functions together as a group.
Use the "xe" CLI to associate this PCI device with your driver VM:
 xe vm-param-set uuid=<VM uuid> other-config:pci=0/BBBB:DD.F,1/BBBB:DD.F
If you shut down and then restart the VM (not reboot), you should see the device appear in the driver VM with "lspci", while in dom0 the device should be associated with "pciback".
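A minimal verification sketch, where the 04:00.0 address is purely illustrative (substitute your own BBBB:DD.F value):
 # in dom0: the function should now be claimed by pciback
 ls /sys/bus/pci/drivers/pciback/ | grep 04:00.0
 # inside the driver VM: the controller should appear in the PCI list
 lspci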
=== Add a VIF to the "internal management network" ===
Shut down the driver VM and execute in dom0:
 NET=$(xe network-list other-config:is_host_internal_management_network=true params=uuid --minimal)
 xe vif-create vm-uuid=<my VM uuid> network-uuid=$NET device=0
This will add a special VIF connecting the driver VM to the "host internal management" network.
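To double-check that the VIF was created and is attached to the right network (optional):
 xe vif-list vm-uuid=<my VM uuid> params=device,network-uuid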
=== Install the storage control plane software ===
Inside the driver VM run:
 git clone https://github.com/djs55/dbus-test
 cd dbus-test
 sudo make install
This will add a Python program (/usr/bin/xcp-sm-fs).
It's a good idea to arrange for this program to start automatically when the VM starts, for example by running it from /etc/rc.local. There's a config file (/etc/xcp-sm-fs.conf) which has some tweakable settings. Check that the VM is binding to port 80 on eth0 using "netstat" (or adjust the configuration if you're using a different interface).
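For example, a minimal port check inside the driver VM (assuming the default port 80 configuration):
 sudo netstat -ltnp | grep ':80 '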
You also need to install some generic storage tools:
 apt-get install blktap-utils nfs-common
=== Create a directory to contain the disk images ===
Inside the driver domain:
 sudo mkfs.ext3 /dev/myblockdevice
 sudo mkdir /interestingstorage
Add the filesystem to /etc/fstab so that it is mounted after every boot.
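For example, a hypothetical /etc/fstab entry matching the device and mount point used above (adjust to your actual block device):
 /dev/myblockdevice  /interestingstorage  ext3  defaults  0  2
After adding the entry, "sudo mount /interestingstorage" mounts it immediately without a reboot.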
== Attach an SR via the driver domain ==
In dom0, "introduce" an SR:
 SR=$(uuidgen)
 xe sr-introduce uuid=$SR name-label=driverdomain type=ext
 HOST=$(xe host-list params=uuid --minimal)
 PBD=$(xe pbd-create sr-uuid=$SR host-uuid=$HOST device-config:path=/interestingstorage)
Next, associate this host's connection ("PBD") with the driver domain:
 xe pbd-param-set uuid=$PBD other-config:storage_driver_domain=<driver domain uuid>
Finally, try attaching the storage:
 xe pbd-plug uuid=$PBD
This should cause the VM to be started and the storage control plane software to be queried.
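To confirm the attach worked, check that the PBD is now reported as attached:
 xe pbd-list uuid=$PBD params=currently-attached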
== Create a VM using the new storage ==
In dom0, create a blank VM from a template, e.g.:
 WIN=$(xe vm-install template="Windows XP SP3 (32-bit)" new-name-label=winxp sr-uuid=$SR)
Tweak the default configuration to allow qemu to find the disk device:
 xe vm-param-set uuid=$WIN platform:qemu_disk_cmdline=true
The rest of the VM install should proceed as normal: try inserting a CD via a GUI like XenCenter.
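For example, to start the new VM from the dom0 CLI before continuing the install via XenCenter:
 xe vm-start uuid=$WIN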