

{{Needs_Refactor|This document contains glossary and howto elements, which would better be separated out (but linked to)}}

__NOTOC__
'''Xen Driver Domain'''

A driver domain is an unprivileged Xen domain that has been given responsibility for a particular piece of hardware. It runs a minimal kernel containing only the driver for that hardware and the backend driver for its device class. Thus, if the hardware driver fails, the other domains (including dom0) will survive and, when the driver domain is restarted, will be able to use the hardware again.

As disk driver domains are not currently supported, this page describes the setup for network driver domains.


= Benefits =


* '''Performance'''
This eliminates dom0 as a bottleneck: when every device backend runs in dom0, dom0 can become overloaded and exhibit poor response latency.
* '''Enhanced reliability'''
Hardware drivers are the most failure-prone part of an operating system. It would be good for safety if you could isolate a driver from the rest of the system so that, when it failed, it could just be restarted without affecting the rest of the machine.
* '''Enhanced security'''
Because of the nature of network protocols and routing, there is a higher risk of an exploitable bug existing somewhere in the network path (host driver, bridging, filtering, etc.). Putting this path in a separate, unprivileged domain limits the value of attacking the network stack: even if an attacker succeeds, they gain no more access than they would from a normal unprivileged VM.


= Requirements =


Having a system with a modern IOMMU (either AMD-Vi or Intel VT-d version 2) is highly recommended. Without IOMMU support, there is nothing to stop the driver domain from using the network card's DMA engine to read and write any system memory. Furthermore, without IOMMU support, you cannot pass through a device to an HVM guest, only to PV guests.
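One quick way to check whether the hypervisor has found and enabled an IOMMU is to search its boot log for IOMMU-related messages (a sketch; the exact wording varies by Xen version and hardware, but lines mentioning VT-d, AMD-Vi or I/O virtualisation being enabled indicate a working IOMMU):
<pre>
# xl dmesg | grep -i -e iommu -e vt-d -e amd-vi
</pre>
On older installations that still use the xm toolstack, <code>xm dmesg</code> works the same way.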
If you don't have IOMMU support, you can still use PV domains to get the performance benefit, but you won't get any security or stability benefits.

The driver domain itself also needs the following software:

* the driver for the hardware being passed through,
* the backend driver for that device class, and
* the Xen hotplug scripts.


= Setup =
Setting up the driver domain is fairly straightforward, and can be broken down into the steps below. Note that PCI passthrough is optional: you can, for example, set up a driver domain that only does network packet filtering, while the packets are still handed to dom0 for transfer over the physical NIC.
=== Set up a VM with the appropriate drivers ===
These drivers include the hardware driver for the NIC, as well as drivers to access xenbus, xenstore, and netback. Any Linux distro with dom0 Xen support should do. The author recommends <code>xen-tools</code> (also see [[xen-tools]]).
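You can check from inside the VM that the network backend driver is available. This is a minimal sketch assuming a pvops kernel, where the module is called <code>xen-netback</code> (older XenLinux kernels name it differently, and the driver may also be built into the kernel, in which case <code>modprobe</code> is unnecessary):
<pre>
# modprobe xen-netback
# lsmod | grep -i netback
</pre>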


You should also give the VM a descriptive name; "domnet" would be a sensible default.


=== Install the xen-related hotplug scripts ===
These are scripts that listen for vif creation events on xenstore, and respond by doing the necessary setup with netback.


The easiest way to do this is by installing the full set of xen tools
in the VM -- either by installing the xen-utils package, or running
"make install-tools" inside the VM.
=== Use PCI passthrough to give the VM access to the hardware NIC ===
This has a lot of steps, but is fairly straightforward. Details for doing so can be found here: [[Xen PCI Passthrough]] (see also http://zhigang.org/wiki/XenPCIPassthrough).
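As a minimal sketch with the xl toolstack, assuming the NIC sits at PCI address <code>0000:03:00.0</code> (a made-up example; find your own device with <code>lspci</code>), you would first make the device assignable in dom0:
<pre>
# xl pci-assignable-add 0000:03:00.0
</pre>
and then list it in the driver domain's configuration file:
<pre>
pci = [ '0000:03:00.0' ]
</pre>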
=== Set up network topology in the VM ===
This is identical to the setup you would do in domain 0. Normally this would be bridging, but NAT or openvswitch are other possibilities. See more information at [[Host_Configuration/Networking]].
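For example, a simple bridged setup inside the driver domain might look like this (a sketch using bridge-utils; the interface names are assumptions and will differ on your system):
<pre>
# brctl addbr xenbr0
# brctl addif xenbr0 eth0
# ip link set eth0 up
# ip link set xenbr0 up
</pre>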
=== Configure guests ===
You now have a fully-configured driver domain. To use it, simply add "backend=[domain-name]" to the vifspec of your guest vif; for example:
<pre>
vif = [ 'bridge=xenbr0, mac=00:16:3E:0d:13:00, model=e1000, backend=domnet' ]
</pre>
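Once the guest is running, you can check that its vif is actually served by the driver domain rather than by dom0. A sketch with the xl toolstack, assuming a guest named <code>guest1</code> (the backend domain ID reported should be that of domnet, not 0):
<pre>
# xl network-list guest1
</pre>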


= Limitations =

Due to architectural limitations of most PC hardware (specifically, the lack of an IOMMU), driver domains with direct hardware access cannot be made fully secure. In effect, any domain that has direct hardware access has to be considered “trusted”.

The reason is that all I/O is done using physical addresses.

Consider the following case:
* domain A is mapped in 0-2GB of physical memory
* domain B is mapped in 2-4GB of physical memory
* domain A has direct access to a PCI NIC
* domain A programs the NIC to DMA into the 2-4GB physical memory range, overwriting domain B’s memory. Oops!

The solution is a hardware unit known as an “IOMMU” (I/O Memory Management Unit), which translates and restricts the physical addresses a device can access (see the Requirements section above).


= Reference =
[[Category:Glossary]]
[[Category:HowTo]]
[[Category:Developers]]
[[Category:Beginners]]
[[Category:Security]]
[[Category:Xen]]
