RHEL5 CentOS5 Xen Intel SR-IOV NIC Virtual Function VF PCI Passthru Tutorial
Intel SR-IOV NIC Virtual Function (VF) PCI passthru with RHEL5/CentOS5 Xen
Requirements:
- You need at least RHEL 5.9 / CentOS 5.9 for the dom0 system. Earlier EL5 versions have bugs that prevent SR-IOV VF passthru from working properly: in EL 5.8 PCI passthru to HVM guests is broken, and in EL 5.7 VF passthru works only once and fails on the second attempt. See the end of this wiki page for more information about the bugs and links to the various bugzilla entries.
- You need a system with hardware IOMMU for PCI passthru (Intel VT-d). IOMMU needs to be supported by the CPU, chipset, BIOS/firmware and Xen.
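- A quick pre-check (a sketch): VT-x support shows up as the "vmx" flag in /proc/cpuinfo. VT-d itself can't be reliably probed from a plain EL5 kernel, so verify it in the BIOS setup and later from "xm dmesg" as shown further down this page.
# count logical CPUs advertising Intel VT-x (expect > 0)
[root@dom0 ~]# grep -c vmx /proc/cpuinfo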
Components used in this tutorial:
- Dell R510 server, BIOS version: 1.10.2 (04/27/2012).
- Intel Xeon CPU L5640.
- Virtualization technology (Intel VT-x/VMX) enabled in BIOS.
- IOMMU/VT-d enabled in BIOS.
- Intel 82599EB 10 Gbit/sec dual-port SR-IOV Server NIC.
- CentOS 5.9 x86_64 DVD1.
- Stock Xen rpms from CentOS 5.9.
- Stock kernel-xen from CentOS 5.9 as dom0 kernel.
Xen SR-IOV VF passthru to VMs
In this tutorial we'll configure el5 Xen SR-IOV Virtual Function (VF) interface passthru to the following VMs, using Intel 82599 10 Gbit/sec (ixgbe) NIC:
- RHEL5 / CentOS5 x64 PV domU.
- RHEL5 / CentOS5 x64 HVM guest.
- RHEL6 / CentOS6 x64 HVM guest.
VMs will have direct access to the 10 Gbit/sec Virtual Function PCI device, offering high performance and low latency for network traffic.
EL5 Host installation and configuration
- Install RHEL 5.9 or CentOS 5.9 x64 (64 bit) host using the "Server" profile from DVD1.
- Both RHEL 5.9 and CentOS 5.9 have been verified to work identically, but in this tutorial we'll use CentOS because it's publicly available for free.
- Use LVM disk configuration, and leave free space in the LVM volume group so you can later create new LVM volumes and use them as VM disks.
- After installation disable SELinux in "/etc/selinux/config" by setting "SELINUX=disabled".
- Do all the usual network/IP, hostname, DNS etc configuration.
- Make sure there is free unallocated space in the LVM volume group:
[root@dom0 ~]# vgdisplay
  --- Volume group ---
  VG Name               VolGroup00
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  38
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                27
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.82 TB
  PE Size               32.00 MB
  Total PE              59551
  Alloc PE / Size       55670 / 1.70 TB
  Free  PE / Size       3881 / 121.28 GB
  VG UUID               QuouSc-MXj2-Hhdo-Ut9k-RLLO-jk9N-RjQD6E
- Note the "Free PE / Size" line on the vgdisplay command output above.
- Update the system with "yum update".
- Disable some of the extra services that are not needed in this tutorial:
chkconfig isdn off
chkconfig mcstrans off
chkconfig haldaemon off
chkconfig hidd off
chkconfig autofs off
chkconfig avahi-daemon off
chkconfig xfs off
chkconfig bluetooth off
chkconfig pcscd off
chkconfig iptables off
chkconfig ip6tables off
- Install Xen and related packages:
yum install xen xen-libs kernel-xen libvirt virt-viewer python-virtinst xorg-x11-xauth
- Edit "/etc/xen/xend-config.sxp" and disable (comment out) Xen network-script line:
#(network-script network-bridge)
- We want to configure dom0 networking settings and bridges ourselves.
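- For reference, a traditional bridged setup in dom0 can be done with the normal RHEL network-scripts; a minimal sketch (hypothetical bridge name and addresses, adjust to your environment):
# /etc/sysconfig/network-scripts/ifcfg-xenbr0 (hypothetical example)
DEVICE=xenbr0
TYPE=Bridge
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0 (enslave eth0 to the bridge)
DEVICE=eth0
BRIDGE=xenbr0
ONBOOT=yes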
- Modify "/boot/grub/grub.conf" and add all the usual options for Xen hypervisor:
default=0
timeout=10
splashimage=(hd0,0)/grub/splash.xpm.gz
#hiddenmenu
title CentOS (2.6.18-348.el5xen)
        root (hd0,0)
        kernel /xen.gz-2.6.18-348.el5 dom0_mem=2048M loglvl=all
        module /vmlinuz-2.6.18-348.el5xen ro root=/dev/VolGroup00/LogVol00
        module /initrd-2.6.18-348.el5xen.img
- Add "dom0_mem=2048M loglvl=all" options on the xen.gz line, adjust the dom0_mem as you wish/need.
- Make sure the Xen entry is the default in Grub.
- Reboot the system to Xen.
- When the system has booted up, verify Xen works OK:
[root@dom0 ~]# xm list
Name                                      ID   Mem(MiB) VCPUs State   Time(s)
Domain-0                                   0       2048    12 r-----    788.0
- Verify from the "xm list" output that dom0 is using the amount of memory you specified in Grub.
[root@dom0 ~]# xm info
host                   : dom0.localdomain
release                : 2.6.18-348.el5xen
version                : #1 SMP Tue Jan 8 18:35:04 EST 2013
machine                : x86_64
nr_cpus                : 12
nr_nodes               : 1
sockets_per_node       : 1
cores_per_socket       : 6
threads_per_core       : 2
cpu_mhz                : 2266
hw_caps                : bfebfbff:2c100800:00000000:00000940:029ee3ff:00000000:00000001
total_memory           : 49139
free_memory            : 46145
node_to_cpu            : node0:0-11
xen_major              : 3
xen_minor              : 1
xen_extra              : .2-348.el5
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : unavailable
cc_compiler            : gcc version 4.1.2 20080704 (Red Hat 4.1.2-54)
cc_compile_by          : mockbuild
cc_compile_domain      : (none)
cc_compile_date        : Tue Jan 8 17:45:10 EST 2013
xend_config_format     : 2
- Versions of the Xen-related rpms:
[root@dom0 ~]# rpm -qa | grep xen
xen-libs-3.0.3-142.el5
xen-libs-3.0.3-142.el5
kernel-xen-2.6.18-348.el5
xen-3.0.3-142.el5
- Note that the actual Xen hypervisor is version 3.1.2 (plus a lot of patches from later Xen versions); only the userland tools are 3.0.3 based. The Xen hypervisor is included in the kernel-xen rpm in RHEL5/CentOS5.
Intel SR-IOV NIC
- In this tutorial we're using the following Intel SR-IOV capable 10 Gbit/sec 82599 NIC:
[root@dom0 ~]# lspci | grep 10-Gig
03:00.0 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection (rev 01)
03:00.1 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection (rev 01)
- which uses the Intel ixgbe driver, as can be seen from the "ethtool -i" output:
[root@dom0 ~]# ethtool -i eth2
driver: ixgbe
version: 3.4.8-k
firmware-version: 0.9-3
bus-info: 0000:03:00.0
[root@dom0 ~]# ethtool -i eth3
driver: ixgbe
version: 3.4.8-k
firmware-version: 0.9-3
bus-info: 0000:03:00.1
Enabling IOMMU support in Xen command line options
- Edit "/boot/grub/grub.conf" and add "iommu=1" option for Xen, and "pci_pt_e820_access=on" option for dom0 Linux kernel (vmlinuz). Both options are required:
default=0
timeout=10
splashimage=(hd0,0)/grub/splash.xpm.gz
#hiddenmenu
title CentOS (2.6.18-348.el5xen)
        root (hd0,0)
        kernel /xen.gz-2.6.18-348.el5 dom0_mem=2048M loglvl=all iommu=1
        module /vmlinuz-2.6.18-348.el5xen ro root=/dev/VolGroup00/LogVol00 pci_pt_e820_access=on
        module /initrd-2.6.18-348.el5xen.img
"pci_pt_e820_access=on" is EL5 kernel-xen specific option to enable MMCONF access method for the PCI configuration space, required for SR-IOV Virtual Function (VF) device support. See RHEL5 technical release notes for more information: https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/5/html/5.9_Technical_Notes/Known_Issues-kernel-xen.html .
Enable Intel SR-IOV NIC Virtual Functions (VFs) in the ixgbe driver options
- Add the following line to end of "/etc/modprobe.conf":
options ixgbe max_vfs=8
- 8 is just an example here; the 82599 NIC supports up to 64 VFs.
- Blacklist the Intel VF driver (ixgbevf) in dom0 so that dom0 kernel doesn't try to use the Virtual Functions (we want to PCI passthru them to Xen VMs!). Create "/etc/modprobe.d/blacklist-ixgbevf.conf" file with the following contents:
# intel ixgbe sr-iov vf (virtual function) driver
blacklist ixgbevf
- Now reboot the system to activate the changes.
- After reboot check that the IOMMU support gets enabled in Xen hypervisor from the Xen dmesg log:
[root@dom0 ~]# xm dmesg | grep -i vt-d | grep -i enable
(XEN) Intel VT-d has been enabled
(XEN) Intel VT-d snoop control enabled
(XEN) [VT-D]iommu.c:619: iommu_enable_translation: iommu->reg = ffff828bfff58000
- Check for I/O virtualisation in Xen log:
[root@dom0 ~]# xm dmesg | grep "I/O virt" (XEN) I/O virtualisation enabled
- If VT-d IOMMU doesn't get enabled, read the full "xm dmesg | less" log for more information.
- Also verify you can see the VFs in "lspci" output:
[root@dom0 ~]# lspci | grep "Virtual Function" 03:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:10.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:10.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:10.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:10.5 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:10.6 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:10.7 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:11.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:11.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:11.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:11.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:11.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:11.5 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:11.6 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01) 03:11.7 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
- Why 16 VFs when we configured max_vfs=8? There are 8 VFs per port, and this is a dual-port NIC, so 16 in total.
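- A quick sanity check, counting the VFs (8 per port x 2 ports):
[root@dom0 ~]# lspci | grep -c "Virtual Function"
16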
- Also check the dom0 Linux kernel dmesg for the ixgbe driver logs:
[root@dom0 ~]# dmesg | grep ixgbe
ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver - version 3.4.8-k
ixgbe: Copyright (c) 1999-2011 Intel Corporation.
ixgbe 0000:03:00.0: Multiqueue Disabled: Rx Queue count = 1, Tx Queue count = 1
ixgbe 0000:03:00.0: (PCI Express:5.0GT/s:Width x8) 00:3b:31:77:9e:1c
ixgbe 0000:03:00.0: MAC: 2, PHY: 8, SFP+: 3, PBA No: E81283-002
ixgbe 0000:03:00.0: eth2: IOV is enabled with 8 VFs
ixgbe 0000:03:00.0: eth2: IOV: VF 0 is enabled MAC fe:71:c2:a8:4d:28
ixgbe 0000:03:00.0: eth2: IOV: VF 1 is enabled MAC a2:cd:8b:d7:a0:3e
ixgbe 0000:03:00.0: eth2: IOV: VF 2 is enabled MAC 2a:e4:d6:69:a2:9a
ixgbe 0000:03:00.0: eth2: IOV: VF 3 is enabled MAC 5a:56:e0:a4:7d:36
ixgbe 0000:03:00.0: eth2: IOV: VF 4 is enabled MAC ae:f5:bb:ff:f4:bc
ixgbe 0000:03:00.0: eth2: IOV: VF 5 is enabled MAC 2a:be:89:3e:53:3b
ixgbe 0000:03:00.0: eth2: IOV: VF 6 is enabled MAC 1a:61:ce:c8:a7:c0
ixgbe 0000:03:00.0: eth2: IOV: VF 7 is enabled MAC 96:d7:3f:c4:f2:aa
ixgbe 0000:03:00.0: Intel(R) 10 Gigabit Network Connection
ixgbe 0000:03:00.1: Multiqueue Disabled: Rx Queue count = 1, Tx Queue count = 1
ixgbe 0000:03:00.1: (PCI Express:5.0GT/s:Width x8) 00:3b:31:77:9e:1d
ixgbe 0000:03:00.1: MAC: 2, PHY: 8, SFP+: 4, PBA No: E81283-002
ixgbe 0000:03:00.1: eth3: IOV is enabled with 8 VFs
ixgbe 0000:03:00.1: eth3: IOV: VF 0 is enabled MAC ee:b0:2b:65:c4:6c
ixgbe 0000:03:00.1: eth3: IOV: VF 1 is enabled MAC 0a:f0:de:c0:b1:4d
ixgbe 0000:03:00.1: eth3: IOV: VF 2 is enabled MAC 9a:ab:7f:4a:d6:ce
ixgbe 0000:03:00.1: eth3: IOV: VF 3 is enabled MAC ee:f3:59:15:1b:02
ixgbe 0000:03:00.1: eth3: IOV: VF 4 is enabled MAC be:cd:68:3c:bc:f2
ixgbe 0000:03:00.1: eth3: IOV: VF 5 is enabled MAC a2:66:eb:00:25:80
ixgbe 0000:03:00.1: eth3: IOV: VF 6 is enabled MAC 6e:3e:da:74:84:63
ixgbe 0000:03:00.1: eth3: IOV: VF 7 is enabled MAC de:06:3d:af:8c:ce
ixgbe 0000:03:00.1: Intel(R) 10 Gigabit Network Connection
Configuring Xen pciback for VF PCI passthru
- First check available PCI devices for Xen PCI passthru:
[root@dom0 ~]# xm pci-list-assignable-devices
Error: pciback not loaded?
- The error message makes sense: we haven't yet configured any PCI devices for passthru, and we haven't loaded the Xen pciback module in the dom0 kernel.
- Edit "/etc/modprobe.conf" and add options for the Xen pciback dom0 kernel module:
options pciback hide=(03:10.0)(03:10.1)(03:10.2)(03:10.3)(03:10.4)(03:10.5)(03:10.6)(03:10.7)(03:11.0)(03:11.1)(03:11.2)(03:11.3)(03:11.4)(03:11.5)(03:11.6)(03:11.7)
- Here we are "hiding" all the Virtual Function PCI device IDs, so they can be used for Xen PCI passthru.
- Now load the "pciback" module in dom0 kernel:
modprobe pciback
- After loading "pciback" module check the dom0 Linux kernel dmesg:
pciback 0000:03:10.0: seizing device
pciback 0000:03:10.2: seizing device
pciback 0000:03:10.4: seizing device
pciback 0000:03:10.6: seizing device
pciback 0000:03:11.0: seizing device
pciback 0000:03:11.2: seizing device
pciback 0000:03:11.4: seizing device
pciback 0000:03:11.6: seizing device
pciback 0000:03:10.1: seizing device
pciback 0000:03:10.3: seizing device
pciback 0000:03:10.5: seizing device
pciback 0000:03:10.7: seizing device
pciback 0000:03:11.1: seizing device
pciback 0000:03:11.3: seizing device
pciback 0000:03:11.5: seizing device
pciback 0000:03:11.7: seizing device
PCI: Enabling device 0000:03:11.7 (0000 -> 0002)
PCI: Enabling device 0000:03:11.5 (0000 -> 0002)
PCI: Enabling device 0000:03:11.3 (0000 -> 0002)
PCI: Enabling device 0000:03:11.1 (0000 -> 0002)
PCI: Enabling device 0000:03:10.7 (0000 -> 0002)
PCI: Enabling device 0000:03:10.5 (0000 -> 0002)
PCI: Enabling device 0000:03:10.3 (0000 -> 0002)
PCI: Enabling device 0000:03:10.1 (0000 -> 0002)
PCI: Enabling device 0000:03:11.6 (0000 -> 0002)
PCI: Enabling device 0000:03:11.4 (0000 -> 0002)
PCI: Enabling device 0000:03:11.2 (0000 -> 0002)
PCI: Enabling device 0000:03:11.0 (0000 -> 0002)
PCI: Enabling device 0000:03:10.6 (0000 -> 0002)
PCI: Enabling device 0000:03:10.4 (0000 -> 0002)
PCI: Enabling device 0000:03:10.2 (0000 -> 0002)
PCI: Enabling device 0000:03:10.0 (0000 -> 0002)
- Now check again what PCI devices are ready for Xen PCI passthru:
[root@dom0 ~]# xm pci-list-assignable-devices
0000:03:10.1
0000:03:10.0
0000:03:10.2
0000:03:10.3
0000:03:10.4
0000:03:10.5
0000:03:10.6
0000:03:10.7
0000:03:11.0
0000:03:11.1
0000:03:11.2
0000:03:11.3
0000:03:11.4
0000:03:11.5
0000:03:11.6
0000:03:11.7
- Enable the SR-IOV NIC Physical Function (main PCI device) in dom0, otherwise all the VFs will be in "down" state and cannot be used in domUs/VMs!
[root@dom0 ~]# ifconfig eth2 up
[root@dom0 ~]# ifconfig eth3 up
- Enable automatic loading of the Xen pciback module in the dom0 kernel after system reboot:
[root@dom0 ~]# echo "modprobe pciback" > /etc/sysconfig/modules/xen-pciback.modules [root@dom0 ~]# chmod +x /etc/sysconfig/modules/xen-pciback.modules
Xen SR-IOV VF PCI passthru to CentOS 5 x64 PV domU, guest installation
- Create a new LVM volume for the new VM:
[root@dom0 ~]# lvcreate -L18G -nc59x64pv /dev/VolGroup00
  Logical volume "c59x64pv" created
- Install a new CentOS 5.9 (or later) x64 PV domU, for example with virt-install. ssh to dom0 with X11 forwarding enabled, so you can view the GUI installer (VNC client virt-viewer session) on your local desktop/X-server:
[root@dom0 ~]# virt-install --debug -b virbr0 -n c59x64pv -r 1024 --vcpus=2 -f /dev/VolGroup00/c59x64pv --vnc -p -l "http://ftp.funet.fi/pub/mirrors/centos.org/5.9/os/x86_64"
- Replace "ftp.funet.fi" with your local CentOS mirror site.
- Note that virt-install will attach the VM to "virbr0" ethernet bridge, which provides DHCP+NAT services, so the VM can connect to the Internet thru dom0.
- Install the VM with the minimal package selection profile; there's no need to choose any extra packages or a GUI.
- When the installation finishes, and after the VM is automatically rebooted, login to the console as root, and check the IP address of the VM. It'll probably be something in the 192.168.122.0/24 private subnet from the virbr0 dnsmasq setup. Use "ifconfig eth0" to verify the IP of the domU.
- Verify you can ssh to the VM from dom0.
- Run "lspci" in the VM to verify there are no PCI devices before we configure the Xen PCI passthru:
[root@c59x64pv ~]# lspci
[root@c59x64pv ~]#
- Install CentOS updates:
[root@c59x64pv ~]# yum update
- Reboot the VM to verify everything works after the updates are installed.
- Again connect to the VM with ssh.
- Check the kernel version:
[root@c59x64pv ~]# uname -a
Linux c59x64pv.localdomain 2.6.18-348.el5xen #1 SMP Tue Jan 8 18:35:04 EST 2013 x86_64 x86_64 x86_64 GNU/Linux
- Shut down the domU.
Xen SR-IOV VF PCI passthru to CentOS 5.9 x64 PV domU, guest configuration
- RHEL / CentOS 5.8 and 5.9 both seem to work OK as a domU; 5.7 and earlier won't work as a domU. The dom0 must be 5.9 or later!
- Check the output of "xm pci-list-assignable-devices" in dom0 and decide which VF you want to passthru to the VM.
- In this example we'll use the PCI device "03:10.0", which is the first VF on the first NIC port.
- Edit "/etc/xen/c59x64pv" configuration file for the VM and add a line to enable PCI passthru for the chosen VF:
pci = [ '03:10.0' ]
- So the whole "/etc/xen/c59x64pv" domU cfgfile looks like:
name = "c59x64pv" uuid = "ac23d84e-9807-8e3a-5a84-494aa881fd67" maxmem = 1024 memory = 1024 vcpus = 2 bootloader = "/usr/bin/pygrub" on_poweroff = "destroy" on_reboot = "restart" on_crash = "restart" vfb = [ "type=vnc,vncunused=1,keymap=fi" ] disk = [ "phy:/dev/VolGroup00/c58x64pv,xvda,w" ] vif = [ "mac=00:16:3e:64:99:f3,bridge=virbr0,script=vif-bridge" ] pci = [ '03:10.0' ]
- Now start the VM:
[root@dom0 xen]# xm create -f /etc/xen/c59x64pv
Using config file "/etc/xen/c59x64pv".
Using <class 'grub.GrubConf.GrubConfigFile'> to parse /grub/menu.lst
Started domain c59x64pv
- Connect to the VM with ssh (you checked the IP earlier).
- Check the domU Linux kernel log with "dmesg":
[root@c59x64pv ~]# dmesg | grep ixg
ixgbevf: Intel(R) 10 Gigabit PCI Express Virtual Function Network Driver - version 2.1.0-k
- Check the domU "lspci" output:
[root@c59x64pv ~]# lspci
00:00.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
- Check "ethtool -i" output in domU:
[root@c59x64pv ~]# ethtool -i eth1
driver: ixgbevf
version: 2.1.0-k
firmware-version: N/A
bus-info: 0000:00:00.0
- Enable and configure eth1 VF interface:
[root@c59x64pv ~]# ifconfig eth1 <ip> netmask <netmask> up
- Check link status etc with ethtool:
[root@c59x64pv ~]# ethtool eth1
Settings for eth1:
        Supported ports: [ ]
        Supported link modes:   10000baseT/Full
        Supports auto-negotiation: No
        Advertised link modes:  Not reported
        Advertised auto-negotiation: No
        Speed: 10000Mb/s
        Duplex: Full
        Port: Unknown! (255)
        PHYAD: 0
        Transceiver: Unknown!
        Auto-negotiation: off
        Current message level: 0x00000007 (7)
        Link detected: yes
- Check that interrupt (IRQ) counters are increasing for the VF:
[root@c59x64pv ~]# grep eth1 /proc/interrupts
253:         24          0   Phys-irq  eth1:mbx
254:         14          0   Phys-irq  eth1-tx-0
255:        661          0   Phys-irq  eth1-rx-0
- Note that "eth0" will be the Xen PV NIC (using xennet driver), and "eth1" will the SR-IOV VF using Intel "ixgbevf" driver.
- Configure the network/IP settings in "/etc/sysconfig/network-scripts/ifcfg-eth1" if you want them to persist across VM reboots.
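- A minimal example of such an ifcfg-eth1 (placeholder addresses; note there is no HWADDR line, since the VF MAC changes on every host reboot, see the "VF MAC addresses" section below):
# /etc/sysconfig/network-scripts/ifcfg-eth1 (example, adjust addresses)
DEVICE=eth1
BOOTPROTO=none
IPADDR=192.168.100.11
NETMASK=255.255.255.0
ONBOOT=yes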
- Enjoy the 10 Gbit/sec VF in the Xen PV domU!
Xen SR-IOV VF PCI passthru to CentOS 5 x64 HVM guest, installation
- Create a new LVM volume for the VM:
[root@dom0 ~]# lvcreate -L18G -nc59x64hvm /dev/VolGroup00
  Logical volume "c59x64hvm" created
- Download CentOS 5.9 x64 DVD1 .iso from your local CentOS mirror site:
[root@dom0 ~]# mkdir ~/iso
[root@dom0 ~]# cd ~/iso
[root@dom0 iso]# wget ftp://ftp.funet.fi/pub/mirrors/centos.org/5.9/isos/x86_64/CentOS-5.9-x86_64-bin-DVD-1of2.iso
- Create a Xen cfgfile for the new VM, edit "/etc/xen/c59x64hvm":
kernel = "/usr/lib/xen/boot/hvmloader" builder='hvm' device_model = '/usr/lib64/xen/bin/qemu-dm' name = "c59x64hvm" memory = 1024 shadow_memory = 8 vcpus=2 pae=1 acpi=1 apic=1 vif = [ 'mac=00:26:5f:14:18:19, bridge=virbr0, model=e1000' ] disk = [ 'phy:/dev/VolGroup00/c59x64hvm,hda,w', 'file:/root/iso/CentOS-5.9-x86_64-bin-DVD-1of2.iso,hdc:cdrom,r' ] boot='cd' xen_platform_pci=1 on_poweroff = 'destroy' on_reboot = 'restart' on_crash = 'restart' sdl=0 vnc=1 vncpasswd= stdvga=0 serial='pty' tsc_mode=0 usb=1 usbdevice='tablet' keymap='fi'
- Remember to edit and change the MAC address on the vif line; every VM/vif needs a unique static MAC!
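- If you need to generate a MAC, the 00:16:3e prefix is the OUI reserved for Xen guests; a quick sketch to create a random one (example output shown):
[root@dom0 ~]# python -c "import random; print ':'.join(['00','16','3e'] + ['%02x' % random.randint(0,255) for i in range(3)])"
00:16:3e:4f:a2:7c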
- Start the VM and CentOS 5.9 installation:
[root@dom0 ~]# xm create -f /etc/xen/c59x64hvm
Using config file "/etc/xen/c59x64hvm".
Started domain c59x64hvm
- Connect to the HVM guest console with the GUI VNC client:
[root@dom0 ~]# virt-viewer c59x64hvm
- Install using the defaults.
- Choose DHCP for eth0.
- Set the system hostname manually to "c59x64hvm".
- Deselect "Desktop - Gnome" package group, it's not needed in the VM.
- When the installation is complete, and the installer reboots the VM, you need to launch virt-viewer again to reconnect to the VM console.
- Login to the console as root.
- Check the IP of eth0 with "ifconfig eth0". If there's no IP yet, enable the network with "ifup eth0" and then re-check the IP.
- Connect to the VM with ssh from dom0.
- Make sure basic tools are available:
[root@c59x64hvm ~]# yum install pciutils vim nano wget tcpdump screen ethtool
- Install the CentOS updates with:
[root@c59x64hvm ~]# yum update
- After the updates are installed reboot the VM.
- Reconnect to the VM with ssh.
- Verify the kernel version:
[root@c59x64hvm ~]# uname -a
Linux c59x64hvm.localdomain 2.6.18-348.el5 #1 SMP Tue Jan 8 17:53:53 EST 2013 x86_64 x86_64 x86_64 GNU/Linux
- Verify the PCI devices in the VM:
[root@c59x64hvm ~]# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.2 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:01.3 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
00:02.0 VGA compatible controller: Cirrus Logic GD 5446
00:03.0 SCSI storage controller: XenSource, Inc. Xen Platform Device (rev 01)
00:04.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 03)
- Note that because this is an HVM (=fully virtualized) guest there are some Xen Qemu-dm emulated PCI devices visible in the lspci output.
- Shut down the VM:
[root@c59x64hvm ~]# poweroff
Xen SR-IOV VF PCI passthru to CentOS 5 x64 HVM guest, configuration
- NOTE! There is a bug in RHEL 5.9 / CentOS 5.9 and earlier where kudzu+udev interaction might end up renaming the VF ethX interface to something like "__tmp254339888"; that triggers a bug in the Intel ixgbevf driver and crashes the guest kernel. The bug is fixed in a driver update in RHEL 5.10 / CentOS 5.10, but it is still present in the el5.9 GA kernel (2.6.18-348.el5). For more information: https://bugzilla.redhat.com/show_bug.cgi?id=862862 .
- Check the output of "xm pci-list-assignable-devices" in dom0 and decide which VF you want to passthru to the VM.
- In this example we'll use the PCI device "03:10.1", which is the first VF on the second NIC port.
- Edit "/etc/xen/c59x64hvm" configuration file for the VM and add a line to enable PCI passthru for the chosen VF:
pci = [ '03:10.1' ]
- So the whole "/etc/xen/c59x64hvm" VM cfgfile looks like:
kernel = "/usr/lib/xen/boot/hvmloader" builder='hvm' device_model = '/usr/lib64/xen/bin/qemu-dm' name = "c59x64hvm" memory = 1024 shadow_memory = 8 vcpus=2 pae=1 acpi=1 apic=1 vif = [ 'mac=00:26:5f:14:18:19, bridge=virbr0, model=e1000' ] disk = [ 'phy:/dev/VolGroup00/c59x64hvm,hda,w', 'file:/root/iso/CentOS-5.9-x86_64-bin-DVD-1of2.iso,hdc:cdrom,r' ] boot='cd' xen_platform_pci=1 on_poweroff = 'destroy' on_reboot = 'restart' on_crash = 'restart' sdl=0 vnc=1 vncpasswd= stdvga=0 serial='pty' tsc_mode=0 usb=1 usbdevice='tablet' keymap='fi' pci = [ '03:10.1' ]
- Start the VM:
[root@dom0 ~]# xm create -f /etc/xen/c59x64hvm
Using config file "/etc/xen/c59x64hvm".
Started domain c59x64hvm
- Connect to the VM with ssh.
- Check "lspci" output for the VF:
[root@c59x64hvm ~]# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.2 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:01.3 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
00:02.0 VGA compatible controller: Cirrus Logic GD 5446
00:03.0 SCSI storage controller: XenSource, Inc. Xen Platform Device (rev 01)
00:04.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 03)
00:06.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
- Check kernel dmesg log for VF driver entries:
[root@c59x64hvm ~]# dmesg | grep ixgbevf
ixgbevf: Intel(R) 10 Gigabit PCI Express Virtual Function Network Driver - version 2.1.0-k
ixgbevf 0000:00:06.0: setting latency timer to 64
- Check "ethtool -i" output:
[root@c59x64hvm ~]# ethtool -i eth1
driver: ixgbevf
version: 2.1.0-k
firmware-version: N/A
bus-info: 0000:00:06.0
- Check and verify "/etc/modprobe.conf"
[root@c59x64hvm ~]# cat /etc/modprobe.conf
alias eth0 e1000
alias scsi_hostadapter ata_piix
alias eth1 ixgbevf
- The emulated e1000 NIC is eth0, so make sure modprobe.conf has "alias eth0 e1000". The VF is eth1, so make sure modprobe.conf has "alias eth1 ixgbevf". If these aliases are the other way around (as kudzu sometimes writes them), udev will start renaming the ethX interfaces at boot time, which can trigger the ixgbevf driver bug mentioned earlier and crash the guest kernel!
- Check and verify "ifcfg-eth0" file in "/etc/sysconfig/network-scripts" directory.
# cat /etc/sysconfig/network-scripts/ifcfg-eth0
# Intel Corporation 82540EM Gigabit Ethernet Controller
DEVICE=eth0
BOOTPROTO=dhcp
HWADDR=00:26:5F:14:18:19
ONBOOT=yes
DHCP_HOSTNAME=c59x64hvm.localdomain
- Remove the VF MAC address from "ifcfg-eth1" in "/etc/sysconfig/network-scripts" directory because the VF MAC is random, and it'll change on every host reboot.
[root@c59x64hvm ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
# Intel Corporation 82599 Ethernet Controller Virtual Function
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
- Enable and configure eth1 VF interface:
[root@c59x64hvm ~]# ifconfig eth1 <ip> netmask <netmask> up
- Check link status etc with ethtool:
[root@c59x64hvm ~]# ethtool eth1
Settings for eth1:
        Supported ports: [ ]
        Supported link modes:   10000baseT/Full
        Supports auto-negotiation: No
        Advertised link modes:  Not reported
        Advertised auto-negotiation: No
        Speed: 10000Mb/s
        Duplex: Full
        Port: Unknown! (255)
        PHYAD: 0
        Transceiver: Unknown!
        Auto-negotiation: off
        Current message level: 0x00000007 (7)
        Link detected: yes
- Check that interrupt (IRQ) counters are increasing for the VF:
[root@c59x64hvm ~]# grep eth1 /proc/interrupts
193:      37696        726   PCI-MSI-X  eth1-rx-0
201:        721          0   PCI-MSI-X  eth1-tx-0
209:         13          0   PCI-MSI-X  eth1:mbx
- Enjoy the 10 Gbit/sec VF in the Xen HVM guest!
Xen SR-IOV VF PCI passthru to CentOS 6 PV domU
RHEL6 / CentOS6 uses a Linux 2.6.32 based kernel, which includes Xen domU support via the upstream Linux "pvops" framework. The xen-pcifront PCI frontend driver was only added in upstream Linux 2.6.37, so it's missing from Linux 2.6.32 and thus from the el6 kernel as well. This means you can't do Xen PCI passthru to an el6 PV domU: without the PCI frontend driver, the el6 PV domU cannot use any PCI devices.
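You can verify this from inside an el6 guest: the PCI frontend config option (CONFIG_XEN_PCIDEV_FRONTEND) simply doesn't exist in the el6 kernel config (a sketch; expect no output):
[root@c63x64hvm ~]# grep XEN_PCIDEV_FRONTEND /boot/config-$(uname -r)
[root@c63x64hvm ~]#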
Xen SR-IOV VF PCI passthru to CentOS 6 HVM guest, installation
- Create a new LVM volume for the VM:
[root@dom0 ~]# lvcreate -L18G -nc63x64hvm /dev/VolGroup00
  Logical volume "c63x64hvm" created
- Download CentOS 6.3 x64 DVD1 .iso from your local CentOS mirror site:
[root@dom0 ~]# mkdir ~/iso
[root@dom0 ~]# cd ~/iso
[root@dom0 iso]# wget ftp://ftp.funet.fi/pub/mirrors/centos.org/6.3/isos/x86_64/CentOS-6.3-x86_64-bin-DVD1.iso
- Create a Xen cfgfile for the new VM, edit "/etc/xen/c63x64hvm":
kernel = "/usr/lib/xen/boot/hvmloader" builder='hvm' device_model = '/usr/lib64/xen/bin/qemu-dm' name = "c63x64hvm" memory = 1024 shadow_memory = 8 vcpus=2 pae=1 acpi=1 apic=1 vif = [ 'mac=00:26:5f:12:12:11, bridge=virbr0, model=e1000' ] disk = [ 'phy:/dev/VolGroup00/c63x64hvm,hda,w', 'file:/root/iso/CentOS-6.3-x86_64-bin-DVD1.iso,hdc:cdrom,r' ] boot='cd' xen_platform_pci=1 on_poweroff = 'destroy' on_reboot = 'restart' on_crash = 'restart' sdl=0 vnc=1 vncpasswd= stdvga=0 serial='pty' tsc_mode=0 usb=1 usbdevice='tablet' keymap='fi'
- Remember to edit and change the MAC address on the vif line; every VM/vif needs a unique static MAC!
- Start the VM and CentOS 6.3 installation:
[root@dom0 ~]# xm create -f /etc/xen/c63x64hvm
Using config file "/etc/xen/c63x64hvm".
Started domain c63x64hvm
- Connect to the HVM guest console with the GUI VNC client:
[root@dom0 ~]# virt-viewer c63x64hvm
- Install using the "minimal" package selection profile.
- When the installation is complete, and the installer reboots the VM, you need to launch virt-viewer again to reconnect to the VM console.
- Login to the console as root.
- Check the IP of eth0 with "ifconfig eth0". If there's no IP yet, enable the network with "ifup eth0" and then re-check the IP.
- Connect to the VM with ssh from dom0.
- Install some required extra packages that are missing from the "minimal" installation:
[root@c63x64hvm ~]# yum install pciutils vim nano wget tcpdump
- Edit "/etc/sysconfig/network-scripts/ifcfg-eth0" and change ONBOOT="yes" and NM_CONTROLLED="no". Now the network will be started automatically after VM reboot.
- We also need to make sure eth0 stays as eth0 and won't get renamed in the future, so check the HWADDR line in "/etc/sysconfig/network-scripts/ifcfg-eth0" and copy the MAC address.
- Edit "/etc/udev/rules.d/70-persistent-net.rules" and add these lines, replacing ATTR{address} with your actual eth0 MAC address:
# eth0 SUBSYSTEM=="net", ACTION=="add", DRIVERS="?*", ATTR{address}="00:26:5f:12:12:11", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
- Install the CentOS updates with:
[root@c63x64hvm ~]# yum update
- After the updates are installed reboot the VM.
- Reconnect to the VM with ssh.
- Verify the kernel version:
[root@c63x64hvm ~]# uname -a
Linux c63x64hvm.localdomain 2.6.32-279.5.1.el6.x86_64 #1 SMP Tue Aug 14 23:54:45 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
- Verify the PCI devices in the VM:
[root@c63x64hvm ~]# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.2 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:01.3 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
00:02.0 VGA compatible controller: Cirrus Logic GD 5446
00:03.0 SCSI storage controller: XenSource, Inc. Xen Platform Device (rev 01)
- Note that because this is an HVM (=fully virtualized) guest there are some Xen Qemu-dm emulated PCI devices visible in the lspci output.
- Shut down the VM:
[root@c63x64hvm ~]# poweroff
Xen SR-IOV VF PCI passthru to CentOS 6 x64 HVM guest, configuration
- NOTE! You need at least RHEL 6.4 / CentOS 6.4 in the VM, because there is a known bug in EL6.3 that makes it impossible to enable SR-IOV VF interfaces in an HVM guest. The bug is fixed in kernel-2.6.32-318.el6 and later versions. For more information: https://bugzilla.redhat.com/show_bug.cgi?id=849223 .
- Check the output of "xm pci-list-assignable-devices" in dom0 and decide which VF you want to passthru to the VM.
- In this example we'll use the PCI device "03:10.1", which is the first VF on the second NIC port.
- Edit "/etc/xen/c63x64hvm" configuration file for the VM and add a line to enable PCI passthru for the chosen VF:
pci = [ '03:10.1' ]
- So the whole "/etc/xen/c63x64hvm" VM cfgfile looks like:
kernel = "/usr/lib/xen/boot/hvmloader" builder='hvm' device_model = '/usr/lib64/xen/bin/qemu-dm' name = "c63x64hvm" memory = 1024 shadow_memory = 8 vcpus=2 pae=1 acpi=1 apic=1 vif = [ 'mac=00:26:5f:12:12:11, bridge=virbr0, model=e1000' ] disk = [ 'phy:/dev/VolGroup00/c63x64hvm,hda,w', 'file:/root/iso/CentOS-6.3-x86_64-bin-DVD1.iso,hdc:cdrom,r' ] boot='cd' xen_platform_pci=1 on_poweroff = 'destroy' on_reboot = 'restart' on_crash = 'restart' sdl=0 vnc=1 vncpasswd= stdvga=0 serial='pty' tsc_mode=0 usb=1 usbdevice='tablet' keymap='fi' pci = [ '03:10.1' ]
- Start the VM:
[root@dom0 ~]# xm create -f /etc/xen/c63x64hvm
Using config file "/etc/xen/c63x64hvm".
Started domain c63x64hvm
- Connect to the VM with ssh.
- Check "lspci" output for the VF:
[root@c63x64hvm ~]# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.2 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:01.3 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
00:02.0 VGA compatible controller: Cirrus Logic GD 5446
00:03.0 SCSI storage controller: XenSource, Inc. Xen Platform Device (rev 01)
00:06.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
- Check kernel dmesg log for VF driver entries:
[root@c63x64hvm ~]# dmesg | grep ixgbevf
ixgbevf: Intel(R) 10 Gigabit PCI Express Virtual Function Network Driver - version 2.2.0-k
ixgbevf: Copyright (c) 2009 - 2012 Intel Corporation.
ixgbevf 0000:00:06.0: setting latency timer to 64
ixgbevf 0000:00:06.0: irq 48 for MSI/MSI-X
ixgbevf 0000:00:06.0: irq 49 for MSI/MSI-X
ixgbevf 0000:00:06.0: irq 50 for MSI/MSI-X
- Check "ethtool -i" output:
[root@c63x64hvm ~]# ethtool -i eth1
driver: ixgbevf
version: 2.2.0-k
firmware-version:
bus-info: 0000:00:06.0
- Enable and configure eth1 VF interface:
[root@c63x64hvm ~]# ifconfig eth1 <ip> netmask <netmask> up
- Check link status etc with ethtool:
[root@c63x64hvm ~]# ethtool eth1
Settings for eth1:
        Supported ports: [ ]
        Supported link modes:   10000baseT/Full
        Supports auto-negotiation: No
        Advertised link modes:  Not reported
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Speed: 10000Mb/s
        Duplex: Full
        Port: Other
        PHYAD: 0
        Transceiver: Unknown!
        Auto-negotiation: off
        Current message level: 0x00000007 (7)
        Link detected: yes
- Enjoy the 10 Gbit/sec VF in the Xen HVM guest!
Passing thru multiple VFs to a single VM
- PCI devices in dom0:
[root@dom0 ~]# lspci | grep 82599
03:00.0 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection (rev 01)
03:00.1 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection (rev 01)
03:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
03:10.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
03:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
03:10.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
03:10.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
03:10.5 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
- The ordering of the Virtual Function PCI devices is the following:
03:10.0: first VF of the first PF (03:00.0).
03:10.1: first VF of the second PF (03:00.1).
03:10.2: second VF of the first PF (03:00.0).
03:10.3: second VF of the second PF (03:00.1).
03:10.4: third VF of the first PF (03:00.0).
03:10.5: third VF of the second PF (03:00.1).
- VF = Virtual Function.
- PF = Physical Function (physical card/port).
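- If you don't want to rely on the ordering rule, the VF-to-PF mapping can usually be double-checked from sysfs, where each PF has "virtfnN" symlinks pointing at its VFs (a sketch; depends on the kernel's SR-IOV sysfs support being present):
# list the VFs belonging to the first PF (expect 03:10.0, 03:10.2, 03:10.4, ...)
[root@dom0 ~]# ls -l /sys/bus/pci/devices/0000:03:00.0/ | grep virtfn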
- To passthru multiple VFs to a single VM, use the following syntax for pci = []:
pci = [ '03:10.0', '03:10.1', '03:10.2', '03:10.3' ]
VF MAC addresses
Intel "ixgbe" PF driver will randomize the VF MAC addresses, so on every host reboot the VF MACs will change! There's no way to change this behaviour in RHEL5 / CentOS5 dom0. In later versions of "iproute" you can control VF MAC addresses from dom0 and set/change static MACs for the VFs, but unfortunately that's not possible in the EL5 version of iproute.
Bugzilla entries
For reference, here are some RHEL5 / RHEL6 Red Hat bugzilla entries related to Xen SR-IOV VF passthru:
- "PCI Virtual Function Passthrough - SR-IOV, Paravirt Guest fails to obtain IRQ after reboot / passthrough works only once":
https://bugzilla.redhat.com/show_bug.cgi?id=688673
- "SR-IOV VF doesn't work in EL6.3 HVM guest / EL6 kernel bug":
https://bugzilla.redhat.com/show_bug.cgi?id=849223
- "RHEL5 Xen dom0 qemu-dm: NIC unplug messes up MSI-X functionality of passed-through devices":
https://bugzilla.redhat.com/show_bug.cgi?id=861349
- "Xen qemu-dm removes wrong iomem range when unplugging emulated NIC":
https://bugzilla.redhat.com/show_bug.cgi?id=861352
- "Only 2 VF can be seen in RHEL5.9 PV guest":
https://bugzilla.redhat.com/show_bug.cgi?id=865736
- "RHEL 5.8 HVM guest kernel crash when enabling SR-IOV VF PCI passthru NIC in the VM":
https://bugzilla.redhat.com/show_bug.cgi?id=862862
- "RFE: rhel5 iproute: add support for SR-IOV VF operations / wontfix":