OpenStack CI Loop for Xen-Libvirt
CI Loop
This article provides some information about jenkins.openstack.xenproject.org, the front-end for the Xen Project OpenStack CI loop. This is a 3rd party OpenStack CI loop, 3rd party because it is operated by the Xen Project rather than the OpenStack Foundation. You can find more information about 3rd party OpenStack testing systems in the following articles:
- Setting Up an External OpenStack Testing System – Part 1
- Setting Up an External OpenStack Testing System – Part 2
- The Test Suite we run (OpenStack Tempest)
- Tempest Overview
- Xen Project now in OpenStack Nova Hypervisor Driver Quality Group B
Baseline
Our baseline for testing is Ubuntu Xenial Xerus 16.04 LTS. It comes with:
- Xen 4.6
- Libvirt 1.3.1
- Linux 4.4
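If you need to confirm what a test node is actually running, the following is a minimal sketch using standard commands; the exact versions reported will depend on the Xenial point release and packages installed:
# Quick check of the baseline components on a test node (standard commands;
# output varies with the installed package versions):
lsb_release -d                 # Ubuntu release, e.g. 16.04 LTS
xl info | grep xen_version     # Xen hypervisor version
libvirtd --version             # libvirt version
uname -r                       # Linux kernel version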
Usage Instructions
Our OpenStack CI loop interface is available at jenkins.openstack.xenproject.org. The CI loop is now integrated with OpenStack Gerrit, in voting mode (Group B). This means that functional testing provided by our CI loop does not gate commits, but advises patch authors and reviewers of results in gerrit.
Note that DNS is currently broken; you can access the CI loop via http://104.130.29.226 and http://104.130.29.226:8080 instead.
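As a quick sanity check while DNS is broken, you can talk to the Jenkins front-end directly by IP. This is just a sketch using Jenkins' standard JSON API; the IP and port may change over time:
# Query the Jenkins front-end directly by IP (standard Jenkins JSON API):
curl -s "http://104.130.29.226:8080/api/json?pretty=true" | head -n 20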
The CI loop tests our baseline against each OpenStack Nova commit.
Dashboard
The most useful view for getting an overview of test failures is the Build History page.
OpenStack review: Each build maps onto a changeset on review.openstack.org. The following figure shows how to map a test run on our CI loop to a specific OpenStack review.
Aborted builds: Sometimes jobs are aborted. The main reason is that the OpenStack CI loop run by the OpenStack Foundation (or another voting 3rd party CI loop) may already have determined that a change breaks a test, so there is no need to complete our test run. Aborted jobs are typically marked with a grey circle. This is nothing to worry about.
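To complement the review mapping described above, here is a minimal sketch of looking up a change directly via the Gerrit REST API; the change number below is a hypothetical placeholder, substitute the one taken from the build you are looking at:
# Look up an OpenStack review from a change number (placeholder value shown);
# Gerrit prefixes JSON responses with a guard line, which tail strips off.
CHANGE=247630
curl -s "https://review.openstack.org/changes/${CHANGE}" | tail -n +2 | python -m json.tool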
Checking up on Test Failures
Test failures can be caused by:
- an OpenStack commit on review.openstack.org
- an issue in libvirt
- an issue in Xen
The first thing to do is to check whether a failure has been caused by an OpenStack commit on review.openstack.org. If that is the case, the OpenStack community will fix the issue and re-submit the changeset.
That is, unless the issue is specific to the Xen Project. In that case, there is an expectation that we help the committer investigate the issue. Note that you can also use zuul.openstack.xenproject.org/scoreboard to compare which test failures were Xen-specific.
The figure below shows how to find the test reports and how to verify that an issue is specific to Xen+Libvirt.
The figure below shows how to get to the libvirt and libxl driver log files, which you will need in order to investigate and reproduce an issue.
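If you prefer the command line, something like the following can pull the logs for a given build. The artifact paths here are hypothetical and may differ on the actual Jenkins instance, so browse the job's artifact listing first:
# Hypothetical example: download logs for one build of the dsvm-tempest-xen job.
# The artifact paths below are guesses; check <job>/<build>/artifact/ to confirm.
BUILD=1234
BASE=http://104.130.29.226:8080/job/dsvm-tempest-xen/${BUILD}
wget "${BASE}/consoleFull" -O "${BUILD}_console"
wget "${BASE}/artifact/logs/libvirt/libvirtd.txt" -O "${BUILD}_libvirtd.log"
wget "${BASE}/artifact/logs/libvirt/libxl/libxl-driver.log" -O "${BUILD}_libxl-driver.log"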
Scripts to investigate issues
A couple of simple bash lines can download the full console logs from the jobs and create a histogram of which Tempest tests caused the majority of the failures:
# for i in {1106..1374}; do wget http://104.239.148.40:8080/job/dsvm-tempest-xen/${i}/consoleFull -O ${i}_console; done
# grep -h "\.\.\. FAIL" * | sed -e 's/.*\(tempest[^) ]*\).*/\1/' | sort | uniq -c | sort -n
Note that the in {<start test-run>..<end test-run>} portion of the script needs to be adjusted to the start and end test run.
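As a convenience, the same two steps can be wrapped into a small script that takes the first and last run numbers as arguments instead of editing the range by hand. This is just a sketch; the script name is made up, and the job URL is the one used above:
#!/bin/bash
# Sketch: ./failure_histogram.sh <start test-run> <end test-run>
START=$1
END=$2
# Download the full console log for each run in the given range.
for i in $(seq "${START}" "${END}"); do
    wget "http://104.239.148.40:8080/job/dsvm-tempest-xen/${i}/consoleFull" -O "${i}_console"
done
# Build a histogram of which Tempest tests failed most often.
grep -h "\.\.\. FAIL" *_console | sed -e 's/.*\(tempest[^) ]*\).*/\1/' | sort | uniq -c | sort -n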
Community Activities
Training Session (week of May 4th)
We are planning a training session in the week of May 4th or the week after. For more details, see here. This will also include a discussion on how to integrate the CI loop into the project's workflow.