Inter-domain communication for XAPI

Note: this page is out-of-date and should be replaced with a description of the "message-switch" and "vchan"; see https://github.com/xapi-project/xapi-project.github.io/issues/39

In the future we will have

  1. storage driver domains
  2. network driver domains
  3. qemu stub domain(s)
  4. ... various other helper domains (e.g. possibly a domain running pygrub)

The XCP toolstack already consists of several components, including:

  1. xapi: handling resource pools
  2. squeezed: applying ballooning policy
  3. xenopsd: (new) "babysitting" running VMs on a host
  4. perfmon: generating alerts based on performance measurements

We currently use a set of ad-hoc protocols for inter-process communication including

  1. XMLRPC
  2. xenstore "rpc"
  3. JSONRPC

We would like to standardise on a single messaging/RPC system which

  1. is available everywhere and very easy to use from many different source languages
  2. promotes location-transparency (so clients are unaffected when a service is moved from dom0 to a domU)

Proposal: DBUS

DBUS is a simple IPC system originally designed for graphical desktop environments. It supports

  1. a message broker which can buffer messages and start services on demand to receive them
  2. an IDL supporting both basic types (string, int, etc.) and structured types (arrays, structs)
  3. bindings for many languages, including C, Python and OCaml
  4. location-transparency via a notion of a "well-known bus address" (like org.xen.foo) used to identify services

It is also the system used by XCI.
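
For illustration, a client-side call routed through a well-known bus name might look like the following sketch using the dbus-python bindings. The service name reuses the org.xen.foo example above; the object path, interface and Version method are invented, and the session bus stands in for whatever bus a real deployment would use:

  import dbus

  bus = dbus.SessionBus()
  # The bus resolves the well-known name to whichever connection owns it,
  # so the caller neither knows nor cares which domain the service runs in.
  proxy = bus.get_object('org.xen.foo', '/org/xen/foo/Manager')
  manager = dbus.Interface(proxy, dbus_interface='org.xen.foo.Manager')
  print(manager.Version())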

Useful links:

  1. d-feet: a pygtk-based object inspector
  2. http://wiki.meego.com/D-Bus/Overview: an overview, including instructions for using d-feet

Concrete example

Consider a simplified XCP system containing the following services:

  1. xapi: running in a domU, handling XenAPI calls from clients
  2. xenopsd: running in dom0, performing start/shutdown/... on running VMs
  3. storage: running in a domU, i.e. a storage driver domain

We would create a special bus for our communication (we wouldn't use the default system or session buses)
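
Processes would reach this dedicated bus by its address rather than through the usual environment variables. A minimal dbus-python sketch, where the socket path of the dedicated dbus-daemon instance is an assumption:

  import dbus

  # Address of the dedicated XCP bus; the socket path here is an assumption.
  XCP_BUS_ADDRESS = 'unix:path=/var/run/xcp/bus'
  bus = dbus.bus.BusConnection(XCP_BUS_ADDRESS)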

Each service would claim a well-known bus name of the form: org.xen.xcp.servicename

Each service would place its objects under the path: /org/xen/xcp/servicename/objectname
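
Putting the two conventions together, a xenopsd-like service could claim its name and export per-VM objects roughly as in the sketch below. This is dbus-python pseudocode, not the real xenopsd: the bus address, interface name, Start method and the vm_0 identifier are assumptions for illustration; only the well-known name and path prefix follow the conventions above.

  import dbus
  import dbus.service
  from dbus.mainloop.glib import DBusGMainLoop
  from gi.repository import GLib

  DBusGMainLoop(set_as_default=True)
  bus = dbus.bus.BusConnection('unix:path=/var/run/xcp/bus')   # assumed bus address
  name = dbus.service.BusName('org.xen.xcp.xenopsd', bus)      # claim the well-known name

  class Xenopsd(dbus.service.Object):
      """Root object; exported children appear in its introspection data."""
      def __init__(self, bus_name):
          dbus.service.Object.__init__(self, bus_name, '/org/xen/xcp/xenopsd')

  class Vm(dbus.service.Object):
      """One exported object per managed VM, under the service's path prefix."""
      def __init__(self, bus_name, uuid):
          self.uuid = uuid
          dbus.service.Object.__init__(self, bus_name,
                                       '/org/xen/xcp/xenopsd/' + uuid)

      @dbus.service.method('org.xen.xcp.xenopsd.Vm',
                           in_signature='', out_signature='s')
      def Start(self):
          # A real implementation would ask Xen to build and unpause the domain.
          return 'started ' + self.uuid

  Xenopsd(name)
  Vm(name, 'vm_0')   # placeholder; a real UUID's '-' would have to become '_' in the path
  GLib.MainLoop().run()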

TBD: Would we want to expose internal application "objects" (e.g. the VMs being managed by xenopsd or the VDIs within a storage driver domain) as individual bus objects, so that they are introspectable? Or would that require broadcasting too much internal state?
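
If per-VM objects were exported as in the sketch above, a client (or a tool like d-feet) could discover them through the standard org.freedesktop.DBus.Introspectable interface. A rough sketch, assuming the service exports a root object at /org/xen/xcp/xenopsd and that the bindings list exported children as <node> elements in the generated introspection data (dbus-python does this):

  import dbus
  import xml.etree.ElementTree as ET

  bus = dbus.bus.BusConnection('unix:path=/var/run/xcp/bus')   # assumed bus address
  root = bus.get_object('org.xen.xcp.xenopsd', '/org/xen/xcp/xenopsd')
  data = root.Introspect(dbus_interface='org.freedesktop.DBus.Introspectable')

  # Child <node> elements name the objects exported below the root path,
  # e.g. one per VM in the sketch above.
  print([child.get('name') for child in ET.fromstring(data).findall('node')])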

We would want to specify the IDL up-front for our interfaces (even though DBUS can also be used dynamically, e.g. from Python)
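
The DBUS IDL is its introspection XML format. A hedged sketch of what an up-front description of the (invented) org.xen.xcp.xenopsd.Vm interface might contain; type signatures such as 's' (string), 't' (uint64) and 'a(ss)' (array of two-string structs) cover both the basic and structured types mentioned above:

  <node>
    <interface name="org.xen.xcp.xenopsd.Vm">
      <method name="Start">
        <arg name="result" type="s" direction="out"/>
      </method>
      <method name="SetMemoryTarget">
        <arg name="kib" type="t" direction="in"/>
      </method>
      <method name="ListDisks">
        <!-- array of (device, vdi) structs -->
        <arg name="disks" type="a(ss)" direction="out"/>
      </method>
      <signal name="StateChanged">
        <arg name="state" type="s"/>
      </signal>
    </interface>
  </node>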

TBD: Are signals useful for us?
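
Signals look like a natural fit for events such as VM state changes. On the service side, a method decorated with @dbus.service.signal('org.xen.xcp.xenopsd.Vm', signature='s') (matching the StateChanged signal sketched in the IDL above) is emitted simply by calling it; a client would subscribe as in this sketch, where the bus address and names remain assumptions:

  import dbus
  from dbus.mainloop.glib import DBusGMainLoop
  from gi.repository import GLib

  DBusGMainLoop(set_as_default=True)                           # needed to receive signals
  bus = dbus.bus.BusConnection('unix:path=/var/run/xcp/bus')   # assumed bus address
  vm = bus.get_object('org.xen.xcp.xenopsd', '/org/xen/xcp/xenopsd/vm_0')
  vm.connect_to_signal('StateChanged',
                       lambda state: print('vm_0 is now', state),
                       dbus_interface='org.xen.xcp.xenopsd.Vm')
  GLib.MainLoop().run()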

TBD: Are there any default interfaces for heartbeating and diagnostics?
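
One default that does exist: the DBUS specification defines a standard org.freedesktop.DBus.Peer interface whose Ping() method is normally answered by the D-Bus library itself, which could serve as a basic liveness check; richer diagnostics would be interfaces of our own. A minimal sketch, with the bus address and names assumed as before:

  import dbus

  bus = dbus.bus.BusConnection('unix:path=/var/run/xcp/bus')   # assumed bus address
  proxy = bus.get_object('org.xen.xcp.xenopsd', '/org/xen/xcp/xenopsd')
  # Raises a DBusException (e.g. on timeout) if the peer is unreachable.
  dbus.Interface(proxy, 'org.freedesktop.DBus.Peer').Ping()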