Archive/XCP DevStack

Introduction

These instructions are based on the README file here: https://github.com/openstack-dev/devstack/blob/master/tools/xen/README.md

  • This README is required reading; the instructions below are a helpful variation I used in testing

Assumptions

  1. A pre-installed XCP (1.0 or 1.1) host somewhere
  2. Highly recommended (but not required) to use a clean install
    1. The below instructions will not wipe out any existing VMs on XCP, but this is for development testing and there are absolutely NO GUARANTEES
  3. Preparing the images uses a good amount of disk space on XCP. You can either do that step on a separate machine, as the README above suggests, or do what I found easier: attach a spare USB disk to XCP and mount it so the extra working space is available there (see the mount sketch below)
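
If you go the USB-disk route, the disk needs to be formatted and mounted on the XCP dom0 first. A minimal sketch, assuming the disk shows up as /dev/sdb with a single formatted partition (your device name and mount point will almost certainly differ):

# ASSUMPTION: the USB disk appears as /dev/sdb1 and is already formatted
mkdir -p /nasbackup
mount /dev/sdb1 /nasbackup
# confirm there is enough free space to build the images
df -h /nasbackup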

These instructions will do the following

  1. Download and run the Xen DevStack scripts on the new XCP installation
    1. This will install the plug-ins and configuration needed inside the XCP dom0
  2. Create a new VM on XCP called ALLINONE which contains the full set of OpenStack node components (nova compute/volume, glance image, keystone identity, etc)
  3. Allow you to then log in to the OpenStack dashboard web page and play around with OpenStack

The OpenStack VM (called "ALLINONE" by default) installed in XCP will automatically be configured to use 1024MB of RAM and 2.5GB of storage.

Installation

  1. Install XCP
  2. SSH to your XCP dom0
  3. Now install DevStack per the steps below

Download the bootstrap script

cd /root
wget --no-check-certificate https://github.com/cloudbuilders/devstack/raw/xen/tools/xen/prepare_dom0.sh
chmod 755 prepare_dom0.sh

Fix XCP yum to allow it to install packages (we'll undo this at the end to be safe)

sed -i -e "s/enabled=0/enabled=1/" /etc/yum.repos.d/CentOS-Base.repo
sed -i -e "s/enabled=1/enabled=0/" /etc/yum.repos.d/Citrix.repo

If you decide to do what I did and use an external USB drive (mounted on /nasbackup on my XCP), you can do the following. The scripts assume /root/devstack is the root path in some places, so I just set up a symlink to make my life easier.

# link in the storage directory since some of the scripts rely on /root/devstack
mkdir /nasbackup/devstack
ln -s /nasbackup/devstack /root

Set up all the prerequisites

# install curl/expat for git
yum install curl-devel
yum install expat-devel

# install the pre-reqs using the script
./prepare_dom0.sh
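
prepare_dom0.sh should leave a devstack checkout under /root/devstack; a quick sanity check that it finished (not part of the official steps):

ls /root/devstack/tools/xen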

The XENAPI_PASSWORD is only used from inside the created VM. The part of the scripts that installs the VM into your XCP uses "xe" on the command line, so it installs directly onto the XCP machine the scripts are running from. Now create the localrc used by the build:

cd /root/devstack/tools/xen
cat > /root/devstack/localrc <<EOF
MYSQL_PASSWORD=my_super_secret
SERVICE_TOKEN=my_super_secret
ADMIN_PASSWORD=my_super_secret
RABBIT_PASSWORD=my_super_secret
# This is the password for your guest (for both stack and root users)
GUEST_PASSWORD=my_super_secret
# IMPORTANT: The following must be set to your dom0 root password!
XENAPI_PASSWORD=my_super_secret
# Do not download the usual images yet!
IMAGE_URLS=""
# Explicitly set virt driver here
VIRT_DRIVER=xenserver
# Explicitly set multi-host
MULTI_HOST=1
# Give extra time for boot
ACTIVE_TIMEOUT=45
EOF

If you didn't do the external USB drive step, you'll have to copy the devstack folder to another Linux machine with more available disk space to run this step. Please see the aforementioned official README for more information.
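
The README covers this in detail; purely as an illustration (the host name and destination path here are made-up placeholders), copying the tree to another box could look like:

# HYPOTHETICAL: "buildbox" and the destination path are placeholders
scp -r /root/devstack root@buildbox:/root/devstack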

OPTIONAL: My VM had name resolution failures while I was watching the "/opt/stack/run.sh.log" setup log, which turned out to be because the default gateway was not working. I fixed it by hand by editing templates/interfaces.in, adding my gateway to the eth3 section, and re-installing. This is just a note in case anyone else hits the same problem; otherwise it's recommended to leave this alone unless you have an issue.

# total hack to fix gateway
vi templates/interfaces.in
# added this as part of the eth3 section
# (set to your LAN's gateway, this is mine!)
gateway 192.168.41.254
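
For context, the added line just sits inside the normal Debian-style stanza for eth3 in that template. A rough sketch of the resulting shape only; the address/netmask values here are placeholders, keep whatever your template already has:

# HYPOTHETICAL stanza shape; only the "gateway" line is the actual addition
iface eth3 inet static
    address 192.168.41.55
    netmask 255.255.255.0
    gateway 192.168.41.254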

This creates a 2.5GB file, "/xvas/ALLINONE.xva", which is the base VM image that will be customized and then installed on your XCP.

./build_xva.sh
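
When it finishes, the generated image should be sitting on disk; a quick check (adjust the path if your setup writes the xvas directory somewhere else):

ls -lh /xvas/ALLINONE.xva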

The VM is assigned a default network:

  • Public IP: 192.168.1.55
  • Management IP: 172.16.100.55

Customize and install the new VM onto your local XCP

./build_domU.sh

If you watch the VM console it may appear to get stuck on "mounted filesystem with ordered data mode", but it really is working away in the background.
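
If you want reassurance from dom0 that the VM really is up while the console looks frozen, a quick xe check (the memory figure should match the 1024MB default mentioned earlier, reported in bytes):

xe vm-list name-label=ALLINONE params=name-label,power-state,memory-static-max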

You can ssh to the machine as "root" with the GUEST_PASSWORD set above.
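
For example, using the default public IP from above (adjust if you changed it):

ssh root@192.168.1.55

Then monitor the install status from inside the VM: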

tail -f /opt/stack/run.sh.log

After it's all over you should be able to log in to the web UI and have a play around.
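
Assuming the default addresses above, the dashboard is served from the ALLINONE VM's public IP; the "admin" login is the usual DevStack default, so treat it as an assumption here:

# open in a browser on the same LAN; log in as "admin" with your ADMIN_PASSWORD
http://192.168.1.55/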

Revert the yum setup on XCP

sed -i -e "s/enabled=1/enabled=0/" /etc/yum.repos.d/CentOS-Base.repo
sed -i -e "s/enabled=0/enabled=1/" /etc/yum.repos.d/Citrix.repo

Multi-node setup

These scripts also contain a multi-node setup for separate HEAD and COMPUTE VMs, both still intended to run on the same XCP dom0.

The HEAD VM contains everything except compute, and the COMPUTE VM contains just compute.

Run this script to create two XVA files:

  • xvas/HEADNODE.xva
  • xvas/COMPUTENODE.xva

./build_domU_multi.sh

These are assigned a default network:

  • HEAD
    • Public IP: 192.168.1.57
    • Management IP: 172.16.100.57
  • COMPUTE
    • Public IP: 192.168.1.58
    • Management IP: 172.16.100.58
  • Floating IP range: 192.168.1.196/30

Run this script to set up and install both of those VMs onto the currently running XCP. The scripts do not currently have any default support for installing them on separate XCPs, so you'd have to hack that together yourself if desired.

./install_domU_multi.sh
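
Once it completes, you can check from dom0 that both VMs exist and are running (assuming the installed VMs keep the HEADNODE/COMPUTENODE names from their XVA files):

xe vm-list params=name-label,power-state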
