From Fedora Project Wiki

Basic Setup

These steps will set up OpenStack nova, glance, and keystone so that they can be accessed by the OpenStack dashboard web UI on a single host, and then walk through launching your first instance (virtual machine).

Initial Installation

To get started with OpenStack, you can install it on Fedora 17, along with a few dependencies:

$> sudo yum install --enablerepo=updates-testing openstack-nova openstack-glance openstack-keystone openstack-dashboard qpid-cpp-server

NOTE: the very latest openstack-keystone update might not be mirrored yet: https://admin.fedoraproject.org/updates/openstack-keystone-2012.1-0.10.e4.fc17


Run the helper script to get MySQL configured for use with openstack-nova. If mysql-server is not already installed, this script will install it for you.

$> sudo openstack-nova-db-setup

Nova requires the QPID messaging server to be running.

$> sudo systemctl start qpidd && sudo systemctl enable qpidd.service

Nova requires the libvirtd server to be running:

$> sudo systemctl start libvirtd && sudo systemctl enable libvirtd.service

Next, you should enable the Glance API and registry services:

$> for svc in api registry; do sudo systemctl start openstack-glance-$svc; done
$> for svc in api registry; do sudo systemctl enable openstack-glance-$svc; done

The openstack-nova-volume service requires an LVM Volume Group called nova-volumes to exist. Here we simply create it from a sparse disk image attached to a loop device.

$> sudo dd if=/dev/zero of=/var/lib/nova/nova-volumes.img bs=1M seek=20k count=0
$> sudo vgcreate nova-volumes $(sudo losetup --show -f /var/lib/nova/nova-volumes.img)
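
To double-check that the volume group was created (an optional sanity check):

$> sudo vgs nova-volumes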

If you are testing OpenStack in a virtual machine, you need to configure nova to use qemu without KVM and hardware virtualization:

$> sudo openstack-config-set /etc/nova/nova.conf DEFAULT libvirt_type qemu

Now you can start the various services:

$> for svc in api objectstore compute network volume scheduler; do sudo systemctl start openstack-nova-$svc; done
$> for svc in api objectstore compute network volume scheduler; do sudo systemctl enable openstack-nova-$svc; done

Check that all the services started up correctly and look in the logs in /var/log/nova for errors. If there are none, then Nova is up and running!
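
For example, one quick way to check the service state and scan the logs for problems (just one possible check):

$> for svc in api objectstore compute network volume scheduler; do sudo systemctl status openstack-nova-$svc; done
$> sudo grep -i error /var/log/nova/*.log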

Preview Repository for Fedora 16

OpenStack Essex will not be pushed as an update to Fedora 16, but rebuilds from Rawhide are available for testing on the current stable Fedora release, similar to http://fedoraproject.org/wiki/Virtualization_Preview_Repository

Enabling the OpenStack Preview Repository

Preview packages may be installed using yum after performing the following steps.

$> cd /etc/yum.repos.d/
$> wget http://repos.fedorapeople.org/repos/apevec/openstack-preview/fedora-openstack-preview.repo

This repo can also be used with Fedora 17 to pick up the latest packages that are still making their way into the official repos.

Admin User, Project and Network Setup

Now you should create an admin user, a project and a network. Replace 'markmc', 'demoproject' and 'demonet' with your own details, of course:

$> sudo nova-manage user admin markmc
$> sudo nova-manage project create demoproject markmc
$> sudo nova-manage network create demonet 10.0.0.0/24 1 256 --bridge=demonetbr0

NB: the network range here should *not* be one used on your existing physical network. It should be a range dedicated to the network that OpenStack will configure. So if 10.0.0.0/24 clashes with your local network, pick another range.
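
For example, if 10.0.0.0/24 is already in use locally, you could pick a different private range (the range below is only an illustration):

$> sudo nova-manage network create demonet 192.168.100.0/24 1 256 --bridge=demonetbr0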

Then download a set of credentials for this user/project:

$> sudo nova-manage project zipfile demoproject markmc
$> sudo chmod 600 nova.zip
$> sudo chown markmc:markmc nova.zip

Unpack the credentials, source the novarc and add an SSH keypair:

$> mkdir novacreds && cd novacreds
$> unzip ../nova.zip
$> . ./novarc
$> euca-add-keypair nova_key > nova_key.priv
$> chmod 600 nova*
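
To confirm the credentials work, you can try any read-only EC2 API call, for example:

$> euca-describe-availability-zones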

Images

To run an instance, you're going to need an image. Three options are described below:

  1. Building a Fedora 16 JEOS image using Oz
  2. Downloading a Fedora 16 JEOS image
  3. Downloading ttylinux based minimal images used by OpenStack developers for testing

Building Fedora 16 JEOS Images With Oz

You can very easily build an image using Oz. First, make sure it's installed:

$> sudo yum install /usr/bin/oz-install

Create a template definition file called f16-jeos.tdl containing:

<template>
 <name>fedora16_x86_64</name>
 <description>My Fedora 16 x86_64 template</description>
 <os>
  <name>Fedora</name>
  <version>16</version>
  <arch>x86_64</arch>
  <install type='url'>
    <url>http://download.fedoraproject.org/pub/fedora/linux/releases/16/Fedora/x86_64/os/</url>
  </install>
 </os>
 <commands>
   <command name='setup-rc-local'>
sed -i 's/rhgb quiet/console=ttyS0/' /boot/grub/grub.conf
 
cat >> /etc/rc.local &lt;&lt; EOF
if [ ! -d /root/.ssh ]; then
  mkdir -p /root/.ssh
  chmod 700 /root/.ssh
fi
 
# Fetch public key using HTTP
ATTEMPTS=10
while [ ! -f /root/.ssh/authorized_keys ]; do
    curl -f http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key > /tmp/aws-key 2>/dev/null
    if [ \$? -eq 0 ]; then
        cat /tmp/aws-key >> /root/.ssh/authorized_keys
        chmod 0600 /root/.ssh/authorized_keys
        restorecon /root/.ssh/authorized_keys
        rm -f /tmp/aws-key
        echo "Successfully retrieved AWS public key from instance metadata"
    else
        FAILED=\$((\$FAILED + 1))
        if [ \$FAILED -ge \$ATTEMPTS ]; then
            echo "Failed to retrieve AWS public key after \$FAILED attempts, quitting"
            break
        fi
        echo "Could not retrieve AWS public key (attempt #\$FAILED/\$ATTEMPTS), retrying in 5 seconds..."
        sleep 5
    fi
done
EOF
   </command>
 </commands>
</template>
 

Then simply do:

$> sudo oz-install -d4 -u f16-jeos.tdl

Once built, you simply have to register the image with Nova:

$> sudo nova-manage image image_register /var/lib/libvirt/images/fedora16_x86_64.dsk markmc f16-jeos
$> glance index

The last command should return a list of the images registered with the Glance image registry.

Downloading Fedora 16 JEOS Images

If your network connection to the nearest Fedora repository is slow, then you can save yourself some time by just downloading our pre-built Fedora 16 JEOS image:

$> wget http://berrange.fedorapeople.org/images/2012-02-29/f16-x86_64-openstack-sda.qcow2
$> sudo nova-manage image image_register --disk_format=qcow2 f16-x86_64-openstack-sda.qcow2 markmc f16-jeos

Downloading Existing Images

If you don't need a functioning Fedora 16 and want the smallest possible images, just download this set of images commonly used by OpenStack developers for testing and register them with Nova:

$> mkdir images
$> cd images
$> curl http://images.ansolabs.com/tty.tgz | tar xvfzo -
$> cd ..
$> sudo nova-manage image convert images/
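
As with the other image options, you can confirm the images were registered by listing them:

$> glance index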

Launch an Instance

As a last step before launching, make sure the nbd kernel module is loaded so that injecting SSH key files into the filesystem on the qcow2 image works:

$> sudo modprobe nbd
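
You can confirm the module is loaded with:

$> lsmod | grep nbd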

You should now be able to launch an image:

$> euca-run-instances f16-jeos -k nova_key

Or, in the case of the downloaded TTY images:

$> euca-run-instances ami-tty --kernel aki-tty --ramdisk ari-tty -k nova_key

Then observe the instance running, check the KVM VM with virsh, SSH into the instance, and finally view its console output and terminate it:

$> euca-describe-instances
$> sudo virsh list
$> ssh -i nova_key.priv root@10.0.0.2
$> euca-get-console-output i-00000001
$> euca-terminate-instances i-00000001

Configuring Keystone for authentication

Keystone is the OpenStack identity service, providing a central place to set up OpenStack users, groups, and accounts that can be shared across all other services. This deprecates the old-style user accounts set up manually with nova-manage.

Setting up keystone is required for using the OpenStack dashboard.

Initial setup

  • Configure the Keystone database, similar to how we do it for nova
$> sudo openstack-keystone-db-setup
Please enter the password for the 'root' MySQL user: 
Verified connectivity to MySQL.
Creating 'keystone' database.
Asking openstack-keystone to sync the database.
Complete!
  • Generate a random administrative token: this is basically the shared password that allows various services to talk to keystone.
$> ADMIN_TOKEN=$(openssl rand -hex 10)
$> sudo openstack-config-set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN
  • Start and enable Keystone service
$> sudo systemctl start openstack-keystone.service && sudo systemctl enable openstack-keystone.service
  • Create sample Tenants, Users and Roles
$> sudo ADMIN_PASSWORD=verybadpass openstack-keystone-sample-data
  • Test the Keystone CLI is working
$> export OS_USERNAME=admin
$> export OS_PASSWORD=verybadpass
$> export OS_TENANT_NAME=admin
$> export OS_AUTH_URL=http://localhost:35357/v2.0
$> keystone user-list
+----------------------------------+---------+-------------------+-------+
|                id                | enabled |       email       |  name |
+----------------------------------+---------+-------------------+-------+
| 05742d10109540d2892d17ec312a6cd9 | True    | admin@example.com | admin |
| 25fe47659d6a4255a663e6add1979d6c | True    | admin@example.com | demo  |
+----------------------------------+---------+-------------------+-------+
  • Add the nova-volume service, which is used by the OpenStack Dashboard

NOTE: This step is NOT needed with openstack-keystone-2012.1-0.10.e4.fc17 which loads catalog in sample-data script!

$> keystone service-create --name="nova-volume" --type=volume --description="Nova Volume Service"
$> cat << \EOF | sudo tee -a /etc/keystone/default_catalog.templates                                                       
catalog.RegionOne.volume.publicURL = http://localhost:8776/v1/$(tenant_id)s
catalog.RegionOne.volume.adminURL = http://localhost:8776/v1/$(tenant_id)s
catalog.RegionOne.volume.internalURL = http://localhost:8776/v1/$(tenant_id)s
catalog.RegionOne.volume.name = 'Volume Service'
EOF
$> sudo systemctl restart openstack-keystone

Configure nova to use keystone

  • Change nova configuration to use keystone:
$> sudo sed -i -e 's/# \(pipeline = .*keystonecontext\)/\1/g' /etc/nova/api-paste.ini
$> sudo openstack-config-set /etc/nova/api-paste.ini filter:authtoken admin_token $ADMIN_TOKEN
$> sudo chown nova:nova /etc/nova/*
$> sudo systemctl restart openstack-nova-api.service
  • Verify that nova can talk with keystone (requires OS_* exports from previous keystone section)
$> nova flavor-list
+----+-----------+-----------+------+----------+-------+-------------+
| ID |    Name   | Memory_MB | Swap | Local_GB | VCPUs | RXTX_Factor |
+----+-----------+-----------+------+----------+-------+-------------+
| 1  | m1.tiny   | 512       |      | 0        | 1     | 1.0         |
| 2  | m1.small  | 2048      |      | 10       | 1     | 1.0         |
| 3  | m1.medium | 4096      |      | 10       | 2     | 1.0         |
| 4  | m1.large  | 8192      |      | 10       | 4     | 1.0         |
| 5  | m1.xlarge | 16384     |      | 10       | 8     | 1.0         |
+----+-----------+-----------+------+----------+-------+-------------+

Configure glance to use keystone

  • Tell keystone about the glance service

NOTE: This step is NOT needed with openstack-keystone-2012.1-0.10.e4.fc17 which loads catalog in sample-data script!

$> cat << EOF | sudo tee -a /etc/keystone/default_catalog.templates                                                       
catalog.RegionOne.image.publicURL = http://localhost:9292/v1
catalog.RegionOne.image.adminURL = http://localhost:9292/v1
catalog.RegionOne.image.internalURL = http://localhost:9292/v1
catalog.RegionOne.image.name = 'Image Service'
EOF
$> sudo systemctl restart openstack-keystone.service
  • Change glance configuration to use keystone:
$> echo -e "\n[paste_deploy]\nflavor = keystone" | sudo tee -a /etc/glance/glance-api.conf
$> echo -e "\n[paste_deploy]\nflavor = keystone" | sudo tee -a /etc/glance/glance-registry.conf
$> sudo openstack-config-set /etc/glance/glance-api-paste.ini filter:authtoken admin_token $ADMIN_TOKEN
$> sudo openstack-config-set /etc/glance/glance-registry-paste.ini filter:authtoken admin_token $ADMIN_TOKEN
$> sudo systemctl restart openstack-glance-api.service
$> sudo systemctl restart openstack-glance-registry.service
  • Verify that glance can talk with keystone (requires OS_* exports from the previous keystone section)
$> glance index

Configuring the OpenStack Dashboard

The OpenStack dashboard is the official web user interface for OpenStack. It should mostly work out of the box, as long as keystone has been configured properly.

  • Install the dashboard
$> sudo yum install openstack-dashboard
  • Make sure httpd is running
$> sudo systemctl restart httpd.service
$> sudo systemctl enable httpd.service
$> sudo mkdir /var/www/.novaclient

The dashboard can then be accessed with a web browser at http://localhost/dashboard . Log in with the account and password you configured during the keystone setup.
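
If the page doesn't load, a quick check that httpd is actually serving the dashboard (optional sanity check):

$> curl -I http://localhost/dashboard/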

Additional Functionality

Volumes

If you use the Chrome browser, kill it before embarking on this section, as it has been known to cause the lvcreate command to fail with 'incorrect semaphore state' errors.

Start the SCSI target daemon

$> sudo service tgtd start
$> sudo chkconfig tgtd on

Create a new 1GB volume

$> VOLUME=$(euca-create-volume -s 1 -z nova | awk '{print $2}')

View the status of the new volume, and wait for it to become 'available'

$> watch "euca-describe-volumes | grep $VOLUME | grep available"

Re-run the previously terminated instance if necessary:

$> INSTANCE=$(euca-run-instances f16-jeos -k nova_key | grep INSTANCE | awk '{print $2}')

or:

$> INSTANCE=$(euca-run-instances ami-tty --kernel aki-tty --ramdisk ari-tty -k nova_key | grep INSTANCE | awk '{print $2}')

Make the storage available to the instance (note -d is the device on the compute node)

$> euca-attach-volume -i $INSTANCE -d /dev/vdc $VOLUME
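
You can check that the volume now shows as attached with:

$> euca-describe-volumes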

ssh to the instance and verify that the vdc device is listed in /proc/partitions

$> cat /proc/partitions

Now make the device available if /dev/vdc is not already present

$> mknod /dev/vdc b 252 32

Create and mount a file system directly on the device

$> mkfs.ext3 /dev/vdc
$> mkdir /mnt/nova-volume
$> mount /dev/vdc /mnt/nova-volume

Display some file system details

$> df -h /dev/vdc

Create a temporary file:

$> echo foo > /mnt/nova-volume/bar

Terminate and re-run the instance, then re-attach the volume and re-mount within the instance as above. Your temporary file will have persisted:

$> cat /mnt/nova-volume/bar

Unmount the volume again:

$> umount /mnt/nova-volume

Exit from the ssh session, then detach and delete the volume:

$> euca-detach-volume $VOLUME
$> euca-delete-volume $VOLUME

Floating IPs

You may carve out a block of public IPs and assign them to instances.

The first thing you need to do is make sure that nova is configured with the correct public network interface. The default is eth0, but you can change it, e.g.:

$> sudo openstack-config-set /etc/nova/nova.conf DEFAULT public_interface em1
$> sudo service openstack-nova-network restart

Then you can do, e.g.:

$> sudo nova-manage floating create 172.31.0.224/28
$> euca-allocate-address
$> euca-associate-address -i i-00000012 172.31.0.224
$> ssh -i nova_key.priv root@172.31.0.224
$> euca-disassociate-address 172.31.0.224
$> euca-release-address 172.31.0.224

Deployment

Adding a Compute Node

Okay, everything so far has been done on a single node. The next step is to add another node for running VMs.

Let's assume the machine you've set up above is called 'controller' and the new machine is called 'node'.

First, open the MySQL, QPID (AMQP), Glance API and iSCSI ports on controller:

$ controller> sudo lokkit -p 3306:tcp
$ controller> sudo lokkit -p 5672:tcp
$ controller> sudo lokkit -p 9292:tcp
$ controller> sudo lokkit -p 3260:tcp
$ controller> sudo service libvirtd reload
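
To verify the rules were added (optional sanity check):

$ controller> sudo iptables -L -n | grep -E '3306|5672|9292|3260'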

Then make sure that ntp is enabled on both machines:

$> sudo yum install -y ntp
$> sudo service ntpd start
$> sudo chkconfig ntpd on
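
You can verify that ntpd has picked up its peers with:

$> ntpq -p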

Install libvirt and nova on node:

$ node> sudo yum install --enablerepo=updates-testing openstack-nova
$ node> sudo service libvirtd start
$ node> sudo chkconfig libvirtd on
$ node> sudo setenforce 0

Configure nova so that node can find the services on controller:

$ node> sudo openstack-config-set /etc/nova/nova.conf DEFAULT rabbit_host controller
$ node> sudo openstack-config-set /etc/nova/nova.conf DEFAULT sql_connection mysql://nova:nova@controller/nova
$ node> sudo openstack-config-set /etc/nova/nova.conf DEFAULT glance_api_servers controller:9292
$ node> sudo openstack-config-set /etc/nova/nova.conf DEFAULT iscsi_ip_prefix 172.31.0.107

(The iscsi_ip_prefix value is the IP address of the controller node.)
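
If you're not sure of the controller's address, you can look it up on the controller itself, e.g. (assuming its public interface is em1):

$ controller> ip addr show em1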

Enable the compute service:

$ node> for svc in compute network; do sudo service openstack-nova-$svc start; done

Finally, you need to make sure the network is configured with a physical bridge interface:

$ controller> sudo nova-manage network create demonet 10.0.0.0/24 --bridge=demonetbr0 --bridge_interface=em1

Now everything should be running as before, except the VMs are launched either on controller or node.

Manual Setup of MySQL

As of openstack-nova-2011.3-9.el6 and openstack-nova-2011.3-8.fc16, openstack-nova is now set up to use MySQL by default. If you're updating an older installation or prefer to set up MySQL manually instead of using the openstack-nova-db-setup script, this section shows how to do it.

First install and enable MySQL:

$> sudo yum install -y mysql-server
$> sudo service mysqld start
$> sudo chkconfig mysqld on

Set a password for the root account and delete the anonymous accounts:

$> mysql -u root
mysql> update mysql.user set password = password('iamroot') where user = 'root';
mysql> delete from mysql.user where user = '';

Create a database and user account specifically for nova:

mysql> create database nova;
mysql> create user 'nova'@'localhost' identified by 'nova';
mysql> create user 'nova'@'%' identified by 'nova';
mysql> grant all on nova.* to 'nova'@'%';

(If anyone can explain why nova@localhost is required even though the anonymous accounts have been deleted, I'd be very grateful :-)

Then configure nova to use the DB and install the schema:

$> sudo openstack-config-set /etc/nova/nova.conf DEFAULT sql_connection mysql://nova:nova@localhost/nova
$> sudo nova-manage db sync

As a final sanity check:

$> mysql -u nova -p nova
Enter password:
mysql> select * from migrate_version;

Miscellaneous

Smoke Tests

Nova comes with a selection of fairly basic smoke tests which you can run against your installation. It can be useful to use these to sanity check your configuration.

First off, you need the nova-adminclient python library which isn't yet packaged:

$> sudo yum install python-pip
$> sudo pip-python install nova-adminclient

Then you need a user and project both named admin:

$> sudo nova-manage user admin admin
$> sudo nova-manage project create admin admin
$> sudo nova-manage project zipfile admin admin
$> unzip nova.zip
$> . ./novarc

Make sure you have the tty images imported as described above. You also need a block of floating IPs created, also as described above.

Then, run the tests from a fedpkg checkout:

$> fedpkg clone openstack-nova
$> cd openstack-nova
$> fedpkg switch-branch f16
$> fedpkg prep
$> cd nova-2011.3/smoketests
$> python ./run_tests.py

All the tests should pass.

If you run into import errors such as:

ImportError: No module named nose

or:

ImportError (No module named paramiko)

simply install the missing dependency as follows:

$> sudo yum install -y python-nose.noarch
$> sudo yum install -y python-paramiko.noarch

Cleanup

While testing OpenStack, you might want to delete everything related to OpenStack and start testing with a clean slate again.

Here's how. First, make sure to terminate all running instances:

$> euca-terminate-instances ...

Double check that you have no lingering VMs, perhaps saved to disk:

$> sudo virsh list --all
$> sudo virsh undefine <instance name>
$> sudo rm -f /var/lib/libvirt/qemu/save/instance-00000*

Then stop all the services:

$> for iii in api objectstore compute network volume scheduler; do sudo service openstack-nova-$iii stop; done
$> for iii in api registry; do sudo service openstack-glance-$iii stop; done

Delete all the packages:

$> sudo yum erase python-glance python-nova* python-keystone* openstack-swift*

Delete the nova table from the MySQL DB:

$> mysql -u root -p -e 'drop database nova;'

Delete the nova-volumes VG:

$> sudo vgchange -an nova-volumes
$> sudo losetup -d /dev/loop0
$> sudo rm -f /var/lib/nova/nova-volumes.img
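
If the image was attached to a loop device other than /dev/loop0, you can find the right one before detaching:

$> sudo losetup -a | grep nova-volumes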

Take down the bridge and kill dnsmasq:

$> sudo ip link set demonetbr0 down
$> sudo brctl delbr demonetbr0
$> sudo kill -9 $(cat /var/lib/nova/networks/nova-demonetbr0.pid)

Remove all directories left behind from the packages:

$> sudo rm -rf /etc/{glance,nova,swift,keystone} /var/lib/{glance,nova,swift,keystone} /var/log/{glance,nova,swift,keystone} /var/run/{glance,nova,swift,keystone}

Finally, restart iptables to clear out all rules added by Nova. You also need to reload libvirt's iptables rules:

$> sudo service iptables restart
$> sudo service libvirtd restart