From Fedora Project Wiki

Basic Setup

  • These steps will set up OpenStack Nova, Glance, and Keystone to be accessed by the OpenStack Dashboard web UI on a single host, and will then launch your first instance (virtual machine). Fedora 18 includes the OpenStack Folsom release.
  • Many of the examples here require 'sudo' to be properly configured; please see Configuring Sudo if you need help.

Install packages

First let us pull in OpenStack and some optional dependencies:

# OpenStack utils, Keystone (identity), Nova (compute), Cinder (block storage), 
# Glance (image), Swift (object storage), Quantum (network), Horizon (dashboard)
$> sudo yum install openstack-utils openstack-keystone openstack-nova \
  openstack-cinder openstack-glance openstack-swift openstack-swift-proxy \
  openstack-swift-account openstack-swift-container openstack-swift-object \
  openstack-quantum openstack-quantum-linuxbridge openstack-quantum-openvswitch \
  openstack-dashboard openstack-tempo
# MySQL, QPID (AMQP message bus), memcached, NBD (Network Block Device), wget
$> sudo yum install mysql-server qpid-cpp-server memcached nbd wget
# Python bindings
$> sudo yum install python-nova-adminclient 
# Ruby bindings
$> sudo yum install rubygem-openstack rubygem-openstack-compute \
  rubygem-openstack-quantum-client
# Image creation
$> sudo yum install appliance-tools appliance-tools-minimizer \
  febootstrap rubygem-boxgrinder-build
# Some documentation
$> sudo yum install openstack-keystone-doc openstack-nova-doc \
  openstack-cinder-doc openstack-glance-doc openstack-swift-doc \
  python-keystoneclient-doc python-novaclient-doc \
  python-swiftclient-doc python-django-horizon-doc \
  rubygem-openstack-doc rubygem-openstack-quantum-client-doc
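
To confirm the installation, you can list the OpenStack packages now present (the exact set will vary with which of the optional groups above you installed):

$> rpm -qa | grep -i openstack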

Start support services

OpenStack requires the MySQL database server to be running.

$> sudo systemctl start mysqld.service && sudo systemctl enable mysqld.service

Nova requires the QPID messaging server to be running.

$> sudo systemctl start qpidd.service && sudo systemctl enable qpidd.service

Nova requires the libvirtd server to be running:

$> sudo systemctl start libvirtd.service && sudo systemctl enable libvirtd.service
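
All three support services should now report as active:

$> systemctl is-active mysqld.service qpidd.service libvirtd.service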

Configure Keystone

Keystone is the OpenStack identity service, providing a central place to set up OpenStack users, groups, and accounts that can be shared across all other services. This deprecates the old-style user accounts manually set up with nova-manage.

Setting up Keystone is required for using the OpenStack dashboard.

  • Configure the Keystone database using the openstack-db helper script (the same script is used below for Glance, Cinder, and Nova)
$> sudo openstack-db --service keystone --init
  • Set up a keystonerc file with a generated admin token and various passwords:
$> cat > ./.keystonerc << _EOF
export ADMIN_TOKEN=$(openssl rand -hex 10)
export OS_USERNAME=admin
export OS_PASSWORD=verybadpass
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/
export SERVICE_ENDPOINT=http://127.0.0.1:35357/v2.0/
export SERVICE_TOKEN=\$ADMIN_TOKEN
_EOF
$> . ./.keystonerc
  • Set the administrative token in the config file
$> sudo openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN
  • Start and enable Keystone service
$> sudo systemctl start openstack-keystone.service && sudo systemctl enable openstack-keystone.service
  • Create sample Tenants, Users and Roles
$> sudo ADMIN_PASSWORD=$OS_PASSWORD SERVICE_PASSWORD=servicepass openstack-keystone-sample-data
  • Test that the Keystone CLI is working (a few more read-only checks follow after the table)
$> keystone user-list
+----------------------------------+---------+--------------------+--------+
|                id                | enabled |       email        |  name  |
+----------------------------------+---------+--------------------+--------+
| 53c7ad6f1b154754bd59cf07ffe9b0c1 |   True  | admin@example.com  | admin  |
| 75194f7ca5354f92b42d80070df15dd3 |   True  | admin@example.com  |  demo  |
| 45861e2701d24c17a57da280d3a03c3b |   True  |  nova@example.com  |  nova  |
| fc205aedf6c34b2998847b0ee3bf3bd1 |   True  | glance@example.com | glance |
+----------------------------------+---------+--------------------+--------+
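
A few more read-only listings can confirm that the sample tenants, roles, and services were created:

$> keystone tenant-list
$> keystone role-list
$> keystone service-list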

Configure Glance with Keystone

Setup Glance database

Similarly, run the helper script to get MySQL configured for use with openstack-glance.

$> sudo openstack-db --service glance --init

Configure glance to use keystone

  • Change glance configuration to use keystone:
$> sudo openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
$> sudo openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
$> sudo openstack-config --set /etc/glance/glance-api-paste.ini filter:authtoken admin_tenant_name service
$> sudo openstack-config --set /etc/glance/glance-api-paste.ini filter:authtoken admin_user glance
$> sudo openstack-config --set /etc/glance/glance-api-paste.ini filter:authtoken admin_password servicepass
$> sudo openstack-config --set /etc/glance/glance-registry-paste.ini filter:authtoken admin_tenant_name service
$> sudo openstack-config --set /etc/glance/glance-registry-paste.ini filter:authtoken admin_user glance
$> sudo openstack-config --set /etc/glance/glance-registry-paste.ini filter:authtoken admin_password servicepass

Starting the Glance services

Next, you should enable the Glance API and registry services:

$> for svc in api registry; do sudo systemctl start openstack-glance-$svc.service; done
$> for svc in api registry; do sudo systemctl enable openstack-glance-$svc.service; done
  • Verify that glance can talk with keystone (requires the OS_* exports from the previous keystone section; an empty list is the expected output here, since no images have been registered yet)
$> glance index

Configure Cinder with Keystone

Setup Cinder database

Similarly, run the helper script to get MySQL configured for use with openstack-cinder.

$> sudo openstack-db --service cinder --init

Setup volume storage

The volume service has been extracted from Nova and incorporated into a new dedicated component named Cinder (dubbed OpenStack Block Storage). As of the Folsom release, Cinder is the officially supported block storage service, and nova-volume is deprecated.

Whichever block storage component is used (Cinder from Folsom, or nova-volume in Essex), an LVM volume group (VG) has to be created. The volume group can be backed either temporarily, e.g. by a loop-back device over a sparse disk image that must be re-attached after every reboot, or permanently, e.g. by the same disk image attached at boot through a systemd service. For more production-oriented infrastructures, dedicated block devices are preferable (and Swift for object storage).

File-based storage creation

Unless you have a dedicated partition or block device, a sparse disk image has to be created:

$> sudo mkdir -p /var/lib/cinder
$> sudo truncate --size=20G /var/lib/cinder/cinder-volumes.img

Volatile setup (to be redone after every reboot)

The newly created disk image can be attached as a simple loop-back device, and the volume group created on top of it:

$> CINDER_VOL_DEVICE=$(sudo losetup --show -f /var/lib/cinder/cinder-volumes.img)
$> sudo vgcreate cinder-volumes $CINDER_VOL_DEVICE
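
Check that LVM now sees the new volume group:

$> sudo vgs cinder-volumes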

Permanent setup

Alternatively, the disk image can be attached automatically at each boot through a systemd service. The following commands (run as root) download a sample service file and helper script from the openstack-fedora configuration repository:

LOOP_EXEC_DIR=/usr/libexec/cinder
LOOP_SVC=cinder-demo-disk-image.service
LOOP_EXEC=voladm
GH_SYSD_BASE_URL=https://raw.github.com/openstack-fedora/openstack-configuration/master
GH_SYSD_LOOP_SVC_URL=$GH_SYSD_BASE_URL/systemd/$LOOP_SVC
GH_SYSD_LOOP_EXEC_URL=$GH_SYSD_BASE_URL/bin/$LOOP_EXEC
mkdir -p $LOOP_EXEC_DIR
curl $GH_SYSD_LOOP_SVC_URL -o /usr/lib/systemd/system/$LOOP_SVC
curl $GH_SYSD_LOOP_EXEC_URL -o $LOOP_EXEC_DIR/$LOOP_EXEC
chmod -R a+rx $LOOP_EXEC_DIR
systemctl start $LOOP_SVC && systemctl enable $LOOP_SVC
# By construction (hard-coded in the systemd script):
CINDER_VOL_DEVICE=/dev/loop0
# Create the cinder-volumes Volume Group (VG) for the volume service:
vgcreate cinder-volumes $CINDER_VOL_DEVICE
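# Verify that the service attached the loop device and that the VG exists:
systemctl status $LOOP_SVC
vgs cinder-volumes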

Starting the volume services

The Cinder service can now be started:

$> sudo systemctl start openstack-cinder-volume.service && sudo systemctl enable openstack-cinder-volume.service

Configure Nova with Keystone

Setup Nova database

Run the helper script to get MySQL configured for use with openstack-nova. If mysql-server is not already installed, this script will install it for you.

$> sudo openstack-db --service nova --init

Then, synchronize the Nova database:

$> sudo nova-manage db sync

Configure Nova to use keystone

  • Change nova configuration to use keystone:
$> sudo openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_tenant_name service
$> sudo openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_user nova
$> sudo openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_password servicepass
$> sudo openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
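
To read a setting back and confirm it was written (openstack-config also supports --get):

$> sudo openstack-config --get /etc/nova/nova.conf DEFAULT auth_strategy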

Starting the Nova services

$> for svc in api objectstore compute network scheduler cert; do sudo systemctl start openstack-nova-$svc.service; done
$> for svc in api objectstore compute network scheduler cert; do sudo systemctl enable openstack-nova-$svc.service; done

Check that all the services started up correctly and look in the logs in /var/log/nova for errors. If there are none, then Nova is up and running!

Note: when setting up multiple compute nodes, the network service should only be started on a single node.

  • Verify that nova can talk with keystone (requires OS_* exports from previous keystone section)
$> nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | extra_specs |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| 1  | m1.tiny   | 512       | 0    | 0         |      | 1     | 1.0         | True      | {}          |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      | {}          |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      | {}          |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      | {}          |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      | {}          |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+

Nova Network Setup

To create the network do:

$> sudo nova-manage network create demonet 10.0.0.0/24 1 256 --bridge=demonetbr0

NB: the network range here should *not* be one already used on your existing physical network. It should be a range dedicated to the network that OpenStack will configure. So if 10.0.0.0/24 clashes with your local network, pick another range.
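
For example, if 10.0.0.0/24 is taken, any other private range (192.168.100.0/24 is an arbitrary choice here) works the same way:

$> sudo nova-manage network create demonet 192.168.100.0/24 1 256 --bridge=demonetbr0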

Register an Image

To run an instance, you are going to need an image. There are prebuilt Fedora 17 JEOS (Just Enough OS) images that can be downloaded.

$> wget http://berrange.fedorapeople.org/images/2012-11-15/f17-x86_64-openstack-sda.qcow2
$> glance add name=f17-jeos is_public=true disk_format=qcow2 container_format=bare < f17-x86_64-openstack-sda.qcow2
  • Verify that glance successfully registered the image.
$> glance image-list
+--------------------------------------+----------+-------------+------------------+-----------+--------+
| ID                                   | Name     | Disk Format | Container Format | Size      | Status |
+--------------------------------------+----------+-------------+------------------+-----------+--------+
| 30cc7aed-5b8f-49aa-9543-3751ac36fae1 | f17-jeos | qcow2       | bare             | 251985920 | active |
+--------------------------------------+----------+-------------+------------------+-----------+--------+
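
You can also inspect a single image in more detail by passing its ID (taken from the listing above) to image-show:

$> glance image-show 30cc7aed-5b8f-49aa-9543-3751ac36fae1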

Launch an Instance

As a last step before launching, make sure the nbd kernel module is loaded so that injecting SSH key files into the filesystem on the qcow2 image works:

$> sudo modprobe nbd
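
Confirm the module is loaded:

$> lsmod | grep nbd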

Create a keypair:

$> nova keypair-add mykey > oskey.priv
$> chmod 600 oskey.priv

Launch an instance:

$> nova boot myserver --flavor 2 --key_name mykey \
     --image $(glance index | grep f17-jeos | awk '{print $1}')

Then watch the instance start, confirm the KVM virtual machine is running, and SSH into the instance:

$> nova list

If STATUS is BUILD, the instance is still being built; check again a little later. If STATUS is ACTIVE, the instance has started.

$> sudo virsh list
$> ssh -i oskey.priv root@10.0.0.2
$> exit
$> nova console-log myserver
$> nova delete myserver
$> nova list

Configure the OpenStack Dashboard

The OpenStack dashboard is the official web user interface for OpenStack. It should mostly work out of the box, as long as keystone has been configured properly.

  • Make sure httpd is running
$> sudo systemctl restart httpd.service && sudo systemctl enable httpd.service
  • If SELinux is enabled, you will have to allow httpd to access other network services (the dashboard talks to the HTTP API of the other OpenStack services)
$> sudo setsebool -P httpd_can_network_connect=on
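
Verify the boolean is now enabled:

$> getsebool httpd_can_network_connect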

The dashboard can then be accessed with a web browser at http://localhost/dashboard . The account and password are those configured during the keystone setup.

Configure Swift with Keystone

These are the minimal steps required to set up a Swift installation with Keystone authentication. This wouldn't be considered a working Swift system, but it will at the very least provide you with a working Swift API to test clients against. Most notably, it doesn't include replication, multiple zones, or load balancing.

Installing swift

$> sudo yum install openstack-swift openstack-swift-proxy openstack-swift-account openstack-swift-container openstack-swift-object memcached

Ensure the Keystone environment variables are still set from the previous steps.

We need to create five configuration files:

$> cat > /tmp/swift.conf <<- EOF
[swift-hash]
swift_hash_path_suffix = randomestringchangeme
EOF
$> sudo mv /tmp/swift.conf /etc/swift/swift.conf
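# NB: swift_hash_path_suffix should be a unique random value per cluster.
# One way to replace the placeholder above (reusing openssl as in the
# keystonerc step earlier):
$> sudo openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_suffix $(openssl rand -hex 10)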
$> cat > /tmp/proxy-server.conf <<- EOF
[DEFAULT]
bind_port = 8080
workers = 8
user = swift
[pipeline:main]
pipeline = catch_errors healthcheck cache authtoken keystone proxy-server
[app:proxy-server]
use = egg:swift#proxy
account_autocreate = true
[filter:keystone]
paste.filter_factory = keystone.middleware.swift_auth:filter_factory
operator_roles = admin, swiftoperator
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_port = 35357
auth_host = 127.0.0.1
auth_protocol = http
admin_token = ADMINTOKEN
#  ??? Are these needed?
service_port = 5000
service_host = 127.0.0.1
service_protocol = http
auth_token = ADMINTOKEN
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:cache]
use = egg:swift#memcache
memcache_servers = 127.0.0.1:11211
[filter:catch_errors]
use = egg:swift#catch_errors
EOF
$> sudo mv /tmp/proxy-server.conf /etc/swift/proxy-server.conf
$> cat > /tmp/account-server.conf <<- EOF
[DEFAULT]
bind_ip = 127.0.0.1
workers = 2
[pipeline:main]
pipeline = account-server
[app:account-server]
use = egg:swift#account
[account-replicator]
[account-auditor]
[account-reaper]
EOF
$> sudo mv /tmp/account-server.conf /etc/swift/account-server.conf
$> cat > /tmp/container-server.conf <<- EOF
[DEFAULT]
bind_ip = 127.0.0.1
workers = 2
[pipeline:main]
pipeline = container-server
[app:container-server]
use = egg:swift#container
[container-replicator]
[container-updater]
[container-auditor]
EOF
$> sudo mv /tmp/container-server.conf /etc/swift/container-server.conf
$> cat > /tmp/object-server.conf <<- EOF
[DEFAULT]
bind_ip = 127.0.0.1
workers = 2
[pipeline:main]
pipeline = object-server
[app:object-server]
use = egg:swift#object
[object-replicator]
[object-updater]
[object-auditor]
EOF
$> sudo mv /tmp/object-server.conf /etc/swift/object-server.conf

So that Swift can authenticate tokens, we need to set the Keystone admin token in the Swift proxy configuration file:

$> sudo openstack-config --set /etc/swift/proxy-server.conf filter:authtoken admin_token $ADMIN_TOKEN
$> sudo openstack-config --set /etc/swift/proxy-server.conf filter:authtoken auth_token $ADMIN_TOKEN

Create the storage device for Swift. These instructions use a loopback device, but a physical device or logical volume can be used instead.

$> truncate --size=20G /tmp/swiftstorage
$> DEVICE=$(sudo losetup --show -f /tmp/swiftstorage)
$> sudo mkfs.ext4 -I 1024 $DEVICE
$> sudo mkdir -p /srv/node/partitions
$> sudo mount $DEVICE /srv/node/partitions -t ext4 -o noatime,nodiratime,nobarrier,user_xattr
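
Confirm the filesystem is mounted:

$> df -h /srv/node/partitions

Note that a loopback device attached this way will not survive a reboot; for anything longer-lived, use a real partition or repeat the losetup and mount steps at boot.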

Change the working directory so that the following commands create the *.builder files in the right place:

$> cd /etc/swift

Create the rings. The arguments are the partition power (2^10 = 1024 partitions, only suitable for a small test environment), the number of replicas (1), and the minimum number of hours before a partition can be moved again (1):

$> sudo swift-ring-builder account.builder create 10 1 1
$> sudo swift-ring-builder container.builder create 10 1 1
$> sudo swift-ring-builder object.builder create 10 1 1

Add a device to each ring, one for each of the account (port 6002), container (port 6001) and object (port 6000) services, in zone 1 with a weight of 100:

$> sudo swift-ring-builder account.builder add z1-127.0.0.1:6002/partitions 100
$> sudo swift-ring-builder container.builder add z1-127.0.0.1:6001/partitions 100
$> sudo swift-ring-builder object.builder add z1-127.0.0.1:6000/partitions 100

Rebalance the rings (this allocates partitions to devices):

$> sudo swift-ring-builder account.builder rebalance
$> sudo swift-ring-builder container.builder rebalance
$> sudo swift-ring-builder object.builder rebalance
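
Running swift-ring-builder with just a builder file prints the ring's current state, which is a handy sanity check:

$> sudo swift-ring-builder account.builder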

Make sure the swift user owns the appropriate files:

$> sudo chown -R swift:swift /etc/swift /srv/node/partitions

Add the Swift service and endpoint to Keystone:

$> SERVICEID=$(keystone service-create --name=swift --type=object-store --description="Swift Service" | grep "id " | cut -d "|" -f 3)
$> echo $SERVICEID # just making sure we got a SERVICEID
$> keystone endpoint-create --service_id $SERVICEID --publicurl "http://127.0.0.1:8080/v1/AUTH_\$(tenant_id)s" --adminurl "http://127.0.0.1:8080/v1/AUTH_\$(tenant_id)s" --internalurl "http://127.0.0.1:8080/v1/AUTH_\$(tenant_id)s"
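
Verify that the endpoint was registered:

$> keystone endpoint-list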

Start the services

$> sudo systemctl start memcached.service
$> for srv in account container object proxy; do sudo systemctl start openstack-swift-$srv.service; done

Test the swift client and upload files

$> swift list
$> swift upload container /path/to/file
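
To confirm the upload worked, list the container's contents (the object name is derived from the file path) and display the container's metadata:

$> swift list container
$> swift stat container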