= Basic Setup =


* These steps will set up OpenStack Nova, Glance, and Keystone so that they can be accessed by the OpenStack Dashboard web UI on a single host, and then launch our first instance (virtual machine). Fedora 17 includes the OpenStack Essex release.


* Many of the examples here require 'sudo' to be properly configured; please see [[Configuring Sudo]] if you need help.


* If you have already installed OpenStack with DevStack (you may also be interested in [http://berrange.com/posts/2012/11/20/what-devstack-does-to-your-host-when-setting-up-openstack-on-fedora-17/ Daniel P. Berrangé's post on that subject]), you have to remove the installation tree from the file system and the MySQL users and databases:
$> if [ -d /opt/stack ]; then \rm -rf /opt/stack; fi
$> cat > ~/clean_os_db.sql << _EOF
drop database if exists nova;
drop database if exists glance;
drop database if exists cinder;
drop database if exists keystone;
grant usage on *.* to 'nova'@'%'; drop user 'nova'@'%';
grant usage on *.* to 'nova'@'localhost'; drop user 'nova'@'localhost';
grant usage on *.* to 'glance'@'%'; drop user 'glance'@'%';
grant usage on *.* to 'glance'@'localhost'; drop user 'glance'@'localhost';
grant usage on *.* to 'cinder'@'%'; drop user 'cinder'@'%';
grant usage on *.* to 'cinder'@'localhost'; drop user 'cinder'@'localhost';
grant usage on *.* to 'keystone'@'%'; drop user 'keystone'@'%';
grant usage on *.* to 'keystone'@'localhost'; drop user 'keystone'@'localhost';
flush privileges;
_EOF
$> mysql -u root -p < ~/clean_os_db.sql
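To double-check that the cleanup worked, you can list the remaining databases (an optional sanity check):
$> mysql -u root -p -e 'show databases;'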


* You may also want to see [[User:Denisarnaud#Cloud solutions submitted to Fedora|Denis']] [http://github.com/openstack-fedora/openstack-configuration step-by-step guide], greatly inspired by this page, but resulting from hours of debugging on Fedora 17 in November 2012.


== Fedora OpenStack preview repository ==


It is recommended to install and configure the [http://wiki.openstack.org/Releases latest stable OpenStack release]. As of November 2012, the latest stable release is Folsom, aka 2012.2. For that purpose, enable the [[OpenStack#Preview repository |OpenStack Preview Repository]] before proceeding with the following sections:
$> sudo curl http://repos.fedorapeople.org/repos/openstack/openstack-folsom/fedora-openstack-folsom.repo -o /etc/yum.repos.d/fedora-openstack-folsom.repo


=== Check the OpenStack version ===
To check the [http://wiki.openstack.org/Releases release version of OpenStack]:
* For a remote repository:
$> yum info openstack-nova-compute | grep -e Version -e Release
# Standard Fedora 17 repositories:
Version    : 2012.1.3
Release    : 1.fc17
# Fedora OpenStack preview repository:
Version    : 2012.2
Release    : 1.fc18
* From the RPM database:
$> rpm -qv openstack-nova-compute
# Standard Fedora 17 repositories:
openstack-nova-compute-2012.1.3-1.fc17.noarch
# Fedora OpenStack preview repository:
openstack-nova-compute-2012.2-1.fc18.noarch
* From OpenStack itself:
$> nova-manage version
# Standard Fedora 17 repositories:
2012.1.3 (2012.1.3-LOCALBRANCH:LOCALREVISION)
# Fedora OpenStack preview repository:
2012.2 (2012.2-LOCALBRANCH:LOCALREVISION)


=== Install packages ===
 
First let us pull in OpenStack and some optional dependencies:
# Nova (compute), Glance (images), Keystone (identity), Swift (object store), Horizon (dashboard)
$> sudo yum install openstack-utils openstack-nova openstack-glance openstack-keystone \
  openstack-swift openstack-dashboard openstack-swift-proxy openstack-swift-account \
  openstack-swift-container openstack-swift-object
# QPID (AMQP message bus), memcached, NBD (Network Block Device) module
$> sudo yum install qpid-cpp-server-daemon qpid-cpp-server memcached nbd
# Python bindings
$> sudo yum install python-django-openstack-auth python-django-horizon \
  python-keystone python-keystone-auth-token python-keystoneclient \
  python-nova-adminclient python-quantumclient
# Some documentation
$> sudo yum install openstack-keystone-doc openstack-swift-doc openstack-cinder-doc \
  python-keystoneclient-doc
# New Folsom components: Quantum (network), Tempo, Cinder (replacement for Nova volumes)
$> sudo yum install openstack-quantum openstack-tempo openstack-cinder \
  openstack-quantum-linuxbridge openstack-quantum-openvswitch \
  python-cinder python-cinderclient
# Ruby bindings
$> sudo yum install rubygem-openstack rubygem-openstack-compute
# Image creation
$> sudo yum install appliance-tools appliance-tools-minimizer \
  febootstrap rubygem-boxgrinder-build
 
=== Setup the database ===
 
==== Nova database ====
Run the helper script to get MySQL configured for use with openstack-nova. If <code>mysql-server</code> is not already installed, this script will install it for you.
$> sudo openstack-db --service nova --init
Then, synchronize the Nova database:
$> nova-manage db sync
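If you want to confirm that the schema was created, you can list the resulting tables (an optional check, using the MySQL root account):
$> mysql -u root -p nova -e 'show tables;'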
 
==== Glance database ====
Similarly, run the helper script to get MySQL configured for use with openstack-glance.
$> sudo openstack-db --service glance --init
 
==== Cinder database ====
Similarly, run the helper script to get MySQL configured for use with openstack-cinder.
$> sudo openstack-db --service cinder --init
 
=== Start support services ===


Nova requires the QPID messaging server to be running:

 $> sudo systemctl start qpidd.service && sudo systemctl enable qpidd.service


Nova requires the libvirtd server to be running:

 $> sudo systemctl start libvirtd.service && sudo systemctl enable libvirtd.service
 
=== Starting the Glance services ===


Next, you should enable the Glance API and registry services:

 $> for svc in api registry; do sudo systemctl start openstack-glance-$svc.service; done
 $> for svc in api registry; do sudo systemctl enable openstack-glance-$svc.service; done
 
=== Setup volume storage ===
* References:
** [http://docs.openstack.org/folsom/openstack-compute/install/yum/content/terminology-storage.html OpenStack terminology for the storage services]
** [http://wiki.openstack.org/MigrateToCinder Migration from Nova-volumes to Cinder]
 
The volume service has been extracted from Nova and incorporated into a new dedicated component named [http://wiki.openstack.org/ReleaseNotes/Folsom#OpenStack_Block_Storage_.28Cinder.29 Cinder (dubbed OpenStack Block Storage)]. As of the [http://wiki.openstack.org/Releases Folsom release], only Cinder is officially supported; Nova-volumes has been deprecated.
 
Whichever block storage component is used (Cinder from Folsom, or Nova-volumes in Essex), an LVM volume group (VG) has to be created. The volume group can be backed either temporarily, e.g. by a simple loop-back sparse disk image, or permanently, e.g. by a disk image mounted as a permanent loop device. For more production-oriented infrastructures, the Swift component and more permanent block devices are to be preferred.
 
==== File-based storage creation ====
Unless you have a dedicated partition or block device, a sparse disk image has to be created.
 
===== Cinder volumes (from Folsom) =====
$> sudo mkdir -p /var/lib/cinder
$> sudo truncate --size=20G /var/lib/cinder/cinder-volumes.img
 
===== Nova volumes (deprecated) =====
$> sudo mkdir -p /var/lib/nova
$> sudo truncate --size=20G /var/lib/nova/nova-volumes.img
 
==== Volatile setup (to be redone after every reboot) ====
The newly created disk image can be mounted as a simple loop-back device.
 
===== Cinder volumes (from Folsom) =====
$> sudo losetup --show -f /var/lib/cinder/cinder-volumes.img
$> CINDER_VOL_DEVICE=$(losetup -a | grep "/var/lib/cinder/cinder-volumes.img" | cut -d':' -f1)
$> sudo vgcreate cinder-volumes $CINDER_VOL_DEVICE
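You can verify that the volume group was created (an optional check):
$> sudo vgs cinder-volumes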
 
===== Nova volumes (deprecated) =====
$> sudo losetup --show -f /var/lib/nova/nova-volumes.img
$> NOVA_VOL_DEVICE=$(losetup -a | grep "/var/lib/nova/nova-volumes.img" | cut -d':' -f1)
$> sudo vgcreate nova-volumes $NOVA_VOL_DEVICE
 
==== Permanent setup ====
The newly created disk image can now be mounted as a standard block device.
 
===== Cinder volumes (from Folsom) =====
LOOP_EXEC_DIR=/usr/libexec/cinder
LOOP_SVC=cinder-demo-disk-image.service
LOOP_EXEC=voladm
GH_SYSD_BASE_URL=https://raw.github.com/openstack-fedora/openstack-configuration/master
GH_SYSD_LOOP_SVC_URL=$GH_SYSD_BASE_URL/systemd/$LOOP_SVC
GH_SYSD_LOOP_EXEC_URL=$GH_SYSD_BASE_URL/bin/$LOOP_EXEC
mkdir -p $LOOP_EXEC_DIR
curl $GH_SYSD_LOOP_SVC_URL -o /usr/lib/systemd/system/$LOOP_SVC
curl $GH_SYSD_LOOP_EXEC_URL -o $LOOP_EXEC_DIR/$LOOP_EXEC
chmod -R a+rx $LOOP_EXEC_DIR
 
systemctl start $LOOP_SVC && systemctl enable $LOOP_SVC
# By construction (hard-coded in the systemd script):
CINDER_VOL_DEVICE=/dev/loop0
 
# Create the cinder-volumes Volume Group (VG) for the volume service:
vgcreate cinder-volumes $CINDER_VOL_DEVICE
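After a reboot, you can confirm that the systemd unit recreated the loop device and that the volume group is visible again (an optional check, relying on the hard-coded /dev/loop0 above):
 losetup -a
 vgs cinder-volumes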
 
===== Nova volumes (deprecated) =====
Do something similar to the above for Nova. If someone is keen to contribute, do not hesitate (e.g., file a pull request on [http://github.com/openstack-fedora/openstack-configuration GitHub]).
 
=== Starting the volume services ===
 
==== Cinder (from Folsom) ====
The Cinder service can now be started:
systemctl start openstack-cinder-volume.service && systemctl enable openstack-cinder-volume.service
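Check that the service came up cleanly:
 systemctl status openstack-cinder-volume.service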


==== Nova volumes (deprecated) ====
The Nova-volumes service can now be started:
systemctl start openstack-nova-volume.service && systemctl enable openstack-nova-volume.service


=== Installing without hardware acceleration / within a virtual machine (VM) ===


* When the OpenStack controller machine does not support hardware virtualization acceleration (for instance when it is itself running within a virtual machine), nova needs to be configured to use QEMU without KVM and hardware acceleration. See for instance the [http://docs.openstack.org/essex/openstack-compute/install/apt/content/kvm.html KVM-related configuration section of the OpenStack documentation].
* The second command relaxes SELinux rules to allow this mode of operation (https://bugzilla.redhat.com/show_bug.cgi?id=753589):


 $> sudo openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_type qemu
 $> sudo setsebool -P virt_use_execmem on # This may take a while
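You can confirm that both settings took effect (an optional check):
 $> grep libvirt_type /etc/nova/nova.conf
 $> getsebool virt_use_execmem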


=== Starting the Nova services ===


 $> for svc in api objectstore compute network scheduler cert; do sudo systemctl start openstack-nova-$svc.service; done
 $> for svc in api objectstore compute network scheduler cert; do sudo systemctl enable openstack-nova-$svc.service; done


Check that all the services started up correctly and look in the logs in <code>/var/log/nova</code> for errors. If there are none, then Nova is up and running!
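For instance, one quick way to spot-check the service states and scan the logs:
 $> for svc in api objectstore compute network scheduler cert; do systemctl is-active openstack-nova-$svc.service; done
 $> sudo grep -i error /var/log/nova/*.log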


Note that the network service should only be started on a single node when setting up multiple compute nodes.
 
== Initial Keystone setup ==
 
Keystone is the OpenStack identity service, providing a central place to set up OpenStack users, groups, and accounts that can be shared across all other services. This deprecates the old style user accounts manually set up with <tt>nova-manage</tt>.
 
Setting up Keystone is required for using the OpenStack dashboard.
 
* Configure the Keystone database, similar to how we do it for nova
$> sudo openstack-db --service keystone --init
 
* Set up a keystonerc file with a generated admin token and various passwords:
$> cat > keystonerc << _EOF
export ADMIN_TOKEN=$(openssl rand -hex 10)
export OS_USERNAME=admin
export OS_PASSWORD=verybadpass
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/
export SERVICE_ENDPOINT=http://127.0.0.1:35357/v2.0/
export SERVICE_TOKEN=\$ADMIN_TOKEN
_EOF
$> . ./keystonerc
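You can verify that the variables are now in your environment:
$> env | grep -E 'OS_|SERVICE_|ADMIN_TOKEN'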
 
* Set the administrative token in the config file
$> sudo openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN
 
* Start and enable Keystone service
$> sudo systemctl start openstack-keystone.service && sudo systemctl enable openstack-keystone.service
 
* Create sample Tenants, Users and Roles
$> sudo ADMIN_PASSWORD=$OS_PASSWORD SERVICE_PASSWORD=servicepass openstack-keystone-sample-data
 
* Test the Keystone CLI is working
$> keystone user-list
+----------------------------------+---------+--------------------+--------+
|                id                | enabled |       email        |  name  |
+----------------------------------+---------+--------------------+--------+
| 53c7ad6f1b154754bd59cf07ffe9b0c1 |   True  | admin@example.com  | admin  |
| 75194f7ca5354f92b42d80070df15dd3 |   True  | admin@example.com  |  demo  |
| 45861e2701d24c17a57da280d3a03c3b |   True  |  nova@example.com  |  nova  |
| fc205aedf6c34b2998847b0ee3bf3bd1 |   True  | glance@example.com | glance |
+----------------------------------+---------+--------------------+--------+
 
* The Fedora 17 Keystone CLI makes use of some Python PrettyTable-related functions which have since been deprecated; the bug has been reported upstream as [https://bugs.launchpad.net/keystone/+bug/996638 bug #996638]. If you come across it (''i.e.'', the output of any <tt>keystone xxx-list</tt> command is just '<tt>printt</tt>'), you can apply the patch suggested on the bug report:
$> pushd /usr/lib/python2.7/site-packages
$> sudo wget https://launchpadlibrarian.net/104576486/replace-printt.diff
$> patch -p1 --dry-run < replace-printt.diff
$> # Uncomment the following line if everything seems fine:
$> # sudo patch -p1 < replace-printt.diff
$> sudo \rm -f replace-printt.diff
$> popd
: Note that this bug affects the calculation of temporary variables within the <tt>/usr/share/openstack-keystone/sample_data.sh</tt> shell script (itself called by the <tt>openstack-keystone-sample-data</tt> executable). In that case, not all the users will be created by that script (for instance, <tt>nova</tt> and <tt>glance</tt> may be missing), so you will have to restart everything from the [[Getting started with OpenStack on Fedora 17#Basic Setup |Basic Setup section above]], replacing <tt>systemctl start</tt> with <tt>systemctl restart</tt> where appropriate.
 
== Configure nova to use keystone ==
 
* Change nova configuration to use keystone:
$> sudo openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_tenant_name service
$> sudo openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_user nova
$> sudo openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_password servicepass
$> sudo openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
$> for svc in api compute; do sudo systemctl restart openstack-nova-$svc.service; done
 
* Verify that nova can talk with keystone (requires OS_* exports from previous keystone section)
 
$> nova flavor-list
+----+-----------+-----------+------+----------+-------+-------------+
| ID |    Name   | Memory_MB | Swap | Local_GB | VCPUs | RXTX_Factor |
+----+-----------+-----------+------+----------+-------+-------------+
| 1  | m1.tiny   | 512       |      | 0        | 1     | 1.0         |
| 2  | m1.small  | 2048      |      | 10       | 1     | 1.0         |
| 3  | m1.medium | 4096      |      | 10       | 2     | 1.0         |
| 4  | m1.large  | 8192      |      | 10       | 4     | 1.0         |
| 5  | m1.xlarge | 16384     |      | 10       | 8     | 1.0         |
+----+-----------+-----------+------+----------+-------+-------------+
 
== Configure glance to use keystone ==


* Change glance configuration to use keystone:
$> sudo openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
$> sudo openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
$> sudo openstack-config --set /etc/glance/glance-api-paste.ini filter:authtoken admin_tenant_name service
$> sudo openstack-config --set /etc/glance/glance-api-paste.ini filter:authtoken admin_user glance
$> sudo openstack-config --set /etc/glance/glance-api-paste.ini filter:authtoken admin_password servicepass
$> sudo openstack-config --set /etc/glance/glance-registry-paste.ini filter:authtoken admin_tenant_name service
$> sudo openstack-config --set /etc/glance/glance-registry-paste.ini filter:authtoken admin_user glance
$> sudo openstack-config --set /etc/glance/glance-registry-paste.ini filter:authtoken admin_password servicepass
$> for svc in api registry; do sudo systemctl restart openstack-glance-$svc.service; done

* Verify that glance can talk with keystone (requires OS_* exports from the previous keystone section)

$> glance index


== Nova Network Setup ==


To create the network, do:

 $> sudo nova-manage network create demonet 10.0.0.0/24 1 256 --bridge=demonetbr0


NB: the network range here should *not* be the one used on your existing physical network. It should be a range dedicated to the network that OpenStack will configure. So if 10.0.0.0/24 clashes with your local network, pick another range.


== Register an Image ==
 
To run an instance, you are going to need an image. There are prebuilt Fedora 16 JEOS (Just Enough OS) images that can be downloaded.
Note that this will download a 200MB image (without a progress bar).
 
  $> glance add name=f16-jeos is_public=true disk_format=qcow2 container_format=bare \
      copy_from=http://berrange.fedorapeople.org/images/2012-02-29/f16-x86_64-openstack-sda.qcow2
Another way, if the image file has already been downloaded locally:
  $> glance add name=f16-jeos is_public=true disk_format=qcow2 container_format=bare < f16-x86_64-openstack-sda.qcow2
 
== Launch an Instance ==
 
As a last step before launching, make sure the nbd kernel module is loaded so that injecting SSH key files into the filesystem on the qcow2 image works:
 
$> sudo modprobe nbd
 
Create a keypair:
$> nova keypair-add mykey > oskey.priv
$> chmod 600 oskey.priv
 
Launch an instance:
 
$> nova boot myserver --flavor 2 --key_name mykey \
      --image $(glance index | grep f16-jeos | awk '{print $1}')
 
Then observe the instance and the underlying KVM VM running, SSH into the instance, and finally inspect the console log and delete the instance:
 
$> sudo virsh list
$> nova list
$> ssh -i oskey.priv root@10.0.0.2
$> nova console-log myserver
$> nova delete myserver
 
== Configure the OpenStack Dashboard ==
 
The OpenStack dashboard is the official web user interface for OpenStack. It should mostly work out of the box, as long as keystone has been configured properly.
 
* Install the dashboard
$> sudo yum install openstack-dashboard
 
* Make sure httpd is running
$> sudo systemctl restart httpd.service && sudo systemctl enable httpd.service
 
* If SELinux is enabled, you will have to allow httpd to access other network services (the dashboard talks to the HTTP API of the other OpenStack services)
$> sudo setsebool -P httpd_can_network_connect=on
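You can confirm the boolean is set:
$> getsebool httpd_can_network_connect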
 
The dashboard can then be accessed with a web browser at http://localhost/dashboard . The account and password are the ones you configured during the keystone setup.
 
== Configure swift with keystone ==
These are the minimal steps required to set up a swift installation with keystone authentication. This wouldn't be considered a working swift system, but at the very least it will provide you with a working swift API to test clients against. Most notably, it doesn't include replication, multiple zones, or load balancing.
 
Install swift:
$> sudo yum install openstack-swift openstack-swift-proxy openstack-swift-account openstack-swift-container openstack-swift-object memcached
 
Ensure the keystone environment variables are still set up from the previous steps.
 
We need to create five configuration files:
 
$> cat > /tmp/swift.conf <<- EOF
[swift-hash]
swift_hash_path_suffix = randomestringchangeme
EOF
$> sudo mv /tmp/swift.conf /etc/swift/swift.conf
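The hash suffix should be a unique random string and kept secret. One way to set it, reusing the openstack-config tool from earlier on this page (an illustrative suggestion):
$> sudo openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_suffix $(openssl rand -hex 10)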
 
$> cat > /tmp/proxy-server.conf <<- EOF
[DEFAULT]
bind_port = 8080
workers = 8
user = swift
[pipeline:main]
pipeline = catch_errors healthcheck cache authtoken keystone proxy-server
[app:proxy-server]
use = egg:swift#proxy
account_autocreate = true
[filter:keystone]
paste.filter_factory = keystone.middleware.swift_auth:filter_factory
operator_roles = admin, swiftoperator
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_port = 35357
auth_host = 127.0.0.1
auth_protocol = http
admin_token = ADMINTOKEN
#  ??? Are these needed?
service_port = 5000
service_host = 127.0.0.1
service_protocol = http
auth_token = ADMINTOKEN
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:cache]
use = egg:swift#memcache
memcache_servers = 127.0.0.1:11211
[filter:catch_errors]
use = egg:swift#catch_errors
EOF
$> sudo mv /tmp/proxy-server.conf /etc/swift/proxy-server.conf
 
$> cat > /tmp/account-server.conf <<- EOF
[DEFAULT]
bind_ip = 127.0.0.1
workers = 2
[pipeline:main]
pipeline = account-server
[app:account-server]
use = egg:swift#account
[account-replicator]
[account-auditor]
[account-reaper]
EOF
$> sudo mv /tmp/account-server.conf /etc/swift/account-server.conf
 
$> cat > /tmp/container-server.conf <<- EOF
[DEFAULT]
bind_ip = 127.0.0.1
workers = 2
[pipeline:main]
pipeline = container-server
[app:container-server]
use = egg:swift#container
[container-replicator]
[container-updater]
[container-auditor]
EOF
$> sudo mv /tmp/container-server.conf /etc/swift/container-server.conf
 
$> cat > /tmp/object-server.conf <<- EOF
[DEFAULT]
bind_ip = 127.0.0.1
workers = 2
[pipeline:main]
pipeline = object-server
[app:object-server]
use = egg:swift#object
[object-replicator]
[object-updater]
[object-auditor]
EOF
$> sudo mv /tmp/object-server.conf /etc/swift/object-server.conf
 
So that swift can authenticate tokens, we need to set the keystone admin token in the swift proxy file:
$> sudo openstack-config --set /etc/swift/proxy-server.conf filter:authtoken admin_token $ADMIN_TOKEN
$> sudo openstack-config --set /etc/swift/proxy-server.conf filter:authtoken auth_token $ADMIN_TOKEN
 
Create the storage device for swift. These instructions use a loopback device, but a physical device or logical volume can be used instead.
$> truncate --size=20G /tmp/swiftstorage
$> DEVICE=$(sudo losetup --show -f /tmp/swiftstorage)
$> sudo mkfs.ext4 -I 1024 $DEVICE
$> sudo mkdir -p /srv/node/partitions
$> sudo mount $DEVICE /srv/node/partitions -t ext4 -o noatime,nodiratime,nobarrier,user_xattr
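Verify that the file system is mounted where the ring will expect it:
$> df -h /srv/node/partitions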
 
Change the working directory so that the following commands will create the <code>*.builder</code> files in the right place.
$> cd /etc/swift
 
Create the rings, with 1024 (2^10) partitions (only suitable for a small test environment) and 1 zone:
$> sudo swift-ring-builder account.builder create 10 1 1
$> sudo swift-ring-builder container.builder create 10 1 1
$> sudo swift-ring-builder object.builder create 10 1 1
 
Create a device for each of the account, container and object services
$> sudo swift-ring-builder account.builder add z1-127.0.0.1:6002/partitions 100
$> sudo swift-ring-builder container.builder add z1-127.0.0.1:6001/partitions 100
$> sudo swift-ring-builder object.builder add z1-127.0.0.1:6000/partitions 100
 
Rebalance the ring (allocates partitions to devices)
$> sudo swift-ring-builder account.builder rebalance
$> sudo swift-ring-builder container.builder rebalance
$> sudo swift-ring-builder object.builder rebalance
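Running <code>swift-ring-builder</code> with just a builder file prints the resulting ring layout, which is a handy way to verify the devices were added:
$> sudo swift-ring-builder account.builder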
 
Make sure swift owns the appropriate files:
$> sudo chown -R swift:swift /etc/swift /srv/node/partitions
 
Add the swift service and endpoint to keystone:
$> SERVICEID=$(keystone service-create --name=swift --type=object-store --description="Swift Service" | grep "id " | cut -d "|" -f 3)
$> echo $SERVICEID # just making sure we got a SERVICEID
$> keystone endpoint-create --service_id $SERVICEID --publicurl "http://127.0.0.1:8080/v1/AUTH_\$(tenant_id)s" --adminurl "http://127.0.0.1:8080/v1/AUTH_\$(tenant_id)s" --internalurl "http://127.0.0.1:8080/v1/AUTH_\$(tenant_id)s"
 
Start the services
$> sudo service memcached start
$> for srv in account container object proxy  ; do sudo service openstack-swift-$srv start ; done
 
Test the swift client and upload files
$> swift list
$> swift upload container /path/to/file
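You can then confirm the upload:
$> swift list container
$> swift stat container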
 
= Additional Functionality =
 
== Using Eucalyptus tools ==


Install the Eucalyptus tools:
<pre>
sudo yum install euca2ools
</pre>


Set up an rc file for EC2 access (this expects a prior keystone configuration):
<pre>
$> . ./keystonerc
$> USER_ID=$(keystone user-list | awk '/admin / {print $2}')
$> ACCESS_KEY=$(keystone ec2-credentials-list --user $USER_ID | awk '/admin / {print $4}')
$> SECRET_KEY=$(keystone ec2-credentials-list --user $USER_ID | awk '/admin / {print $6}')
$> cat > novarc <<EOF
export EC2_URL=http://localhost:8773/services/Cloud
export EC2_ACCESS_KEY=$ACCESS_KEY
export EC2_SECRET_KEY=$SECRET_KEY
EOF
$> chmod 600 novarc
$> . ./novarc  
</pre>
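A quick way to check that the EC2 credentials work (an optional sanity check):
<pre>
$> euca-describe-availability-zones
</pre>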
 
You should now be able to launch an image:


 $> euca-run-instances f16-jeos -k mykey
 $> euca-describe-instances
 $> euca-get-console-output i-00000001
 $> euca-terminate-instances i-00000001


== Images ==


Rather than the prebuilt Fedora 16 JEOS image referenced above, there are other image options:

# Building a Fedora 16 JEOS image using [http://aeolusproject.org/oz.html Oz]
# Downloading ttylinux based minimal images used by OpenStack developers for testing


You can very easily build an image using Oz. First, make sure it's installed:


 $> sudo yum install oz


Create a template definition file called <code>f16-jeos.tdl</code> containing:

Once built, you simply have to register the image with Nova:


 $> glance add name=f16-jeos is_public=true container_format=bare disk_format=raw < /var/lib/libvirt/images/fedora16_x86_64.dsk
 $> glance index


The last command should return a list of the images registered with the Glance image registry.


=== Downloading Existing Images ===

 $> mkdir images
 $> cd images
 $> curl -L http://github.com/downloads/citrix-openstack/warehouse/tty.tgz | tar xvfzo -
  $> cd ..
 $> glance add name=aki-tty disk_format=aki container_format=aki is_public=true < aki-tty/image
 $> glance add name=ami-tty disk_format=ami container_format=ami is_public=true < ami-tty/image
 $> glance add name=ari-tty disk_format=ari container_format=ari is_public=true < ari-tty/image

Then to start the image:

 $> euca-run-instances ami-tty --kernel aki-tty --ramdisk ari-tty -k mykey


== Volumes ==


If you use the Chrome browser, kill it before embarking on this section, as it has been [https://bugzilla.redhat.com/show_bug.cgi?id=727925 known] to cause the lvcreate command to fail with 'incorrect semaphore state' errors.

Note: when setting up volumes in production, make sure you don't put your volume nodes on the same network as your guests when using the default volume driver, as all the iSCSI targets are discoverable and accessible without any security.


Start the SCSI target daemon:

 $> sudo systemctl start tgtd.service
 $> sudo systemctl enable tgtd.service


Create a new 1GB volume:

Re-run the previously terminated instance if necessary:


 $> INSTANCE=$(euca-run-instances f16-jeos -k mykey | grep INSTANCE | awk '{print $2}')


or:


 $> INSTANCE=$(euca-run-instances ami-tty --kernel aki-tty --ramdisk ari-tty -k mykey | grep INSTANCE | awk '{print $2}')


Make the storage available to the instance (note -d is the device on the compute node):

Create and mount a file system directly on the device:


 $> sudo mkfs.ext3 /dev/vdc
 $> sudo mkdir /mnt/nova-volume
 $> sudo mount /dev/vdc /mnt/nova-volume


Display some file system details:

Create a temporary file:


 $> sudo su -c 'echo foo > /mnt/nova-volume/bar'


Terminate and re-run the instance, then re-attach the volume and re-mount within the instance as above. Your temporary file will have persisted:

Unmount the volume again:


 $> sudo umount /mnt/nova-volume


Exit from the ssh session, then detach and delete the volume:

== Floating IPs ==

First thing you need to do is make sure that nova is configured with the correct public network interface. The default is eth0, but you can change it, e.g.:


 $> sudo openstack-config --set /etc/nova/nova.conf DEFAULT public_interface em1
 $> sudo systemctl restart openstack-nova-network.service


Then you can do e.g.:

 $> euca-disassociate-address 172.31.0.224
 $> euca-release-address 172.31.0.224
== VNC access ==
To set up VNC access to guests through the dashboard, first install openstack-nova-novncproxy:
<pre>
$> sudo yum install --enablerepo=updates-testing openstack-nova-novncproxy</pre>
nova-novncproxy reads some parameters from the /etc/nova/nova.conf file.
You need to configure your cloud controller to enable VNC:
<pre>novncproxy_host = 0.0.0.0
novncproxy_port = 6080</pre>
and on the nova compute nodes you need something like this:
<pre>novncproxy_base_url=http://NOVNCPROXY_FQDN:6080/vnc_auto.html
vnc_enabled=true
vncserver_listen=COMPUTE_FQDN
vncserver_proxyclient_address=COMPUTE_FQDN</pre>
You should also make sure that openstack-nova-consoleauth and openstack-nova-novncproxy have been started on the controller node:
<pre>
$ controller> sudo ln -s /usr/lib/systemd/system/openstack-nova-consoleauth.service /etc/systemd/system/multi-user.target.wants/
$ controller> sudo ln -s /usr/lib/systemd/system/openstack-nova-novncproxy.service /etc/systemd/system/multi-user.target.wants/
$ controller> sudo service openstack-nova-consoleauth restart
$ controller> sudo service openstack-nova-novncproxy restart</pre>
After restarting the nova services on both nodes, newly created machines will run qemu-kvm with the parameter -vnc compute_fqdn:display_number.
Then, after starting the novncproxy and connecting to the dashboard, it will discover the host, point to the novncproxy with the appropriate values, and connect to the VM.
Note: ensure that the iptables entries for the VNC ports (5900+DISPLAYNUMBER) are allowed; an example is sketched below.
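For instance, something like the following would open the usual qemu VNC display range on a compute node (a sketch; adapt the port range and persistence mechanism to your firewall setup):

 $> sudo iptables -I INPUT -p tcp --dport 5900:5999 -j ACCEPT
 $> sudo service iptables save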
== Migrate and Resize ==
This is currently implemented by transferring images between compute nodes over ssh, so you need to make the following adjustments on each compute node to allow that.
* Allow logins for the nova user
  # usermod -s /bin/bash nova
  # su - nova
  $ mkdir .ssh && cd .ssh
* Disable host identity checking by adding this to ssh config
  $ cat > config <<EOF
  Host *
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null
  EOF
* Generate and distribute ssh key
  $ ssh-keygen -f id_rsa -b 1024 -P ''
  $ scp /var/lib/nova/.ssh/id_rsa.pub root@otherHost:/var/lib/nova/.ssh/authorized_keys
  # chown nova:nova /var/lib/nova/.ssh/authorized_keys
  # semanage permissive -a sshd_t
To improve the SELinux config in the future, the nova_var_lib_t context on /var/lib/nova will need to be configured to allow search access by sshd_t, and the ssh_home_t context will need to be associated with /var/lib/nova/.ssh; a sketch of what that could look like follows.
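A possible shape for those commands (untested here; the permissive domain above remains the documented workaround):
  # semanage fcontext -a -t ssh_home_t '/var/lib/nova/.ssh(/.*)?'
  # restorecon -R -v /var/lib/nova/.ssh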
== Live Migration of VM instances ==
First note the [http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-live-migrations.html official OpenStack docs on the feature]
and a [https://review.openstack.org/#/c/11172/ doc patch distinguishing libvirt live migration].
* Setting up the NFS server
** Export an NFS share with no_root_squash (nova uses root-wrap to chown the instance's disk to qemu:qemu)
** Make sure the nova and qemu users exist with the same UID/GID on all hosts, e.g.:
nova:x:162:162::/home/nova:/bin/bash
qemu:x:107:107::/home/qemu:/bin/bash
** chown -R nova:nova /the/nfs/share
* Mount the NFS share on each host at /var/lib/nova/instances
* Configure libvirt
** See the [http://libvirt.org/remote.html#Remote_certificates libvirt wiki] as to how to create certificates.
** Edit /etc/libvirt/libvirtd.conf
listen_tcp = 1
tcp_port = "16509"
auth_tcp = "none"
** Edit /etc/sysconfig/libvirtd
LIBVIRTD_ARGS="--listen"
* Restart libvirtd & OpenStack compute services
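You can then check that libvirt on each host is reachable over TCP from its peers (reusing the otherHost placeholder from the previous section):

 $> virsh -c qemu+tcp://otherHost/system list --all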


= Deployment =

Configure nova so that the node can find the services on the controller:


 $ node> sudo openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller
 $ node> sudo openstack-config --set /etc/nova/nova.conf DEFAULT sql_connection mysql://nova:nova@controller/nova
 $ node> sudo openstack-config --set /etc/nova/nova.conf DEFAULT glance_api_servers controller:9292
 $ node> sudo openstack-config --set /etc/nova/nova.conf DEFAULT iscsi_ip_prefix 172.31.0.107


(The <code>iscsi_ip_prefix</code> value is the IP address of the controller node.)
Line 436: Line 824:
== Manual Setup of MySQL ==

As of <code>openstack-nova-2011.3-9.el6</code> and <code>openstack-nova-2011.3-8.fc16</code>, <code>openstack-nova</code> is set up to use MySQL by default. If you're updating an older installation or prefer to set up MySQL manually instead of using the <code>openstack-db</code> script, this section shows how to do it.


First install and enable MySQL:

 $> sudo yum install mysql-server
 $> sudo service mysqld start
 $> sudo chkconfig mysqld on


Set a password for the root account and delete the anonymous accounts (via the interactive prompts):

 $> mysql_secure_installation


Create a database and user account specifically for nova:

 mysql> create database nova;
 mysql> grant all on nova.* to 'nova'@'localhost' identified by 'nova';
 mysql> grant all on nova.* to 'nova'@'%' identified by 'nova';

Then configure nova to use the DB and install the schema:


 $> sudo openstack-config --set /etc/nova/nova.conf DEFAULT sql_connection mysql://nova:nova@localhost/nova
 $> sudo nova-manage db sync


Then stop all the services:


 $> for iii in /usr/lib/systemd/system/openstack-*.service; do sudo systemctl stop $(basename $iii); done


Delete all the packages:

 $> sudo yum erase python-glance python-nova* python-keystone* openstack-swift*


Delete the nova and keystone databases from MySQL:


 $> mysql -u root -p -e 'drop database nova;'
 $> mysql -u root -p -e 'drop database keystone;'


Delete the nova-volumes VG:

Take down the bridge and kill dnsmasq:


 $> sudo ip link set demonetbr0 down
 $> sudo brctl delbr demonetbr0
 $> sudo kill -9 $(cat /var/lib/nova/networks/nova-demonetbr0.pid)


Remove all directories left behind from the packages:

 $> sudo rm -rf /etc/{glance,nova,swift,keystone,openstack-dashboard} /var/lib/{glance,nova,swift,keystone} /var/log/{glance,nova,swift,keystone} /var/run/{glance,nova,swift,keystone}


Finally, restart iptables to clear out all rules added by Nova. You also need to reload libvirt's iptables rules:

 $> sudo service iptables restart
 $> sudo service libvirtd restart


[[Category:Cloud SIG]]
[[Category:OpenStack]]

Latest revision as of 15:18, 26 January 2013

Basic Setup

  • These steps will setup OpenStack Nova, Glance, and Keystone to be accessed by the OpenStack Dashboard web UI on a single host, as well as launching our first instance (virtual machine). Fedora 17 includes OpenStack Essex release.
  • Many of the examples here require 'sudo' to be properly configured, please see Configuring Sudo if you need help.
  • If you have already installed OpenStack with DevStack (you may also be interested by Daniel P. Berranger's post on that subject), you have to remove the installation tree from the file system and the MySQL users and databases:
$> if [ -d /opt/stack ]; then \rm -rf /opt/stack; fi
$> cat > ~/clean_os_db.sql << _EOF
drop database if exists nova;
drop database if exists glance;
drop database if exists cinder;
drop database if exists keystone;
grant usage on *.* to 'nova'@'%'; drop user 'nova'@'%';
grant usage on *.* to 'nova'@'localhost'; drop user 'nova'@'localhost';
grant usage on *.* to 'glance'@'%'; drop user 'glance'@'%';
grant usage on *.* to 'glance'@'localhost'; drop user 'glance'@'localhost';
grant usage on *.* to 'cinder'@'%'; drop user 'cinder'@'%';
grant usage on *.* to 'cinder'@'localhost'; drop user 'cinder'@'localhost';
grant usage on *.* to 'keystone'@'%'; drop user 'keystone'@'%';
grant usage on *.* to 'keystone'@'localhost'; drop user 'keystone'@'localhost';
flush privileges;
_EOF
$> mysql -u root -p < ~/clean_os_db.sql
  • You may also want to see Denis' step-by-step guide, greatly inspired by this page, but resulting from hours of debugging on Fedora 17 in November 2012.

Fedora OpenStack preview repository

It is recommended to install and configure the latest stable OpenStack release. As of November 2012, the latest stable release is Folsom, aka 2012.2. For that purpose, enable the OpenStack Preview Repository before proceeding with the following sections:

$> sudo curl http://repos.fedorapeople.org/repos/openstack/openstack-folsom/fedora-openstack-folsom.repo -o /etc/yum.repos.d/fedora-openstack-folsom.repo

Check the OpenStack version

To know the release version of OpenStack:

  • For a remote repository:
$> yum info openstack-nova-compute | grep -e Version -e Release
# Standard Fedora 17 repositories:
Version     : 2012.1.3
Release     : 1.fc17
# Fedora OpenStack preview repository:
Version     : 2012.2
Release     : 1.fc18
  • From the RPM database:
$> rpm -qv openstack-nova-compute
# Standard Fedora 17 repositories:
openstack-nova-compute-2012.1.3-1.fc17.noarch
# Fedora OpenStack preview repository:
openstack-nova-compute-2012.2-1.fc18.noarch
  • From OpenStack itself:
$> nova-manage version
# Standard Fedora 17 repositories:
2012.1.3 (2012.1.3-LOCALBRANCH:LOCALREVISION)
# Fedora OpenStack preview repository:
2012.2 (2012.2-LOCALBRANCH:LOCALREVISION)

Install packages

First let us pull in OpenStack and some optional dependencies:

# Nova (compute), Glance (images), Keystone (identity), Swift (object store), Horizon (dashboard)
$> sudo yum install openstack-utils openstack-nova openstack-glance openstack-keystone \
  openstack-swift openstack-dashboard openstack-swift-proxy openstack-swift-account \
  openstack-swift-container openstack-swift-object
# QPID (AMQP message bus), memcached, NBD (Network Block Device) module
$> sudo yum install qpid-cpp-server-daemon qpid-cpp-server memcached nbd
# Python bindings
$> sudo yum install python-django-openstack-auth python-django-horizon \
  python-keystone python-keystone-auth-token python-keystoneclient \
  python-nova-adminclient python-quantumclient
# Some documentation
$> sudo yum install openstack-keystone-doc openstack-swift-doc openstack-cinder-doc \
  python-keystoneclient-doc
# New Folsom components: Quantum (network), Tempo, Cinder (replacement for Nova volumes)
$> sudo yum install openstack-quantum openstack-tempo openstack-cinder \
  openstack-quantum-linuxbridge openstack-quantum-openvswitch \
  python-cinder python-cinderclient
# Ruby bindings
$> sudo yum install rubygem-openstack rubygem-openstack-compute
# Image creation
$> sudo yum install appliance-tools appliance-tools-minimizer \
  febootstrap rubygem-boxgrinder-build

Setup the database

Nova database

Run the helper script to get MySQL configured for use with openstack-nova. If mysql-server is not already installed, this script will install it for you.

$> sudo openstack-db --service nova --init

Then, synchronize the Nova database:

$> nova-manage db sync

Glance database

Similarly, run the helper script to get MySQL configured for use with openstack-glance.

$> sudo openstack-db --service glance --init

Cinder database

Similarly, run the helper script to get MySQL configured for use with openstack-cinder.

$> sudo openstack-db --service cinder --init

Start support services

Nova requires the QPID messaging server to be running.

$> sudo systemctl start qpidd.service && sudo systemctl enable qpidd.service

Nova requires the libvirtd server to be running:

$> sudo systemctl start libvirtd.service && sudo systemctl enable libvirtd.service

Starting the Glance services

Next, you should enable the Glance API and registry services:

$> for svc in api registry; do sudo systemctl start openstack-glance-$svc.service; done
$> for svc in api registry; do sudo systemctl enable openstack-glance-$svc.service; done

Setup volume storage

The volume service has been extracted from Nova and incorporated into a new dedicated component named Cinder (dubbed OpenStacked Block Storage). From the Folsom release, only Cinder has become officially supported and Nova-volumes has subsequently been deprecated.

Independently of the block storage service component, either Cinder from Folsom or Nova-volumes in Essex, a LVM volume group (vg) has to be created. The LVM volume group can be created either temporarily, e.g. through a simple loop-back sparse disk image, or permanently, e.g. thanks to a simple file mounted as a permanent partition. The Swift component and more permanent block devices are to be preferred for more production-oriented infrastructures.

File-based storage creation

Unless you have dedicated partitions and/or block device, a sparse disk image has to be created.

Cinder volumes (from Folsom)
$> sudo mkdir -p /var/lib/cinder
$> sudo truncate --size=20G /var/lib/cinder/cinder-volumes.img
Nova volumes (deprecated)
$> sudo mkdir -p /var/lib/nova
$> sudo truncate --size=20G /var/lib/nova/nova-volumes.img

Volatile set up (to be redone after every reboot)

The newly created disk image can be mounted as a simple loop-back device.

Cinder volumes (from Folsom)
$> sudo losetup --show -f /var/lib/cinder/cinder-volumes.img
$> CINDER_VOL_DEVICE=$(losetup -a | grep "/var/lib/cinder/cinder-volumes.img" | cut -d':' -f1)
$> sudo vgcreate cinder-volumes $CINDER_VOL_DEVICE
Nova volumes (deprecated)
$> sudo losetup --show -f /var/lib/nova/nova-volumes.img
$> NOVA_VOL_DEVICE=$(losetup -a | grep "/var/lib/nova/nova-volumes.img" | cut -d':' -f1)
$> sudo vgcreate nova-volumes $NOVA_VOL_DEVICE

Permanent set up

The newly created disk image can now be mounted as a standard block device.

Cinder volumes (from Folsom)
LOOP_EXEC_DIR=/usr/libexec/cinder
LOOP_SVC=cinder-demo-disk-image.service
LOOP_EXEC=voladm
GH_SYSD_BASE_URL=https://raw.github.com/openstack-fedora/openstack-configuration/master
GH_SYSD_LOOP_SVC_URL=$GH_SYSD_BASE_URL/systemd/$LOOP_SVC
GH_SYSD_LOOP_EXEC_URL=$GH_SYSD_BASE_URL/bin/$LOOP_EXEC
mkdir -p $LOOP_EXEC_DIR
curl $GH_SYSD_LOOP_SVC_URL -o /usr/lib/systemd/system/$LOOP_SVC
curl $GH_SYSD_LOOP_EXEC_URL -o $LOOP_EXEC_DIR/$LOOP_EXEC
chmod -R a+rx $LOOP_EXEC_DIR
systemctl start $LOOP_SVC && systemctl enable $LOOP_SVC
# By construction (hard-coded in the systemd script):
CINDER_VOL_DEVICE=/dev/loop0
# Create the cinder-volumes Volume Group (VG) for the volume service:
vgcreate cinder-volumes $CINDER_VOL_DEVICE
Nova volumes (deprecated)

Do something similar to the above for Nova volumes. If you are keen to contribute, do not hesitate (e.g., file a pull request on GitHub); see the sketch below in the meantime.
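
Here is an untested sketch mirroring the Cinder unit above; the unit name is hypothetical and the paths assume the nova-volumes image created earlier. Save it as /etc/systemd/system/nova-demo-disk-image.service:

[Unit]
Description=Attach the loop-back device backing the nova-volumes disk image
Before=openstack-nova-volume.service

[Service]
Type=oneshot
RemainAfterExit=yes
# Attach the sparse disk image created earlier to a fixed loop device
ExecStart=/usr/sbin/losetup /dev/loop0 /var/lib/nova/nova-volumes.img
# Detach it again when the unit is stopped
ExecStop=/usr/sbin/losetup -d /dev/loop0

[Install]
WantedBy=multi-user.target

Then, as root, start and enable the unit, and create the volume group on first use:

systemctl start nova-demo-disk-image.service && systemctl enable nova-demo-disk-image.service
vgcreate nova-volumes /dev/loop0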

Starting the volume services

Cinder (from Folsom)

The Cinder service can now be started:

$> sudo systemctl start openstack-cinder-volume.service && sudo systemctl enable openstack-cinder-volume.service

Nova volumes (deprecated)

The Nova-volumes service can now be started:

$> sudo systemctl start openstack-nova-volume.service && sudo systemctl enable openstack-nova-volume.service

Installing without hardware acceleration / within a virtual machine (VM)

$> sudo openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_type qemu
$> sudo setsebool -P virt_use_execmem on # This may take a while
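
You can double-check that the settings took effect before starting the services:

$> grep libvirt_type /etc/nova/nova.conf
$> getsebool virt_use_execmem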

Starting the Nova services

$> for svc in api objectstore compute network scheduler cert; do sudo systemctl start openstack-nova-$svc.service; done
$> for svc in api objectstore compute network scheduler cert; do sudo systemctl enable openstack-nova-$svc.service; done

Check that all the services started up correctly and look in the logs in /var/log/nova for errors. If there are none, then Nova is up and running!

Note: when setting up multiple compute nodes, the network service should only be started on a single node.
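
One quick way to perform the check suggested above:

$> for svc in api objectstore compute network scheduler cert; do sudo systemctl is-active openstack-nova-$svc.service; done
$> sudo grep -i error /var/log/nova/*.log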

Initial Keystone setup

Keystone is the OpenStack identity service, providing a central place to set up OpenStack users, groups, and accounts that can be shared across all other services. This deprecates the old style user accounts manually set up with nova-manage.

Setting up Keystone is required for using the OpenStack dashboard.

  • Configure the Keystone database, similar to how we do it for nova
$> sudo openstack-db --service keystone --init
  • Set up a keystonerc file with a generated admin token and various passwords:
$> cat > keystonerc << _EOF
export ADMIN_TOKEN=$(openssl rand -hex 10)
export OS_USERNAME=admin
export OS_PASSWORD=verybadpass
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/
export SERVICE_ENDPOINT=http://127.0.0.1:35357/v2.0/
export SERVICE_TOKEN=\$ADMIN_TOKEN
_EOF
$> . ./keystonerc
  • Set the administrative token in the config file
$> sudo openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN
  • Start and enable Keystone service
$> sudo systemctl start openstack-keystone.service && sudo systemctl enable openstack-keystone.service
  • Create sample Tenants, Users and Roles
$> sudo ADMIN_PASSWORD=$OS_PASSWORD SERVICE_PASSWORD=servicepass openstack-keystone-sample-data
  • Test the Keystone CLI is working
$> keystone user-list
+----------------------------------+---------+--------------------+--------+
|                id                | enabled |       email        |  name  |
+----------------------------------+---------+--------------------+--------+
| 53c7ad6f1b154754bd59cf07ffe9b0c1 |   True  | admin@example.com  | admin  |
| 75194f7ca5354f92b42d80070df15dd3 |   True  | admin@example.com  |  demo  |
| 45861e2701d24c17a57da280d3a03c3b |   True  |  nova@example.com  |  nova  |
| fc205aedf6c34b2998847b0ee3bf3bd1 |   True  | glance@example.com | glance |
+----------------------------------+---------+--------------------+--------+
  • The Fedora 17 Keystone CLI makes use of some Python PrettyTable functions which have since been deprecated; the bug has been reported upstream (#996638). If you come across it (i.e., the output of any keystone xxx-list command is only 'printt'), you can apply the patch suggested in the bug report:
$> pushd /usr/lib/python2.7/site-packages
$> wget https://launchpadlibrarian.net/104576486/replace-printt.diff
$> patch -p1 --dry-run < replace-printt.diff
$> # Uncomment the following line if everything seems fine:
$> # patch -p1 < replace-printt.diff
$> \rm -f replace-printt.diff
$> popd
Note that this bug also affects the calculation of temporary variables within the /usr/share/openstack-keystone/sample_data.sh shell script (itself called by the openstack-keystone-sample-data executable). In that case, not all of the users will be created by that script (for instance, nova and glance may be missing), so you will have to start again from the Basic Setup section above, replacing systemctl start with systemctl restart where appropriate.

Configure nova to use keystone

  • Change nova configuration to use keystone:
$> sudo openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_tenant_name service
$> sudo openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_user nova
$> sudo openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_password servicepass
$> sudo openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
$> for svc in api compute; do sudo systemctl restart openstack-nova-$svc.service; done
  • Verify that nova can talk with keystone (requires OS_* exports from previous keystone section)
$> nova flavor-list
+----+-----------+-----------+------+----------+-------+-------------+
| ID |    Name   | Memory_MB | Swap | Local_GB | VCPUs | RXTX_Factor |
+----+-----------+-----------+------+----------+-------+-------------+
| 1  | m1.tiny   | 512       |      | 0        | 1     | 1.0         |
| 2  | m1.small  | 2048      |      | 10       | 1     | 1.0         |
| 3  | m1.medium | 4096      |      | 10       | 2     | 1.0         |
| 4  | m1.large  | 8192      |      | 10       | 4     | 1.0         |
| 5  | m1.xlarge | 16384     |      | 10       | 8     | 1.0         |
+----+-----------+-----------+------+----------+-------+-------------+

Configure glance to use keystone

  • Change glance configuration to use keystone:
$> sudo openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
$> sudo openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
$> sudo openstack-config --set /etc/glance/glance-api-paste.ini filter:authtoken admin_tenant_name service
$> sudo openstack-config --set /etc/glance/glance-api-paste.ini filter:authtoken admin_user glance
$> sudo openstack-config --set /etc/glance/glance-api-paste.ini filter:authtoken admin_password servicepass
$> sudo openstack-config --set /etc/glance/glance-registry-paste.ini filter:authtoken admin_tenant_name service
$> sudo openstack-config --set /etc/glance/glance-registry-paste.ini filter:authtoken admin_user glance
$> sudo openstack-config --set /etc/glance/glance-registry-paste.ini filter:authtoken admin_password servicepass
$> for svc in api registry; do sudo systemctl restart openstack-glance-$svc.service; done
  • Verify that glance can talk with keystone (requires OS_* exports from the previous keystone section)
$> glance index

Nova Network Setup

To create the network do:

$> sudo nova-manage network create demonet 10.0.0.0/24 1 256 --bridge=demonetbr0

NB: the network range here should *not* be the one used on your existing physical network. It should be a range dedicated to the network that OpenStack will configure. So if 10.0.0.0/24 clashes with your local network, pick another range.
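
To see which ranges are already in use on your machine before picking one:

$> ip -4 addr show
$> ip -4 route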

Register an Image

To run an instance, you are going to need an image. There are prebuilt Fedora 16 JEOS (Just Enough OS) images that can be downloaded. Note this will download a 200MB image (without a progress bar).

 $> glance add name=f16-jeos is_public=true disk_format=qcow2 container_format=bare \
      copy_from=http://berrange.fedorapeople.org/images/2012-02-29/f16-x86_64-openstack-sda.qcow2

Alternatively, if you have already downloaded the image file locally:

 $> glance add name=f16-jeos is_public=true disk_format=qcow2 container_format=bare < f16-x86_64-openstack-sda.qcow2

Launch an Instance

As a last step before launching, make sure the nbd kernel module is loaded so that injecting SSH key files into the filesystem on the qcow2 image works:

$> sudo modprobe nbd

Create a keypair:

$> nova keypair-add mykey > oskey.priv
$> chmod 600 oskey.priv
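
You can verify that the keypair was registered:

$> nova keypair-list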

Launch an instance:

$> nova boot myserver --flavor 2 --key_name mykey \
     --image $(glance index | grep f16-jeos | awk '{print $1}')

Then observe the instance running, watch the KVM VM appear, and SSH into the instance:

$> sudo virsh list
$> nova list
$> ssh -i oskey.priv root@10.0.0.2
$> nova console-log myserver
$> nova delete myserver

Configure the OpenStack Dashboard

The OpenStack dashboard is the official web user interface for OpenStack. It should mostly work out of the box, as long as keystone has been configured properly.

  • Install the dashboard
$> sudo yum install openstack-dashboard
  • Make sure httpd is running
$> sudo systemctl restart httpd.service && sudo systemctl enable httpd.service
  • If selinux is enabled, you will have to allow httpd to access other network services (the dashboard talks to the http API of the other OpenStack services)
$> sudo setsebool -P httpd_can_network_connect=on

The dashboard should then be accessible with a web browser at http://localhost/dashboard . The account and password are the ones you configured during the Keystone setup.
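
A quick smoke test from the command line (expect a 200 response or a redirect to the login page):

$> curl -I http://localhost/dashboard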

Configure swift with keystone

These are the minimal steps required to set up a Swift installation with Keystone authentication. This wouldn't be considered a working Swift system, but at the very least it will provide you with a working Swift API to test clients against. Most notably, it doesn't include replication, multiple zones, or load balancing.

Installing swift

$> sudo yum install openstack-swift openstack-swift-proxy openstack-swift-account openstack-swift-container openstack-swift-object memcached

Ensure the Keystone environment variables are still set from the previous steps.

We need to create five configuration files:

$> cat > /tmp/swift.conf <<- EOF
[swift-hash]
swift_hash_path_suffix = randomestringchangeme
EOF
$> sudo mv /tmp/swift.conf /etc/swift/swift.conf
$> cat > /tmp/proxy-server.conf <<- EOF
[DEFAULT]
bind_port = 8080
workers = 8
user = swift
[pipeline:main]
pipeline = catch_errors healthcheck cache authtoken keystone proxy-server
[app:proxy-server]
use = egg:swift#proxy
account_autocreate = true
[filter:keystone]
paste.filter_factory = keystone.middleware.swift_auth:filter_factory
operator_roles = admin, swiftoperator
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_port = 35357
auth_host = 127.0.0.1
auth_protocol = http
admin_token = ADMINTOKEN
#  ??? Are these needed?
service_port = 5000
service_host = 127.0.0.1
service_protocol = http
auth_token = ADMINTOKEN
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:cache]
use = egg:swift#memcache
memcache_servers = 127.0.0.1:11211
[filter:catch_errors]
use = egg:swift#catch_errors
EOF
$> sudo mv /tmp/proxy-server.conf /etc/swift/proxy-server.conf
$> cat > /tmp/account-server.conf <<- EOF
[DEFAULT]
bind_ip = 127.0.0.1
workers = 2
[pipeline:main]
pipeline = account-server
[app:account-server]
use = egg:swift#account
[account-replicator]
[account-auditor]
[account-reaper]
EOF
$> sudo mv /tmp/account-server.conf /etc/swift/account-server.conf
$> cat > /tmp/container-server.conf <<- EOF
[DEFAULT]
bind_ip = 127.0.0.1
workers = 2
[pipeline:main]
pipeline = container-server
[app:container-server]
use = egg:swift#container
[container-replicator]
[container-updater]
[container-auditor]
EOF
$> sudo mv /tmp/container-server.conf /etc/swift/container-server.conf
$> cat > /tmp/object-server.conf <<- EOF
[DEFAULT]
bind_ip = 127.0.0.1
workers = 2
[pipeline:main]
pipeline = object-server
[app:object-server]
use = egg:swift#object
[object-replicator]
[object-updater]
[object-auditor]
EOF
$> sudo mv /tmp/object-server.conf /etc/swift/object-server.conf

So that Swift can authenticate tokens, we need to set the Keystone admin token in the Swift proxy configuration file:

$> sudo openstack-config --set /etc/swift/proxy-server.conf filter:authtoken admin_token $ADMIN_TOKEN
$> sudo openstack-config --set /etc/swift/proxy-server.conf filter:authtoken auth_token $ADMIN_TOKEN

Create the storage device for Swift. These instructions use a loopback device, but a physical device or logical volume can be used instead.

$> truncate --size=20G /tmp/swiftstorage
$> DEVICE=$(sudo losetup --show -f /tmp/swiftstorage)
$> sudo mkfs.ext4 -I 1024 $DEVICE
$> sudo mkdir -p /srv/node/partitions
$> sudo mount $DEVICE /srv/node/partitions -t ext4 -o noatime,nodiratime,nobarrier,user_xattr

Change the working directory so that the following commands will create the *.builder files in the right place.

$> cd /etc/swift

Create the account, container and object rings, each with 1024 partitions (partition power of 10, only suitable for a small test environment), 1 replica, and a minimum of 1 hour between partition moves:

$> sudo swift-ring-builder account.builder create 10 1 1
$> sudo swift-ring-builder container.builder create 10 1 1
$> sudo swift-ring-builder object.builder create 10 1 1

Create a device for each of the account, container and object services

$> sudo swift-ring-builder account.builder add z1-127.0.0.1:6002/partitions 100
$> sudo swift-ring-builder container.builder add z1-127.0.0.1:6001/partitions 100
$> sudo swift-ring-builder object.builder add z1-127.0.0.1:6000/partitions 100

Rebalance the ring (allocates partitions to devices)

$> sudo swift-ring-builder account.builder rebalance
$> sudo swift-ring-builder container.builder rebalance
$> sudo swift-ring-builder object.builder rebalance
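
Running swift-ring-builder with just a builder file prints a summary of the ring, which is handy for checking the result:

$> sudo swift-ring-builder account.builder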

Make sure the swift user owns the appropriate files:

$> sudo chown -R swift:swift /etc/swift /srv/node/partitions

Add the Swift service and endpoint to Keystone:

$> SERVICEID=$(keystone service-create --name=swift --type=object-store --description="Swift Service" | grep "id " | cut -d "|" -f 3)
$> echo $SERVICEID # just making sure we got a SERVICEID
$> keystone endpoint-create --service_id $SERVICEID --publicurl "http://127.0.0.1:8080/v1/AUTH_\$(tenant_id)s" --adminurl "http://127.0.0.1:8080/v1/AUTH_\$(tenant_id)s" --internalurl "http://127.0.0.1:8080/v1/AUTH_\$(tenant_id)s"

Start the services

$> sudo service memcached start
$> for srv in account container object proxy  ; do sudo service openstack-swift-$srv start ; done

Test the swift client and upload files

$> swift list
$> swift upload container /path/to/file
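
To verify the upload, stat the container and list its contents:

$> swift stat container
$> swift list container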

Additional Functionality

Using Eucalyptus tools

Install the Eucalyptus tools

$> sudo yum install euca2ools

Set up an rc file for EC2 access (this expects a prior Keystone configuration):

$> . ./keystonerc
$> USER_ID=$(keystone user-list | awk '/admin / {print $2}')
$> ACCESS_KEY=$(keystone ec2-credentials-list --user $USER_ID | awk '/admin / {print $4}')
$> SECRET_KEY=$(keystone ec2-credentials-list --user $USER_ID | awk '/admin / {print $6}')
$> cat > novarc <<EOF
export EC2_URL=http://localhost:8773/services/Cloud
export EC2_ACCESS_KEY=$ACCESS_KEY
export EC2_SECRET_KEY=$SECRET_KEY
EOF
$> chmod 600 novarc
$> . ./novarc 
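
To check that the EC2 credentials work before launching anything:

$> euca-describe-availability-zones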

You should now be able to launch an image:

$> euca-run-instances f16-jeos -k mykey
$> euca-describe-instances
$> euca-get-console-output i-00000001
$> euca-terminate-instances i-00000001

Images

Rather than the prebuilt Fedora 16 JEOS image referenced above, there are other image options.

  1. Building a Fedora 16 JEOS image using Oz
  2. Downloading ttylinux based minimal images used by OpenStack developers for testing

Building Fedora 16 JEOS Images With Oz

You can very easily build an image using Oz. First, make sure it's installed:

$> sudo yum install oz

Create a template definition file called f16-jeos.tdl containing:

<template>
 <name>fedora16_x86_64</name>
 <description>My Fedora 16 x86_64 template</description>
 <os>
  <name>Fedora</name>
  <version>16</version>
  <arch>x86_64</arch>
  <install type='url'>
    <url>http://download.fedoraproject.org/pub/fedora/linux/releases/16/Fedora/x86_64/os/</url>
  </install>
 </os>
 <commands>
   <command name='setup-rc-local'>
sed -i 's/rhgb quiet/console=ttyS0/' /boot/grub/grub.conf
 
cat >> /etc/rc.local &lt;&lt; EOF
if [ ! -d /root/.ssh ]; then
  mkdir -p /root/.ssh
  chmod 700 /root/.ssh
fi
 
# Fetch public key using HTTP
ATTEMPTS=10
while [ ! -f /root/.ssh/authorized_keys ]; do
    curl -f http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key > /tmp/aws-key 2>/dev/null
    if [ \$? -eq 0 ]; then
        cat /tmp/aws-key >> /root/.ssh/authorized_keys
        chmod 0600 /root/.ssh/authorized_keys
        restorecon /root/.ssh/authorized_keys
        rm -f /tmp/aws-key
        echo "Successfully retrieved AWS public key from instance metadata"
    else
        FAILED=\$((\$FAILED + 1))
        if [ \$FAILED -ge \$ATTEMPTS ]; then
            echo "Failed to retrieve AWS public key after \$FAILED attempts, quitting"
            break
        fi
        echo "Could not retrieve AWS public key (attempt #\$FAILED/\$ATTEMPTS), retrying in 5 seconds..."
        sleep 5
    fi
done
EOF
   </command>
 </commands>
</template>
 

Then simply do:

$> sudo oz-install -d4 -u f16-jeos.tdl

Once built, you simply have to register the image with Glance:

$> glance add name=f16-jeos is_public=true container_format=bare disk_format=raw < /var/lib/libvirt/images/fedora16_x86_64.dsk
$> glance index

The last command should return a list of the images registered with the Glance image registry.

Downloading Existing Images

If you don't need a functioning Fedora 16 and want the smallest possible images, just download this set of images commonly used by OpenStack developers for testing and register them with Glance:

$> mkdir images
$> cd images
$> curl -L http://github.com/downloads/citrix-openstack/warehouse/tty.tgz | tar xvfzo -
$> glance add name=aki-tty disk_format=aki container_format=aki is_public=true < aki-tty/image
$> glance add name=ami-tty disk_format=ami container_format=ami is_public=true < ami-tty/image
$> glance add name=ari-tty disk_format=ari container_format=ari is_public=true < ari-tty/image

Then to start the image:

$> euca-run-instances ami-tty --kernel aki-tty --ramdisk ari-tty -k mykey

Volumes

If you use the Chrome browser, kill it before embarking on this section, as it has been known to cause the lvcreate command to fail with 'incorrect semaphore state' errors.

Note: when setting up volumes in production, make sure you don't put your volume nodes on the same network as your guests when using the default volume driver, as all the iSCSI targets are discoverable and accessible without any security.

Start the SCSI target daemon

$> sudo systemctl start tgtd.service
$> sudo systemctl enable tgtd.service

Create a new 1GB volume

$> VOLUME=$(euca-create-volume -s 1 -z nova | awk '{print $2}')

View the status of the new volume, and wait for it to become 'available'

$> watch "euca-describe-volumes | grep $VOLUME | grep available"

Re-run the previously terminated instance if necessary:

$> INSTANCE=$(euca-run-instances f16-jeos -k mykey | grep INSTANCE | awk '{print $2}')

or:

$> INSTANCE=$(euca-run-instances ami-tty --kernel aki-tty --ramdisk ari-tty -k mykey | grep INSTANCE | awk '{print $2}')

Make the storage available to the instance (note -d is the device on the compute node)

$> euca-attach-volume -i $INSTANCE -d /dev/vdc $VOLUME

SSH into the instance and verify that the vdc device is listed in /proc/partitions:

$> cat /proc/partitions

Now make the device available if /dev/vdc is not already present

$> mknod /dev/vdc b 252 32

Create and mount a file system directly on the device

$> sudo mkfs.ext3 /dev/vdc
$> sudo mkdir /mnt/nova-volume
$> sudo mount /dev/vdc /mnt/nova-volume

Display some file system details

$> df -h /dev/vdc

Create a temporary file:

$> sudo su -c 'echo foo > /mnt/nova-volume/bar'

Terminate and re-run the instance, then re-attach the volume and re-mount within the instance as above. Your temporary file will have persisted:

$> cat /mnt/nova-volume/bar

Unmount the volume again:

$> sudo umount /mnt/nova-volume

Exit from the ssh session, then detach and delete the volume:

$> euca-detach-volume $VOLUME
$> euca-delete-volume $VOLUME

Floating IPs

You may carve out a block of public IPs and assign them to instances.

The first thing you need to do is make sure that Nova is configured with the correct public network interface. The default is eth0, but you can change it, e.g.:

$> sudo openstack-config --set /etc/nova/nova.conf DEFAULT public_interface em1
$> sudo systemctl restart openstack-nova-network.service

Then you can do, e.g.:

$> sudo nova-manage floating create 172.31.0.224/28
$> euca-allocate-address
$> euca-associate-address -i i-00000012 172.31.0.224
$> ssh -i nova_key.priv root@172.31.0.224
$> euca-disassociate-address 172.31.0.224
$> euca-release-address 172.31.0.224
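
You can list the floating IP pool and see which addresses are allocated at any point:

$> sudo nova-manage floating list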

VNC access

To set up VNC access to guests through the dashboard:

First you need to install openstack-nova-novncproxy.

$> sudo yum install --enablerepo=updates-testing openstack-nova-novncproxy

nova-novncproxy reads some parameters from the /etc/nova/nova.conf file.

You need to configure your cloud controller to enable VNC.

novncproxy_host = 0.0.0.0
novncproxy_port = 6080

and on the Nova compute nodes you need something like this:

novncproxy_base_url=http://NOVNCPROXY_FQDN:6080/vnc_auto.html
vnc_enabled=true
vncserver_listen=COMPUTE_FQDN
vncserver_proxyclient_address=COMPUTE_FQDN
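
For instance, these could be applied with openstack-config on a compute node (substituting your own proxy and compute host names):

$> sudo openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://NOVNCPROXY_FQDN:6080/vnc_auto.html
$> sudo openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled true
$> sudo openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen COMPUTE_FQDN
$> sudo openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address COMPUTE_FQDN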

You should also make sure that openstack-nova-consoleauth and openstack-nova-novncproxy have been started on the controller node:

$ controller> sudo ln -s /usr/lib/systemd/system/openstack-nova-consoleauth.service /etc/systemd/system/multi-user.target.wants/
$ controller> sudo ln -s /usr/lib/systemd/system/openstack-nova-novncproxy.service /etc/systemd/system/multi-user.target.wants/
$ controller> sudo service openstack-nova-consoleauth restart
$ controller> sudo service openstack-nova-novncproxy restart

After restarting the Nova services on both nodes, newly created machines will run qemu-kvm with a -vnc compute_fqdn:display_number parameter. Once the novncproxy is running and you connect to the dashboard, it will discover the host, point to the novncproxy with the appropriate values, and connect to the VM.

Note: ensure that the iptables entries for the VNC ports (5900+DISPLAYNUMBER) are allowed.

Migrate and Resize

This is currently implemented by transferring the images between compute nodes over SSH, so for now you need to make the following adjustments on each compute node to allow that.

  • Allow logins for the nova user
 # usermod -s /bin/bash nova
 # su - nova
 $ mkdir .ssh && cd .ssh
  • Disable host identity checking by adding this to ssh config
 $ cat > config <<EOF
 Host * 
   StrictHostKeyChecking no 
   UserKnownHostsFile=/dev/null 
 EOF
  • Generate and distribute ssh key
 $ ssh-keygen -f id_rsa -b 1024 -P ''
 $ scp /var/lib/nova/.ssh/id_rsa.pub root@otherHost:/var/lib/nova/.ssh/authorized_keys
 # chown nova:nova /var/lib/nova/.ssh/authorized_keys
 # semanage permissive -a sshd_t
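  • Verify that passwordless ssh now works between the compute nodes
 # su - nova -c 'ssh otherHost true'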

To improve the SELinux config in future, the nova_var_lib_t context on /var/lib/nova will need to be configured to allow search access by sshd_t. Also, the ssh_home_t context will need to be associated with /var/lib/nova/.ssh.

Live Migration of VM instances

First note the official OpenStack docs on the feature and a doc patch distinguishing libvirt live migration.

  • Setting up the NFS server
    • Make an nfs share with no_root_squash (nova uses root-wrap to chown the instance's disk to qemu:qemu)
    • Create the nova and qemu users:
nova:x:162:162::/home/nova:/bin/bash
qemu:x:107:107::/home/qemu:/bin/bash
    • chown -R nova:nova /the/nfs/share
  • Mount nfs share on each host at /var/lib/nova/instances
  • Configure libvirt
    • See the libvirt wiki as to how to create certificates.
    • Edit /etc/libvirt/libvirtd.conf
listen_tcp = 1
tcp_port = "16509"
auth_tcp = "none"
    • Edit /etc/sysconfig/libvirtd
LIBVIRTD_ARGS="--listen"
  • Restart libvirtd & OpenStack compute services
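
Once everything is restarted, a live migration can be triggered with the nova client (assuming your novaclient version supports it; the last argument is the destination compute host):

$> nova live-migration myserver node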

Deployment

Adding a Compute Node

Okay, everything so far has been done on a single node. The next step is to add another node for running VMs.

Let's assume the machine you've set up above is called 'controller' and the new machine is called 'node'.

First, open the MySQL, QPID, Glance API and iSCSI ports on controller:

$ controller> sudo lokkit -p 3306:tcp
$ controller> sudo lokkit -p 5672:tcp
$ controller> sudo lokkit -p 9292:tcp
$ controller> sudo lokkit -p 3260:tcp
$ controller> sudo service libvirtd reload

Then make sure that ntp is enabled on both machines:

$> sudo yum install -y ntp
$> sudo service ntpd start
$> sudo chkconfig ntpd on

Install libvirt and nova on node:

$ node> sudo yum install --enablerepo=updates-testing openstack-nova
$ node> sudo service libvirtd start
$ node> sudo chkconfig libvirtd on
$ node> sudo setenforce 0

Configure nova so that node can find the services on controller:

$ node> sudo openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller
$ node> sudo openstack-config --set /etc/nova/nova.conf DEFAULT sql_connection mysql://nova:nova@controller/nova
$ node> sudo openstack-config --set /etc/nova/nova.conf DEFAULT glance_api_servers controller:9292
$ node> sudo openstack-config --set /etc/nova/nova.conf DEFAULT iscsi_ip_prefix 172.31.0.107

(The iscsi_ip_prefix value is the IP address of the controller node.)

Start the compute and network services:

$ node> for svc in compute network; do sudo service openstack-nova-$svc start; done

Finally, you need to make sure the network is configured with a physical bridge interface:

$ controller> sudo nova-manage network create demonet 10.0.0.0/24 --bridge=demonetbr0 --bridge_interface=em1

Now everything should be running as before, except the VMs are launched either on controller or node.
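
You can confirm that both compute nodes have registered with the scheduler:

$ controller> sudo nova-manage service list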

Manual Setup of MySQL

As of openstack-nova-2011.3-9.el6 and openstack-nova-2011.3-8.fc16, openstack-nova is now set up to use MySQL by default. If you're updating an older installation or prefer to set up MySQL manually instead of using the openstack-db script, this section shows how to do it.

First install and enable MySQL:

$> sudo yum install -y mysql-server
$> sudo service mysqld start
$> sudo chkconfig mysqld on

Set a password for the root account and delete the anonymous accounts (via interactive prompt):

$> mysql_secure_installation

Create a database and user account specifically for nova:

mysql> create database nova;
mysql> create user 'nova'@'localhost' identified by 'nova';
mysql> create user 'nova'@'%' identified by 'nova';
mysql> grant all on nova.* to 'nova'@'%';

(If anyone can explain why nova@localhost is required even though the anonymous accounts have been deleted, I'd be very grateful :-)

Then configure nova to use the DB and install the schema:

$> sudo openstack-config --set /etc/nova/nova.conf DEFAULT sql_connection mysql://nova:nova@localhost/nova
$> sudo nova-manage db sync

As a final sanity check:

$> mysql -u nova -p nova
Enter password:
mysql> select * from migrate_version;

Miscellaneous

Smoke Tests

Nova comes with a selection of fairly basic smoke tests which you can run against your installation. It can be useful to use these to sanity check your configuration.

First off, you need the nova-adminclient python library which isn't yet packaged:

$> sudo yum install python-pip
$> sudo pip-python install nova-adminclient

Then you need a user and project both named admin:

$> sudo nova-manage user admin admin
$> sudo nova-manage project create admin admin
$> sudo nova-manage project zipfile admin admin
$> unzip nova.zip
$> . ./novarc

Make sure you have the tty images imported as described above. You also need a block of floating IPs created, also as described above.

Then, run the tests from a fedpkg checkout:

$> fedpkg clone openstack-nova
$> cd openstack-nova
$> fedpkg switch-branch f16
$> fedpkg prep
$> cd nova-2011.3/smoketests
$> python ./run_tests.py

All the tests should pass.

If you run into import errors such as:

ImportError: No module named nose

or:

ImportError (No module named paramiko)

simply install the missing dependency as follows:

$> sudo yum install -y python-nose.noarch
$> sudo yum install -y python-paramiko.noarch

Cleanup

While testing OpenStack, you might want to delete everything related to OpenStack and start testing with a clean slate again.

Here's how. First, make sure to terminate all running instances:

$> euca-terminate-instances ...

Double check that you have no lingering VMs, perhaps saved to disk:

$> sudo virsh list --all
$> sudo virsh undefine <instance>   # repeat for any lingering instances
$> sudo rm -f /var/lib/libvirt/qemu/save/instance-00000*

Then stop all the services:

$> for iii in /usr/lib/systemd/system/openstack-*.service; do sudo systemctl stop $(basename $iii); done

Delete all the packages:

$> sudo yum erase python-glance python-nova* python-keystone* openstack-swift*

Delete the nova and keystone tables from the MySQL DB:

$> mysql -u root -p -e 'drop database nova;'
$> mysql -u root -p -e 'drop database keystone;'

Delete the nova-volumes VG:

$> sudo vgchange -an nova-volumes
$> sudo losetup -d /dev/loop0
$> sudo rm -f /var/lib/nova/nova-volumes.img

Take down the bridge and kill dnsmasq:

$> sudo ip link set demonetbr0 down
$> sudo brctl delbr demonetbr0
$> sudo kill -9 $(cat /var/lib/nova/networks/nova-demonetbr0.pid)

Remove all directories left behind from the packages:

$> sudo rm -rf /etc/{glance,nova,swift,keystone,openstack-dashboard} /var/lib/{glance,nova,swift,keystone} /var/log/{glance,nova,swift,keystone} /var/run/{glance,nova,swift,keystone}

Finally, restart iptables to clear out all rules added by Nova. You also need to reload libvirt's iptables rules:

$> sudo service iptables restart
$> sudo service libvirtd restart