Configuring an OpenStack Havana Lab Step-by-step


In this demonstration, we’re going to configure a basic OpenStack cluster on RedHat/CentOS 6.5.

Hardware

We’ll need at least two servers: a controller node to host the various OpenStack services, and a compute node to run our virtual instances. I always like to start with the fundamentals and then add complexity incrementally, so we’ll start with just these two servers and utilize local storage. We can always add a SAN later.

Controller node
12 GB RAM, 146 GB disk, 192.168.1.210/24

Compute node
32 GB RAM, 146 GB system disk, 294 GB local instance storage, 192.168.1.211/24

Operating System
We’ll be using RedHat Enterprise Linux 6.5 for everything. CentOS 6.5 will work too.

Networking
We need two subnets: one for management and OpenStack service communication, and another for the instances to use. The latter is analogous to an Amazon EC2 Virtual Private Cloud (VPC) subnet. We’ll call these the OpenStack and Instance networks, respectively.

Instance Network – 192.168.2.208/28, external, gateway 192.168.2.222 (for VMs)
OpenStack Network – 192.168.1.0/24, internal, gateway 192.168.1.8

Controller Node Setup

First, we’ll name the controller node osctl01 and give it an IP of 192.168.1.210/24.

OpenStack services require a database. They don’t all have to use the same database, or even the same server, but for the sake of simplicity, we’ll use one MySQL server for everything. We also need NTP to ensure the time is synchronized across nodes.

Install MySQL server, NTP, and MySQL-python:

yum -y install ntp mysql mysql-server MySQL-python

If you don’t want to use the default NTP servers that ship with RedHat, configure /etc/ntp.conf and add a server directive to point to the server you want to receive time from.
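
For example, to sync from a single internal time source (the server name below is a placeholder; substitute your own):

```
# /etc/ntp.conf
server ntp.example.com iburst
```

The iburst option speeds up the initial synchronization after a restart.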

Start NTP and set it to start at boot:

service ntpd start
chkconfig ntpd on

If you’re using DNS, you can skip this step and just add the hosts to DNS. Otherwise, change /etc/hosts to add the controller and compute nodes for name resolution:
192.168.1.210 osctl01
192.168.1.211 oscompute01

Configure MySQL
Add the following to /etc/my.cnf, in the [mysqld] section:
bind-address = 192.168.1.210

Start MySQL and set to start at boot:

service mysqld start
chkconfig mysqld on

Run the MySQL Secure Installation utility to disable test databases and change the root password. We’ll change the password to 0p3nR0ot!!

mysql_secure_installation

If you have a RedHat subscription and your system is registered, you can skip this step. Otherwise, install the Extra Packages for Enterprise Linux (EPEL) repository.

yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

Set up the OpenStack repo and install the OpenStack packages:

yum install http://repos.fedorapeople.org/repos/openstack/openstack-havana/rdo-release-havana-7.noarch.rpm
yum install openstack-utils
yum install openstack-selinux

Ensure everything is up-to-date, then reboot.

yum upgrade
reboot

Message Broker

We’re going to install the Apache Qpid daemon. Qpid is a message broker that the OpenStack services use to communicate with each other.

yum -y install qpid-cpp-server memcached

Qpid has some pretty advanced security features that can be configured for high-risk environments. Since we’re in a lab setting, we can just disable authentication. Edit /etc/qpidd.conf and set:

auth=no

Start the Qpid daemon (qpidd) and enable it:

service qpidd start
chkconfig qpidd on

Keystone Identity Service

Keystone is at the heart of OpenStack’s security (hence the name). Configuration can be a little unwieldy, so make use of copy-and-paste and type carefully!

Install Keystone:

yum -y install openstack-keystone python-keystoneclient

Configure Keystone to use a MySQL database:

openstack-config --set /etc/keystone/keystone.conf sql connection mysql://keystone:b973b5a7e8247adcb628@osctl01/keystone
openstack-db --init --service keystone --password b973b5a7e8247adcb628

We need to create an admin token which will allow us to initially authenticate to OpenStack. Note that when you echo the $ADMIN_TOKEN variable, you will get a different result because it’s just a random hex string generated by OpenSSL.

ADMIN_TOKEN=$(openssl rand -hex 10)
echo $ADMIN_TOKEN
0cda5342682b495ecf9e
openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN
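
Incidentally, openssl rand -hex 10 returns 10 random bytes hex-encoded, so the token will always be a 20-character hex string. A quick way to convince yourself:

```shell
# 10 random bytes, hex-encoded, always yields a 20-character hex string
ADMIN_TOKEN=$(openssl rand -hex 10)
echo "${#ADMIN_TOKEN}"   # prints 20
```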

Now we need to generate SSL certificates. The OpenStack folks recommend using a certificate issued by a certificate authority (CA), but for our lab this will do just fine:

keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
chown -R keystone:keystone /etc/keystone/* /var/log/keystone/keystone.log

Start and enable the Keystone service:

service openstack-keystone start
chkconfig openstack-keystone on

Adding Tenants
Remember the admin token we created just a couple steps ago? We need to use that now to authenticate to Keystone so we can finish configuring it.

Set the OS_SERVICE_TOKEN environment variable to the same value as the admin token from earlier.

export OS_SERVICE_TOKEN=0cda5342682b495ecf9e

Set the OS_SERVICE_ENDPOINT variable. The format is “http://[servername]:35357/v2.0”:

export OS_SERVICE_ENDPOINT=http://osctl01:35357/v2.0

Keystone tenants are simply containers that hold users. These are sometimes represented as “projects” as you’ll see later after we install the OpenStack dashboard.

Let’s create an admin tenant for our administrator user:

keystone tenant-create --name=admin --description="Admin Tenant"
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Admin Tenant |
| enabled | True |
| id | ffea20d6a4ae483bb62a46c8246d1a1b |
| name | admin |
+-------------+----------------------------------+

Now let’s create a service tenant:

keystone tenant-create --name=service --description="Service Tenant"
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Service Tenant |
| enabled | True |
| id | 3fd349e09927432f95fbd00cc8534744 |
| name | service |
+-------------+----------------------------------+

Adding Users and Roles
Now we need to do three things: create the admin user, create the admin role, and then add both to the admin tenant container. Make a note of the username and password because you’ll need them later to log in.

keystone user-create --name=admin --pass=s7@ck@dmyn20 --email=alerts@benpiper.com
keystone role-create --name=admin
keystone user-role-add --user=admin --tenant=admin --role=admin

Keystone Service and API Endpoint
Even though we’ve done a lot of configuration, one thing we have not yet done is create the Keystone service itself. Let’s do that now:

keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Keystone Identity Service |
| id | 61f9e9edba4141ca8395fd116a038315 |
| name | keystone |
| type | identity |
+-------------+----------------------------------+

Other OpenStack services authenticate to Keystone through its API endpoint. Let’s create an endpoint and link it to the service. Note that the service-id matches the id from the last step. The publicurl and internalurl format is “http://[servername]:5000/v2.0”. The value of adminurl is the same as the OS_SERVICE_ENDPOINT variable from earlier.

keystone endpoint-create --service-id=61f9e9edba4141ca8395fd116a038315 --publicurl=http://osctl01:5000/v2.0 --internalurl=http://osctl01:5000/v2.0 --adminurl=http://osctl01:35357/v2.0
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| adminurl | http://osctl01:35357/v2.0 |
| id | a88018578c724b36903d6b3913c302fc |
| internalurl | http://osctl01:5000/v2.0 |
| publicurl | http://osctl01:5000/v2.0 |
| region | regionOne |
| service_id | 61f9e9edba4141ca8395fd116a038315 |
+-------------+----------------------------------+

Verify

Now we’re ready to check our work. First, we need to unset the variables we set at the beginning. That way we know we’re not cheating.

unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT

Now run the following commands to fetch two tokens. Each should return a long string; if so, Keystone authentication is working as expected:

keystone --os-username=admin --os-password=s7@ck@dmyn20 --os-auth-url=http://osctl01:35357/v2.0 token-get
keystone --os-username=admin --os-password=s7@ck@dmyn20 --os-tenant-name=admin --os-auth-url=http://osctl01:35357/v2.0 token-get

Administering Keystone

At this point, you have to authenticate to Keystone before you can manage it. What we saw in the last step is an example of this. Of course, having to tack the admin username, password, tenant name, and URL onto every command isn’t gonna fly. We can avoid this by setting some environment variables. These can be incorporated into a Linux user’s bashrc or just set ad hoc:

export OS_USERNAME=admin
export OS_PASSWORD=s7@ck@dmyn20
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://osctl01:35357/v2.0
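
A common convention is to drop these exports into a keystonerc_admin file and source it on demand (the file name is just a convention, not anything OpenStack requires):

```shell
# Save the admin credentials to a file we can source whenever needed
cat > ~/keystonerc_admin <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=s7@ck@dmyn20
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://osctl01:35357/v2.0
EOF
chmod 600 ~/keystonerc_admin   # the file holds a password, so lock it down
. ~/keystonerc_admin           # load the variables into the current shell
```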

Glance Image Service

Glance is the service that manages instance images. An image is just a binary file with a pre-configured operating system like Ubuntu, CoreOS, or Windows. Why are we installing Glance on the controller node and not the compute node? Well, in a production environment, the other OpenStack services are generally separated from the Nova service on the compute node. Having images stored on one server and the instances running on another creates an interesting dynamic, because you end up streaming the images across the network. Learning to troubleshoot the problems that may arise doing that is a great skill to hone in the lab before trying to implement OpenStack in production.

Installing Glance

yum -y install openstack-glance

The following commands modify the glance-api.conf and glance-registry.conf files to point to the MySQL server. The “glance” in “mysql://glance…” is the username, and the string after the colon is the password. The glance user and password don’t exist in MySQL yet; we’ll create them in the next step.

openstack-config --set /etc/glance/glance-api.conf DEFAULT sql_connection mysql://glance:8bc00a75df06c01fa106@osctl01/glance
openstack-config --set /etc/glance/glance-registry.conf DEFAULT sql_connection mysql://glance:8bc00a75df06c01fa106@osctl01/glance

I told you we’d create them. :)

openstack-db --init --service glance --password 8bc00a75df06c01fa106

Now we need to create a glance user in Keystone. Let’s give this one a different password, not just because it’s good security, but because it will help us distinguish the glance Keystone user from the glance MySQL user:

keystone user-create --name=glance --pass=7a9915b6b5f5b6784cf4 --email=alerts@benpiper.com

Add the new glance user to the service tenant container and give it the admin role:

keystone user-role-add --user=glance --tenant=service --role=admin

Now we need to tell Glance about the Keystone endpoint. Glance can actually talk to the database through two different paths, the API or the registry. Leaving out the details, let’s configure both:

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://osctl01:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host osctl01
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password 7a9915b6b5f5b6784cf4
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://osctl01:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_host osctl01
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password 7a9915b6b5f5b6784cf4
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone

Modify the /etc/glance/glance-api-paste.ini and glance-registry-paste.ini files to add the following under the [filter:authtoken] section, changing parameters as needed:

auth_host=osctl01
admin_user=glance
admin_tenant_name=service
admin_password=7a9915b6b5f5b6784cf4

Create the Glance service in Keystone and register its endpoint. As before, the service-id passed to endpoint-create is the id returned by service-create:

keystone service-create --name=glance --type=image --description="Glance Image Service"
keystone endpoint-create --service-id=718aeb6749554deaa2bf7d8421fe95a3 --publicurl=http://osctl01:9292 --internalurl=http://osctl01:9292 --adminurl=http://osctl01:9292

Start the Glance API and registry services and enable both:

service openstack-glance-api start
service openstack-glance-registry start
chkconfig openstack-glance-api on
chkconfig openstack-glance-registry on

Creating an Image

If you don’t have a /var/lib/glance/images directory, now is the time to create it. If you’re feeling adventurous and want to use a SAN for image storage, you can create a mount point here. Make sure the glance user has read and write permissions to /var/lib/glance/images.

We’ll grab the CirrOS image, which is a small image used for testing cloud deployments.

wget http://cdn.download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img

Add the image to Glance:

glance image-create --name=cirros-0.3.1 --disk-format=qcow2 --container-format=bare --is-public=true < cirros-0.3.1-x86_64-disk.img
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | d972013792949d0d3ba628fbe8685bce |
| container_format | bare |
| created_at | 2014-07-25T20:31:47 |
| deleted | False |
| deleted_at | None |
| disk_format | qcow2 |
| id | 0c71c5bb-51cd-4afe-880f-6c616b89bfcb |
| is_public | True |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros-0.3.1 |
| owner | ffea20d6a4ae483bb62a46c8246d1a1b |
| protected | False |
| size | 13147648 |
| status | active |
| updated_at | 2014-07-25T20:31:48 |
+------------------+--------------------------------------+

Compute Node Setup

The compute node needs two network interfaces.

OpenStack Network – eth2, internal, 192.168.1.211/24 (oscompute01)
Instance Network – eth3, external, 192.168.2.211/28

Remember to modify the hosts file on the compute node if you’re not using DNS.

On the compute node, we’re going to install the Nova service. Nova manages the compute resources. It’s what actually provisions the individual instances.

Install NTP and the MySQL client:

yum -y install ntp mysql MySQL-python

Configure NTP if necessary, start it, and enable it.

Install Nova:

yum -y install openstack-nova-compute openstack-nova python-novaclient

Configure Nova to use Keystone:

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host osctl01
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password 6a7d44ef52d1fe0848fb

Configure Nova to use Qpid:

openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend nova.openstack.common.rpc.impl_qpid
openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname osctl01

Enable VNC so we can reach our instances’ consoles:

openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.1.211
openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.1.211
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://osctl01:6080/vnc_auto.html

Set the default Glance host:

openstack-config --set /etc/nova/nova.conf DEFAULT glance_host osctl01

KVM is the hypervisor we’ll use to launch our instances. Enable the libvirt daemon, which manages it:

chkconfig libvirtd on

Enable the messagebus and openstack-nova-compute services:

chkconfig messagebus on
chkconfig openstack-nova-compute on

Configure Nova to use MySQL and initialize the database:

openstack-config --set /etc/nova/nova.conf database connection mysql://nova:530c8af9db415ff16819@osctl01/nova
openstack-db --init --service nova --password 530c8af9db415ff16819

Create the nova user in Keystone and add to the service tenant container:

keystone user-create --name=nova --pass=6a7d44ef52d1fe0848fb --email=alerts@benpiper.com
keystone user-role-add --user=nova --tenant=service --role=admin

Modify /etc/nova/api-paste.ini as follows:

auth_host = osctl01
auth_port = 35357
auth_protocol = http
auth_uri = http://osctl01:5000/v2.0
admin_tenant_name = service
admin_user = nova
admin_password = 6a7d44ef52d1fe0848fb

Make sure /etc/nova/nova.conf has the directive api_paste_config=/etc/nova/api-paste.ini

Create the nova service in Keystone:

keystone service-create --name=nova --type=compute --description="Nova Compute service"
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Nova Compute service |
| id | 4624b2b75a6b475895412a2149999001 |
| name | nova |
| type | compute |
+-------------+----------------------------------+

Register the Nova endpoint with Keystone:

keystone endpoint-create --service-id=4624b2b75a6b475895412a2149999001 --publicurl=http://osctl01:8774/v2/%\(tenant_id\)s --internalurl=http://osctl01:8774/v2/%\(tenant_id\)s --adminurl=http://osctl01:8774/v2/%\(tenant_id\)s
+-------------+----------------------------------------+
| Property | Value |
+-------------+----------------------------------------+
| adminurl | http://osctl01:8774/v2/%(tenant_id)s |
| id | e92f90f26d504e699fb4df2c5291f967 |
| internalurl | http://osctl01:8774/v2/%(tenant_id)s |
| publicurl | http://osctl01:8774/v2/%(tenant_id)s |
| region | regionOne |
| service_id | 4624b2b75a6b475895412a2149999001 |
+-------------+----------------------------------------+

Enable everything:

chkconfig openstack-nova-api on
chkconfig openstack-nova-cert on
chkconfig openstack-nova-consoleauth on
chkconfig openstack-nova-scheduler on
chkconfig openstack-nova-conductor on
chkconfig openstack-nova-novncproxy on

Networking

The legacy architecture OpenStack used for provisioning networks for instances is called Nova Network. It’s being phased out in favor of Neutron, but we’re going to use Nova Network because it’s simple and does the job for a lab environment. Just don’t use it in production.

Install Nova Network:

yum -y install openstack-nova-network

We want to set up a flat network, which means one subnet that all our instances will share. Since our instance subnet, 192.168.2.208/28, has a 28-bit netmask, we have 14 usable host addresses.

openstack-config --set /etc/nova/nova.conf DEFAULT network_manager nova.network.manager.FlatDHCPManager
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.libvirt.firewall.IptablesFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT network_size 14
openstack-config --set /etc/nova/nova.conf DEFAULT allow_same_net_traffic False
openstack-config --set /etc/nova/nova.conf DEFAULT multi_host True
openstack-config --set /etc/nova/nova.conf DEFAULT send_arp_for_ha True
openstack-config --set /etc/nova/nova.conf DEFAULT share_dhcp_address True
openstack-config --set /etc/nova/nova.conf DEFAULT force_dhcp_release True
openstack-config --set /etc/nova/nova.conf DEFAULT flat_interface eth3
openstack-config --set /etc/nova/nova.conf DEFAULT flat_network_bridge br100
openstack-config --set /etc/nova/nova.conf DEFAULT public_interface eth3
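
The network_size of 14 set above comes straight from the subnet math, which we can sanity-check in the shell: a /28 spans 16 addresses, and dropping the network and broadcast addresses leaves 14, 192.168.2.209 through 192.168.2.222:

```shell
# Subnet math for 192.168.2.208/28 (the values mirror our lab network)
prefix=28
base=208                            # last octet of the network address
size=$(( 1 << (32 - prefix) ))      # 2^(32-28) = 16 addresses total
usable=$(( size - 2 ))              # minus network and broadcast = 14
echo "usable hosts: $usable"
echo "first usable: 192.168.2.$(( base + 1 ))"
echo "last usable:  192.168.2.$(( base + size - 2 ))"
```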

Now let’s install the Nova API and set the metadata API and network services to start at boot:

yum -y install openstack-nova-api
chkconfig openstack-nova-metadata-api on
chkconfig openstack-nova-network on

Let’s create a network (similar to an EC2 VPC) that the instances will use:

nova network-create vmnet --fixed-range-v4=192.168.2.208/28 --bridge=br100 --multi-host=T

Now let’s create a public-private keypair:

ssh-keygen

Add the keypair to Nova:

nova keypair-add --pub_key ~/.ssh/id_rsa.pub mykey

Add rules to allow ICMP and SSH to the instances:

nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0

And finally, let’s boot up the CirrOS image:

nova boot --flavor 2 --key_name mykey --image 0c71c5bb-51cd-4afe-880f-6c616b89bfcb --security_group default cirrOS

+--------------------------------------+-----------------------------------------------------+
| Property | Value |
+--------------------------------------+-----------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | - |
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |
| OS-EXT-SRV-ATTR:instance_name | instance-00000001 |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | y4LrfrH8oW4h |
| config_drive | |
| created | 2014-07-27T06:05:01Z |
| flavor | m1.small (2) |
| hostId | |
| id | 08513282-e861-4f85-b31f-deddc865e7f2 |
| image | cirros-0.3.1 (0c71c5bb-51cd-4afe-880f-6c616b89bfcb) |
| key_name | mykey |
| metadata | {} |
| name | cirrOS |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | ffea20d6a4ae483bb62a46c8246d1a1b |
| updated | 2014-07-27T06:05:01Z |
| user_id | 7538cee43aa8428bbb26c9ab28a2adca |
+--------------------------------------+-----------------------------------------------------+

Once it’s built, we can SSH to the instance’s IP address using the key we created (the default CirrOS user is cirros). Congratulations! You’ve launched your very first instance on OpenStack!
