Setting up an OpenStack Cloud using Ansible

I work with Ansible and OpenStack quite a bit on a daily basis, so I wanted to check out the work the community has done with the openstack-ansible project and get an OpenStack environment set up in my lab. I encourage you to read through the documentation, as it is really detailed. Let’s do this!


My Lab Environment

My setup consists of:

4 x Dell PowerEdge R720s with 128GB of RAM
Quad 10G Intel NICs
Cisco Nexus 3k switches

I set aside one of the nodes for deployment; the other three would be used as targets. openstack-ansible currently supports Ubuntu 14.04 LTS (Trusty), so the first order of business was installing the OS on the servers. Support for 16.04 LTS (Xenial) and CentOS 7 may come down the road at some point as well.

Setting up Networking

Once the OS was installed, the first thing to do was set up the initial network configuration in /etc/network/interfaces. For my setup, I assigned the networks to VLANs.

Install some initial packages on each target host and enable the bonding and 8021q modules:

apt-get install bridge-utils debootstrap ifenslave ifenslave-2.6 \
lsof lvm2 ntp ntpdate openssh-server sudo tcpdump vlan
echo 'bonding' >> /etc/modules
echo '8021q' >> /etc/modules

Drop your interfaces file onto all of the hosts you’ll be deploying to, then reboot them to apply the changes so that all of the bridges for your containers and instances come up. In my example, this configuration sets up dual bonds, VLANs, and bridges that OpenStack Ansible will plug everything into.
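To give a rough idea of the shape, here’s a trimmed sketch of what one of these files can look like; the interface names, VLAN ID, and addresses below are placeholders, and the other bridges (br-vxlan, br-storage, br-vlan) follow the same pattern:

# /etc/network/interfaces (excerpt)
auto bond0
iface bond0 inet manual
    bond-slaves p2p1 p2p3
    bond-mode active-backup
    bond-miimon 100

# VLAN interface carried on the bond
auto bond0.236
iface bond0.236 inet manual
    vlan-raw-device bond0

# Container management bridge that OpenStack Ansible plugs into
auto br-mgmt
iface br-mgmt inet static
    bridge_ports bond0.236
    bridge_stp off
    address 172.29.236.11
    netmask 255.255.252.0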

Initial Bootstrap

You’ll want to use one server as the deployment host so log into that server, check out openstack-ansible, and run the initial Ansible bootstrap:

git clone https://github.com/openstack/openstack-ansible.git /opt/openstack-ansible
cd /opt/openstack-ansible
git checkout stable/mitaka
scripts/bootstrap-ansible.sh

The bootstrap-ansible.sh script will generate SSH keys, so make sure to copy the contents of the public key file on the deployment host into /root/.ssh/authorized_keys on each target host.
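If password authentication is still enabled on the targets, something like this gets the key in place (assuming the bootstrap dropped the public key at /root/.ssh/id_rsa.pub; the hostnames are placeholders):

# copy the deployment host's root key to each target host
for host in node1 node2 node3; do
    ssh-copy-id -i /root/.ssh/id_rsa.pub root@$host
done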

Copy the example openstack_deploy directory to /etc/:

cp -R /opt/openstack-ansible/etc/openstack_deploy /etc/openstack_deploy
cp /etc/openstack_deploy/openstack_user_config.yml.example /etc/openstack_deploy/openstack_user_config.yml

Modify openstack_user_config.yml with the settings you want. You’ll need to specify which servers you want to carry each role. The file is well commented and provides plenty of documentation to get started.

My config:
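Rather than paste the whole thing, here’s a trimmed sketch of the general shape; the CIDRs, VIPs, and host IPs below are placeholders:

---
cidr_networks:
  container: 172.29.236.0/22
  tunnel: 172.29.240.0/22

used_ips:
  - "172.29.236.1,172.29.236.50"

global_overrides:
  internal_lb_vip_address: 172.29.236.9
  external_lb_vip_address: 10.127.5.9
  management_bridge: "br-mgmt"
  provider_networks:
    - network:
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth1"
        ip_from_q: "container"
        type: "raw"
        group_binds:
          - all_containers
          - hosts
        is_container_address: true
        is_ssh_address: true

shared-infra_hosts:
  node1:
    ip: 10.127.5.11
  node2:
    ip: 10.127.5.12
  node3:
    ip: 10.127.5.13

compute_hosts:
  node2:
    ip: 10.127.5.12
  node3:
    ip: 10.127.5.13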

If you have enough memory and CPU on the hosts, you can also reuse the infrastructure nodes as compute_hosts to avoid having to set up dedicated nodes for compute.

Credentials

Generate random passwords and tokens for the various services:

cd /opt/openstack-ansible/scripts
python pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml

User Variables

We’ll start with just the basics for now to get operational. Make sure at least a few options are set in /etc/openstack_deploy/user_variables.yml; otherwise Ansible will have a hard time assembling its variables (these options haven’t made it into the mitaka stable branch yet):

## Debug and Verbose options.
debug: false
verbose: false

Run the playbooks

cd /opt/openstack-ansible/playbooks
openstack-ansible setup-hosts.yml
openstack-ansible haproxy-install.yml
openstack-ansible setup-infrastructure.yml
openstack-ansible setup-openstack.yml

If there are no errors, the initial cluster should be set up. The playbooks are all idempotent, so you can rerun them at any time.

Using the Cluster

Once these playbooks complete, you should have a functional OpenStack cluster. To get started, log into Horizon using either the external VIP you set up in openstack_user_config.yml or one of the servers directly.

You’ll use the user name “admin” and the password will be in your /etc/openstack_deploy/user_secrets.yml file that you generated earlier:

grep keystone_auth_admin_password /etc/openstack_deploy/user_secrets.yml
keystone_auth_admin_password: 4lkwtwtpmasldfqsdf

Each target node will have a utility container that you can SSH into to grab the OpenStack client credentials or to run the client from directly. You can find it with lxc-ls:

root@osa-node1:~# lxc-ls | grep -i util
node1_utility_container-860a6cd9
root@osa-node1:~# ssh root@node1_utility_container-860a6cd9
Welcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-85-generic x86_64)
root@node1-utility-container-860a6cd9:~# openstack server list
+--------------------------------------+-------------+--------+----------------------+
| ID                                   | Name        | Status | Networks             |
+--------------------------------------+-------------+--------+----------------------+
| 1b7f1a7f-db87-47fe-a884-c66875ceed00 | my-instance | ACTIVE | Public=192.168.20.165|
+--------------------------------------+-------------+--------+----------------------+
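The utility container should also land an openrc file with the admin credentials already populated, so if the client prompts for auth details you can source that first:

root@node1-utility-container-860a6cd9:~# source /root/openrc
root@node1-utility-container-860a6cd9:~# openstack endpoint list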

Create and Set Up Your Network

In Horizon, under the System tab, select Networks and then “+Create Network”. The main thing to note: whatever type of network you are setting up, make sure to specify that type in the Physical Network box as well. In my case, I set up a VLAN network, so I made sure to set:

Name: Public
Project: admin
Provider Network Type: VLAN
Physical Network: vlan
Admin State: UP

Once the network is created, click on the Network Name and click “+Create Subnet”. Add your:

Subnet Name: VLAN_854
Network Address: 10.127.95.0/24
Gateway IP: 10.127.95.1
Allocation Pools: <Start Address>,<End Address>
DNS Name Servers: <DNS Servers>
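If you’d rather use the CLI from the utility container, the neutron client can do the same thing; the segmentation ID, allocation pool, and DNS server below are examples:

neutron net-create Public --shared \
    --provider:network_type vlan \
    --provider:physical_network vlan \
    --provider:segmentation_id 854
neutron subnet-create Public 10.127.95.0/24 --name VLAN_854 \
    --gateway 10.127.95.1 \
    --allocation-pool start=10.127.95.100,end=10.127.95.200 \
    --dns-nameserver 8.8.8.8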

Add Images to Glance

You’ll need to add some images to get up and running. The OpenStack image guide maintains a list of supported images that include cloud-init.

Name: Image Name
Image Source: Image Location
Image Location: Enter in URL of Cloud Image
Format: QCOW2, RAW, or whatever the image format may be
Architecture: x86_64 or whatever hardware you might be using
Public: Checked
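The CLI equivalent looks something like this, using the Ubuntu Trusty cloud image as an example:

wget https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
openstack image create "ubuntu-14.04" \
    --file trusty-server-cloudimg-amd64-disk1.img \
    --disk-format qcow2 --container-format bare \
    --public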

Security Groups

Security groups are enabled by default, so you’ll want to add some ingress rules, such as SSH and ICMP, so you can connect to your instance.
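From the utility container, something along these lines opens those up on the default group (the exact flag names have shifted a bit between client versions):

openstack security group rule create --proto tcp --dst-port 22 default
openstack security group rule create --proto icmp default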

Start an Instance

Under the Instances tab, click “Launch Instance”. Fill in your desired options: boot from image, add any keypairs you might want, and make sure to select the security group you set up previously. You’ll also want to make sure you are plugged into the right network. Once all of that is set, you should be able to launch the instance and connect to it.
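The CLI version, using the image and network from earlier; the flavor and keypair names here are examples:

openstack server create my-instance \
    --image "ubuntu-14.04" \
    --flavor m1.small \
    --key-name mykey \
    --security-group default \
    --nic net-id=$(openstack network show Public -f value -c id)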

Things to Note

Cluster Recovery

The target hosts form a Galera database cluster, so if you need to reboot them, make sure to stagger the reboots so that the cluster doesn’t lose quorum. If you find the DB is not coming up, you can run the galera-install playbook with the galera-bootstrap tag, which should bring the cluster back up (docs):

openstack-ansible galera-install.yml --tags galera-bootstrap
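To check on cluster health, you can attach to one of the galera containers and look at the wsrep status; the container name here is an example:

root@osa-node1:~# lxc-attach -n node1_galera_container-7bc5f239
root@node1-galera-container-7bc5f239:~# mysql -e "SHOW STATUS LIKE 'wsrep_cluster_%';"

A healthy cluster shows wsrep_cluster_status as Primary and a wsrep_cluster_size matching your node count.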

If you run into any issues running through this, please let me know in the comments or ping me in #openstack-ansible on Freenode as antonym.

  • Hi Antonym, per the OpenStack Ansible installation guide (http://docs.openstack.org/developer/openstack-ansible/install-guide/configure-hostlist.html), we should assign the IP address of the br-mgmt container management bridge to shared-infra_hosts/os-infra_hosts/identity_hosts/log_hosts/repo-infra_hosts/storage_hosts. In your configuration the br-mgmt container management bridge IPs are 172.29.236.xxx, but you’re using the host management IPs 10.127.5.xxx directly – any thoughts on that?

  • 4ntonym

    The 172.29.236.xxx addresses are used for local communication between the servers, while the management IPs set to 10.127.5.xxx were for routing to my lab since it was remote.

  • Thanks. I guess the OpenStack Ansible installation guide is misleading, i.e. we should assign the host management IPs to shared-infra_hosts/os-infra_hosts/identity_hosts/log_hosts/repo-infra_hosts/storage_hosts, not the IP address of the br-mgmt container management bridge.

  • jorejarena

    Not working here… no matter what I try (3 control hosts, 1 control host, without keepalived/haproxy, with haproxy), when I execute “openstack-ansible setup-infrastructure.yml” the script fails with this error:

    TASK: [galera_client | Install pip packages] **********************************
    failed: [node1_galera_container-7bc5f239] => (item=MySQL-python) => {"attempts": 5, "cmd": "/usr/local/bin/pip install MySQL-python", "failed": true, "item": "MySQL-python"}
    msg: Task failed as maximum retries was encountered

    infra nodes, repo nodes, all configured correctly.

  • 4ntonym

    A quick search for the error pops up this: https://bugs.launchpad.net/openstack-ansible/+bug/1497695 You might take a look at that, and if that doesn’t help, open a bug with openstack-ansible or ask in the IRC channel.

  • Yacoub

    Hi Antonym, at the last step of OSAD, deploying the OpenStack services (openstack-ansible setup-openstack.yml), I get the error below.
    Did you ever see it during your lab? Any idea what the problem is?

    TASK: [os_keystone | Ensure service tenant] ***********************************
    ESTABLISH CONNECTION FOR USER: root
    skipping: [infra2_keystone_container-c25f1768]
    skipping: [infra3_keystone_container-775eea6d]
    REMOTE_MODULE keystone login_project_name=admin login_password=VALUE_HIDDEN command=ensure_tenant insecure=False tenant_name=service login_user=admin description='Keystone Identity Service' endpoint=http://192.168.1.50:35357/v3
    EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/root/.ansible/cp/ansible-ssh-%h-%p-%r" -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=120 192.168.1.15 /bin/sh -c 'LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python'
    Result from run 1 is: {'msg': 'OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014 ... [ssh mux debug output trimmed] ... Traceback (most recent call last): ... keystoneauth1.exceptions.auth.AuthorizationFailure: Authorization failed: Gateway Timeout (HTTP 504)', 'failed': True, 'attempts': 1, 'parsed': False}

  • dimtheo

    Hello,
    how would you set up the networks for each target host if you were not allowed to use VLAN tags and all of the machines had a single NIC? Is that even possible?