Part 1: OpenStack TripleO Architecture and Step-by-Step Guide to Installing the Undercloud and Overcloud Nodes (Compute, Controller, Ceph Storage)

In this article we will cover the TripleO architecture and the step-by-step installation of the undercloud and the overcloud nodes (compute, controller and Ceph storage).

You can also configure an HA cluster on your OpenStack setup and move your Keystone service behind a load balancer or HA cluster.

The Red Hat OpenStack Platform director is a toolset for installing and managing a complete OpenStack environment. It is based primarily on the OpenStack project TripleO, which is an abbreviation for "OpenStack-On-OpenStack". This project takes advantage of OpenStack components to install a fully operational OpenStack environment.

The Red Hat OpenStack Platform director uses two main concepts:

  • undercloud
  • overcloud

The undercloud installs and configures the overcloud.

Undercloud

The undercloud is the main director node. It is a single-system OpenStack installation that includes components for provisioning and managing the OpenStack nodes that form your OpenStack environment (the overcloud).

The primary objectives of the undercloud are as follows:

  • Discover the bare-metal servers on which the OpenStack Platform will be deployed
  • Serve as the deployment manager for the software to be deployed on these nodes
  • Define complex network topology and configuration for the deployment
  • Rollout of software updates and configurations to the deployed nodes
  • Reconfigure an existing undercloud deployed environment
  • Enable high availability support for the openstack nodes

Overcloud

  • The overcloud is the resulting Red Hat OpenStack Platform environment created using the undercloud.
  • It includes the different node roles that you define based on the OpenStack Platform environment you aim to create.

TripleO architecture

TripleO is a friendly name for "OpenStack-On-OpenStack". It is the deployment, configuration and automation tooling used by the undercloud. TripleO uses the underlying deployment cloud, known as the "undercloud", to create a production cloud known as the "overcloud".

Before being able to deploy the "overcloud", the "undercloud" needs to be deployed.

TripleO leverages several OpenStack services like Nova, Ironic, Neutron, Heat, Glance and Ceilometer to deploy the overcloud on bare-metal hardware.

  • Ironic: Provisions physical hardware and leverages technologies like PXE and IPMI to present it for the deployment of the overcloud.
  • Nova: Provisions the overcloud nodes like compute, controller, and Ceph storage nodes.
  • Neutron: Provides the networking environment in which to deploy the overcloud.
  • Glance: Provides the repository for images that are used by "ironic" for bare metal provisioning and while deploying overcloud nodes.
  • Heat: Provides the way to orchestrate the "overcloud" deployment, including complex architectures.
  • Ceilometer: Helps collect metrics about the overcloud nodes
  • Cinder: Provides the volumes needed to deploy the "overcloud" nodes
  • Horizon: Provides the web user interface
  • Keystone: Provides authentication for all the overcloud nodes.
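Once the undercloud is installed (later in this article), you can see which of these services it actually runs by sourcing the stackrc credentials file that the installer creates and querying the Keystone service catalog; a quick example:

[stack@undercloud-director ~]$ source ~/stackrc
[stack@undercloud-director ~]$ openstack catalog list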

NOTE: TripleO supports only the following operating systems:

  • RHEL 7 x86_64
  • CentOS 7 x86_64

My setup details
I will be hosting my undercloud and overcloud in a virtual environment running on a physical blade.

My physical blade KVM host

IP       : 10.43.138.12
Netmask  : 255.255.255.224
Gateway  : 10.43.138.30
CPU      : 32
RAM      : 128GB
Disk     : 900GB

NOTE: The baremetal machines must meet the following minimum specifications:
8 core CPU
12 GB memory
60 GB free disk space
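A rough way to confirm a machine meets these minimums from the shell (run it on each node; the undercloud VM is shown here as an example):

[root@undercloud-director ~]# nproc        # expect 8 or more cores
[root@undercloud-director ~]# free -g      # expect at least 12 GB of memory
[root@undercloud-director ~]# df -h /      # expect at least 60 GB of free disk space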

DNS Server (my DNS server is running on the KVM host)

IP       : 10.43.138.12
Netmask  : 255.255.255.224
Gateway  : 10.43.138.30

Undercloud Director Virtual Machine

NOTE: The provisioning network NIC should not be the same NIC that you are using for remote connectivity to the undercloud machine. During the undercloud installation, an Open vSwitch bridge will be created for Neutron and the provisioning NIC will be bridged to it. As such, connectivity would be lost if the provisioning NIC were also used for remote connectivity to the undercloud machine.

eth0     : Management Network
eth1     : Provisioning Network

[eth0]
IP       : 10.43.138.37
Netmask  : 255.255.255.224
Gateway  : 10.43.138.30

[eth1]
This interface will be configured automatically as a bridge during the undercloud configuration, but I plan to use the configuration below for eth1:
IP       : 192.168.122.30
Netmask  : 255.255.255.0
Gateway  : 192.168.122.1
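Once the undercloud installation (covered later) has run, eth1 should show up as a port on the br-ctlplane Open vSwitch bridge that TripleO creates, carrying this provisioning IP. A quick post-install check:

[stack@undercloud-director ~]$ sudo ovs-vsctl show         # eth1 should be listed under br-ctlplane
[stack@undercloud-director ~]$ ip addr show br-ctlplane    # should carry 192.168.122.30/24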

NOTE: The undercloud is intended to work correctly with SELinux enforcing. Installations with SELinux in permissive or disabled mode are not recommended. The undercloud_enable_selinux config option controls that setting.
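You can confirm the current SELinux mode with getenforce, and the related undercloud.conf entry, shown below with its default value, should normally be left alone:

[root@undercloud-director ~]# getenforce
Enforcing

undercloud_enable_selinux = true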

Validate FQDN of the undercloud director

The director requires a fully qualified domain name for its installation and configuration process. This means you may need to set the hostname of your director’s host.

Ensure that there is an FQDN hostname set and that the $HOSTNAME environment variable matches that value. The easiest way to do this is to set the "undercloud_hostname" option in undercloud.conf before running the install. This will allow the installer to configure all of the hostname-related settings appropriately.
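For this setup that means adding a line like the one below to undercloud.conf (using this article's FQDN; adjust it to your own):

[DEFAULT]
undercloud_hostname = undercloud-director.example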

Alternatively the hostname settings can be configured manually, but this is strongly discouraged. The manual steps are as follows:

# hostnamectl set-hostname undercloud-director.example

Since I am using DNS, I need not worry about this

[root@undercloud-director nova]# nslookup undercloud-director
Server:         10.43.138.12
Address:        10.43.138.12#53

Name:   undercloud-director.example
Address: 192.168.122.30

[root@undercloud-director nova]# nslookup undercloud-director.example
Server:         10.43.138.12
Address:        10.43.138.12#53

Name:   undercloud-director.example
Address: 192.168.122.30

[root@undercloud-director nova]# nslookup 192.168.122.30
Server:         10.43.138.12
Address:        10.43.138.12#53

30.122.168.192.in-addr.arpa     name = undercloud-director.example.

An entry for the system’s FQDN hostname is also needed in /etc/hosts. For this setup, /etc/hosts should have an entry like:

127.0.0.1   undercloud-director.example undercloud-director

Overcloud Controller

This will have a single interface (eth0)

vCPU     : 4
RAM      : 10240 MB
Disk1    : 50GB
NIC MAC  : 52:54:00:87:37:1f

Overcloud Compute

This will have a single interface (eth0)

vCPU     : 4
RAM      : 10240 MB
Disk1    : 50GB
NIC MAC  : 52:54:00:64:36:c6

Overcloud Ceph-Storage

This will have a single interface (eth0)

vCPU     : 4
RAM      : 10240 MB
Disk1    : 50GB
Disk2    : 50GB
Disk3    : 50GB
NIC MAC  : 52:54:00:6f:3f:47
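Since these overcloud nodes are KVM guests on the blade, they can be pre-created with virt-install. The command below is only a sketch for the Ceph storage VM; the host name "kvm-host", the libvirt network name "provisioning" and the disk paths are placeholders that you would replace with your own, and the same pattern applies to the controller and compute VMs with their respective MAC addresses:

[root@kvm-host ~]# virt-install --name overcloud-ceph01 \
    --ram 10240 --vcpus 4 \
    --disk path=/var/lib/libvirt/images/ceph01-disk1.qcow2,size=50 \
    --disk path=/var/lib/libvirt/images/ceph01-disk2.qcow2,size=50 \
    --disk path=/var/lib/libvirt/images/ceph01-disk3.qcow2,size=50 \
    --network network=provisioning,mac=52:54:00:6f:3f:47 \
    --boot network,hd --noautoconsole --os-variant rhel7.0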

Register your undercloud node

You must register your node with a valid Red Hat subscription to get all the RPMs required for the deployment.

[root@undercloud-director ~]# subscription-manager register

Look for the subscription that provides the OpenStack content:

[root@undercloud-director ~]# subscription-manager list --available --all --matches="*OpenStack*"

Get the pool ID of the OpenStack subscription and attach it to your system:

[root@undercloud-director ~]# subscription-manager attach --pool=<pool_id>
Successfully attached a subscription for: Self-Supported Red Hat OpenStack Platform

Next, enable the repositories below, which are needed for the overall TripleO deployment:

[root@undercloud-director ~]# subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-rh-common-rpms --enable=rhel-ha-for-rhel-7-server-rpms --enable=rhel-7-server-openstack-10-rpms --enable=rhel-7-server-satellite-tools-6.2-rpms --enable=rhel-7-server-openstack-10-devtools-rpms
Repository 'rhel-7-server-openstack-10-devtools-rpms' is enabled for this system.
Repository 'rhel-7-server-satellite-tools-6.2-rpms' is enabled for this system.
Repository 'rhel-7-server-rh-common-rpms' is enabled for this system.
Repository 'rhel-7-server-openstack-10-rpms' is enabled for this system.
Repository 'rhel-ha-for-rhel-7-server-rpms' is enabled for this system.
Repository 'rhel-7-server-rpms' is enabled for this system.
Repository 'rhel-7-server-extras-rpms' is enabled for this system.
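You can verify that the OpenStack repository is actually enabled before moving on, for example:

[root@undercloud-director ~]# yum repolist enabled | grep -i openstack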

My network config

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.43.138.27  netmask 255.255.255.224  broadcast 10.43.138.31
        inet6 fe80::5054:ff:fe43:55dd  prefixlen 64  scopeid 0x20<link>
        ether 52:54:00:43:55:dd  txqueuelen 1000  (Ethernet)
        RX packets 78110  bytes 114498833 (109.1 MiB)
        RX errors 0  dropped 5  overruns 0  frame 0
        TX packets 33869  bytes 2765910 (2.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 52:54:00:46:8f:ee  txqueuelen 1000  (Ethernet)
        RX packets 374  bytes 19460 (19.0 KiB)
        RX errors 0  dropped 5  overruns 0  frame 0
        TX packets 9  bytes 662 (662.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Installing the Director Packages

Use the following command to install the required command line tools for director installation and configuration:

[root@undercloud-director ~]# yum install -y python-tripleoclient openstack-keystone openstack-dashboard openstack-tempest openstack-utils
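A quick check that the client tooling landed as expected:

[root@undercloud-director ~]# rpm -q python-tripleoclient openstack-utils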

Creating user for undercloud deployment

The undercloud and overcloud deployment must be done as a normal user and not as root, so we will create a "stack" user for this purpose.

[root@undercloud-director ~]# useradd stack

[root@undercloud-director network-scripts]# echo redhat | passwd --stdin stack
Changing password for user stack.
passwd: all authentication tokens updated successfully.

[root@undercloud-director ~]# echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack
stack ALL=(root) NOPASSWD:ALL

[root@undercloud-director ~]# chmod 0440 /etc/sudoers.d/stack

[root@undercloud-director ~]# su - stack

Configure undercloud deployment parameters

Copy the sample undercloud.conf file to the home directory of the "stack" user:

[stack@undercloud-director ~]$ cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf

Update the variables below in your undercloud.conf based on your setup. These variables will be used to set up your undercloud node.

[stack@undercloud-director ~]$ egrep -v '^#|^$' undercloud.conf
[DEFAULT]
local_ip = 192.168.122.30/24
network_gateway = 192.168.122.30
undercloud_public_vip = 192.168.122.100
undercloud_admin_vip = 192.168.122.101
local_interface = eth1
network_cidr = 192.168.122.0/24
masquerade_network = 192.168.122.0/24
dhcp_start = 192.168.122.150
dhcp_end = 192.168.122.160
inspection_iprange = 192.168.122.170,192.168.122.180
[auth]

The parameters are explained below:

  • local_ip : The IP address defined for the director’s Provisioning NIC. This is also the IP address the director uses for its DHCP and PXE boot services
  • network_gateway : The gateway for the overcloud instances. This is the undercloud host, which forwards traffic to the External network.
  • undercloud_public_vip : The IP address defined for the director’s Public API. Use an IP address on the Provisioning network that does not conflict with any other IP addresses or address ranges.
  • undercloud_admin_vip : The IP address defined for the director’s Admin API. Use an IP address on the Provisioning network that does not conflict with any other IP addresses or address ranges.
  • local_interface : The chosen interface for the director’s Provisioning NIC. This is also the device the director uses for its DHCP and PXE boot services. Change this value to your chosen device.
  • network_cidr : The network that the director uses to manage overcloud instances. This is the Provisioning network, which the undercloud’s neutron service manages.
  • masquerade_network : Defines the network that will masquerade for external access. This provides the Provisioning network with a degree of network address translation (NAT) so that it has external access through the director.
  • dhcp_start; dhcp_end : The start and end of the DHCP allocation range for overcloud nodes. Ensure this range contains enough IP addresses to allocate your nodes.
  • inspection_iprange : A range of IP addresses that the director’s introspection service uses during the PXE boot and provisioning process. Use comma-separated values to define the start and end of this range. Make sure this range contains enough IP addresses for your nodes and does not conflict with the range for dhcp_start and dhcp_end.
NOTE: The [auth] section contains the following parameters:
undercloud_db_password, undercloud_admin_token, undercloud_admin_password, undercloud_glance_password, and so on.

The remaining parameters are the access details for all of the director’s services. No change is required for the values. The director’s configuration script automatically generates these values if blank in undercloud.conf. You can retrieve all values after the configuration script completes.
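If you prefer to pin these values yourself instead of letting the installer generate them, the [auth] section would look something like the sketch below (only the parameters named above are shown, and the values are placeholders):

[auth]
undercloud_db_password = <db-password>
undercloud_admin_token = <admin-token>
undercloud_admin_password = <admin-password>
undercloud_glance_password = <glance-password>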

Deploy the undercloud

Undercloud deployment is completely automated and uses Puppet manifests provided by TripleO. The command below will start the undercloud installation and configuration.

NOTE: This will take some time to complete.
[stack@undercloud-director ~]$ openstack undercloud install
Once the step is complete you should see a message like the one below:

os-refresh-config completed successfully
Generated new ssh key in ~/.ssh/id_rsa
Created flavor "baremetal" with profile "None"
Created flavor "control" with profile "control"
Created flavor "compute" with profile "compute"
Created flavor "ceph-storage" with profile "ceph-storage"
Created flavor "block-storage" with profile "block-storage"
Created flavor "swift-storage" with profile "swift-storage"

#############################################################################
Undercloud install complete.

The file containing this installation's passwords is at
/home/stack/undercloud-passwords.conf.

There is also a stackrc file at /home/stack/stackrc.

These files are needed to interact with the OpenStack services, and should be
secured.

#############################################################################

NOTE: If you face any issues while configuring the undercloud node, check the "/home/stack/.instack/install-undercloud.log" file for the installation-related logs.
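If the install fails, the tail of that log is usually enough to spot the failing step; for example:

[stack@undercloud-director ~]$ tail -n 50 ~/.instack/install-undercloud.log
[stack@undercloud-director ~]$ grep -iE 'error|fail' ~/.instack/install-undercloud.log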

The configuration is performed using the Python script "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py".

The undercloud installation is divided into two distinct phases:

  • Instack: It deploys diskimage-builder elements locally. These elements describe the package installations or system configuration to be done on the undercloud node. All runnable scripts are located in the "install.d" directory under "/usr/share/instack-undercloud/puppet-stack-config/".
  • os-refresh-config: It provides the mechanism for staging the deployment of system configuration and uses "os-apply-config" to apply that configuration. It manages the in-instance process of responding to changes in configuration parameters, which helps refresh the configuration for any changes made post-deployment.
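As a final sanity check after the installation finishes, you can source the stackrc file mentioned in the output above and confirm that the deployment flavors listed there were created; for example:

[stack@undercloud-director ~]$ source ~/stackrc
[stack@undercloud-director ~]$ openstack flavor list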