[RHOSP] Red Hat OpenStack Platform 11 Deployment on Nested KVM Infrastructure


This type of deployment is mainly helpful for a POC or demo, where all the required OpenStack components can be integrated to see how they work together.

I have a KVM host installed with RHEL 7.5 that has the following resources:


[root@kvmhost ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:            70G         57G        339M        234M         12G         12G
Swap:          4.0G        734M        3.3G


[root@kvmhost ~]# cat /proc/cpuinfo | egrep processor
processor       : 0
processor       : 1
processor       : 2
processor       : 3
processor       : 4
processor       : 5
processor       : 6
processor       : 7
processor       : 8
processor       : 9
processor       : 10
processor       : 11
processor       : 12
processor       : 13
processor       : 14
processor       : 15

[root@kvmhost ~]# fdisk -l /dev/sda

Disk /dev/sda: 898.3 GB, 898319253504 bytes, 1754529792 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000c5083

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200  1754529791   876215296   8e  Linux LVM


I have used the HA-based configuration, hence the node counts for the components are as follows:

UnderCloud:
    - Director: 1

OverCloud:
    - Controllers: 3
    - Computes: 3
    - Ceph Nodes: 3

Here is the high level architecture of this deployment that I have used to integrate the nodes and their respective components:

                    Fig-01: High Level Architecture Diagram of RHOSP 11

Make the KVM host ready for the deployment:

a. Create the 'external' network by executing the below command:

[root@kvmhost ~]# cat > /tmp/external.xml <<EOF
<network>
   <name>external</name>
   <forward mode='nat'>
      <nat>
         <port start='1024' end='65535'/>
      </nat>
   </forward>
   <ip address='192.168.130.1' netmask='255.255.255.0'>
   </ip>
</network>
EOF

[root@kvmhost ~]# virsh net-define /tmp/external.xml
[root@kvmhost ~]# virsh net-autostart external
[root@kvmhost ~]# virsh net-start external

b. Create the 'provisioning' network by using the below command:

[root@kvmhost ~]# cat > /tmp/provisioning.xml <<EOF
<network>
   <name>provisioning</name>
   <ip address='192.168.140.254' netmask='255.255.255.0'>
   </ip>
</network>
EOF

[root@kvmhost ~]# virsh net-define /tmp/provisioning.xml
[root@kvmhost ~]# virsh net-autostart provisioning
[root@kvmhost ~]# virsh net-start provisioning

Check all the newly created networks by using the below command:

[root@kvmhost ~]# virsh  net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
 external             active     yes           yes
 provisioning         active     yes           yes

c. Enable access on the KVM hypervisor host so that Ironic can control the VMs:

[root@kvmhost ~]# cat << EOF > /etc/polkit-1/localauthority/50-local.d/50-libvirt-user-stack.pkla
[libvirt Management Access]
Identity=unix-user:stack
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes
EOF
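
Once the 'stack' user has been created on the KVM host (it is added later, in step 4 of the overcloud section), a quick way to verify this polkit rule is to run virsh as that user, e.g.:

[root@kvmhost ~]# su - stack -c 'virsh -c qemu:///system list --all'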

d. Load the RHOSP 11 repo on the KVM host and install the 'openvswitch' package:

[root@kvmhost ~]# cat > /etc/yum.repos.d/rhosp11.repo <<_EOF_
[rhel-7-server-openstack-11-rpms]
name=Red Hat OpenStack Platform 11 RPMS
baseurl=http://rhospyum.example.com/RHOSP-11/rhel-7-server-openstack-11-rpms
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
_EOF_

[root@kvmhost ~]# yum install openvswitch -y

Start and enable the openvswitch service:

[root@kvmhost ~]# systemctl start openvswitch.service
[root@kvmhost ~]# systemctl enable openvswitch.service

[root@kvmhost ~]# ovs-vsctl -V
ovs-vsctl (Open vSwitch) 2.7.3
DB Schema 7.15.0

Create an OVS bridge 'br2':
[root@kvmhost ~]# ovs-vsctl add-br br2

Create the following interfaces with their respective VLAN tags and subnets:

Name             |        Network          | Vlan ID
------------------------------------------------------
storage          |   192.0.5.250/24        |  40
storage_mgmt     |   192.0.6.250/24        |  50
tenant           |   192.0.7.250/24        |  60
api              |   192.0.8.250/24        |  70


[root@kvmhost ~]# ovs-vsctl add-port br2 storage tag=40 -- set Interface storage type=internal
[root@kvmhost ~]# ovs-vsctl add-port br2 storage_mgmt tag=50 -- set Interface storage_mgmt type=internal
[root@kvmhost ~]# ovs-vsctl add-port br2 tenant tag=60 -- set Interface tenant type=internal
[root@kvmhost ~]# ovs-vsctl add-port br2 api tag=70 -- set Interface api type=internal

[root@kvmhost ~]# ifconfig storage 192.0.5.250/24
[root@kvmhost ~]# ifconfig storage_mgmt 192.0.6.250/24
[root@kvmhost ~]# ifconfig tenant 192.0.7.250/24
[root@kvmhost ~]# ifconfig api 192.0.8.250/24

To bring these interfaces up persistently after a reboot, add the following entries at the end of '/etc/rc.local' on the KVM host:

### For OVS network
ifconfig storage 192.0.5.250/24
ifconfig storage_mgmt 192.0.6.250/24
ifconfig tenant 192.0.7.250/24
ifconfig api 192.0.8.250/24
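
Note that on RHEL 7 the rc.local script is only executed at boot if it is marked executable, so make sure of that once:

[root@kvmhost ~]# chmod +x /etc/rc.d/rc.local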

To check each interface:

[root@kvmhost ~]# ip a s storage
7: storage: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 7e:77:9b:e8:eb:84 brd ff:ff:ff:ff:ff:ff
    inet 192.0.5.250/24 brd 192.0.5.255 scope global storage
       valid_lft forever preferred_lft forever
    inet6 fe80::7c77:9bff:fee8:eb84/64 scope link
       valid_lft forever preferred_lft forever

[root@kvmhost ~]# ip a s storage_mgmt
11: storage_mgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether de:6e:65:99:f2:f3 brd ff:ff:ff:ff:ff:ff
    inet 192.0.6.250/24 brd 192.0.6.255 scope global storage_mgmt
       valid_lft forever preferred_lft forever
    inet6 fe80::dc6e:65ff:fe99:f2f3/64 scope link
       valid_lft forever preferred_lft forever

[root@kvmhost ~]# ip a s tenant
8: tenant: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 1e:fc:09:2c:8b:03 brd ff:ff:ff:ff:ff:ff
    inet 192.0.7.250/24 brd 192.0.7.255 scope global tenant
       valid_lft forever preferred_lft forever
    inet6 fe80::1cfc:9ff:fe2c:8b03/64 scope link
       valid_lft forever preferred_lft forever

[root@kvmhost ~]# ip a s api
9: api: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 2a:39:18:ec:ba:69 brd ff:ff:ff:ff:ff:ff
    inet 192.0.8.250/24 brd 192.0.8.255 scope global api
       valid_lft forever preferred_lft forever
    inet6 fe80::2839:18ff:feec:ba69/64 scope link
       valid_lft forever preferred_lft forever

[root@kvmhost ~]# ovs-vsctl show
0dce8aab-4de2-45f2-9ecf-0283401245e1
    Bridge "br2"
        Port api
            tag: 70
            Interface api
                type: internal
        Port tenant
            tag: 60
            Interface tenant
                type: internal
        Port storage_mgmt
            tag: 50
            Interface storage_mgmt
                type: internal
        Port "br2"
            Interface "br2"
                type: internal
        Port storage
            tag: 40
            Interface storage
                type: internal
    ovs_version: "2.7.3"


Undercloud Node Deployment:

The following specs have been used to install the director node:
  • CPU: 6
  • Memory: 10GiB
  • HDD: 60GB
  • OS: Red Hat Enterprise Linux 7.5
  • Network Interface for:
    • Management Network
    • Provision Network
    • External Network
Once the RHEL 7.5 OS installation is done, make sure that the interfaces are named ethX on the director node. To disable biosdevname, use the below configuration (an example of the resulting line is shown after this list):
  • Open /etc/default/grub file in the director node.
  • Add 'net.ifnames=0 biosdevname=0' at the end of the line for 'GRUB_CMDLINE_LINUX'
  • Rebuild the grub config file by using the below command:
    • # grub2-mkconfig -o /boot/grub2/grub.cfg
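
For reference, after the edit the line should look something like the following; the pre-existing options shown here are just typical RHEL 7 defaults and will differ per system, only the last two parameters are the addition:

GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet net.ifnames=0 biosdevname=0"
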
eth0: This interface will be used as the management interface to log in to the director node from the KVM host.
eth1: This interface will be used as the provisioning interface, hence there is no need to configure an IP on it; the undercloud installer will do that automatically.
eth2: This interface will be used as the external interface and provides connectivity to the overcloud nodes over the external network from the director node.

1. The configuration of the eth0 and eth2 interfaces is as follows:

[root@rhospdirector ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
# Generated by dracut initrd
NAME="eth0"
DEVICE="eth0"
ONBOOT=yes
NETBOOT=yes
UUID="f526558c-6c2b-49cc-9528-92e44ffe9f87"
BOOTPROTO=none
TYPE=Ethernet
IPADDR=192.168.122.11
NETMASK=255.255.255.0
GATEWAY=192.168.122.1
DNS=192.168.122.1

[root@rhospdirector ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
TYPE=Ethernet
IPADDR=192.168.130.5
NETMASK=255.255.255.0
HWADDR=52:54:00:85:76:c1
ONBOOT=yes
BOOTPROTO=none

After configuring all of the above, reboot the node once so that the new interface names take effect and the configs are loaded accordingly.

2. To get all the required packages, you can either register the system to the Red Hat Network (RHN) or use a locally managed RHOSP 11 repo. In our case we have a local server that syncs packages from RHN, and it is used for the yum repo configuration.

[root@rhospdirector ~] # cat > /etc/yum.repos.d/rhosp11.repo <<EOL
[rhel-7-server-rpms]
name=Red Hat Enterprise Linux 7 RPMS
baseurl=http://rhospyum.example.com/RHOSP-11/rhel-7-server-rpms
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[rhel-7-server-extras-rpms]
name=Red Hat Enterprise Linux 7 Extra RPMS
baseurl=http://rhospyum.example.com/RHOSP-11/rhel-7-server-extras-rpms
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[rhel-7-server-openstack-11-rpms]
name=Red Hat OpenStack Platform 11 RPMS
baseurl=http://rhospyum.example.com/RHOSP-11/rhel-7-server-openstack-11-rpms
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[rhel-7-server-rh-common-rpms]
name=Red Hat Enterprise Linux 7 Common RPMS
baseurl=http://rhospyum.example.com/RHOSP-11/rhel-7-server-rh-common-rpms
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[rhel-7-server-rhceph-2-mon-rpms]
name=Red Hat CEPH Storage MON RPMS
baseurl=http://rhospyum.example.com/RHOSP-11/rhel-7-server-rhceph-2-mon-rpms
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[rhel-7-server-rhceph-2-osd-rpms]
name=Red Hat CEPH Storage RPMS
baseurl=http://rhospyum.example.com/RHOSP-11/rhel-7-server-rhceph-2-osd-rpms
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[rhel-ha-for-rhel-7-server-rpms]
name=Red Hat Enterprise Linux 7 HA RPMS
baseurl=http://rhospyum.example.com/RHOSP-11/rhel-ha-for-rhel-7-server-rpms
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[rhel-7-server-rhceph-2-tools-rpms]
name=Red Hat CEPH Storage Tools RPMS
baseurl=http://rhospyum.example.com/RHOSP-11/rhel-7-server-rhceph-2-tools-rpms
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[rhel-7-server-openstack-11-optools-rpms]
name=Red Hat OpenStack Optional tools RPMS
baseurl=http://rhospyum.example.com/RHOSP-11/rhel-7-server-openstack-11-optools-rpms
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
EOL

Run the yum update and reboot the director node:
[root@rhospdirector ~] # yum clean all
[root@rhospdirector ~] # yum repolist
[root@rhospdirector ~] # yum update -y
[root@rhospdirector ~] # reboot

3. Create the 'stack' user, set its password, and configure sudo for it:
[root@rhospdirector ~] # useradd stack
[root@rhospdirector ~] # echo RedHat123$ | passwd --stdin stack
[root@rhospdirector ~] # echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack
[root@rhospdirector ~] # chmod 0440 /etc/sudoers.d/stack

Log in as the 'stack' user and create the templates and images directories:
[root@rhospdirector ~] # su - stack
[stack@rhospdirector ~] $ mkdir ~/images
[stack@rhospdirector ~] $ mkdir ~/templates

4. Install the director package:

[stack@rhospdirector ~] $ sudo yum install -y python-tripleoclient

5. Configure the undercloud.conf file with the following entries:

[stack@rhospdirector ~] $ cat > ~/undercloud.conf <<EOF
[DEFAULT]
local_ip = 192.168.140.11/24
network_gateway = 192.168.140.1
undercloud_public_host = 192.168.140.12
undercloud_admin_host = 192.168.140.13
generate_service_certificate = true
certificate_generation_ca = local
local_interface = eth1
network_cidr = 192.168.140.0/24
masquerade_network = 192.168.140.0/24
dhcp_start = 192.168.140.230
dhcp_end = 192.168.140.250
inspection_interface = br-ctlplane
inspection_iprange = 192.168.140.210,192.168.140.229
undercloud_ntp_servers = 192.168.122.1
enable_telemetry = true
enable_ui = true

[auth]
undercloud_admin_password = RedHat123$
EOF

Start the undercloud installation using the below command:

[stack@rhospdirector ~] $ openstack undercloud install

Once the undercloud deployment is done, its UI can be accessed using the below URL and login credentials:

Undercloud URL: https://192.168.140.12
Username: admin
Password: RedHat123$
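
As an optional sanity check (not captured in the original run), source the stackrc file for the 'stack' user and list the undercloud services and endpoints to confirm the installation completed cleanly:

[stack@rhospdirector ~]$ source ~/stackrc
[stack@rhospdirector ~]$ openstack service list
[stack@rhospdirector ~]$ openstack endpoint list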


6. As it is a virtual environment, deploying the whole overcloud at once may cause performance issues. Hence, build the overcloud nodes in small batches of 3 by limiting 'max_concurrent_builds' in /etc/nova/nova.conf on the director.
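
One way to set this (a sketch; crudini generally comes with the undercloud tooling, otherwise just edit the file by hand):

[root@rhospdirector ~] # crudini --set /etc/nova/nova.conf DEFAULT max_concurrent_builds 3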

[root@rhospdirector ~] # egrep -i 'max_concurrent_builds' /etc/nova/nova.conf
max_concurrent_builds=3

Then restart the nova service:
 [root@rhospdirector ~] # systemctl restart openstack-nova-*

7. Install the required image RPMs:

[stack@rhospdirector ~]$ sudo yum install rhosp-director-images rhosp-director-images-ipa -y

8. Extract the images into the images directory (they will be uploaded to the director in the next step):

 [stack@rhospdirector ~]$ cd ~/images
 [stack@rhospdirector ~]$ for i in /usr/share/rhosp-director-images/overcloud-full-latest-11.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-11.0.tar; do tar -xvf $i; done

9. Copy the 'overcloud-full.qcow2' image to the KVM host and set a root password on it, so that if remote login to an overcloud node ever fails you can fall back to console login with this password. For that, first install the 'libguestfs-tools-c' package on the KVM host, which comes from the base RHEL 7 yum repo:

 [root@kvmhost ~]# yum install libguestfs-tools-c -y
 [root@kvmhost ~]# virt-customize -a /tmp/overcloud-full.qcow2 --root-password password:RedHat123!
[   0.0] Examining the guest ...
[  32.7] Setting a random seed
[  32.7] Setting the machine ID in /etc/machine-id
[  32.7] Setting passwords
[  35.2] Finishing off

Then copy it back to the images folder on the director node and upload all the components using the below commands:

[stack@rhospdirector ~]$ scp root@192.168.122.1:/tmp/overcloud-full.qcow2 /home/stack/images
    root@192.168.122.1's password:
    overcloud-full.qcow2                                                         100% 1260MB  57.3MB/s   00:21


[stack@rhospdirector ~]$ openstack overcloud image upload --image-path /home/stack/images/
[stack@rhospdirector ~]$ openstack image list
+--------------------------------------+------------------------+--------+
| ID                                   | Name                   | Status |
+--------------------------------------+------------------------+--------+
| c7f0d5c5-7b9d-4b1d-adc0-de0925f0dd15 | bm-deploy-kernel       | active |
| 67510093-4233-4033-bdad-90db400009bf | bm-deploy-ramdisk      | active |
| d6a434a5-6406-4255-9335-83d19599d9d7 | overcloud-full         | active |
| 1f26babe-85f7-4b6c-a34f-e6219868ed87 | overcloud-full-initrd  | active |
| 7f2216b7-5379-4eb0-8435-f309b7e08743 | overcloud-full-vmlinuz | active |
+--------------------------------------+------------------------+--------+

10. List the introspection images:

[stack@rhospdirector ~]$ ls -l /httpboot

11. Set the DNS server for the 'ctlplane-subnet':

[stack@rhospdirector ~]$ source stackrc
[stack@rhospdirector ~]$ openstack subnet list
[stack@rhospdirector ~]$ openstack subnet show <subnet_id>
[stack@rhospdirector ~]$ openstack subnet set --dns-nameserver 192.168.122.1 <subnet_id>


NOTE: The director node is mainly used to deploy the overcloud nodes and to apply minor and major OpenStack upgrades to them. Once the deployment is done, its resources can be reduced to 2 vCPUs and 2 GiB of memory, which is enough to keep the OS running.
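
For example, assuming the director VM is the libvirt domain named 'director' (as it appears in the 'virsh list' output further below), one way to trim it from the KVM host would be:

[root@kvmhost ~]# virsh shutdown director
[root@kvmhost ~]# virsh setmem director 2G --config
[root@kvmhost ~]# virsh setmaxmem director 2G --config
[root@kvmhost ~]# virsh setvcpus director 2 --config
[root@kvmhost ~]# virsh setvcpus director 2 --config --maximum
[root@kvmhost ~]# virsh start director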


Overcloud Node Deployment:

1. Run the following commands on the KVM host to create the node disks with the required specs:

[root@kvmhost ~]# for i in {1..3}; do qemu-img create -f qcow2 -o preallocation=metadata /var/lib/libvirt/images/overcloud-controller$i.qcow2 60G; done
[root@kvmhost ~]# for i in {1..3}; do qemu-img create -f qcow2 -o preallocation=metadata /var/lib/libvirt/images/overcloud-compute$i.qcow2 60G; done
[root@kvmhost ~]# for i in {1..3}; do qemu-img create -f qcow2 -o preallocation=metadata /var/lib/libvirt/images/overcloud-ceph$i.qcow2 60G; done

For the 3 Ceph nodes, create two more disks each: one for the OSD (80 GiB) and one for the Ceph journal (60 GiB):

[root@kvmhost ~]# for i in {1..3}; do qemu-img create -f qcow2 -o preallocation=metadata /var/lib/libvirt/images/overcloud-ceph-OSD$i.qcow2 80G; done
[root@kvmhost ~]# for i in {1..3}; do qemu-img create -f qcow2 -o preallocation=metadata /var/lib/libvirt/images/overcloud-ceph-journal$i.qcow2 60G; done


2. On my KVM host, the CPU is from the 'Nehalem' family, so the '--cpu' option in the following commands is set accordingly:

[root@kvmhost ~]# for i in {1..3}; do virt-install --name overcloud-controller$i -r 8192 --vcpus 6 --os-variant rhel7 --disk path=/var/lib/libvirt/images/overcloud-controller$i.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc --network network:provisioning --network network:external   --cpu Nehalem --dry-run --print-xml > /tmp/overcloud-controller$i.xml; virsh define --file /tmp/overcloud-controller$i.xml; done

[root@kvmhost ~]# for i in {1..3}; do virt-install --name overcloud-compute$i -r 8192 --vcpus 6 --os-variant rhel7 --disk path=/var/lib/libvirt/images/overcloud-compute$i.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc --network network:provisioning --network network:external   --cpu Nehalem --dry-run --print-xml > /tmp/overcloud-compute$i.xml; virsh define --file /tmp/overcloud-compute$i.xml; done

 [root@kvmhost ~]# for i in {1..3}; do virt-install --name overcloud-ceph$i -r 8192 --vcpus 6 --os-variant rhel7 --disk path=/var/lib/libvirt/images/overcloud-ceph$i.qcow2,device=disk,bus=virtio,format=qcow2 --disk path=/var/lib/libvirt/images/overcloud-ceph-OSD$i.qcow2,device=disk,bus=virtio,format=qcow2 --disk path=/var/lib/libvirt/images/overcloud-ceph-journal$i.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc --network network:provisioning --network network:external --cpu Nehalem --dry-run --print-xml > /tmp/overcloud-ceph$i.xml; virsh define --file /tmp/overcloud-ceph$i.xml; done


Run 'virsh list --all' to check all the available nodes:

[root@kvmhost ~]# virsh list --all
Id    Name                           State
----------------------------------------------------
1     director                       running
-     overcloud-ceph1                shut off
-     overcloud-ceph2                shut off
-     overcloud-ceph3                shut off
-     overcloud-compute1             shut off
-     overcloud-compute2             shut off
-     overcloud-compute3             shut off
-     overcloud-controller1          shut off
-     overcloud-controller2          shut off
-     overcloud-controller3          shut off


3. Now enable the nested flag for the compute nodes by following the article 'Nested KVM: the ability to run KVM on KVM'.
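
The gist of that procedure for an Intel host, as a rough sketch: check whether nesting is already enabled, enable it persistently, reload the module while no guests are running, and expose the host CPU's virtualization extensions to the compute guests (here by switching them to host-passthrough, which replaces the 'Nehalem' model used above):

[root@kvmhost ~]# cat /sys/module/kvm_intel/parameters/nested
[root@kvmhost ~]# echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm_intel.conf
[root@kvmhost ~]# modprobe -r kvm_intel && modprobe kvm_intel
[root@kvmhost ~]# for i in {1..3}; do virt-xml overcloud-compute$i --edit --cpu host-passthrough; done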

For validation of the disks attached to the Ceph nodes, you can use the following command on the KVM host:

[root@kvmhost ~]# for i in {1..3}; do virsh domblklist overcloud-ceph$i; done
Target     Source
------------------------------------------------
vda        /var/lib/libvirt/images/overcloud-ceph1.qcow2
vdb        /var/lib/libvirt/images/overcloud-ceph-OSD1.qcow2
vdc        /var/lib/libvirt/images/overcloud-ceph-journal1.qcow2

Target     Source
------------------------------------------------
vda        /var/lib/libvirt/images/overcloud-ceph2.qcow2
vdb        /var/lib/libvirt/images/overcloud-ceph-OSD2.qcow2
vdc        /var/lib/libvirt/images/overcloud-ceph-journal2.qcow2

Target     Source
------------------------------------------------
vda        /var/lib/libvirt/images/overcloud-ceph3.qcow2
vdb        /var/lib/libvirt/images/overcloud-ceph-OSD3.qcow2
vdc        /var/lib/libvirt/images/overcloud-ceph-journal3.qcow2


4. Create a 'stack' user on the KVM host and set a password:

[root@kvmhost ~]# useradd stack
[root@kvmhost ~]# echo 'Avaya123$' | passwd --stdin stack

Configure passwordless SSH from the stack user on the undercloud node to the stack user on the KVM host:

[stack@rhospdirector ~]$ ssh-copy-id -i .ssh/id_rsa.pub stack@192.168.122.1

Validate that passwordless SSH is working:

[stack@rhospdirector ~]$ ssh 'stack@192.168.122.1'
Last login: Fri Jun 29 12:32:11 2018 from 192.168.122.11
[stack@kvmhost ~]$

5. Now collect the provisioning interface's MAC address for each node and store them in files by running the following commands from the undercloud node:

[stack@rhospdirector ~]$ for i in {1..3}; do virsh -c qemu+ssh://stack@192.168.122.1/system domiflist overcloud-controller$i | awk '$3 == "provisioning" {print $5};' >> /tmp/controller.txt; done
[stack@rhospdirector ~]$ for i in {1..3}; do virsh -c qemu+ssh://stack@192.168.122.1/system domiflist overcloud-compute$i | awk '$3 == "provisioning" {print $5};' >> /tmp/compute.txt; done
[stack@rhospdirector ~]$ for i in {1..3}; do virsh -c qemu+ssh://stack@192.168.122.1/system domiflist overcloud-ceph$i | awk '$3 == "provisioning" {print $5};' >> /tmp/ceph.txt; done
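
Each of the three files should now contain exactly three MAC addresses, one per line; a quick check:

[stack@rhospdirector ~]$ wc -l /tmp/controller.txt /tmp/compute.txt /tmp/ceph.txt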

Go to the KVM host and attach a 3rd interface, connected to the 'br2' OVS bridge, to each node:

[root@kvmhost ~]# vms=`virsh  list --all  | egrep over* | awk '{print $2}'`
[root@kvmhost ~]# for i in $vms; do echo "===: $i :==="; virsh domiflist $i; done
===: overcloud-ceph1 :===
Interface  Type       Source     Model       MAC
-------------------------------------------------------
-          network    provisioning virtio      52:54:00:30:44:1d
-          network    external   virtio      52:54:00:b2:0f:bb

===: overcloud-ceph2 :===
Interface  Type       Source     Model       MAC
-------------------------------------------------------
-          network    provisioning virtio      52:54:00:cf:30:8a
-          network    external   virtio      52:54:00:c4:ed:b5

===: overcloud-ceph3 :===
Interface  Type       Source     Model       MAC
-------------------------------------------------------
-          network    provisioning virtio      52:54:00:79:43:32
-          network    external   virtio      52:54:00:cf:4e:42

===: overcloud-compute1 :===
Interface  Type       Source     Model       MAC
-------------------------------------------------------
-          network    provisioning virtio      52:54:00:e4:19:c3
-          network    external   virtio      52:54:00:a2:53:ea

===: overcloud-compute2 :===
Interface  Type       Source     Model       MAC
-------------------------------------------------------
-          network    provisioning virtio      52:54:00:dc:8c:b5
-          network    external   virtio      52:54:00:e6:ac:6e

===: overcloud-compute3 :===
Interface  Type       Source     Model       MAC
-------------------------------------------------------
-          network    provisioning virtio      52:54:00:fe:70:db
-          network    external   virtio      52:54:00:56:63:e9

===: overcloud-controller1 :===
Interface  Type       Source     Model       MAC
-------------------------------------------------------
-          network    provisioning virtio      52:54:00:8e:35:eb
-          network    external   virtio      52:54:00:8b:5d:ca

===: overcloud-controller2 :===
Interface  Type       Source     Model       MAC
-------------------------------------------------------
-          network    provisioning virtio      52:54:00:f0:d7:b6
-          network    external   virtio      52:54:00:05:30:5a

===: overcloud-controller3 :===
Interface  Type       Source     Model       MAC
-------------------------------------------------------
-          network    provisioning virtio      52:54:00:38:de:e2
-          network    external   virtio      52:54:00:91:cb:96


[root@kvmhost ~]# for i in $vms; do virsh attach-interface --domain $i --source br2 --type bridge --model virtio --config; done
Interface attached successfully

Interface attached successfully

Interface attached successfully

Interface attached successfully

Interface attached successfully

Interface attached successfully

Interface attached successfully

Interface attached successfully

Interface attached successfully

[root@kvmhost ~]# for i in $vms; do echo "===: $i :==="; virsh domiflist $i; done
===: overcloud-ceph1 :===
Interface  Type       Source     Model       MAC
-------------------------------------------------------
-          network    provisioning virtio      52:54:00:30:44:1d
-          network    external   virtio      52:54:00:b2:0f:bb
-          bridge     br2        virtio      52:54:00:3e:85:6b

===: overcloud-ceph2 :===
Interface  Type       Source     Model       MAC
-------------------------------------------------------
-          network    provisioning virtio      52:54:00:cf:30:8a
-          network    external   virtio      52:54:00:c4:ed:b5
-          bridge     br2        virtio      52:54:00:86:30:51

===: overcloud-ceph3 :===
Interface  Type       Source     Model       MAC
-------------------------------------------------------
-          network    provisioning virtio      52:54:00:79:43:32
-          network    external   virtio      52:54:00:cf:4e:42
-          bridge     br2        virtio      52:54:00:ef:7b:e8

===: overcloud-compute1 :===
Interface  Type       Source     Model       MAC
-------------------------------------------------------
-          network    provisioning virtio      52:54:00:e4:19:c3
-          network    external   virtio      52:54:00:a2:53:ea
-          bridge     br2        virtio      52:54:00:e1:23:4e

===: overcloud-compute2 :===
Interface  Type       Source     Model       MAC
-------------------------------------------------------
-          network    provisioning virtio      52:54:00:dc:8c:b5
-          network    external   virtio      52:54:00:e6:ac:6e
-          bridge     br2        virtio      52:54:00:1d:80:26

===: overcloud-compute3 :===
Interface  Type       Source     Model       MAC
-------------------------------------------------------
-          network    provisioning virtio      52:54:00:fe:70:db
-          network    external   virtio      52:54:00:56:63:e9
-          bridge     br2        virtio      52:54:00:34:83:97

===: overcloud-controller1 :===
Interface  Type       Source     Model       MAC
-------------------------------------------------------
-          network    provisioning virtio      52:54:00:8e:35:eb
-          network    external   virtio      52:54:00:8b:5d:ca
-          bridge     br2        virtio      52:54:00:e9:d0:4c

===: overcloud-controller2 :===
Interface  Type       Source     Model       MAC
-------------------------------------------------------
-          network    provisioning virtio      52:54:00:f0:d7:b6
-          network    external   virtio      52:54:00:05:30:5a
-          bridge     br2        virtio      52:54:00:43:31:bd

===: overcloud-controller3 :===
Interface  Type       Source     Model       MAC
-------------------------------------------------------
-          network    provisioning virtio      52:54:00:38:de:e2
-          network    external   virtio      52:54:00:91:cb:96
-          bridge     br2        virtio      52:54:00:88:3d:22

[root@kvmhost ~]# for i in $vms; do virt-xml $i --edit 3 --network virtualport_type=openvswitch; done
Domain 'overcloud-ceph1' defined successfully.
Domain 'overcloud-ceph2' defined successfully.
Domain 'overcloud-ceph3' defined successfully.
Domain 'overcloud-compute1' defined successfully.
Domain 'overcloud-compute2' defined successfully.
Domain 'overcloud-compute3' defined successfully.
Domain 'overcloud-controller1' defined successfully.
Domain 'overcloud-controller2' defined successfully.
Domain 'overcloud-controller3' defined successfully.
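
To confirm the change, the third interface of each domain should now carry an openvswitch virtualport element, which can be spot-checked like this:

[root@kvmhost ~]# virsh dumpxml overcloud-ceph1 | grep -A1 virtualport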

6. Create the 'instackenv.json' file with all the required provisioning details:

[stack@rhospdirector ~]$ jq . << EOF > ~/instackenv.json
{
  "ssh-user": "stack",
  "ssh-key": "$(cat ~/.ssh/id_rsa)",
  "power_manager": "nova.virt.baremetal.virtual_power_driver.VirtualPowerManager",
  "host-ip": "192.168.122.1",
  "arch": "x86_64",
  "nodes": [
    {
      "pm_addr": "192.168.122.1",
      "pm_password": "$(cat ~/.ssh/id_rsa)",
      "pm_type": "pxe_ssh",
      "name":"Controller01",
      "mac": [
        "$(sed -n 1p /tmp/controller.txt)"
      ],
      "cpu": "6",
      "memory": "8192",
      "disk": "60",
      "arch": "x86_64",
      "pm_user": "stack"
    },
    {
      "pm_addr": "192.168.122.1",
      "pm_password": "$(cat ~/.ssh/id_rsa)",
      "pm_type": "pxe_ssh",
      "name":"Controller02",
      "mac": [
        "$(sed -n 2p /tmp/controller.txt)"
      ],
      "cpu": "6",
      "memory": "8192",
      "disk": "60",
      "arch": "x86_64",
      "pm_user": "stack"
    },
    {
      "pm_addr": "192.168.122.1",
      "pm_password": "$(cat ~/.ssh/id_rsa)",
      "pm_type": "pxe_ssh",
      "name":"Controller03",
      "mac": [
        "$(sed -n 3p /tmp/controller.txt)"
      ],
      "cpu": "6",
      "memory": "8192",
      "disk": "60",
      "arch": "x86_64",
      "pm_user": "stack"
    },
    {
      "pm_addr": "192.168.122.1",
      "pm_password": "$(cat ~/.ssh/id_rsa)",
      "pm_type": "pxe_ssh",
      "name":"Compute01",
      "mac": [
        "$(sed -n 1p /tmp/compute.txt)"
      ],
      "cpu": "6",
      "memory": "8192",
      "disk": "60",
      "arch": "x86_64",
      "pm_user": "stack"
    },
    {
      "pm_addr": "192.168.122.1",
      "pm_password": "$(cat ~/.ssh/id_rsa)",
      "pm_type": "pxe_ssh",
      "name":"Compute02",
      "mac": [
        "$(sed -n 2p /tmp/compute.txt)"
      ],
      "cpu": "6",
      "memory": "8192",
      "disk": "60",
      "arch": "x86_64",
      "pm_user": "stack"
    },
    {
      "pm_addr": "192.168.122.1",
      "pm_password": "$(cat ~/.ssh/id_rsa)",
      "pm_type": "pxe_ssh",
      "name":"Compute03",
      "mac": [
        "$(sed -n 3p /tmp/compute.txt)"
      ],
      "cpu": "6",
      "memory": "8192",
      "disk": "60",
      "arch": "x86_64",
      "pm_user": "stack"
    },
    {
      "pm_addr": "192.168.122.1",
      "pm_password": "$(cat ~/.ssh/id_rsa)",
      "pm_type": "pxe_ssh",
      "name":"Ceph01",
      "mac": [
        "$(sed -n 1p /tmp/ceph.txt)"
      ],
      "cpu": "6",
      "memory": "8192",
      "disk": "60",
      "arch": "x86_64",
      "pm_user": "stack"
    },
    {
      "pm_addr": "192.168.122.1",
      "pm_password": "$(cat ~/.ssh/id_rsa)",
      "pm_type": "pxe_ssh",
      "name":"Ceph02",
      "mac": [
        "$(sed -n 2p /tmp/ceph.txt)"
      ],
      "cpu": "6",
      "memory": "8192",
      "disk": "60",
      "arch": "x86_64",
      "pm_user": "stack"
    },
    {
      "pm_addr": "192.168.122.1",
      "pm_password": "$(cat ~/.ssh/id_rsa)",
      "pm_type": "pxe_ssh",
      "name":"Ceph03",
      "mac": [
        "$(sed -n 3p /tmp/ceph.txt)"
      ],
      "cpu": "6",
      "memory": "8192",
      "disk": "60",
      "arch": "x86_64",
      "pm_user": "stack"
    }
  ]
}
EOF

This will generate 'instackenv.json' with all the required information. Here is a sample entry for one node:

[stack@rhospdirector ~] $ cat instackenv.json
{
  "nodes": [
    {
      "pm_user": "stack",
      "arch": "x86_64",
      "pm_addr": "192.168.122.1",
      "pm_password": "-----BEGIN RSA PRIVATE KEY-----\n <actual_key> \n-----END RSA PRIVATE KEY-----",
      "pm_type": "pxe_ssh",
      "name": "Controller01",
      "mac": [
        "52:54:00:b6:8b:00"
      ],
      "cpu": "6",
      "memory": "8192",
      "disk": "60"
    },
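
Empty MAC entries are the most common mistake here, so it is worth eyeballing the generated file before importing it; since jq is already in use, something like this works:

[stack@rhospdirector ~]$ jq '.nodes[] | {name: .name, mac: .mac}' ~/instackenv.json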

7. Load all the nodes into the director's inventory:

[stack@rhospdirector ~]$ openstack overcloud node import ~/instackenv.json

8. Check connectivity and pull the hardware info by introspecting the nodes:

[stack@rhospdirector ~]$ openstack overcloud node introspect --all-manageable --provide

Once the above steps are done, you can see that all the nodes are powered off but in the 'available' state by running the below command:

[stack@rhospdirector ~]$ openstack baremetal node list

Now it is time to set the correct profiles so that the respective packages can be deployed on each node:

[stack@rhospdirector ~]$ openstack flavor list

For Controller nodes:
[stack@rhospdirector ~]$ openstack baremetal node set --property capabilities='profile:control,boot_option:local' <node_uuid>

For Compute nodes:
[stack@rhospdirector ~]$ openstack baremetal node set --property capabilities='profile:compute,boot_option:local' <node_uuid>

For Ceph nodes:
[stack@rhospdirector ~]$ openstack baremetal node set --property capabilities='profile:ceph-storage,boot_option:local' <node_uuid>
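
Since the node names were set in instackenv.json, these commands also accept the node name in place of the UUID; as a sketch, the controller assignment could be looped like this (node names as defined earlier):

[stack@rhospdirector ~]$ for n in Controller01 Controller02 Controller03; do openstack baremetal node set --property capabilities='profile:control,boot_option:local' $n; done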

Now the below command gives an updated view, with values populated in the 'Current Profile' column:

[stack@rhospdirector ~]$ openstack overcloud profiles list


Prepare all the required templates for Overcloud:

1. Create the 'nodeinfo.yaml' file under the /home/stack/templates folder to define the exact node counts and flavors for the deployment:

[stack@rhospdirector ~]$ cd ~/templates
[stack@rhospdirector ~]$ cat nodeinfo.yaml
parameter_defaults:
  OvercloudControlFlavor: control
  OvercloudComputeFlavor: compute
  OvercloudCephStorageFlavor: ceph-storage
  ControllerCount: 3
  ComputeCount: 3
  CephStorageCount: 3

2. Collect the system UUIDs of the Ceph nodes for their disk mapping:

[stack@rhospdirector ~]$ openstack  baremetal node list

Collect the UUID for each Ceph node and execute the below commands:

[stack@rhospdirector ~]$ openstack baremetal introspection data save <UUID_Ceph01> | jq .extra.system.product.uuid

[stack@rhospdirector ~]$ openstack baremetal introspection data save <UUID_Ceph02> | jq .extra.system.product.uuid

[stack@rhospdirector ~]$ openstack baremetal introspection data save <UUID_Ceph03> | jq .extra.system.product.uuid

3. Create the 'ceph_node_disk.yaml' file under the '/home/stack/templates' location with the following configuration:

resource_registry:
  OS::TripleO::CephStorageExtraConfigPre: /home/stack/templates/openstack-tripleo-heat-templates/puppet/extraconfig/pre_deploy/per_node.yaml

parameter_defaults:
  NodeDataLookup: |
   {"<SYSTEM_PRODUCT_UUID_Ceph01>":
     {"ceph::profile::params::osds":
       {"/dev/vdb": {"journal": "/dev/vdc"}
       }
     },
    "<SYSTEM_PRODUCT_UUID_Ceph02>":
     {"ceph::profile::params::osds":
       {"/dev/vdb": {"journal": "/dev/vdc"}
       }
     },
    "<SYSTEM_PRODUCT_UUID_Ceph03>":
     {"ceph::profile::params::osds":
       {"/dev/vdb": {"journal": "/dev/vdc"}
       }
     }
   }

4. Go to the 'templates' folder in the stack user's home directory and copy the default tripleo templates there:

[stack@rhospdirector ~]$ cd ~/templates
[stack@rhospdirector ~]$ cp -r /usr/share/openstack-tripleo-heat-templates .
[stack@rhospdirector ~]$ cd openstack-tripleo-heat-templates
[stack@rhospdirector ~]$ pwd
/home/stack/templates/openstack-tripleo-heat-templates
[stack@rhospdirector ~]$ ll
total 168
-rw-r--r--. 1 stack stack  1034 Jul  4 16:33 all-nodes-validation.yaml
-rw-r--r--. 1 stack stack   578 Jul  4 16:33 bootstrap-config.yaml
-rw-r--r--. 1 stack stack 25240 Jul  4 16:33 capabilities-map.yaml
drwxr-xr-x. 6 stack stack    90 Jul  4 16:33 ci
-rw-r--r--. 1 stack stack   676 Jul  4 16:33 default_passwords.yaml
drwxr-xr-x. 3 stack stack  4096 Jul  4 16:33 deployed-server
drwxr-xr-x. 4 stack stack   106 Jul  4 16:33 docker
drwxr-xr-x. 5 stack stack  4096 Aug 13 16:42 environments
drwxr-xr-x. 8 stack stack   113 Jul  4 16:33 extraconfig
drwxr-xr-x. 2 stack stack   273 Jul  6 17:49 firstboot
-rw-r--r--. 1 stack stack   958 Jul  4 16:33 hosts-config.yaml
-rw-r--r--. 1 stack stack   325 Jul  4 16:33 j2_excludes.yaml
-rw-r--r--. 1 stack stack  2227 Jul  4 16:33 net-config-bond.yaml
-rw-r--r--. 1 stack stack  1626 Jul  4 16:33 net-config-bridge.yaml
-rw-r--r--. 1 stack stack  2379 Jul  4 16:33 net-config-linux-bridge.yaml
-rw-r--r--. 1 stack stack  1239 Jul  4 16:33 net-config-noop.yaml
-rw-r--r--. 1 stack stack  2883 Jul  4 16:33 net-config-static-bridge-with-external-dhcp.yaml
-rw-r--r--. 1 stack stack  2891 Jul  4 16:33 net-config-static-bridge.yaml
-rw-r--r--. 1 stack stack  2628 Jul  4 16:33 net-config-static.yaml
-rw-r--r--. 1 stack stack  2423 Jul  4 16:33 net-config-undercloud.yaml
drwxr-xr-x. 6 stack stack  4096 Jul  4 16:33 network
-rw-r--r--. 1 stack stack 25885 Jul  4 16:33 overcloud.j2.yaml
-rw-r--r--. 1 stack stack 16537 Jul  4 16:33 overcloud-resource-registry-puppet.j2.yaml
drwxr-xr-x. 5 stack stack  4096 Jul  4 16:33 puppet
-rw-r--r--. 1 stack stack  1372 Jul  4 16:33 roles_data_undercloud.yaml
-rw-r--r--. 1 stack stack  8433 Jul  4 16:33 roles_data.yaml
drwxr-xr-x. 2 stack stack    29 Jul  4 16:33 scripts
drwxr-xr-x. 2 stack stack  4096 Jul  4 16:33 tools
drwxr-xr-x. 2 stack stack    26 Jul  4 16:33 validation-scripts

5. Important configurations in the template files:

Content of 'network-isolation.yaml':

[stack@rhospdirector ~]$ cat /home/stack/templates/openstack-tripleo-heat-templates/environments/network-isolation.yaml
# Enable the creation of Neutron networks for isolated Overcloud
# traffic and configure each role to assign ports (related
# to that role) on these networks.
resource_registry:
  OS::TripleO::Network::External: ../network/external.yaml
  OS::TripleO::Network::InternalApi: ../network/internal_api.yaml
  OS::TripleO::Network::StorageMgmt: ../network/storage_mgmt.yaml
  OS::TripleO::Network::Storage: ../network/storage.yaml
  OS::TripleO::Network::Tenant: ../network/tenant.yaml
  # Management network is optional and disabled by default.
  # To enable it, include environments/network-management.yaml
  #OS::TripleO::Network::Management: ../network/management.yaml

  # Port assignments for the VIPs
  OS::TripleO::Network::Ports::ExternalVipPort: ../network/ports/external.yaml
  OS::TripleO::Network::Ports::InternalApiVipPort: ../network/ports/internal_api.yaml
  OS::TripleO::Network::Ports::StorageVipPort: ../network/ports/storage.yaml
  OS::TripleO::Network::Ports::StorageMgmtVipPort: ../network/ports/storage_mgmt.yaml
  OS::TripleO::Network::Ports::RedisVipPort: ../network/ports/vip.yaml

  # Port assignments for the controller role
  OS::TripleO::Controller::Ports::ExternalPort: ../network/ports/external.yaml
  OS::TripleO::Controller::Ports::InternalApiPort: ../network/ports/internal_api.yaml
  OS::TripleO::Controller::Ports::StoragePort: ../network/ports/storage.yaml
  OS::TripleO::Controller::Ports::StorageMgmtPort: ../network/ports/storage_mgmt.yaml
  OS::TripleO::Controller::Ports::TenantPort: ../network/ports/tenant.yaml
  #OS::TripleO::Controller::Ports::ManagementPort: ../network/ports/management.yaml

  # Port assignments for the compute role
  OS::TripleO::Compute::Ports::ExternalPort: ../network/ports/external.yaml
  OS::TripleO::Compute::Ports::InternalApiPort: ../network/ports/internal_api.yaml
  OS::TripleO::Compute::Ports::StoragePort: ../network/ports/storage.yaml
  OS::TripleO::Compute::Ports::StorageMgmtPort: ../network/ports/noop.yaml
  OS::TripleO::Compute::Ports::TenantPort: ../network/ports/tenant.yaml
  #OS::TripleO::Compute::Ports::ManagementPort: ../network/ports/management.yaml

  # Port assignments for the ceph storage role
  OS::TripleO::CephStorage::Ports::ExternalPort: ../network/ports/external.yaml
  OS::TripleO::CephStorage::Ports::InternalApiPort: ../network/ports/noop.yaml
  OS::TripleO::CephStorage::Ports::StoragePort: ../network/ports/storage.yaml
  OS::TripleO::CephStorage::Ports::StorageMgmtPort: ../network/ports/storage_mgmt.yaml
  OS::TripleO::CephStorage::Ports::TenantPort: ../network/ports/noop.yaml
  #OS::TripleO::CephStorage::Ports::ManagementPort: ../network/ports/management.yaml


Content of 'network-environment.yaml' file:

[stack@rhospdirector ~]$ cat /home/stack/templates/openstack-tripleo-heat-templates/environments/network-environment.yaml
#This file is an example of an environment file for defining the isolated
#networks and related parameters.
resource_registry:
  # Network Interface templates to use (these files must exist)
  #OS::TripleO::BlockStorage::Net::SoftwareConfig:
  #  ../network/config/multiple-nics/cinder-storage.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/openstack-tripleo-heat-templates/network/config/multiple-nics/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/openstack-tripleo-heat-templates/network/config/multiple-nics/controller.yaml
  #OS::TripleO::ObjectStorage::Net::SoftwareConfig:
  #  ../network/config/multiple-nics/swift-storage.yaml
  OS::TripleO::CephStorage::Net::SoftwareConfig: /home/stack/templates/openstack-tripleo-heat-templates/network/config/multiple-nics/ceph-storage.yaml

parameter_defaults:
  # This section is where deployment-specific configuration is done
  # CIDR subnet mask length for provisioning network
  ControlPlaneSubnetCidr: '24'
  # Gateway router for the provisioning network (or Undercloud IP)
  ControlPlaneDefaultRoute: 192.168.140.11
  EC2MetadataIp: 192.168.140.11  # Generally the IP of the Undercloud
  # Customize the IP subnets to match the local environment
  InternalApiNetCidr: 192.0.8.0/24
  StorageNetCidr: 192.0.5.0/24
  StorageMgmtNetCidr: 192.0.6.0/24
  TenantNetCidr: 192.0.7.0/24
  ExternalNetCidr: 192.168.130.0/24
  # Customize the VLAN IDs to match the local environment
  InternalApiNetworkVlanID: 70
  StorageNetworkVlanID: 40
  StorageMgmtNetworkVlanID: 50
  TenantNetworkVlanID: 60
  #ExternalNetworkVlanID: 10
  # Customize the IP ranges on each network to use for static IPs and VIPs
  InternalApiAllocationPools: [{'start': '192.0.8.10', 'end': '192.0.8.200'}]
  StorageAllocationPools: [{'start': '192.0.5.10', 'end': '192.0.5.200'}]
  StorageMgmtAllocationPools: [{'start': '192.0.6.10', 'end': '192.0.6.200'}]
  TenantAllocationPools: [{'start': '192.0.7.10', 'end': '192.0.7.200'}]
  # Leave room if the external network is also used for floating IPs
  ExternalAllocationPools: [{'start': '192.168.130.10', 'end': '192.168.130.50'}]
  # Gateway router for the external network
  ExternalInterfaceDefaultRoute: 192.168.130.1
  ### Manually added by uskar@avaya.com
  #PublicVirtualFixedIPs: [{'ip_address':'192.168.130.10'}]
  #
  # Uncomment if using the Management Network (see network-management.yaml)
  # ManagementNetCidr: 10.0.1.0/24
  # ManagementAllocationPools: [{'start': '10.0.1.10', 'end': '10.0.1.50'}]
  # Use either this parameter or ControlPlaneDefaultRoute in the NIC templates
  # ManagementInterfaceDefaultRoute: 10.0.1.1
  # Define the DNS servers (maximum 2) for the overcloud nodes
  DnsServers: ["8.8.8.8","192.168.130.1"]
  # List of Neutron network types for tenant networks (will be used in order)
  NeutronNetworkType: 'vxlan,vlan,flat'
  # The tunnel type for the tenant network (vxlan or gre). Set to '' to disable tunneling.
  NeutronTunnelTypes: 'vxlan'
  # Neutron VLAN ranges per network, for example 'datacentre:1:499,tenant:500:1000':
  NeutronNetworkVLANRanges: 'datacentre:1:1000'
  # Customize bonding options, e.g. "mode=4 lacp_rate=1 updelay=1000 miimon=100"
  # for Linux bonds w/LACP, or "bond_mode=active-backup" for OVS active/backup.
  BondInterfaceOvsOptions: "bond_mode=active-backup"


Content of 'storage-environment.yaml' file:

[stack@rhospdirector ~]$ cat /home/stack/templates/openstack-tripleo-heat-templates/environments/storage-environment.yaml
## A Heat environment file which can be used to set up storage
## backends. Defaults to Ceph used as a backend for Cinder, Glance and
## Nova ephemeral storage.
resource_registry:
  OS::TripleO::Services::CephMon: ../puppet/services/ceph-mon.yaml
  OS::TripleO::Services::CephOSD: ../puppet/services/ceph-osd.yaml
  OS::TripleO::Services::CephClient: ../puppet/services/ceph-client.yaml
  OS::TripleO::NodeUserData: /home/stack/templates/openstack-tripleo-heat-templates/firstboot/wipe-disks.yaml

parameter_defaults:

  #### BACKEND SELECTION ####

  ## Whether to enable iscsi backend for Cinder.
  CinderEnableIscsiBackend: false
  ## Whether to enable rbd (Ceph) backend for Cinder.
  CinderEnableRbdBackend: true
  ## Cinder Backup backend can be either 'ceph' or 'swift'.
  CinderBackupBackend: ceph
  ## Whether to enable NFS backend for Cinder.
  # CinderEnableNfsBackend: false
  ## Whether to enable rbd (Ceph) backend for Nova ephemeral storage.
  NovaEnableRbdBackend: true
  ## Glance backend can be either 'rbd' (Ceph), 'swift' or 'file'.
  GlanceBackend: rbd
  ## Gnocchi backend can be either 'rbd' (Ceph), 'swift' or 'file'.
  GnocchiBackend: rbd


  #### CINDER NFS SETTINGS ####

  ## NFS mount options
  # CinderNfsMountOptions: ''
  ## NFS mount point, e.g. '192.168.122.1:/export/cinder'
  # CinderNfsServers: ''


  #### GLANCE NFS SETTINGS ####

  ## Make sure to set `GlanceBackend: file` when enabling NFS
  ##
  ## Whether to make Glance 'file' backend a NFS mount
  # GlanceNfsEnabled: false
  ## NFS share for image storage, e.g. '192.168.122.1:/export/glance'
  ## (If using IPv6, use both double- and single-quotes,
  ## e.g. "'[fdd0::1]:/export/glance'")
  # GlanceNfsShare: ''
  ## Mount options for the NFS image storage mount point
  # GlanceNfsOptions: 'intr,context=system_u:object_r:glance_var_lib_t:s0'


  #### CEPH SETTINGS ####

  ## When deploying Ceph Nodes through the oscplugin CLI, the following
  ## parameters are set automatically by the CLI. When deploying via
  ## heat stack-create or ceph on the controller nodes only,
  ## they need to be provided manually.

  ## Number of Ceph storage nodes to deploy
  # CephStorageCount: 0
  ## Ceph FSID, e.g. '4b5c8c0a-ff60-454b-a1b4-9747aa737d19'
  # CephClusterFSID: ''
  ## Ceph monitor key, e.g. 'AQC+Ox1VmEr3BxAALZejqeHj50Nj6wJDvs96OQ=='
  # CephMonKey: ''
  ## Ceph admin key, e.g. 'AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ=='
  # CephAdminKey: ''
  ## Ceph client key, e.g 'AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw=='
  # CephClientKey: ''


Content of 'wipe-disks.yaml' file:

[stack@rhospdirector ~]$ cat /home/stack/templates/openstack-tripleo-heat-templates/firstboot/wipe-disks.yaml
heat_template_version: 2014-10-16
description: >
  Wipe and convert all disks to GPT (except the disk containing the root file system)
resources:
  userdata:
    type: OS::Heat::MultipartMime
    properties:
      parts:
      - config: {get_resource: wipe_disk}
  wipe_disk:
    type: OS::Heat::SoftwareConfig
    properties:
      config: {get_file: wipe-disk.sh}
outputs:
  OS::stack_id:
    value: {get_resource: userdata}

Content of 'wipe-disk.sh' file:

[stack@rhospdirector ~]$ cat /home/stack/templates/openstack-tripleo-heat-templates/firstboot/wipe-disk.sh
#!/bin/bash
if [[ `hostname` = *"ceph"* ]]
then
  echo "Number of disks detected: $(lsblk -no NAME,TYPE,MOUNTPOINT | grep "disk" | awk '{print $1}' | wc -l)"
  for DEVICE in `lsblk -no NAME,TYPE,MOUNTPOINT | grep "disk" | awk '{print $1}'`
  do
    ROOTFOUND=0
    echo "Checking /dev/$DEVICE..."
    echo "Number of partitions on /dev/$DEVICE: $(expr $(lsblk -n /dev/$DEVICE | awk '{print $7}' | wc -l) - 1)"
    for MOUNTS in `lsblk -n /dev/$DEVICE | awk '{print $7}'`
    do
      if [ "$MOUNTS" = "/" ]
      then
        ROOTFOUND=1
      fi
    done
    if [ $ROOTFOUND = 0 ]
    then
      echo "Root not found in /dev/${DEVICE}"
      echo "Wiping disk /dev/${DEVICE}"
      sgdisk -Z /dev/${DEVICE}
      sgdisk -g /dev/${DEVICE}
    else
      echo "Root found in /dev/${DEVICE}"
    fi
  done
fi

This file should have executable permission:

[stack@rhospdirector ~]$ sudo chmod 755 /home/stack/templates/openstack-tripleo-heat-templates/firstboot/wipe-disk.sh

Content of 'multiple-nics/compute.yaml' file:

[stack@rhospdirector ~]$ cat /home/stack/templates/openstack-tripleo-heat-templates/network/config/multiple-nics/compute.yaml
heat_template_version: ocata
description: >
  Software Config to drive os-net-config to configure multiple interfaces for the compute role.
parameters:
  ControlPlaneIp:
    default: ''
    description: IP address/subnet on the ctlplane network
    type: string
  ExternalIpSubnet:
    default: ''
    description: IP address/subnet on the external network
    type: string
  InternalApiIpSubnet:
    default: ''
    description: IP address/subnet on the internal API network
    type: string
  StorageIpSubnet:
    default: ''
    description: IP address/subnet on the storage network
    type: string
  StorageMgmtIpSubnet:
    default: ''
    description: IP address/subnet on the storage mgmt network
    type: string
  TenantIpSubnet:
    default: ''
    description: IP address/subnet on the tenant network
    type: string
  ManagementIpSubnet: # Only populated when including environments/network-management.yaml
    default: ''
    description: IP address/subnet on the management network
    type: string
  ExternalNetworkVlanID:
    default: 10
    description: Vlan ID for the external network traffic.
    type: number
  InternalApiNetworkVlanID:
    default: 20
    description: Vlan ID for the internal_api network traffic.
    type: number
  StorageNetworkVlanID:
    default: 30
    description: Vlan ID for the storage network traffic.
    type: number
  StorageMgmtNetworkVlanID:
    default: 40
    description: Vlan ID for the storage mgmt network traffic.
    type: number
  TenantNetworkVlanID:
    default: 50
    description: Vlan ID for the tenant network traffic.
    type: number
  ManagementNetworkVlanID:
    default: 60
    description: Vlan ID for the management network traffic.
    type: number
  ControlPlaneSubnetCidr: # Override this via parameter_defaults
    default: '24'
    description: The subnet CIDR of the control plane network.
    type: string
  ControlPlaneDefaultRoute: # Override this via parameter_defaults
    description: The default route of the control plane network.
    type: string
  ExternalInterfaceDefaultRoute: # Not used by default in this template
    default: 10.0.0.1
    description: The default route of the external network.
    type: string
  ManagementInterfaceDefaultRoute: # Commented out by default in this template
    default: unset
    description: The default route of the management network.
    type: string
  DnsServers: # Override this via parameter_defaults
    default: []
    description: A list of DNS servers (2 max for some implementations) that will be added to resolv.conf.
    type: comma_delimited_list
  EC2MetadataIp: # Override this via parameter_defaults
    description: The IP address of the EC2 metadata server.
    type: string
resources:
  OsNetConfigImpl:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template:
            get_file: ../../scripts/run-os-net-config.sh
          params:
            $network_config:
              network_config:
              - type: interface
                name: nic1
                use_dhcp: false
                addresses:
                - ip_netmask:
                    list_join:
                    - /
                    - - get_param: ControlPlaneIp
                      - get_param: ControlPlaneSubnetCidr
                routes:
                - ip_netmask: 169.254.169.254/32
                  next_hop:
                    get_param: EC2MetadataIp
              - type: ovs_bridge
                name: bridge_name
                use_dhcp: false
                dns_servers:
                  get_param: DnsServers
                addresses:
                - ip_netmask:
                    get_param: ExternalIpSubnet
                routes:
                - default: true
                  next_hop:
                    get_param: ExternalInterfaceDefaultRoute
                members:
                - type: interface
                  name: nic2
                  primary: true
              - type: ovs_bridge
                name: br-isolated
                use_dhcp: false
                members:
                - type: interface
                  name: nic3
                  primary: true
                - type: vlan
                  vlan_id:
                    get_param: InternalApiNetworkVlanID
                  addresses:
                  - ip_netmask:
                      get_param: InternalApiIpSubnet
                - type: vlan
                  vlan_id:
                    get_param: StorageNetworkVlanID
                  addresses:
                  - ip_netmask:
                      get_param: StorageIpSubnet
                - type: vlan
                  vlan_id:
                    get_param: TenantNetworkVlanID
                  addresses:
                  - ip_netmask:
                      get_param: TenantIpSubnet
            # If setting default route on the Management interface, comment
            # out the default route on the Control Plane.
            #-
            #  type: interface
            #  name: nic7
            #  use_dhcp: false
            #  addresses:
            #    -
            #      ip_netmask: {get_param: ManagementIpSubnet}
            #  routes:
            #    -
            #      default: true
            #      next_hop: {get_param: ManagementInterfaceDefaultRoute}
outputs:
  OS::stack_id:
    description: The OsNetConfigImpl resource.
    value:
      get_resource: OsNetConfigImpl

Content of 'multiple-nics/controller.yaml' file:

[stack@rhospdirector ~]$ cat /home/stack/templates/openstack-tripleo-heat-templates/network/config/multiple-nics/controller.yaml
heat_template_version: ocata
description: >
  Software Config to drive os-net-config to configure VLANs for the controller role.
parameters:
  ControlPlaneIp:
    default: ''
    description: IP address/subnet on the ctlplane network
    type: string
  ExternalIpSubnet:
    default: ''
    description: IP address/subnet on the external network
    type: string
  InternalApiIpSubnet:
    default: ''
    description: IP address/subnet on the internal API network
    type: string
  StorageIpSubnet:
    default: ''
    description: IP address/subnet on the storage network
    type: string
  StorageMgmtIpSubnet:
    default: ''
    description: IP address/subnet on the storage mgmt network
    type: string
  TenantIpSubnet:
    default: ''
    description: IP address/subnet on the tenant network
    type: string
  ManagementIpSubnet: # Only populated when including environments/network-management.yaml
    default: ''
    description: IP address/subnet on the management network
    type: string
  ExternalNetworkVlanID:
    default: 10
    description: Vlan ID for the external network traffic.
    type: number
  InternalApiNetworkVlanID:
    default: 20
    description: Vlan ID for the internal_api network traffic.
    type: number
  StorageNetworkVlanID:
    default: 30
    description: Vlan ID for the storage network traffic.
    type: number
  StorageMgmtNetworkVlanID:
    default: 40
    description: Vlan ID for the storage mgmt network traffic.
    type: number
  TenantNetworkVlanID:
    default: 50
    description: Vlan ID for the tenant network traffic.
    type: number
  ManagementNetworkVlanID:
    default: 60
    description: Vlan ID for the management network traffic.
    type: number
  ControlPlaneDefaultRoute: # Override this via parameter_defaults
    description: The default route of the control plane network.
    type: string
  ExternalInterfaceDefaultRoute:
    default: 10.0.0.1
    description: default route for the external network
    type: string
  ManagementInterfaceDefaultRoute: # Commented out by default in this template
    default: unset
    description: The default route of the management network.
    type: string
  ControlPlaneSubnetCidr: # Override this via parameter_defaults
    default: '24'
    description: The subnet CIDR of the control plane network.
    type: string
  DnsServers: # Override this via parameter_defaults
    default: []
    description: A list of DNS servers (2 max for some implementations) that will be added to resolv.conf.
    type: comma_delimited_list
  EC2MetadataIp: # Override this via parameter_defaults
    description: The IP address of the EC2 metadata server.
    type: string
resources:
  OsNetConfigImpl:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template:
            get_file: ../../scripts/run-os-net-config.sh
          params:
            $network_config:
              network_config:
              - type: interface
                name: nic1
                use_dhcp: false
                addresses:
                - ip_netmask:
                    list_join:
                    - /
                    - - get_param: ControlPlaneIp
                      - get_param: ControlPlaneSubnetCidr
                routes:
                - ip_netmask: 169.254.169.254/32
                  next_hop:
                    get_param: EC2MetadataIp
              - type: ovs_bridge
                name: bridge_name
                use_dhcp: false
                dns_servers:
                  get_param: DnsServers
                addresses:
                - ip_netmask:
                    get_param: ExternalIpSubnet
                routes:
                - default: true
                  next_hop:
                    get_param: ExternalInterfaceDefaultRoute
                members:
                - type: interface
                  name: nic2
                  # force the MAC address of the bridge to this interface
                  primary: true
              - type: ovs_bridge
                name: br-isolated
                use_dhcp: false
                members:
                - type: interface
                  name: nic3
                  primary: true
                - type: vlan
                  vlan_id:
                    get_param: InternalApiNetworkVlanID
                  addresses:
                  - ip_netmask:
                      get_param: InternalApiIpSubnet
                - type: vlan
                  vlan_id:
                    get_param: StorageNetworkVlanID
                  addresses:
                  - ip_netmask:
                      get_param: StorageIpSubnet
                - type: vlan
                  vlan_id:
                    get_param: StorageMgmtNetworkVlanID
                  addresses:
                  - ip_netmask:
                      get_param: StorageMgmtIpSubnet
                - type: vlan
                  vlan_id:
                    get_param: TenantNetworkVlanID
                  addresses:
                  - ip_netmask:
                      get_param: TenantIpSubnet
                # Uncomment when including environments/network-management.yaml
                # If setting default route on the Management interface, comment
                # out the default route on the External interface. This will
                # make the External API unreachable from remote subnets.
                #-
                #  type: vlan
                #  vlan_id: {get_param: ManagementNetworkVlanID}
                #  addresses:
                #    -
                #      ip_netmask: {get_param: ManagementIpSubnet}
                #  routes:
                #    -
                #      default: true
                #      next_hop: {get_param: ManagementInterfaceDefaultRoute}
outputs:
  OS::stack_id:
    description: The OsNetConfigImpl resource.
    value:
      get_resource: OsNetConfigImpl

Content of 'multiple-nics/ceph-storage.yaml' file:

[stack@rhospdirector ~]$ cat /home/stack/templates/openstack-tripleo-heat-templates/network/config/multiple-nics/ceph-storage.yaml
heat_template_version: ocata
description: >
  Software Config to drive os-net-config to configure multiple interfaces for the ceph role.
parameters:
  ControlPlaneIp:
    default: ''
    description: IP address/subnet on the ctlplane network
    type: string
  ExternalIpSubnet:
    default: ''
    description: IP address/subnet on the external network
    type: string
  InternalApiIpSubnet:
    default: ''
    description: IP address/subnet on the internal API network
    type: string
  StorageIpSubnet:
    default: ''
    description: IP address/subnet on the storage network
    type: string
  StorageMgmtIpSubnet:
    default: ''
    description: IP address/subnet on the storage mgmt network
    type: string
  TenantIpSubnet:
    default: ''
    description: IP address/subnet on the tenant network
    type: string
  ManagementIpSubnet: # Only populated when including environments/network-management.yaml
    default: ''
    description: IP address/subnet on the management network
    type: string
  ExternalNetworkVlanID:
    default: 10
    description: Vlan ID for the external network traffic.
    type: number
  InternalApiNetworkVlanID:
    default: 20
    description: Vlan ID for the internal_api network traffic.
    type: number
  StorageNetworkVlanID:
    default: 30
    description: Vlan ID for the storage network traffic.
    type: number
  StorageMgmtNetworkVlanID:
    default: 40
    description: Vlan ID for the storage mgmt network traffic.
    type: number
  TenantNetworkVlanID:
    default: 50
    description: Vlan ID for the tenant network traffic.
    type: number
  ManagementNetworkVlanID:
    default: 60
    description: Vlan ID for the management network traffic.
    type: number
  ControlPlaneSubnetCidr: # Override this via parameter_defaults
    default: '24'
    description: The subnet CIDR of the control plane network.
    type: string
  ControlPlaneDefaultRoute: # Override this via parameter_defaults
    description: The default route of the control plane network.
    type: string
  ExternalInterfaceDefaultRoute: # Not used by default in this template
    default: 10.0.0.1
    description: The default route of the external network.
    type: string
  ManagementInterfaceDefaultRoute: # Commented out by default in this template
    default: unset
    description: The default route of the management network.
    type: string
  DnsServers: # Override this via parameter_defaults
    default: []
    description: A list of DNS servers (2 max for some implementations) that will be added to resolv.conf.
    type: comma_delimited_list
  EC2MetadataIp: # Override this via parameter_defaults
    description: The IP address of the EC2 metadata server.
    type: string
resources:
  OsNetConfigImpl:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template:
            get_file: ../../scripts/run-os-net-config.sh
          params:
            $network_config:
              network_config:
              - type: interface
                name: nic1
                use_dhcp: false
                addresses:
                - ip_netmask:
                    list_join:
                    - /
                    - - get_param: ControlPlaneIp
                      - get_param: ControlPlaneSubnetCidr
                routes:
                - ip_netmask: 169.254.169.254/32
                  next_hop:
                    get_param: EC2MetadataIp
              - type: ovs_bridge
                name: br-ex
                use_dhcp: false
                dns_servers:
                  get_param: DnsServers
                addresses:
                - ip_netmask:
                    get_param: ExternalIpSubnet
                routes:
                - default: true
                  next_hop:
                    get_param: ExternalInterfaceDefaultRoute
                members:
                - type: interface
                  name: nic2
                  primary: true
              - type: ovs_bridge
                name: br-isolated
                use_dhcp: false
                members:
                - type: interface
                  name: nic3
                  primary: true
                - type: vlan
                  vlan_id:
                    get_param: StorageNetworkVlanID
                  addresses:
                  - ip_netmask:
                      get_param: StorageIpSubnet
                - type: vlan
                  vlan_id:
                    get_param: StorageMgmtNetworkVlanID
                  addresses:
                  - ip_netmask:
                      get_param: StorageMgmtIpSubnet
            # If setting default route on the Management interface, comment
            # out the default route on the Control Plane.
            #-
            #  type: interface
            #  name: nic7
            #  use_dhcp: false
            #  addresses:
            #    -
            #      ip_netmask: {get_param: ManagementIpSubnet}
            #  routes:
            #    -
            #      default: true
            #      next_hop: {get_param: ManagementInterfaceDefaultRoute}
outputs:
  OS::stack_id:
    description: The OsNetConfigImpl resource.
    value:
      get_resource: OsNetConfigImpl

The remaining configuration under the 'openstack-tripleo-heat-templates' folder needs to remain unchanged. A quick syntax check of the edited templates is shown below.
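
Optionally, the edited templates can be checked for YAML syntax errors before the deployment is started. This is only a quick sanity check (a sketch, assuming python and PyYAML are available on the director node, which they normally are), not a full template validation:

[stack@rhospdirector ~]$ for f in /home/stack/templates/openstack-tripleo-heat-templates/network/config/multiple-nics/*.yaml; do
>     python -c "import yaml, sys; yaml.safe_load(open(sys.argv[1]))" "$f" && echo "OK: $f"
> done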

6. To start the overcloud deployment, run the below command:

[stack@rhospdirector ~]$ openstack overcloud deploy --templates \
-e /home/stack/templates/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /home/stack/templates/openstack-tripleo-heat-templates/environments/network-environment.yaml \
-e /home/stack/templates/openstack-tripleo-heat-templates/environments/storage-environment.yaml \
-e /home/stack/templates/nodeinfo.yaml \
-e /home/stack/templates/ceph_node_disk.yaml \
--ntp-server 192.168.122.1
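
The deployment takes quite some time. Its progress can be followed from a second terminal on the director node (a minimal sketch, assuming the default stack name 'overcloud'):

[stack@rhospdirector ~]$ source ~/stackrc
[stack@rhospdirector ~]$ openstack stack list
[stack@rhospdirector ~]$ openstack stack event list overcloud | tail -20
[stack@rhospdirector ~]$ openstack server list

Once the deployment completes successfully, the overcloud endpoint details are printed at the end of the deploy output and the 'overcloudrc' credentials file is written to the stack user's home directory.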


The overcloud endpoint IP reported at the end of the deployment can be used from the KVM host to open the Horizon dashboard, and the 'admin' user with its generated password can be used to log in.
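
If the admin password was not noted from the deploy output, it can be read from the 'overcloudrc' file on the director (assuming the default file location):

[stack@rhospdirector ~]$ grep OS_PASSWORD ~/overcloudrc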



After a successful login you can carry out most activities on this platform, but it is better to stick to the CirrOS image, as it is lightweight and needs only minimal network and OS-level resources.
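
For a quick functional test, a CirrOS image can be downloaded and uploaded to Glance from the director node (a sketch; the image URL and version are only an example):

[stack@rhospdirector ~]$ source ~/overcloudrc
[stack@rhospdirector ~]$ curl -L -o /tmp/cirros.img http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
[stack@rhospdirector ~]$ openstack image create --disk-format qcow2 --container-format bare \
  --public --file /tmp/cirros.img cirros
[stack@rhospdirector ~]$ openstack image list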
