Openstack installation : Compute Service – Nova (On Controller Node)

Content from “openstack.org”, listed here with minor/no changes – just noting down what I did – online notes.

Create databases, service credentials, and API endpoints.

Create the nova_api, nova, nova_cell0, and placement databases:

CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '{password}';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '{password}';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '{password}';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '{password}';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '{password}';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '{password}';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '{password}';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '{password}';
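The six nova GRANT statements differ only in database name and host. As a sketch, a small shell loop can generate them for piping into the mysql client (the `{password}` placeholder is carried in a variable here):

```shell
# Generate the GRANT statements for the three nova-owned databases.
# Review the output, then pipe it into: mysql -u root -p
PASS='{password}'   # placeholder - replace with the real database password
for db in nova_api nova nova_cell0; do
  for host in localhost '%'; do
    echo "GRANT ALL PRIVILEGES ON ${db}.* TO 'nova'@'${host}' IDENTIFIED BY '${PASS}';"
  done
done
```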

Create the Compute service credentials, starting with the nova user:

$ . admin-openrc 
$ openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 8a7dbf5279404537b1c7b86c033620fe |
| name | nova |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+

Add the admin role to the nova user:

$ openstack role add --project service --user nova admin

Create the nova service entity:

$ openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled | True |
| id | 060d59eac51b4594815603d75a00aba2 |
| name | nova |
| type | compute |
+-------------+----------------------------------+

Create the Compute API service endpoints:

$ openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | 3c1caa473bfe4390a11e7177894bcc7b |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 060d59eac51b4594815603d75a00aba2 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+-------------------------------------------+
$ openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | e3c918de680746a586eac1f2d9bc10ab |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 060d59eac51b4594815603d75a00aba2 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+-------------------------------------------+
$ openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | 38f7af91666a47cfb97b4dc790b94424 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 060d59eac51b4594815603d75a00aba2 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+-------------------------------------------+
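The three endpoint-create commands above differ only in the interface name, so they can also be generated with a loop (shown here with a leading `echo` so the commands can be reviewed first; drop the `echo` to actually run them against a live cloud):

```shell
# Print the three endpoint-create commands (public/internal/admin).
# Remove the leading "echo" to execute them for real.
for iface in public internal admin; do
  echo openstack endpoint create --region RegionOne \
    compute "$iface" http://controller:8774/v2.1
done
```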

Create a Placement service user

$ openstack user create --domain default --password-prompt placement 
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | fa742015a6494a949f67629884fc7ec8 |
| name | placement |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+

Add the Placement user to the service project with the admin role:

$ openstack role add --project service --user placement admin

Create the Placement API entry in the service catalog:

$ openstack service create --name placement --description "Placement API" placement 
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Placement API |
| enabled | True |
| id | 2d1a27022e6e4185b86adac4444c495f |
| name | placement |
| type | placement |
+-------------+----------------------------------+

Create the Placement API service endpoints:

$ openstack endpoint create --region RegionOne placement public http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 2b1b2637908b4137a9c2e0470487cbc0 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 2d1a27022e6e4185b86adac4444c495f |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 02bcda9a150a4bd7993ff4879df971ab |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 2d1a27022e6e4185b86adac4444c495f |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 3d71177b9e0f406f98cbff198d74b182 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 2d1a27022e6e4185b86adac4444c495f |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+

Install and configure components

Install the packages:

apt install --assume-yes nova-api nova-conductor nova-novncproxy nova-scheduler nova-placement-api

Edit the /etc/nova/nova.conf file and complete the following actions:

In the [DEFAULT] section

  • Due to a packaging bug, remove the log_dir option
  • Configure RabbitMQ message queue access
  • Configure the my_ip option to use the management interface IP address of the controller node
  • Enable support for networking. By default, Compute uses an internal firewall driver. Since the Networking service includes a firewall driver, you must disable the Compute firewall driver by using the nova.virt.firewall.NoopFirewallDriver firewall driver.
[DEFAULT]
...
transport_url = rabbit://openstack:{password}@controller
my_ip = 10.0.0.15
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

In the [api_database], [database], and [placement_database] sections, configure database access:

[api_database]
...
connection = mysql+pymysql://nova:{password}@controller/nova_api

[database]
...
connection = mysql+pymysql://nova:{password}@controller/nova

[placement_database]
...
connection = mysql+pymysql://placement:{password}@controller/placement

In the [api] and [keystone_authtoken] sections, configure Identity service access:
Comment out or remove any other options in the [keystone_authtoken] section.

[api]
...
auth_strategy = keystone

[keystone_authtoken]
...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = {password}

In the [vnc] section, configure the VNC proxy to use the management interface IP address of the controller node:

[vnc]
...
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

In the [glance] section, configure the location of the Image service API:

[glance]
...
api_servers = http://controller:9292

In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp

In the [placement] section, configure the Placement API:
Comment out any other options in the [placement] section.

[placement]
...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = {password}
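As an alternative to hand-editing nova.conf, the same options can be set non-interactively. A sketch using the `crudini` utility (an assumption here; install it with `apt install crudini`), shown for the [placement] section only:

```shell
# Set the [placement] options in nova.conf with crudini instead of an editor.
CONF=/etc/nova/nova.conf
crudini --set "$CONF" placement region_name RegionOne
crudini --set "$CONF" placement project_domain_name Default
crudini --set "$CONF" placement project_name service
crudini --set "$CONF" placement auth_type password
crudini --set "$CONF" placement user_domain_name Default
crudini --set "$CONF" placement auth_url http://controller:5000/v3
crudini --set "$CONF" placement username placement
crudini --set "$CONF" placement password '{password}'
```

The same pattern applies to the other sections.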

Populate the nova-api and placement databases:

su -s /bin/sh -c "nova-manage api_db sync" nova

Register the cell0 database:

su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

Create the cell1 cell:

su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova 
109e1d4b-536a-40d0-83c6-5f121b82b650

Populate the nova database:

su -s /bin/sh -c "nova-manage db sync" nova

Verify nova cell0 and cell1 are registered correctly:

su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova 
+-------+--------------------------------------+
| Name | UUID |
+-------+--------------------------------------+
| cell1 | 109e1d4b-536a-40d0-83c6-5f121b82b650 |
| cell0 | 00000000-0000-0000-0000-000000000000 |
+-------+--------------------------------------+

Finalize installation

service nova-api restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart
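After the restarts, a quick sanity check (with admin-openrc sourced) is to list the registered compute services; nova-conductor and nova-scheduler should report state "up":

```shell
# Requires the admin credentials to be loaded first: . admin-openrc
openstack compute service list
```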

Openstack installation : Image Services – Glance.

Content from “openstack.org”, listed here with minor/no changes – just noting down what I did – online notes.

Create the database

CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '{password}';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '{password}';

As admin user create service credentials for glance user.

$ . admin-openrc

Create the glance user:

$ openstack user create --domain default --password-prompt glance

Add the admin role to the glance user and service project:

$ openstack role add --project service --user glance admin

Create the glance service entity:

$ openstack service create --name glance --description "OpenStack Image" image

Create the Image service API endpoints:

$ openstack endpoint create --region RegionOne image public http://controller:9292
$ openstack endpoint create --region RegionOne image internal http://controller:9292
$ openstack endpoint create --region RegionOne image admin http://controller:9292

Install and configure components

apt install --assume-yes glance

Edit the /etc/glance/glance-api.conf file and complete the following actions:
In the [database] section, configure database access:

[database]
connection = mysql+pymysql://glance:{password}@controller/glance
...

In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access: (Comment out or remove any other options in the [keystone_authtoken] section.)

[keystone_authtoken]
...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = {password}

[paste_deploy]
...
flavor = keystone

In the [glance_store] section, configure the local file system store and location of image files:

[glance_store]
...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

Edit the /etc/glance/glance-registry.conf file and complete the following actions:

Note : The Glance Registry Service and its APIs have been DEPRECATED in the Queens release and are subject to removal at the beginning of the ‘S’ development cycle, following the OpenStack standard deprecation policy.

In the [database] section, configure database access:

[database]
...
connection = mysql+pymysql://glance:{password}@controller/glance

In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access: (Comment out or remove any other options in the [keystone_authtoken] section.)

[keystone_authtoken]
...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = {password}

[paste_deploy]
...
flavor = keystone


Populate the Image service database: (Ignore any deprecation messages)

# su -s /bin/sh -c "glance-manage db_sync" glance

Restart the Image services:

# service glance-registry restart
# service glance-api restart

Verify operation of the Image service using CirrOS, a small Linux image that helps you test your OpenStack deployment. Source the admin credentials to gain access to admin-only CLI commands:

 $ . admin-openrc 

Download the source image:

$ wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

Upload the image to the Image service using the QCOW2 disk format, bare container format, and public visibility so all projects can access it:

$ openstack image create "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public

$ openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 7a53530c-020e-4b44-8248-3cf041609f82 | cirros | active |
+--------------------------------------+--------+--------+

Openstack installation : Keystone (Authentication Services)

Content from “openstack.org”, listed here with minor changes – just noting down what I did – online notes.

Create database for keystone services

CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '{password}';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '{password}';

Apache HTTP server is used to handle authentication requests, install it along with keystone

apt install --assume-yes keystone apache2 libapache2-mod-wsgi

Update the configuration file: edit the /etc/keystone/keystone.conf file and complete the following actions. Note the section names under which each update is made.

[database]
# ...
connection = mysql+pymysql://keystone:{password}@controller/keystone


[token]
# ...
provider = fernet

Note: comment out any other connection options in the [database] section.

Populate the identity service database and initialize fernet key repositories

# su -s /bin/sh -c "keystone-manage db_sync" keystone
# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
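If the key repositories were initialized correctly, the default repository directory (the path used by the Ubuntu keystone package) should now contain keys named 0 (staged) and 1 (primary):

```shell
# Default fernet key repository location for the keystone package.
ls /etc/keystone/fernet-keys/
```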

Bootstrap the identity service (this also creates the default domain):

keystone-manage bootstrap --bootstrap-password {password} --bootstrap-admin-url http://controller:5000/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne

Edit the /etc/apache2/apache2.conf file and configure the ServerName option to reference the controller node (add it if not already present):

ServerName controller

Restart Apache (Keystone runs as a WSGI application under Apache, so no separate keystone service restart is needed):

# service apache2 restart

When using the openstack client, we would otherwise need to pass the username, password, authentication URL, domain, and so on as command-line parameters for every operation. Instead, openstack-client-specific environment variables can hold these values. For convenience, create an openstack client environment script 'admin-openrc' with the following contents:

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD={password}
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Load/set the client environment variables (not as the root user):

$ . admin-openrc

Create a project ‘service’ that contains a unique user for each service that will be added to the environment

sandeep@controller:~$ openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Service Project |
| domain_id | default |
| enabled | True |
| id | 7a9d86ac1bde48eea52ebb562599c9d3 |
| is_domain | False |
| name | service |
| parent_id | default |
| tags | [] |
+-------------+----------------------------------+

Verify the functioning of keystone services

sandeep@controller:~$ openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2019-01-14T13:36:17+0000 |
| id | gAAAAABcPIJBUrlkBTiqVwkyhmKirFSx5Wnod-4YFeMAAayv2tr_W0nNJgmy_ThI0zyFb0HJ7SweBewFYxlYinymw0DA8iIQIyGU3tqm-9JNj7ZZUS8t4Gr3ndOCzccRYi9NdLXZOhlq8Ye6L1uGqyA0bQjbGZSSSkE_iqunWyysWRjNDTgo9UQ |
| project_id | fa8d2cf9a9ca4ed79c3379de4f215a30 |
| user_id | 580026fd75d3441c9d10c247e1bdf814 |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+


Openstack installation : Minimal services and Controller – Pre-requisites

Content from “openstack.org”, listed here with minor changes – just noting down what I did – online notes.

The OpenStack system consists of several key services that are separately installed. These services work together depending on your cloud needs and include the Compute, Identity, Networking, Image, Block Storage, Object Storage, Telemetry, Orchestration, and Database services. You can install any of these projects separately and configure them stand-alone or as connected entities.

For the home lab, the following services will be required.

On the controller node :

Keystone : Identity Service
Glance : Image services
Nova : Compute services (All except nova-compute)
Neutron : Networking services
Cinder : Block storage (All except cinder-volumes)
Horizon : Dashboard / Management UI

On the compute node :

Nova    : Compute services (nova-compute only)
Neutron : Networking services (linux-bridge-agent only)
Cinder : Block storage (cinder-volumes only)

Note: given the many services involved, there are a number of passwords to be configured and maintained for accessing them. For ease of learning I preferred to use a single password everywhere – fine for a learning period. In any case, any value surrounded by curly braces needs to be replaced with an actual value.

All of the services require a database for managing service entities and endpoints, and the services are typically managed via APIs exposed on the controller node, so an SQL database needs to be installed on the controller. For the home lab I am using MariaDB (MySQL-compatible). Install and configure it on the controller:

apt install --assume-yes mariadb-server python-pymysql

Create /etc/mysql/mariadb.conf.d/99-openstack.cnf with following contents :

[mysqld]
bind-address = {management-ip-address-of-controller-node}
default-storage-engine = innodb
innodb_buffer_pool_size = 1536000000
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

I preferred to configure 1.5G of buffer pool size given that my controller node has only 8G of RAM; there are no specific data points behind the 1.5G sizing. Restart the MySQL service on the controller node and secure the installation by setting a password for the MySQL 'root' user:

# service mysql restart
# mysql_secure_installation

Communication between services happens via message queues. Install RabbitMQ on the controller, create a user 'openstack' with password {password-for-rabbit-mq}, set the required permissions (configure, write, read – in our case all), and tag the account as an administrator account:

# apt install --assume-yes rabbitmq-server
# rabbitmqctl add_user openstack {password-for-rabbit-mq}
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
# rabbitmqctl set_user_tags openstack administrator
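To confirm the user, its tags, and its permissions were registered:

```shell
# List users (with their tags) and per-vhost permissions for verification.
rabbitmqctl list_users
rabbitmqctl list_permissions -p /
```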

Though not mandatory, I preferred to enable the management plugin (management via a browser UI):

# rabbitmq-plugins enable rabbitmq_management

After a restart of the node, I had observed in the logs that the MQ server did not come up successfully, since IP addresses had not yet been assigned to the interface. Reviewing the systemd unit for the MQ server (/lib/systemd/system/rabbitmq-server.service), I noticed the dependency "After=network.target"; changing it to "After=network-online.target" solved the issue. I had also observed that stopping the rabbitmq service would not complete. Searching for known issues, I came across a fix – changing the value of ExecStop to "/usr/sbin/rabbitmqctl shutdown" – which did work.

The authentication service uses memcached to cache tokens, so memcached needs to be installed on the controller node. (Security aspects of using memcached are not considered at this stage of learning.)

# apt --assume-yes install memcached python-memcache

Edit the binding interface in the configuration file (/etc/memcached.conf):

# IP address of the controller
-l 10.0.0.15

Restart memcached after configuration change

# service memcached restart

OpenStack services may use etcd (a distributed, reliable key-value store) – install it:

# apt --assume-yes install etcd

Update the etcd configuration file (/etc/default/etcd) (Note the usage of controller node management IP address)

ETCD_NAME="controller"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER="controller=http://10.0.0.15:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.0.0.15:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.15:2379"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.0.0.15:2379"

Enable and start etcd

# systemctl enable etcd
# systemctl start etcd
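A quick health check against the client URL configured above (etcdctl ships with the etcd package; the v3 API is selected via an environment variable):

```shell
# Verify etcd is serving on the advertised client URL.
ETCDCTL_API=3 etcdctl --endpoints=http://10.0.0.15:2379 member list
```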


Openstack installation : Installing OS, configure pre-requisites and install openstack client

Content from “openstack.org”, listed here with minor changes – just noting down what I did – online notes.

In the course of learning OpenStack installation I found that the Canonical team maintains separate repositories for the 'Rocky' release of OpenStack, so I decided to install OpenStack on Ubuntu 18.04 server. One other benefit: no need to worry about installing drivers.

Did the following after OS installation, on both the controller and compute nodes.

To avoid application-level failures caused by insufficient file descriptors, edit /etc/security/limits.conf and add the following at the end:

* hard nofile 262140
* soft nofile 262140
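The new limits only apply to sessions started after the change; after logging out and back in, verify with:

```shell
# Print the soft limit on open file descriptors for the current shell.
ulimit -n
```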

Add the 'Rocky' cloud-archive repository, update the repository information and upgrade whatever can be upgraded:

apt install --assume-yes software-properties-common
add-apt-repository cloud-archive:rocky
apt update && apt dist-upgrade

Enable root login – this is something I do on all my systems in the home lab; not mandatory. Edit /etc/ssh/sshd_config, uncomment the following option and configure it as follows:

PermitRootLogin yes

Ensure that the controller and compute nodes are reachable using hostnames. Edit /etc/hosts and add the necessary entries:

10.0.0.15 controller
10.0.0.41 iserver
10.0.0.50 aserver

Configure the local time zone:

unlink /etc/localtime
ln -s /usr/share/zoneinfo/Asia/Kolkata /etc/localtime

Enable bridge net filtering – this will be required before configuring neutron (the networking services). Edit /etc/modules-load.d/bridge.conf and add the following line so that the module gets loaded at node startup:

br_netfilter

Create /etc/sysctl.d/bridge.conf and add the following lines

net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-arptables = 1
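These settings take effect at boot once the module is loaded; to apply them immediately without a reboot (as root):

```shell
# Load br_netfilter now and apply the new sysctl file.
modprobe br_netfilter
sysctl -p /etc/sysctl.d/bridge.conf
```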

Disable AppArmor and remove it:

systemctl stop apparmor
systemctl disable apparmor
apt purge --assume-yes apparmor

snapd is not required. Also, since network configurations will be managed manually, no network manager is needed – uninstall netplan and snapd, and install ifupdown:

apt purge --assume-yes snapd ubuntu-core-launcher squashfs-tools
apt-get --assume-yes purge nplan netplan.io
rm -rf /etc/netplan
rm -rf /usr/share/netplan/netplan/cli/commands/
apt install --assume-yes ifupdown

Configure one interface for the overlay (typically management) network and one for the provider network, via which the guest VMs will get access to external networks. (It should be possible to use more than one interface – yet to learn how.) Edit /etc/network/interfaces and add the following. Note: interface names may vary across hosts.

source /etc/network/interfaces.d/*

#The loopback network interface
auto lo
iface lo inet loopback

#Interface for overlay/management
allow-hotplug eno1
auto eno1
iface eno1 inet static
address 10.0.0.31
netmask 255.255.255.0
broadcast 10.0.0.255
gateway 10.0.0.1
dns-nameservers 8.8.8.8 8.8.4.4

#Provider interface
auto eno2
iface eno2 inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down

I had some challenges while installing the neutron services; disabling IPv6 on the provider network interface solved them. (Yet to validate whether disabling is really required.) Add the following entry in /etc/sysctl.conf:

#eno2 is the interface name
net.ipv6.conf.eno2.disable_ipv6 = 1

Unmask and enable the networking service; stop, disable and mask systemd-networkd and its related units:

systemctl unmask networking

systemctl enable networking

systemctl stop systemd-networkd.socket systemd-networkd networkd-dispatcher systemd-networkd-wait-online

systemctl disable systemd-networkd.socket systemd-networkd networkd-dispatcher systemd-networkd-wait-online

systemctl mask systemd-networkd.socket systemd-networkd networkd-dispatcher systemd-networkd-wait-online

Edit /etc/systemd/resolved.conf, uncomment DNS= entry and add the DNS server IP

DNS=8.8.4.4

On every ssh login to a node, a small delay was observed; it turned out that disabling motd-news helps – this could be Ubuntu specific. Edit /etc/default/motd-news and set ENABLED=0.

Reboot the system so the networking changes take effect.

Install NTP services on all nodes

apt install --assume-yes chrony

Let one node (the controller) synchronize with servers on the internet, and have the other nodes synchronize with the controller. Edit /etc/chrony/chrony.conf and comment out all the pool entries. On the controller node, update the configuration as below:

#pool ntp.ubuntu.com        iburst maxsources 4
#pool 0.ubuntu.pool.ntp.org iburst maxsources 1
#pool 1.ubuntu.pool.ntp.org iburst maxsources 1
#pool 2.ubuntu.pool.ntp.org iburst maxsources 2

server 0.asia.pool.ntp.org iburst
allow 10.0.0.0/24

On the compute nodes, update as below:

#pool ntp.ubuntu.com        iburst maxsources 4
#pool 0.ubuntu.pool.ntp.org iburst maxsources 1
#pool 1.ubuntu.pool.ntp.org iburst maxsources 1
#pool 2.ubuntu.pool.ntp.org iburst maxsources 2

server controller iburst

Restart the NTP service after configuration changes

service chrony restart

Verify the service on the controller by checking the output of 'chronyc sources' and 'chronyc clients', and on the compute nodes the output of 'chronyc sources'.

Install the openstack client:

apt install --assume-yes python-openstackclient



Openstack Installation : My Requirements

Note: most of the contents here are from the "openstack.org" site. I have listed in sequence what I did to get OpenStack up and running in my home lab. It is just my preference to have the notes online.

Preamble

Until now, I had been using KVM for my home lab needs. With VMs hosted on an OpenStack environment I had observed quite a few issues when testing applications, more specifically clustering – issues I did not observe when testing on VMs hosted on KVM – and I wanted to learn more about them. I also liked the 'networking' (neutron) service features. So I decided to migrate from KVM to OpenStack.

Being a Debian fan, I had tried to install it on Debian stretch. However, when I came to know that the 'cloud-archive' repositories were maintained by Canonical, I quickly shifted over to Ubuntu 18.04. The following sequence of posts are the steps I followed to get a basic functioning OpenStack environment at home.

Simple lab setup

Based on my observations working on OpenStack environments, I decided to have a dedicated controller node and two compute nodes. The choice was to install not on VMs but on physical servers. One other reason to go for a dedicated controller was to make use of a single-board computer I had in my inventory – a pretty interesting piece of hardware:

Intel N4200 based SBC, 8 GB DDR4, 128 GB EMMC, 250G SSD, 2 x Gigabit Ethernet Port

For the compute node, I decided to use a Dell PowerEdge R430: 12 cores / 24 threads, 64G DDR4, 600G SAS, 1.8T HDD.


Debian – Qualcomm QCA6174 Wifi not working

On my new Acer Nitro laptop, after installing Debian 9.4, I found that Wifi was not working.

Looking at the dmesg output, it appeared that firmware for the Qualcomm Atheros chip (ath10k) was not found.

Installed the firmware-linux package, but it still did not work.

So I visited the linux-firmware repository site.


Clicked on the commit message on master


Clicked the download link for linux-firmware-master.tar.gz.

Expanded the file and copied the ath10k folder to /lib/firmware/.

Restarted the system and Wifi started working.


OpenDaylight – Toaster Tutorial App

A few of my contacts were finding it difficult to get going with the OpenDaylight tutorial app.

They reported that they were not able to get past the first step of creating a simple 'Example' project using Maven and an archetype called 'opendaylight-startup-archetype'.

Tried the following and it worked.

Installed the JDK and updated the JAVA_HOME environment variable.
Installed Maven, updated the MAVEN_HOME and M2_HOME environment variables, and updated the PATH environment variable to include the Maven bin path.

In the home folder, created a .m2 folder and created settings.xml with the following contents:

<?xml version="1.0" encoding="UTF-8"?>
<!-- vi: set et smarttab sw=2 tabstop=2: -->
<!--
  Copyright (c) 2014, 2015 Cisco Systems, Inc. and others. All rights reserved.

  This program and the accompanying materials are made available under the
  terms of the Eclipse Public License v1.0 which accompanies this distribution,
  and is available at http://www.eclipse.org/legal/epl-v10.html
-->
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">

  <profiles>
    <profile>
      <id>opendaylight-release</id>
      <repositories>
        <repository>
          <id>opendaylight-mirror</id>
          <name>opendaylight-mirror</name>
          <url>https://nexus.opendaylight.org/content/repositories/public/</url>
          <releases>
            <enabled>true</enabled>
            <updatePolicy>never</updatePolicy>
          </releases>
          <snapshots>
            <enabled>false</enabled>
          </snapshots>
        </repository>
      </repositories>
      <pluginRepositories>
        <pluginRepository>
          <id>opendaylight-mirror</id>
          <name>opendaylight-mirror</name>
          <url>https://nexus.opendaylight.org/content/repositories/public/</url>
          <releases>
            <enabled>true</enabled>
            <updatePolicy>never</updatePolicy>
          </releases>
          <snapshots>
            <enabled>false</enabled>
          </snapshots>
        </pluginRepository>
      </pluginRepositories>
    </profile>

    <profile>
      <id>opendaylight-release</id>
      <repositories>
        <repository>
          <id>opendaylight-release</id>
          <name>opendaylight-release</name>
          <url>https://nexus.opendaylight.org/content/repositories/opendaylight.release/</url>
          <releases>
            <enabled>true</enabled>
          </releases>
          <snapshots>
            <enabled>false</enabled>
          </snapshots>
        </repository>
      </repositories>
      <pluginRepositories>
        <pluginRepository>
          <id>opendaylight-release</id>
          <name>opendaylight-release</name>
          <url>https://nexus.opendaylight.org/content/repositories/opendaylight.release/</url>
          <releases>
            <enabled>true</enabled>
          </releases>
          <snapshots>
            <enabled>false</enabled>
          </snapshots>
        </pluginRepository>
      </pluginRepositories>
    </profile>
  </profiles>

  <activeProfiles>
    <activeProfile>opendaylight-release</activeProfile>
  </activeProfiles>
</settings>

Created a sample project using Maven as follows:

mvn archetype:generate -DarchetypeGroupId=org.opendaylight.controller -DarchetypeArtifactId=opendaylight-startup-archetype -DarchetypeCatalog=remote -DarchetypeVersion=1.2.3-Boron-SR3

For selecting the archetypeVersion – accessed the following link in a browser and picked the one that had the startup archetype as its artifact ID.

https://nexus.opendaylight.org/content/repositories/opendaylight.release/archetype-catalog.xml
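If a browser is not handy, the same catalog can be searched from the shell (a sketch; assumes curl and grep are available and the catalog URL is reachable):

```shell
# Show catalog entries mentioning the startup archetype, with a few
# following lines so the version elements are visible
curl -s https://nexus.opendaylight.org/content/repositories/opendaylight.release/archetype-catalog.xml \
  | grep -A 3 'opendaylight-startup-archetype'
```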

Posted in Uncategorized | Comments Off on OpenDaylight – Toaster Tutorial App

Setting up KVM on my dev system

Have an AMD desktop with a Threadripper 1950X processor (16 cores / 32 threads), 64 GB RAM, nine 1 Gigabit interfaces and 2 x 1 TB HDDs for storage, apart from a 250 GB SSD for the OS and host installation.  Wanted to have VMs with dedicated NICs and dedicated physical partitions on the disk allocated to the VMs.

Installed Debian 9.8.x using netinst – intentional – wanted only the bare minimum required packages to be installed.  Did not want services like NetworkManager to be installed.

Post installation, the first thing I always do is update and upgrade the packages:

apt update
apt dist-upgrade

Wanted the X server to be available:

apt install xorg

Had installed additional cards requiring Realtek drivers.  Updated /etc/apt/sources.list by adding ‘contrib non-free’ as required for the firmware packages.

apt install firmware-realtek
apt install nvidia-driver

Checked if the nvidia driver was in use:

hwinfo --gfxcard

It was not, but got a suggestion to activate it with modprobe – did the same.

modprobe nvidia_current

Verified that the nvidia driver was active:

hwinfo --gfxcard

Installed the minimal required packages for KVM:

apt install -y qemu-kvm libvirt0 virt-manager bridge-utils

Rebooted the system.
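After the reboot, a few quick checks can confirm the host is actually ready for KVM (a sketch using standard tools; not part of the original steps):

```shell
# Non-zero count means hardware virtualization (AMD-V here) is exposed
egrep -c '(vmx|svm)' /proc/cpuinfo

# kvm and kvm_amd (or kvm_intel) should be loaded
lsmod | grep kvm

# The device node appears once the modules are in place
ls -l /dev/kvm
```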

Edited and updated the contents of /etc/network/interfaces (for one physical interface bridged to the br0 interface):

auto enp5s0
iface enp5s0 inet manual

auto br0
iface br0 inet static
        address 192.168.0.4
        network 192.168.0.0
        netmask 255.255.255.0
        broadcast 192.168.0.255
        bridge_ports enp5s0
        bridge_fd 9
        bridge_hello 2
        bridge_maxage 12
        bridge_stp off

Rebooted the system.
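After this reboot, the bridge configuration can be verified from the shell (commands from bridge-utils and iproute2, both present in the setup above):

```shell
# br0 should list enp5s0 as an enslaved port
brctl show br0

# br0 should carry the static address from /etc/network/interfaces
ip addr show br0
```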

Added ‘root’ (the user account that would be accessing virt-manager) to the libvirt group.

gpasswd -a root libvirt

Copied the downloaded Debian (required guest OS) ISO image to the /var/lib/libvirt/images folder.

Changed the ownership information for the ISO images:

chown libvirt-qemu:libvirt /var/lib/libvirt/images/*

Started virt-manager, selected File->New Virtual Machine and performed the following actions.

Clicked on Browse, selected the image, and continued with “Choose Volume”.

Note : Wanted a dedicated physical partition, so did not select Manage and directly typed the partition information.

Note : As had decided to allocate a dedicated NIC, selected the appropriate macvtap option.

Completed all installation steps.

Once the installation was over and the VM had started, manually shut it down to update the boot options in virt-manager.

After shutting down, selected View->Details from the menu.

Updated the boot options to start the virtual machine on host boot up, and applied the changes.
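The same setting can also be applied from the command line with virsh instead of the virt-manager GUI (the domain name debian9-vm here is hypothetical):

```shell
# Mark the guest to be started automatically when the host boots
virsh autostart debian9-vm

# List all domains flagged for autostart
virsh list --all --autostart
```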

That’s it – now after starting the host, the VMs are started automatically and are ready to be accessed.

Posted in Uncategorized | Comments Off on Setting up KVM on my dev system

Build Linux Kernel

Had a need to build the Linux kernel. There were multiple sites which detailed the how-to. Finally, for me it boiled down to the following.

After installing Debian on the system, install the required packages for building the kernel.

[ Note : I had attempted it as root user and hence did not use sudo ]

apt-get update
apt-get install git fakeroot build-essential ncurses-dev xz-utils libssl-dev bc bison flex libelf-dev

Download the latest stable (or required) version of the Linux kernel (downloaded from the browser, so switching to the Downloads folder). At the time of writing this, the latest stable was 4.14.3.

cd Downloads/
tar xf linux-4.14.3.tar.xz
cd linux-4.14.3

To make life simpler, just copied the module selection of the current kernel with the following commands. Note : I did not change the module selections, as the purpose was just to test building the kernel.

cp /boot/config-$(uname -r) .config
make menuconfig
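When nothing in the configuration needs changing, menuconfig can be skipped entirely – a common shortcut, not part of the original steps: make olddefconfig keeps the copied .config and silently accepts the default for any option introduced since that kernel version.

```shell
# Reuse the running kernel's configuration...
cp /boot/config-"$(uname -r)" .config

# ...and take defaults for any options new to this source tree
make olddefconfig
```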

Now build the kernel. (Note : 32 is the number of threads available in my system – the output of the nproc command.)

Note : The update-initramfs command below may not be required, since make install usually triggers it – but no harm.

make -j 32
make modules_install -j 32
make install
update-initramfs -c -k 4.14.3
reboot

After the reboot, could see kernel 4.14.3 listed in the boot options.

Posted in Uncategorized | Comments Off on Build Linux Kernel