
Openstack installation : Block storage – Cinder (on Compute Node)

Content from “openstack.org”, listed here with minor/no changes – just noting down what I did – online notes.

Wanted to get rid of all partition info in /dev/sdb so that I can use it for block storage.

#dd if=/dev/zero of=/dev/sdb bs=1k count=1000000
#blockdev --rereadpt /dev/sdb
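
A quick sanity check (my addition, not from the guide) that the partition table is really gone:

#lsblk /dev/sdb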

Install the supporting utility packages

#apt install --assume-yes lvm2 thin-provisioning-tools

Create the LVM physical volume /dev/sdb

# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created.

Create the LVM volume group cinder-volumes (Note the volume group name)

# vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created

Only instances can access Block Storage volumes; however, the underlying operating system manages the devices associated with the volumes. By default, the LVM volume scanning tool scans the /dev directory for block storage devices that contain volumes. If projects use LVM on their volumes, the scanning tool detects these volumes and attempts to cache them, which can cause a variety of problems with both the underlying operating system and project volumes. Reconfigure LVM to scan only the devices that contain the cinder-volumes volume group: edit /etc/lvm/lvm.conf and, in the devices section, un-comment the filter configuration and amend it so that /dev/sdb is accepted and all other devices are rejected.

devices {
. . .
filter = [ "a/sdb/", "r/.*/"]
. . .
}
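
A quick check (my addition) that the physical volume and volume group are intact and that LVM now scans only the intended device:

# pvs
# vgs cinder-volumes

pvs should list only /dev/sdb; if other devices still show up, revisit the filter line above.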

Install the cinder packages

#apt install --assume-yes cinder-volume

Edit /etc/cinder/cinder.conf with the required configuration below, adding sections if they are not present:

[DEFAULT]
...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
enabled_backends = lvm
glance_api_servers = http://controller:9292
transport_url = rabbit://openstack:{password}@controller
auth_strategy = keystone

[database]
connection = mysql+pymysql://cinder:{password}@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = {password}

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

Finalize : Restart services

#service tgt restart
#service cinder-volume restart
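
To confirm the storage node registered itself, list the volume services from the controller node (assuming admin credentials are loaded there) – cinder-volume on this host should show as up:

$ openstack volume service list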



Openstack installation : Block Storage – Cinder (On Controller Node)

Content from “openstack.org”, listed here with minor/no changes – just noting down what I did – online notes.

Create database, service credentials, and API endpoints.

CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '{password}';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '{password}';

Create service credentials : Create a cinder user

$ . admin-openrc
$ openstack user create --domain default --password-prompt cinder
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 9d7e33de3e1a498390353819bc7d245d |
| name | cinder |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+

Add the admin role to the cinder user:

$ openstack role add --project service --user cinder admin

Create the cinderv2 and cinderv3 service entities

$ openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2

+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | eb9fd245bdbc414695952e93f29fe3ac |
| name | cinderv2 |
| type | volumev2 |
+-------------+----------------------------------+

$ openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3

+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | ab3bbbef780845a1a283490d281e7fda |
| name | cinderv3 |
| type | volumev3 |
+-------------+----------------------------------+

Create the Block Storage service API endpoints:

$ openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s

+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 513e73819e14460fb904163f41ef3759 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+

$ openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s

+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 6436a8a23d014cfdb69c586eff146a32 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+

$ openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s

+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | e652cf84dd334f359ae9b045a2c91d96 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+

$ openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s

+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 03fa2c90153546c295bf30ca86b1344b |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ab3bbbef780845a1a283490d281e7fda |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+

$ openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s

+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 94f684395d1b41068c70e4ecb11364b2 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ab3bbbef780845a1a283490d281e7fda |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+

$ openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s

+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 4511c28a0f9840c78bacb25f10f62c98 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ab3bbbef780845a1a283490d281e7fda |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
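
As a quick cross-check (my addition), the endpoints just created can be listed per service:

$ openstack endpoint list --service volumev2
$ openstack endpoint list --service volumev3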

Install the packages:

apt install --assume-yes cinder-api cinder-scheduler

Edit the /etc/cinder/cinder.conf file and complete the following actions:

In the [database] section, configure database access:

[database]
#...
connection = mysql+pymysql://cinder:{password}@controller/cinder

In the [DEFAULT] section

  • Configure RabbitMQ message queue access
  • Configure Identity service access
  • Configure the my_ip option to use the management interface IP address of the controller node
[DEFAULT]
#...
transport_url = rabbit://openstack:{password}@controller
#...
auth_strategy = keystone
#...
my_ip = 10.0.0.15

In the [keystone_authtoken] section, configure Identity service access. Comment out or remove any other options in this section.

[keystone_authtoken]
#...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = {password}

In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]
#...
lock_path = /var/lib/cinder/tmp

Populate the Block Storage database. Ignore any deprecation messages in this output.

su -s /bin/sh -c "cinder-manage db sync" cinder
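
To confirm the sync created the schema (my addition – assuming the cinder database credentials configured above):

$ mysql -u cinder -p -h controller -e "SHOW TABLES;" cinder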

Edit the /etc/nova/nova.conf file and add the following to it:

[cinder]
#...
os_region_name = RegionOne

Finalize installation : Restart the Compute API service.

#service nova-api restart

Restart the Block Storage services:

#service cinder-scheduler restart
#service apache2 restart
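
With both the controller and storage nodes in place, creating a throwaway volume is a quick end-to-end test (my addition; size is in GB):

$ . admin-openrc
$ openstack volume create --size 1 testvol
$ openstack volume list
$ openstack volume delete testvol

The volume should reach "available" status in the list output before being deleted.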

Openstack installation : Dashboard – Horizon

Content from “openstack.org”, listed here with minor/no changes – just noting down what I did – online notes.

Install the packages

apt install --assume-yes openstack-dashboard

Edit the /etc/openstack-dashboard/local_settings.py file and complete the following actions:

Configure the dashboard to use OpenStack services on the controller node:

OPENSTACK_HOST = "controller"

In the Dashboard configuration section (not the Ubuntu configuration section), allow your hosts to access the Dashboard:

ALLOWED_HOSTS = ['*']

Configure the memcached session storage service (Comment out any other session storage configuration.)

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

Enable the Identity API version 3:

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

Enable support for domains:

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

Configure API versions:

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}

Configure Default as the default domain for users that you create via the dashboard:

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

Configure user as the default role for users that you create via the dashboard:

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

If you chose networking option 1, disable support for layer-3 networking services (Not applicable in my case)

OPENSTACK_NEUTRON_NETWORK = {
    . . .
    'enable_router': False,
    'enable_quotas': False,
    'enable_ipv6': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}

Optionally, configure the time zone:

TIME_ZONE = "Asia/Kolkata"

Add the following line to /etc/apache2/conf-available/openstack-dashboard.conf if not included.

WSGIApplicationGroup %{GLOBAL}

Finalize installation : Reload the web server configuration:

service apache2 reload

Verify by accessing http://controller/horizon and logging in with Domain = default, User = admin, and the password that was configured.
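
A headless check from any host that resolves "controller" (my addition) – expect an HTTP 200 or a redirect to the login page:

$ curl -sI http://controller/horizon | head -n 1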


Openstack installation : Networking service – Neutron (On Compute Node)

Content from “openstack.org”, listed here with minor/no changes – just noting down what I did – online notes.

The compute node handles connectivity and security groups for instances. Install the components

apt install --assume-yes neutron-linuxbridge-agent

The Networking common component configuration includes the authentication mechanism, message queue, and plug-in.

Edit the /etc/neutron/neutron.conf file and complete the following actions:

In the [database] section, comment out any connection options because compute nodes do not directly access the database.

In the [oslo_concurrency] section:

[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp

In the [DEFAULT] section, configure RabbitMQ message queue access and authentication strategy

[DEFAULT]
...
transport_url = rabbit://openstack:{password}@controller
...
auth_strategy = keystone

Configure Identity service access. Comment out or remove any other options in the [keystone_authtoken] section.

[keystone_authtoken]
...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = {password}

Configure the Linux bridge agent. It builds the layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.

Note : eno2 is the network interface I had planned for provider networks

Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:

In the [linux_bridge] section, map the provider virtual network to the provider physical network interface:

[linux_bridge]
...
physical_interface_mappings = provider:eno2

In the [vxlan] section, enable VXLAN overlay networks, configure the IP address of the physical network interface that handles overlay networks, and enable layer-2 population:

[vxlan]
...
enable_vxlan = true
local_ip = 10.0.0.41
l2_population = true

In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:

[securitygroup]
...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Configure the Compute service to use the Networking service

Edit the /etc/nova/nova.conf file and in the [neutron] section, configure access parameters:

[neutron]
...
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = {password}

Finalize installation. Restart the Compute service:

service nova-compute restart

Restart the Linux bridge agent:

service neutron-linuxbridge-agent restart

Verify neutron installation : List agents to verify successful launch of the neutron agents:

$ openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 3fe3723d-cdd8-4b38-906c-0faf433a5f1e | L3 agent | controller | nova | :-) | UP | neutron-l3-agent |
| 47e059c2-1d18-4b64-8af9-c6c6d36048d9 | Linux bridge agent | icompute | None | :-) | UP | neutron-linuxbridge-agent |
| 5cf3c2e9-a714-4446-9d39-e87b300c1cbf | Metadata agent | controller | None | :-) | UP | neutron-metadata-agent |
| a9222b67-0f4a-4271-acfe-e234f161f570 | Linux bridge agent | controller | None | :-) | UP | neutron-linuxbridge-agent |
| c5f4e10b-67a5-4ee0-92ae-a4463ee5b582 | Linux bridge agent | acompute | None | :-) | UP | neutron-linuxbridge-agent |
| ca5e5486-9d8e-4063-b683-adedb9cc1b9d | DHCP agent | controller | nova | :-) | UP | neutron-dhcp-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

The output should indicate four agents on the controller node and one agent on each compute node.


Openstack installation : Networking service – Neutron (On Controller Node)

Content from “openstack.org”, listed here with minor/no changes – just noting down what I did – online notes.

Create database, service credentials, and API endpoints.

CREATE DATABASE neutron;

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '{password}';

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '{password}';

Create the service credentials. Create the neutron user:

$ . admin-openrc
$ openstack user create --domain default --password-prompt neutron

User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | fdb0f541e28141719b6a43c8944bf1fb |
| name | neutron |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+

Add the admin role to the neutron user:

$ openstack role add --project service --user neutron admin

Create the neutron service entity:

$ openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | f71529314dab4a4d8eca427e701d209e |
| name | neutron |
| type | network |
+-------------+----------------------------------+

Create the Networking service API endpoints:

$ openstack endpoint create --region RegionOne network public http://controller:9696 
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 85d80a6d02fc4b7683f611d7fc1493a3 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+

$ openstack endpoint create --region RegionOne network internal http://controller:9696

+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 09753b537ac74422a68d2d791cf3714f |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+

$ openstack endpoint create --region RegionOne network admin http://controller:9696

+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 1ee14289c9374dffb5db92a5c112fc4e |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+

Install and configure the Networking components

apt install --assume-yes neutron-server neutron-plugin-ml2 neutron-linuxbridge-agent neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent

Edit the /etc/neutron/neutron.conf file and complete the following actions:

In the [database] section, configure database access:
Comment out or remove any other connection options in the [database] section.

[database]
...
connection = mysql+pymysql://neutron:{password}@controller/neutron

In the [DEFAULT] section

  • Enable the Modular Layer 2 (ML2) plug-in, router service, and overlapping IP addresses
  • Configure RabbitMQ message queue access
  • Configure Identity service access
  • Configure Networking to notify Compute of network topology changes

[DEFAULT]
...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
...
transport_url = rabbit://openstack:{password}@controller
...
auth_strategy = keystone
...

notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

In the [keystone_authtoken] section, configure Identity service access:
Comment out or remove any other options

[keystone_authtoken]
...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = {password}

In the [nova] section, configure Identity service access; and in the [oslo_concurrency] section, configure the lock path:

[nova]
...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = {password}

[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp

Configure the Modular Layer 2 (ML2) plug-in, which uses the Linux bridge mechanism to build the layer-2 (bridging and switching) virtual networking infrastructure for instances.

Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following actions:

Note : The Linux bridge agent only supports VXLAN overlay networks.

In the [ml2] section

  • Enable flat, VLAN, and VXLAN networks:
  • Enable the port security extension driver
[ml2]
...
type_drivers = flat,vlan,vxlan
...
tenant_network_types = vxlan
...
mechanism_drivers = linuxbridge,l2population
...
extension_drivers = port_security

In the [ml2_type_flat] section, configure the provider virtual network as a flat network:

[ml2_type_flat]
...
flat_networks = provider

In the [ml2_type_vxlan] section, configure the VXLAN network identifier range for self-service networks:

[ml2_type_vxlan]
...
vni_ranges = 1:1000

In the [securitygroup] section, enable ipset to increase efficiency of security group rules:

[securitygroup]
...
enable_ipset = true

Configure the Linux bridge agent. It builds the layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.

Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:

In the [linux_bridge] section, map the provider virtual network to the provider physical network interface. [enp3s0 is the interface name on my system, the second physical interface I would like to configure for provider networks.]

[linux_bridge]
...
physical_interface_mappings = provider:enp3s0

In the [vxlan] section, enable VXLAN overlay networks, configure the IP address of the physical network interface that handles overlay networks, and enable layer-2 population.
OVERLAY_IP_ADDRESS – the local_ip below – is the IP address associated with enp2s0 on my system.

[vxlan]
...
enable_vxlan = true
local_ip = 10.0.0.15
l2_population = true

In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:

[securitygroup]
...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Configure the layer-3 agent, which provides routing and NAT services for self-service virtual networks. Edit the /etc/neutron/l3_agent.ini file and complete the following actions:

In the [DEFAULT] section, configure the Linux bridge interface driver:

[DEFAULT]
...
interface_driver = linuxbridge

Configure the DHCP agent, which provides DHCP services for virtual networks.

Edit the /etc/neutron/dhcp_agent.ini file : In the [DEFAULT] section, configure the Linux bridge interface driver, Dnsmasq DHCP driver, and enable isolated metadata so instances on provider networks can access metadata over the network:

[DEFAULT]
...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

Configure the metadata agent, which provides configuration information such as credentials to instances.

Edit the /etc/neutron/metadata_agent.ini file and complete the following actions:

In the [DEFAULT] section, configure the metadata host and shared secret:

[DEFAULT]
...
nova_metadata_host = controller
metadata_proxy_shared_secret = {secret}
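
The shared secret can be any string; one way to generate a random one (replace {secret} here and in nova.conf below with the output):

$ openssl rand -hex 10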

Configure the Compute service to use the Networking service.

Edit the /etc/nova/nova.conf file and perform the following actions:

In the [neutron] section, configure access parameters, enable the metadata proxy, and configure the secret:

[neutron]
...
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = {password}
service_metadata_proxy = true
metadata_proxy_shared_secret = {secret}

Finalize installation : Populate the database

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Restart the Compute API service:

service nova-api restart

Restart the Networking services.

service neutron-server restart
service neutron-linuxbridge-agent restart
service neutron-dhcp-agent restart
service neutron-metadata-agent restart
service neutron-l3-agent restart
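
To verify that neutron-server came up cleanly (my addition), list the loaded network extensions – a long list of extension aliases indicates success:

$ . admin-openrc
$ openstack extension list --network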



Openstack installation : Compute Service – Nova (On Compute Node)

Content from “openstack.org”, listed here with minor/no changes – just noting down what I did – online notes.

Install the package

apt install --assume-yes nova-compute

Edit the /etc/nova/nova.conf file and complete the following actions:

In the [DEFAULT] section

  • Due to a packaging bug, remove the log_dir option
  • Configure RabbitMQ message queue access
  • Configure the my_ip option to use the management interface IP address of the compute node
  • Enable support for networking. By default, Compute uses an internal firewall driver. Since the Networking service includes a firewall driver, you must disable the Compute firewall driver by using the nova.virt.firewall.NoopFirewallDriver firewall driver.
[DEFAULT]
...
transport_url = rabbit://openstack:{password}@controller
...
my_ip = 10.0.0.31
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

In the [api] and [keystone_authtoken] sections, configure Identity service access:
Comment out or remove any other options in the [keystone_authtoken] section.

[api]
...
auth_strategy = keystone
...
[keystone_authtoken]
...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = {password}

In the [vnc] section, enable and configure remote console access:

[vnc]
...
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

In the [glance] section, configure the location of the Image service API:

[glance]
...
api_servers = http://controller:9292

In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp

In the [placement] section, configure the Placement API:
Comment out any other options in the [placement] section.

[placement]
...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = {password}

Finalize installation

Determine whether the compute node supports hardware acceleration for virtual machines. Since mine does, the default virt_type = kvm applies and no update to the [libvirt] section is required; on a node without hardware acceleration, set virt_type = qemu instead.
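
The standard check – a count of one or more means VT-x/AMD-V is available and the default virt_type = kvm is fine; zero means set virt_type = qemu:

$ egrep -c '(vmx|svm)' /proc/cpuinfo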

Restart compute service.

service nova-compute restart

Add the compute node to the cell database (run the following on the controller node):

$ . admin-openrc
$ openstack compute service list --service nova-compute
+----+-------+--------------+------+-------+---------+----------------------------+
| ID | Host | Binary | Zone | State | Status | Updated At |
+----+-------+--------------+------+-------+---------+----------------------------+
| 1 | node1 | nova-compute | nova | up | enabled | 2017-04-14T15:30:44.000000 |
+----+-------+--------------+------+-------+---------+----------------------------+

Discover compute hosts (done whenever a new compute node is added)

su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc
Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc
Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3

Verify operation of the Compute service.

$ openstack compute service list 
+----+--------------------+------------+----------+---------+-------+----------------------------+
| Id | Binary | Host | Zone | Status | State | Updated At |
+----+--------------------+------------+----------+---------+-------+----------------------------+
| 1 | nova-consoleauth | controller | internal | enabled | up | 2016-02-09T23:11:15.000000 |
| 2 | nova-scheduler | controller | internal | enabled | up | 2016-02-09T23:11:15.000000 |
| 3 | nova-conductor | controller | internal | enabled | up | 2016-02-09T23:11:16.000000 |
| 4 | nova-compute | compute1 | nova | enabled | up | 2016-02-09T23:11:20.000000 |
+----+--------------------+------------+----------+---------+-------+----------------------------+

This output should indicate three service components enabled on the controller node and one service component enabled on the compute node.

sandeep@controller:~$ openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name | Type | Endpoints |
+-----------+-----------+-----------------------------------------+
| placement | placement | RegionOne |
| | | internal: http://controller:8778 |
| | | RegionOne |
| | | public: http://controller:8778 |
| | | RegionOne |
| | | admin: http://controller:8778 |
| | | |
| nova | compute | RegionOne |
| | | public: http://controller:8774/v2.1 |
| | | RegionOne |
| | | internal: http://controller:8774/v2.1 |
| | | RegionOne |
| | | admin: http://controller:8774/v2.1 |
| | | |
| keystone | identity | RegionOne |
| | | public: http://controller:5000/v3/ |
| | | RegionOne |
| | | admin: http://controller:5000/v3/ |
| | | RegionOne |
| | | internal: http://controller:5000/v3/ |
| | | |
| glance | image | RegionOne |
| | | admin: http://controller:9292 |
| | | RegionOne |
| | | public: http://controller:9292 |
| | | RegionOne |
| | | internal: http://controller:9292 |
| | | |
+-----------+-----------+-----------------------------------------+
sandeep@controller:~$ openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 7a53530c-020e-4b44-8248-3cf041609f82 | cirros | active |
+--------------------------------------+--------+--------+

root@controller:/var/log/nova# nova-status upgrade check
Deprecated: Option "enable" from group "cells" is deprecated for removal (Cells v1 is being replaced with Cells v2.). Its value may be silently ignored in the future.
+--------------------------------+
| Upgrade Check Results |
+--------------------------------+
| Check: Cells v2 |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Placement API |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Resource Providers |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: API Service Version |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Request Spec Migration |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Console Auths |
| Result: Success |
| Details: None |
+--------------------------------+


Openstack installation : Compute Service – Nova (On Controller Node)

Content from “openstack.org”, listed here with minor/no changes – just noting down what I did – online notes.

Create databases, service credentials, and API endpoints.

Create the nova_api, nova, nova_cell0, and placement databases:

CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '{password}';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '{password}';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '{password}';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '{password}';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '{password}';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '{password}';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '{password}';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '{password}';
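
These statements are run from the MariaDB client (mysql -u root -p). A quick way to confirm the grants took effect (my addition):

MariaDB [(none)]> SHOW GRANTS FOR 'nova'@'localhost';
MariaDB [(none)]> SHOW GRANTS FOR 'placement'@'%';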

Create the Compute service credentials and create the nova user

$ . admin-openrc 
$ openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 8a7dbf5279404537b1c7b86c033620fe |
| name | nova |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+

Add the admin role to the nova user:

$ openstack role add --project service --user nova admin

Create the nova service entity:

$ openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled | True |
| id | 060d59eac51b4594815603d75a00aba2 |
| name | nova |
| type | compute |
+-------------+----------------------------------+

Create the Compute API service endpoints:

$ openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | 3c1caa473bfe4390a11e7177894bcc7b |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 060d59eac51b4594815603d75a00aba2 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+-------------------------------------------+
$ openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | e3c918de680746a586eac1f2d9bc10ab |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 060d59eac51b4594815603d75a00aba2 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+-------------------------------------------+
$ openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | 38f7af91666a47cfb97b4dc790b94424 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 060d59eac51b4594815603d75a00aba2 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+-------------------------------------------+

Create a Placement service user

$ openstack user create --domain default --password-prompt placement 
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | fa742015a6494a949f67629884fc7ec8 |
| name | placement |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+

Add the Placement user to the service project with the admin role:

$ openstack role add --project service --user placement admin

Create the Placement API entry in the service catalog:

$ openstack service create --name placement --description "Placement API" placement 
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Placement API |
| enabled | True |
| id | 2d1a27022e6e4185b86adac4444c495f |
| name | placement |
| type | placement |
+-------------+----------------------------------+

Create the Placement API service endpoints:

$ openstack endpoint create --region RegionOne placement public http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 2b1b2637908b4137a9c2e0470487cbc0 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 2d1a27022e6e4185b86adac4444c495f |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 02bcda9a150a4bd7993ff4879df971ab |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 2d1a27022e6e4185b86adac4444c495f |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 3d71177b9e0f406f98cbff198d74b182 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 2d1a27022e6e4185b86adac4444c495f |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+

Install and configure components

Install the packages:

apt install --assume-yes nova-api nova-conductor nova-consoleauth nova-novncproxy nova-scheduler nova-placement-api

Edit the /etc/nova/nova.conf file and complete the following actions:

In the [DEFAULT] section

  • Due to a packaging bug, remove the log_dir option
  • Configure RabbitMQ message queue access
  • Configure the my_ip option to use the management interface IP address of the controller node
  • Enable support for networking. By default, Compute uses an internal firewall driver. Since the Networking service includes a firewall driver, you must disable the Compute firewall driver by using the nova.virt.firewall.NoopFirewallDriver firewall driver.
[DEFAULT]
...
transport_url = rabbit://openstack:{password}@controller
...

my_ip = 10.0.0.15
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

In the [api_database], [database], and [placement_database] sections, configure database access:

[api_database]
...
connection = mysql+pymysql://nova:{password}@controller/nova_api
...
[database]
...
connection = mysql+pymysql://nova:{password}@controller/nova
...
[placement_database]
...
connection = mysql+pymysql://placement:{password}@controller/placement

In the [api] and [keystone_authtoken] sections, configure Identity service access:
Comment out or remove any other options in the [keystone_authtoken] section.

[api]
. . .
auth_strategy = keystone
. . .
[keystone_authtoken]
. . .
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = {password}

In the [vnc] section, configure the VNC proxy to use the management interface IP address of the controller node:

[vnc]
...
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

In the [glance] section, configure the location of the Image service API:

[glance]
...
api_servers = http://controller:9292

In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp

In the [placement] section, configure the Placement API:
Comment out any other options in the [placement] section.

[placement]
. . .
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = {password}


Populate the nova-api and placement databases:

su -s /bin/sh -c "nova-manage api_db sync" nova

Register the cell0 database:

su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

Create the cell1 cell:

su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova 
109e1d4b-536a-40d0-83c6-5f121b82b650

Populate the nova database (ignore any deprecation messages in this output):

su -s /bin/sh -c "nova-manage db sync" nova

Verify nova cell0 and cell1 are registered correctly:

su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova 
+-------+--------------------------------------+
| Name | UUID |
+-------+--------------------------------------+
| cell1 | 109e1d4b-536a-40d0-83c6-5f121b82b650 |
| cell0 | 00000000-0000-0000-0000-000000000000 |
+-------+--------------------------------------+

Finalize installation

service nova-api restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart

Openstack installation : Image Services – Glance.

Content from “openstack.org”, listed here with minor/no changes – just noting down what I did – online notes.

Create the database

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '{password}';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '{password}';

As admin user create service credentials for glance user.

$ . admin-openrc

Create the glance user:

$ openstack user create --domain default --password-prompt glance

Add the admin role to the glance user and service project:

$ openstack role add --project service --user glance admin

Create the glance service entity:

$ openstack service create --name glance --description "OpenStack Image" image

Create the Image service API endpoints:

$ openstack endpoint create --region RegionOne image public http://controller:9292
$ openstack endpoint create --region RegionOne image internal http://controller:9292
$ openstack endpoint create --region RegionOne image admin http://controller:9292

Install and configure components

apt install --assume-yes glance

Edit the /etc/glance/glance-api.conf file and complete the following actions:
In the [database] section, configure database access:

[database]
connection = mysql+pymysql://glance:{password}@controller/glance
...

In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access: (Comment out or remove any other options in the [keystone_authtoken] section.)

[keystone_authtoken]
...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = {password}

[paste_deploy]
...
flavor = keystone

In the [glance_store] section, configure the local file system store and location of image files:

[glance_store]
...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

Edit the /etc/glance/glance-registry.conf file and complete the following actions:

Note : The Glance Registry Service and its APIs have been DEPRECATED in the Queens release and are subject to removal at the beginning of the ‘S’ development cycle, following the OpenStack standard deprecation policy.

In the [database] section, configure database access:

[database]
...
connection = mysql+pymysql://glance:{password}@controller/glance

In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access: ( Comment out or remove any other options in the [keystone_authtoken] section.)

[keystone_authtoken]
...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = {password}

[paste_deploy]
...
flavor = keystone

Populate the Image service database: (Ignore any deprecation messages)

#su -s /bin/sh -c "glance-manage db_sync" glance

Restart the Image services:

#service glance-registry restart
#service glance-api restart

Verify operation of the Image service using CirrOS, a small Linux image that helps you test your OpenStack deployment. Source the admin credentials to gain access to admin-only CLI commands

 $ . admin-openrc 

Download the source image:

$ wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

Upload the image to the Image service using the QCOW2 disk format, bare container format, and public visibility so all projects can access it:

$ openstack image create "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public

$ openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 7a53530c-020e-4b44-8248-3cf041609f82 | cirros | active |
+--------------------------------------+--------+--------+

Openstack installation : Keystone (Authentication Services)

Content from “openstack.org”, listed here with minor changes – just noting down what I did – online notes.

Create database for keystone services

CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '{password}';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '{password}';

Apache HTTP server is used to handle authentication requests, install it along with keystone

apt install --assume-yes keystone apache2 libapache2-mod-wsgi

Update the configuration file: edit the /etc/keystone/keystone.conf file and complete the following actions. Note the section names under which the configuration updates are made.

[database]
# ...
connection = mysql+pymysql://keystone:{password}@controller/keystone


[token]
# ...
provider = fernet

Note : Comment out any other connection option in database section.

Populate the identity service database and initialize fernet key repositories

# su -s /bin/sh -c "keystone-manage db_sync" keystone
# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
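
The key repositories should now exist and be owned by the keystone user (my addition – these paths are the defaults):

# ls -l /etc/keystone/fernet-keys/ /etc/keystone/credential-keys/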

Bootstrap the identity service (default domain gets created)

keystone-manage bootstrap --bootstrap-password {password} --bootstrap-admin-url http://controller:5000/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne

Edit the /etc/apache2/apache2.conf file and configure the ServerName option to reference the controller node – add it if not already present:

ServerName controller

Restart Apache (keystone is served through it):

# service apache2 restart

When using the openstack client to perform operations, we invariably need to pass the username, password, authentication URL, domain, and so on as command-line parameters. Alternatively, openstack-client-specific environment variables can hold the same values, avoiding the command-line parameters every time. For convenience, create an openstack client environment script 'admin-openrc' with the following contents:

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD={password}
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Load / Set the client environment variables (not as root user)

$ . admin-openrc

Create a 'service' project, which will contain a unique user for each service added to the environment:

sandeep@controller:~$ openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Service Project |
| domain_id | default |
| enabled | True |
| id | 7a9d86ac1bde48eea52ebb562599c9d3 |
| is_domain | False |
| name | service |
| parent_id | default |
| tags | [] |
+-------------+----------------------------------+

Verify the functioning of keystone services

sandeep@controller:~$ openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2019-01-14T13:36:17+0000 |
| id | gAAAAABcPIJBUrlkBTiqVwkyhmKirFSx5Wnod-4YFeMAAayv2tr_W0nNJgmy_ThI0zyFb0HJ7SweBewFYxlYinymw0DA8iIQIyGU3tqm-9JNj7ZZUS8t4Gr3ndOCzccRYi9NdLXZOhlq8Ye6L1uGqyA0bQjbGZSSSkE_iqunWyysWRjNDTgo9UQ |
| project_id | fa8d2cf9a9ca4ed79c3379de4f215a30 |
| user_id | 580026fd75d3441c9d10c247e1bdf814 |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
