
Installing ArangoDB

Note : Most of this how-to content is from the internet – I am just recording it here to consolidate what I need to do for my requirements.

Created a VM with Debian 9.8, 4 vCPUs and 8 GB RAM – I did not check the recommended values. Will be updating this post as I proceed with my testing. At the time of writing this post, the latest version was ArangoDB 3.4.4-1.

Note : In the initial stages of getting the hang of things, I prefer to log in and work as the ‘root’ user, so ‘sudo’ usage might not appear in the post.

Install curl (not installed by default), and then download and add the repository key – a prerequisite for installing ArangoDB.

apt install curl
curl -OL https://download.arangodb.com/arangodb34/DEBIAN/Release.key
apt-key add - < Release.key

Install ArangoDB

echo 'deb https://download.arangodb.com/arangodb34/DEBIAN/ /' |  tee /etc/apt/sources.list.d/arangodb.list 
apt install apt-transport-https
apt update
apt install arangodb3=3.4.4-1

Install the debug symbols package (not required by default)

apt install arangodb3-dbg=3.4.4-1

Observed the following during installation:

Preparing to unpack …/arangodb3_3.4.4-1_amd64.deb …
Unpacking arangodb3 (3.4.4-1) …
Processing triggers for systemd (232-25+deb9u9) …
Processing triggers for man-db (2.7.6.1-2) …
Setting up arangodb3 (3.4.4-1) …
2019-03-24T05:10:23Z [2038] INFO {syscall} file-descriptors (nofiles) hard limit is 1048576, soft limit is 1048576
2019-03-24T05:10:23Z [2038] WARNING {threads} --server.threads (64) is more than eight times the number of cores (4), this might overload the server
2019-03-24T05:10:25Z [2038] INFO {startup} server will now shut down due to upgrade, database initialization or admin restoration.
Database files are up-to-date.
Created symlink /etc/systemd/system/multi-user.target.wants/arangodb3.service → /lib/systemd/system/arangodb3.service.

Fix for the file-descriptors limit – add the following to /etc/security/limits.conf

*    hard    nofile    262140
*    soft    nofile    262140
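
To double-check the new limits, one option (not part of the original notes) is to log out, log back in, and query the shell limits:

ulimit -Hn
ulimit -Sn

Note that the systemctl status output further below reports a limit of 131072, which suggests the arangodb3 systemd unit applies its own LimitNOFILE setting independently of /etc/security/limits.conf.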

Fix for the --server.threads warning : Increase vCPUs to 16 and RAM to 16 GB, and update the following config in /etc/arangodb3/arangod.conf

#number of maximal server threads. use 0 to make arangod determine the
#number of threads automatically, based on available CPUs
maximal-threads = 0

Updated the endpoint from 127.0.0.1 to 0.0.0.0 in /etc/arangodb3/arangod.conf

endpoint = tcp://0.0.0.0:8529 

Applied the recommended fix for the warning "WARNING {memory} maximum number of memory mappings per process is 65530, which seems too low" by adding the following to /etc/sysctl.conf

vm.max_map_count=1024000
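
To apply the setting without a reboot (assuming changing it at runtime is acceptable), reload the sysctl configuration and confirm the value:

sysctl -p
sysctl vm.max_map_count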

Not required – I also updated the kernel to 5.0.4. Check the status of arangodb3:

root@db1:~# systemctl status arangodb3
● arangodb3.service - ArangoDB database server
Loaded: loaded (/lib/systemd/system/arangodb3.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2019-03-24 12:17:36 IST; 11s ago
Process: 490 ExecStartPre=/usr/bin/env chmod 700 /var/lib/arangodb3-apps (code=exited, status=0/SUCCESS)
Process: 487 ExecStartPre=/usr/bin/env chown -R arangodb:arangodb /var/lib/arangodb3-apps (code=exited, status=0/SUCCESS)
Process: 484 ExecStartPre=/usr/bin/env chmod 700 /var/lib/arangodb3 (code=exited, status=0/SUCCESS)
Process: 481 ExecStartPre=/usr/bin/env chown -R arangodb:arangodb /var/lib/arangodb3 (code=exited, status=0/SUCCESS)
Process: 479 ExecStartPre=/usr/bin/env chmod 700 /var/log/arangodb3 (code=exited, status=0/SUCCESS)
Process: 476 ExecStartPre=/usr/bin/env chown -R arangodb:arangodb /var/log/arangodb3 (code=exited, status=0/SUCCESS)
Process: 473 ExecStartPre=/usr/bin/install -g arangodb -o arangodb -d /var/run/arangodb3 (code=exited, status=0/SUCCESS)
Process: 469 ExecStartPre=/usr/bin/install -g arangodb -o arangodb -d /var/tmp/arangodb3 (code=exited, status=0/SUCCESS)
Main PID: 494 (arangod)
Tasks: 51 (limit: 131072)
CGroup: /system.slice/arangodb3.service
└─494 /usr/sbin/arangod --uid arangodb --gid arangodb --pid-file /var/run/arangodb3/arangod.pid --temp.path /var/tmp/arangodb3 --log.foreground-tty true
Mar 24 12:17:36 db1 systemd[1]: Started ArangoDB database server.
Mar 24 12:17:37 db1 arangod[494]: 2019-03-24T06:47:37Z [494] INFO ArangoDB 3.4.4 [linux] 64bit, using jemalloc, build tags/v3.4.4-0-gc6abdf4449, VPack 0.1.33, RocksDB 5.16.0, ICU 58.1, V8 5.7.492.77, OpenSSL 1.1
Mar 24 12:17:37 db1 arangod[494]: 2019-03-24T06:47:37Z [494] INFO detected operating system: Linux version 5.0.4 (root@db1) (gcc version 6.3.0 20170516 (Debian 6.3.0-18+deb9u1)) #1 SMP Sun Mar 24 11:33:26 IST 20
Mar 24 12:17:37 db1 arangod[494]: 2019-03-24T06:47:37Z [494] INFO {authentication} Jwt secret not specified, generating…
Mar 24 12:17:37 db1 arangod[494]: 2019-03-24T06:47:37Z [494] INFO using storage engine rocksdb
Mar 24 12:17:37 db1 arangod[494]: 2019-03-24T06:47:37Z [494] INFO {cluster} Starting up with role SINGLE
Mar 24 12:17:37 db1 arangod[494]: 2019-03-24T06:47:37Z [494] INFO {syscall} file-descriptors (nofiles) hard limit is 131072, soft limit is 131072
Mar 24 12:17:37 db1 arangod[494]: 2019-03-24T06:47:37Z [494] INFO {authentication} Authentication is turned on (system only), authentication for unix sockets is turned on
Mar 24 12:17:41 db1 arangod[494]: 2019-03-24T06:47:41Z [494] INFO using endpoint 'http+tcp://0.0.0.0:8529' for non-encrypted requests
Mar 24 12:17:42 db1 arangod[494]: 2019-03-24T06:47:42Z [494] INFO ArangoDB (version 3.4.4 [linux]) is ready for business. Have fun!
root@db1:~#
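
As a quick sanity check (not part of the original notes), the HTTP API can be queried with curl; this assumes the root password that was chosen during package installation:

curl -u root:{password} http://127.0.0.1:8529/_api/version
# should return a small JSON document reporting server "arango" and version 3.4.4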

Openstack installation : Launch VM

Note : I preferred to use Horizon for launching a VM rather than the CLI commands. The following steps capture the self-explanatory sequence/wizard flow for launching a VM.


  • From the Horizon dashboard, choose Project -> Compute -> Launch Instance and provide the name and description.
  • The plan was to use the CirrOS image for the new VM, with a new volume created in Cinder and deleted when the VM instance is deleted.
  • Select the flavor (typically only one will have been created at this stage if this guide was followed) and click Next.
  • Select the self-service network and click Next.
  • VM creation / launch starts.
  • While the VM is being launched, allocate one floating IP from the provider network. From the dashboard choose Project -> Floating IPs -> Allocate IP to Project.
  • One free IP from the provider network pool gets allocated.
  • Under Dashboard -> Project -> Compute -> Instances, select the Associate Floating IP option for the recently created VM so that we can access the VM from external networks.
  • Select the allocated IP; "Ports to be associated" will list the port (the interface of the VM).
  • The IP from the provider network pool gets associated with the VM. Now we can access the VM from the external network.
  • Access the VM from the external network. Note : for the CirrOS image the default username is cirros and the password is ‘gocubsgo’.
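
For reference, a roughly equivalent flow from the CLI might look like the sketch below. The name myvm is a placeholder, and this simple form boots directly from the cirros image instead of creating a new Cinder volume as the wizard option above does:

$ openstack server create --flavor m1.nano --image cirros --nic net-id=<selfservice-net-id> --security-group default --key-name mykey myvm
$ openstack floating ip create provider
$ openstack server add floating ip myvm <allocated-floating-ip>
$ ssh cirros@<allocated-floating-ip>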



Openstack installation : Create flavor, Generate SSH keypair, Add security group rules

Content from “openstack.org”, listed here with minor/no changes – just noting down what I did – online notes.

For testing with CirrOS it is enough to provide 64 MB RAM and 1 GB storage. On the controller node, source the admin credentials and create a flavor:

openstack flavor create --id 0 --vcpus 1 --ram 128 --disk 8   m1.nano
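
To confirm the flavor was registered, it can be listed afterwards (a quick check, not part of the original notes):

$ openstack flavor list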

Before launching an instance, you must add a public key to the Compute service.

$ . admin-openrc

sandeep@controller:~$ ssh-keygen -q -N ""
Enter file in which to save the key (/home/sandeep/.ssh/id_rsa):
sandeep@controller:~$ openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
+-------------+-------------------------------------------------+
| Field | Value |
+-------------+-------------------------------------------------+
| fingerprint | 9e:db:2d:bd:d2:c5:6d:6f:e6:3f:9d:80:f0:8a:ad:b3 |
| name | mykey |
| user_id | d10bef25b03b46019572c0cb926ff314 |
+-------------+-------------------------------------------------+
sandeep@controller:~$ openstack keypair list
+-------+-------------------------------------------------+
| Name | Fingerprint |
+-------+-------------------------------------------------+
| mykey | 9e:db:2d:bd:d2:c5:6d:6f:e6:3f:9d:80:f0:8a:ad:b3 |
+-------+-------------------------------------------------+


By default, the default security group applies to all instances and includes firewall rules that deny remote access to instances. For Linux images it is recommended to allow at least ICMP (ping) and secure shell (SSH).

sandeep@controller:~$ openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+------+
| ID | Name | Description | Project | Tags |
+--------------------------------------+---------+------------------------+----------------------------------+------+
| 3be16f2f-900d-4afd-808d-cb113b30fa9a | default | Default security group | af6a01fcea844196a20c2d3a6b3bd70e | [] |
| 88f93b72-63d2-417b-8f0a-8f02db79989e | default | Default security group | | [] |
+--------------------------------------+---------+------------------------+----------------------------------+------+
sandeep@controller:~$ openstack security group rule create --proto icmp 3be16f2f-900d-4afd-808d-cb113b30fa9a
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2019-03-14T16:09:48Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | 0d02b39e-b748-4eb5-b977-16d69536f998 |
| name | None |
| port_range_max | None |
| port_range_min | None |
| project_id | af6a01fcea844196a20c2d3a6b3bd70e |
| protocol | icmp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 0 |
| security_group_id | 3be16f2f-900d-4afd-808d-cb113b30fa9a |
| updated_at | 2019-03-14T16:09:48Z |
+-------------------+--------------------------------------+
sandeep@controller:~$ openstack security group rule create --proto tcp --dst-port 22 3be16f2f-900d-4afd-808d-cb113b30fa9a
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2019-03-14T16:10:14Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | d199d2a5-eea8-4135-b41b-f1c5c43e4ecd |
| name | None |
| port_range_max | 22 |
| port_range_min | 22 |
| project_id | af6a01fcea844196a20c2d3a6b3bd70e |
| protocol | tcp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 0 |
| security_group_id | 3be16f2f-900d-4afd-808d-cb113b30fa9a |
| updated_at | 2019-03-14T16:10:14Z |
+-------------------+--------------------------------------+
sandeep@controller:~$ openstack security group rule create --proto icmp 88f93b72-63d2-417b-8f0a-8f02db79989e
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2019-03-14T16:10:35Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | d3a1fef2-8c78-4b73-9f7b-d79304fa8812 |
| name | None |
| port_range_max | None |
| port_range_min | None |
| project_id | af6a01fcea844196a20c2d3a6b3bd70e |
| protocol | icmp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 0 |
| security_group_id | 88f93b72-63d2-417b-8f0a-8f02db79989e |
| updated_at | 2019-03-14T16:10:35Z |
+-------------------+--------------------------------------+
sandeep@controller:~$ openstack security group rule create --proto tcp --dst-port 22 88f93b72-63d2-417b-8f0a-8f02db79989e
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2019-03-14T16:10:46Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | 32cfe889-6b67-426f-b0fe-010e8a97df6b |
| name | None |
| port_range_max | 22 |
| port_range_min | 22 |
| project_id | af6a01fcea844196a20c2d3a6b3bd70e |
| protocol | tcp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 0 |
| security_group_id | 88f93b72-63d2-417b-8f0a-8f02db79989e |
| updated_at | 2019-03-14T16:10:46Z |
+-------------------+--------------------------------------+
sandeep@controller:~$
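
The new rules can be listed afterwards to verify them; for example (not part of the original notes):

$ openstack security group rule list 3be16f2f-900d-4afd-808d-cb113b30fa9a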

Openstack installation : Create Provider / Self service Networks

Content from “openstack.org”, listed here with minor/no changes – just noting down what I did – online notes.

On the controller node, source the admin credentials to gain access to admin-only CLI commands:

$ . admin-openrc

Create the provider (external) network

$ openstack network create  --share --external --provider-physical-network provider --provider-network-type flat provider

The --share option allows all projects to use the virtual network.

The --external option defines the virtual network to be external. (Default is --internal.)

The --provider-physical-network provider and --provider-network-type flat options connect the flat virtual network to the flat (native/untagged) physical network on the enp3s0 interface on the host, using information from the following files:

ml2_conf.ini:

[ml2_type_flat]
flat_networks = provider

linuxbridge_agent.ini:

[linux_bridge]
physical_interface_mappings = provider:enp3s0

Create a subnet on the network (I wanted to use the IP range 10.0.0.65 to 10.0.0.124 of the 10.0.0.0/24 subnet for floating IPs)

$ openstack subnet create --network provider --allocation-pool start=10.0.0.65,end=10.0.0.124 --dns-nameserver 8.8.8.8 --gateway 10.0.0.1 --subnet-range 10.0.0.0/24 provider

Non-privileged users typically cannot supply additional parameters to this command. The service automatically chooses parameters using information from the following files:

ml2_conf.ini:

[ml2]
tenant_network_types = vxlan
[ml2_type_vxlan]
vni_ranges = 1:1000

Create the self-service network

$ openstack network create selfservice

Create a subnet on the network:

$ openstack subnet create --network selfservice --dns-nameserver 8.8.4.4 --gateway 172.16.1.1 --subnet-range 172.16.1.0/24 selfservice

Create a router

Self-service networks connect to provider networks using a virtual router that typically performs bidirectional NAT. Each router contains an interface on at least one self-service network and a gateway on a provider network.

The provider network must include the router:external option to enable self-service routers to use it for connectivity to external networks such as the Internet. The admin or other privileged user must include this option during network creation or add it later.
In this case, the router:external option was set by using the --external parameter when creating the provider network.

$ openstack router create router

Add the self-service network subnet as an interface on the router:

$ openstack router add subnet router selfservice

Set a gateway on the provider network on the router:

$ openstack router set router --external-gateway provider

Verify operation. List network namespaces. You should see one qrouter namespace and two qdhcp namespaces.

$ ip netns
qrouter-89dd2083-a160-4d75-ab3a-14239f01ea0b
qdhcp-7c6f9b37-76b4-463e-98d8-27e5686ed083
qdhcp-0e62efcd-8cee-46c7-b163-d8df05c3c5ad

List ports on the router to determine the gateway IP address on the provider network:

$ openstack port list --router router
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------+--------+
| ID | Name | MAC Address | Fixed IP Addresses | Status |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------+--------+
| 1bad25c9-9d67-46dd-9293-c9be75c85969 | | fa:16:3e:04:52:54 | ip_address='10.0.0.69', subnet_id='9621ec8b-31fb-4143-b24f-c4976025e900' | ACTIVE |
| c50cfa6b-ffee-452b-a3d3-45e6a692bc59 | | fa:16:3e:be:ae:4a | ip_address='172.16.1.1', subnet_id='56ec9e7a-6c13-4139-9d17-95532ade4c5b' | ACTIVE |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------+--------+

Ping this IP address from the controller node or any host on the physical provider network:

$ ping -c 4 10.0.0.69
PING 10.0.0.69 (10.0.0.69) 56(84) bytes of data.
64 bytes from 10.0.0.69: icmp_seq=1 ttl=64 time=0.583 ms
64 bytes from 10.0.0.69: icmp_seq=2 ttl=64 time=0.449 ms
64 bytes from 10.0.0.69: icmp_seq=3 ttl=64 time=0.359 ms
64 bytes from 10.0.0.69: icmp_seq=4 ttl=64 time=0.388 ms
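
As an additional quick review (not part of the original notes), the networks, subnets, and router created above can be listed:

$ openstack network list
$ openstack subnet list
$ openstack router list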

Openstack installation : Block storage – Cinder (on Compute Node)

Content from “openstack.org”, listed here with minor/no changes – just noting down what I did – online notes.

Wanted to get rid of all partition information on /dev/sdb so that I could use it for block storage.

#dd if=/dev/zero of=/dev/sdb bs=1k count=1000000
#blockdev --rereadpt /dev/sdb

Install the supporting utility packages

#apt install --assume-yes lvm2 thin-provisioning-tools

Create the LVM physical volume /dev/sdb

# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created.

Create the LVM volume group cinder-volumes (Note the volume group name)

# vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created

Only instances can access Block Storage volumes. However, the underlying operating system manages the devices associated with the volumes. By default, the LVM volume scanning tool scans the /dev directory for block storage devices that contain volumes. If projects use LVM on their volumes, the scanning tool detects these volumes and attempts to cache them, which can cause a variety of problems with both the underlying operating system and project volumes.

Reconfigure LVM to scan only the devices that contain the cinder-volumes volume group. Edit /etc/lvm/lvm.conf and, in the devices section, un-comment the filter configuration and amend it so that /dev/sdb is scanned and all other devices are rejected.

devices {
. . .
filter = [ "a/sdb/", "r/.*/"]
. . .
}
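
To confirm that LVM still sees the physical volume and the cinder-volumes volume group after tightening the filter, a quick check (not part of the original notes):

# pvs
# vgs cinder-volumes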

Install the cinder packages

#apt install --assume-yes cinder-volume

Edit the /etc/cinder/cinder.conf – Required configurations – add sections if not found

[DEFAULT]
...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
enabled_backends = lvm
glance_api_servers = http://controller:9292
transport_url = rabbit://openstack:{password}@controller
auth_strategy = keystone

[database]
connection = mysql+pymysql://cinder:{password}@controller/cinder

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = {password}

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

Finalize : Restart services

#service tgt restart
#service cinder-volume restart
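
Once the controller-side Cinder services (covered in the next post below) are in place as well, the state of the volume services can be checked from the controller using admin credentials (a verification step, not part of the original notes); the cinder-volume service on this node should be reported as up:

$ . admin-openrc
$ openstack volume service list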



Openstack installation : Block Storage – Cinder (On Controller Node)

Content from “openstack.org”, listed here with minor/no changes – just noting down what I did – online notes.

Create database, service credentials, and API endpoints.

CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '{password}';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '{password}';

Create service credentials : Create a cinder user

$ . admin-openrc
$ openstack user create --domain default --password-prompt cinder
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 9d7e33de3e1a498390353819bc7d245d |
| name | cinder |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+

Add the admin role to the cinder user:

$ openstack role add --project service --user cinder admin

Create the cinderv2 and cinderv3 service entities

$ openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2

+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | eb9fd245bdbc414695952e93f29fe3ac |
| name | cinderv2 |
| type | volumev2 |
+-------------+----------------------------------+

$ openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3

+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | ab3bbbef780845a1a283490d281e7fda |
| name | cinderv3 |
| type | volumev3 |
+-------------+----------------------------------+

Create the Block Storage service API endpoints:

$ openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s

+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 513e73819e14460fb904163f41ef3759 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+

$ openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s

+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 6436a8a23d014cfdb69c586eff146a32 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+

$ openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s

+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | e652cf84dd334f359ae9b045a2c91d96 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+

$ openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s

+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 03fa2c90153546c295bf30ca86b1344b |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ab3bbbef780845a1a283490d281e7fda |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
$ openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 94f684395d1b41068c70e4ecb11364b2 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ab3bbbef780845a1a283490d281e7fda |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+

$ openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s

+--------------+------------------------------------------+
| Field | Value |
+--------------+------------------------------------------+
| enabled | True |
| id | 4511c28a0f9840c78bacb25f10f62c98 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ab3bbbef780845a1a283490d281e7fda |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+

Install the packages:

apt install --assume-yes cinder-api cinder-scheduler

Edit the /etc/cinder/cinder.conf file and complete the following actions:

In the [database] section, configure database access:

[database]
#...
connection = mysql+pymysql://cinder:{password}@controller/cinder

In the [DEFAULT] section

  • Configure RabbitMQ message queue access
  • Configure Identity service access
  • Configure the my_ip option to use the management interface IP address of the controller node
[DEFAULT]
#...
transport_url = rabbit://openstack:{password}@controller
#...
auth_strategy = keystone
#...
my_ip = 10.0.0.15

In the [keystone_authtoken] section, configure Identity service access. Comment out or remove any other options in the [keystone_authtoken] section.

[keystone_authtoken]
#...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = {password}

In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]
#...
lock_path = /var/lib/cinder/tmp

Populate the Block Storage database. Ignore any deprecation messages in this output.

su -s /bin/sh -c "cinder-manage db sync" cinder

Edit the /etc/nova/nova.conf file and add the following to it:

[cinder]
#...
os_region_name = RegionOne

Finalize installation : Restart the Compute API service.

#service nova-api restart

Restart the Block Storage services:

#service cinder-scheduler restart
#service apache2 restart
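
As an end-to-end check (not part of the original notes), once the cinder-volume service on the storage node is also running, a small test volume can be created and listed:

$ . admin-openrc
$ openstack volume create --size 1 testvolume
$ openstack volume list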

Openstack installation : Dashboard – Horizon

Content from “openstack.org”, listed here with minor/no changes – just noting down what I did – online notes.

Install the packages

apt install --assume-yes openstack-dashboard

Edit the /etc/openstack-dashboard/local_settings.py file and complete the following actions:

Configure the dashboard to use OpenStack services on the controller node:

OPENSTACK_HOST = "controller"

In the Dashboard configuration section (Not Ubuntu configuration section), allow your hosts to access Dashboard:

ALLOWED_HOSTS = ['*']

Configure the memcached session storage service (Comment out any other session storage configuration.)

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

Enable the Identity API version 3:

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

Enable support for domains:

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

Configure API versions:

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}

Configure Default as the default domain for users that you create via the dashboard:

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

Configure user as the default role for users that you create via the dashboard:

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

If you chose networking option 1, disable support for layer-3 networking services (Not applicable in my case)

OPENSTACK_NEUTRON_NETWORK = {
    . . .
    'enable_router': False,
    'enable_quotas': False,
    'enable_ipv6': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}

Optionally, configure the time zone:

TIME_ZONE = "Asia/Kolkata"

Add the following line to /etc/apache2/conf-available/openstack-dashboard.conf if not included.

WSGIApplicationGroup %{GLOBAL}

Finalize installation : Reload the web server configuration:

service apache2 reload

Verify by accessing http://controller/horizon and logging in with Domain = default, User = admin, and the password that was configured.
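
A minimal reachability check from the command line (not part of the original notes; it assumes the controller hostname resolves as configured earlier):

curl -I http://controller/horizon
# an HTTP 200 response or a redirect to the login page indicates the dashboard is being served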


Openstack installation : Networking service – Neutron (On Compute Node)

Content from “openstack.org”, listed here with minor/no changes – just noting down what I did – online notes.

The compute node handles connectivity and security groups for instances. Install the components

apt install --assume-yes neutron-linuxbridge-agent

The Networking common component configuration includes the authentication mechanism, message queue, and plug-in.

Edit the /etc/neutron/neutron.conf file and complete the following actions:

In the [database] section, comment out any connection options because compute nodes do not directly access the database.

In [oslo_concurrency] section :

[oslo_concurrency]
...
lock_path = /var/log/neutron/tmp

In the [DEFAULT] section, configure RabbitMQ message queue access and authentication strategy

[DEFAULT]
...
transport_url = rabbit://openstack:{password}@controller
...
auth_strategy = keystone

In the [keystone_authtoken] section, configure Identity service access. Comment out or remove any other options in that section.

[keystone_authtoken]
...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = {password}

Configure the Linux bridge agent, which builds the layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.

Note : eno2 is the network interface I had planned for provider networks

Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:

In the [linux_bridge] section, map the provider virtual network to the provider physical network interface:

[linux_bridge]
...
physical_interface_mappings = provider:eno2

In the [vxlan] section, enable VXLAN overlay networks, configure the IP address of the physical network interface that handles overlay networks, and enable layer-2 population:

[vxlan]
...
enable_vxlan = true
local_ip = 10.0.0.41
l2_population = true

In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:

[securitygroup]
...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Configure the Compute service to use the Networking service

Edit the /etc/nova/nova.conf file and in the [neutron] section, configure access parameters:

[neutron]
...
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = {password}

Finalize installation. Restart the Compute service:

service nova-compute restart

Restart the Linux bridge agent:

service neutron-linuxbridge-agent restart

Verify neutron installation : List agents to verify successful launch of the neutron agents:

$ openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 3fe3723d-cdd8-4b38-906c-0faf433a5f1e | L3 agent | controller | nova | :-) | UP | neutron-l3-agent |
| 47e059c2-1d18-4b64-8af9-c6c6d36048d9 | Linux bridge agent | icompute | None | :-) | UP | neutron-linuxbridge-agent |
| 5cf3c2e9-a714-4446-9d39-e87b300c1cbf | Metadata agent | controller | None | :-) | UP | neutron-metadata-agent |
| a9222b67-0f4a-4271-acfe-e234f161f570 | Linux bridge agent | controller | None | :-) | UP | neutron-linuxbridge-agent |
| c5f4e10b-67a5-4ee0-92ae-a4463ee5b582 | Linux bridge agent | acompute | None | :-) | UP | neutron-linuxbridge-agent |
| ca5e5486-9d8e-4063-b683-adedb9cc1b9d | DHCP agent | controller | nova | :-) | UP | neutron-dhcp-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

The output should indicate four agents on the controller node and one agent on each compute node.


Openstack installation : Networking service – Neutron (On Controller Node)

Content from “openstack.org”, listed here with minor/no changes – just noting down what I did – online notes.

Create database, service credentials, and API endpoints.

CREATE DATABASE neutron;

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '{password}';

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '{password}';

Create the service credentials. Create the neutron user:

$ . admin-openrc
$ openstack user create --domain default --password-prompt neutron

User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | fdb0f541e28141719b6a43c8944bf1fb |
| name | neutron |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+

Add the admin role to the neutron user:

$ openstack role add --project service --user neutron admin

Create the neutron service entity:

$ openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | f71529314dab4a4d8eca427e701d209e |
| name | neutron |
| type | network |
+-------------+----------------------------------+

Create the Networking service API endpoints:

$ openstack endpoint create --region RegionOne network public http://controller:9696 
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 85d80a6d02fc4b7683f611d7fc1493a3 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+

$ openstack endpoint create --region RegionOne network internal http://controller:9696

+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 09753b537ac74422a68d2d791cf3714f |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+

$ openstack endpoint create --region RegionOne network admin http://controller:9696

+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 1ee14289c9374dffb5db92a5c112fc4e |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+

Install and configure the Networking components

apt install --assume-yes neutron-server neutron-plugin-ml2 neutron-linuxbridge-agent neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent

Edit the /etc/neutron/neutron.conf file and complete the following actions:

In the [database] section, configure database access:
Comment out or remove any other connection options in the [database] section.

[database]
...
connection = mysql+pymysql://neutron:{password}@controller/neutron

In the [DEFAULT] section

  • Enable the Modular Layer 2 (ML2) plug-in, router service, and overlapping IP addresses
  • Configure RabbitMQ message queue access
  • Configure Identity service access
  • Configure Networking to notify Compute of network topology changes

[DEFAULT]
...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
...
transport_url = rabbit://openstack:{password}@controller
...
auth_strategy = keystone
...

notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

In the [keystone_authtoken] section, configure Identity service access:
Comment out or remove any other options

[keystone_authtoken]
...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = {password}

In [nova] section :

[nova]
...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = {password}

[oslo_concurrency]
...
lock_path = /var/log/neutron/tmp

Configure the Modular Layer 2 (ML2) plug-in, which uses the Linux bridge mechanism to build the layer-2 (bridging and switching) virtual networking infrastructure for instances.

Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following actions:

Note : The Linux bridge agent only supports VXLAN overlay networks.

In the [ml2] section

  • Enable flat, VLAN, and VXLAN networks:
  • Enable the port security extension driver
[ml2]
...
type_drivers = flat,vlan,vxlan
...
tenant_network_types = vxlan
...
mechanism_drivers = linuxbridge,l2population
...
extension_drivers = port_security

In the [ml2_type_flat] section, configure the provider virtual network as a flat network:

[ml2_type_flat]
...
flat_networks = provider

In the [ml2_type_vxlan] section, configure the VXLAN network identifier range for self-service networks:

[ml2_type_vxlan]
...
vni_ranges = 1:1000

In the [securitygroup] section, enable ipset to increase efficiency of security group rules:

[securitygroup]
...
enable_ipset = true

Configure the Linux bridge agent, which builds the layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.

Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:

In the [linux_bridge] section, map the provider virtual network to the provider physical network interface. (enp3s0 is the interface name on my system, the second physical interface I would like to configure for provider networks.)

[linux_bridge]
...
physical_interface_mappings = provider:enp3s0

In the [vxlan] section, enable VXLAN overlay networks, configure the IP address of the physical network interface that handles overlay networks (OVERLAY_IP_ADDRESS – the IP address associated with enp2s0 on my system), and enable layer-2 population:

[vxlan]
...
enable_vxlan = true
local_ip = 10.0.0.15
l2_population = true

In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:

[securitygroup]
...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Configure the layer-3 agent, which provides routing and NAT services for self-service virtual networks. Edit the /etc/neutron/l3_agent.ini file and complete the following actions:

In the [DEFAULT] section, configure the Linux bridge interface driver and external network bridge:

[DEFAULT]
...
interface_driver = linuxbridge

Configure the DHCP agent, which provides DHCP services for virtual networks.

Edit the /etc/neutron/dhcp_agent.ini file : In the [DEFAULT] section, configure the Linux bridge interface driver, Dnsmasq DHCP driver, and enable isolated metadata so instances on provider networks can access metadata over the network:

[DEFAULT]
...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

Configure the metadata agent, which provides configuration information such as credentials to instances.

Edit the /etc/neutron/metadata_agent.ini file and complete the following actions:

In the [DEFAULT] section, configure the metadata host and shared secret:

[DEFAULT]
...
nova_metadata_host = controller
metadata_proxy_shared_secret = {secret}

Configure the Compute service to use the Networking service
Edit the /etc/nova/nova.conf file and perform the following actions:

In the [neutron] section, configure access parameters, enable the metadata proxy, and configure the secret:

[neutron]
...
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = {password}
service_metadata_proxy = true
metadata_proxy_shared_secret = {secret}

Finalize installation : Populate the database

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Restart the Compute API service:

service nova-api restart

Restart the Networking services.

service neutron-server restart
service neutron-linuxbridge-agent restart
service neutron-dhcp-agent restart
service neutron-metadata-agent restart
service neutron-l3-agent restart
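
To confirm the controller-side services came up, a couple of quick checks (not part of the original notes) are to list the loaded networking extensions and the running agents:

$ . admin-openrc
$ openstack extension list --network
$ openstack network agent list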

