# What is OpenStack Cluster Installer (OCI)

### General description

OCI (OpenStack Cluster Installer) is software that provisions an OpenStack
cluster automatically. This package installs a provisioning machine, which
consists of a DHCP server, a PXE boot server, a web server, and a
puppet-master.

Once computers in the cluster boot for the first time, a Debian live system
is served by OCI to act as a discovery image. This live system reports the
hardware features of each machine back to OCI. The computers can then be
installed with Debian from that live system and configured with a
puppet-agent that connects to the puppet-master of OCI. After Debian is
installed, the server reboots into it, and OpenStack services are then
provisioned on these machines, depending on their role in the cluster.

OCI is fully packaged in Debian, including all of the Puppet modules and so
on. After installing the OCI package and its dependencies, no other artifact
needs to be installed on your provisioning server.

### What OpenStack services can OCI install?

Currently, OCI can install:
- Swift (with optional dedicated proxy nodes)
- Keystone
- Cinder (LVM or Ceph backend)
- Glance (Swift or Ceph backend)
- Heat
- Horizon
- Nova
- Neutron
- Barbican

All of this is deployed in a highly available way, using haproxy and
corosync on the controller nodes for all services.

All services use TLS, even within the cluster.

As a general rule, what OCI does is check what types of nodes are part
of the cluster, and take decisions depending on that. For example, if there
are some Ceph OSD nodes, OCI will use Ceph as a backend for Glance and Nova.
If there are some Cinder volume nodes, OCI will use them with the LVM
backend. If there are some swiftstore nodes, but no swiftproxy nodes, the
proxies will be installed on the controllers. If there are some Ceph OSD
nodes, but no dedicated Ceph MON nodes, the controllers will act as Ceph
monitors. If there are some compute nodes, then Cinder, Nova and Neutron
will be installed on the controller nodes. Etc.

The minimum number of controller nodes is 3, though it is possible to
install the 3 controllers as VMs on a single server (of course, losing
the high availability feature if the hardware fails).

### Who initiated the project? Who are the main contributors?

OCI has been written from scratch by Thomas Goirand (zigo). The work is
fully sponsored by Infomaniak Network, which uses it in production.
Hopefully, this project will gather more contributors over time.

# How to install your puppet-master/PXE server

## Installing the package

### The package repository

The package is available either from plain Debian Sid/Buster, or from the
OpenStack stretch-rocky backports repository. If using Stretch is desired,
then the repository below must be added to the sources.list file:

```
deb http://stretch-rocky.debian.net/debian stretch-rocky-backports main
deb-src http://stretch-rocky.debian.net/debian stretch-rocky-backports main

deb http://stretch-rocky.debian.net/debian stretch-rocky-backports-nochange main
deb-src http://stretch-rocky.debian.net/debian stretch-rocky-backports-nochange main
```

The repository key is available this way:

```
apt-get update
apt-get install --allow-unauthenticated -y openstack-backports-archive-keyring
apt-get update
```

### Install the package

Simply install the package:

```
apt-get install openstack-cluster-installer
```

### Install a db server

MariaDB will do:

```
apt-get install mariadb-server dbconfig-common
```

It is possible to handle the db creation and credentials by hand, or to let
OCI handle it automatically with dbconfig-common. If APT is running in
non-interactive mode, or if during the installation the user didn't ask
for the automatic db handling by dbconfig-common, here's how to create the
database manually:

```
apt-get install openstack-pkg-tools
. /usr/share/openstack-pkg-tools/pkgos_func
PASSWORD=$(openssl rand -hex 16)
pkgos_inifile set /etc/openstack-cluster-installer/openstack-cluster-installer.conf database connection "mysql+pymysql://oci:${PASSWORD}@localhost:3306/oci"
mysql --execute 'CREATE DATABASE oci;'
mysql --execute "GRANT ALL PRIVILEGES ON oci.* TO 'oci'@'localhost' IDENTIFIED BY '${PASSWORD}';"
```

One must then make sure that the "connection" directive in
/etc/openstack-cluster-installer/openstack-cluster-installer.conf doesn't
contain spaces before or after the equal sign; the db itself is populated
in the next step below.
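
For example, a quick sanity check of the resulting line (the exact
password will differ):

```
grep connection /etc/openstack-cluster-installer/openstack-cluster-installer.conf
# expected form, with no spaces around "=":
# connection=mysql+pymysql://oci:<password>@localhost:3306/oci
```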

### Configuring OCI

Make sure the db is in sync (if it already is, you'll simply see "table exists" errors):

```
apt-get install -y php-cli
cd /usr/share/openstack-cluster-installer ; php db_sync.php
```

Then edit /etc/openstack-cluster-installer/openstack-cluster-installer.conf
and adjust it to your environment (i.e. change network values, etc.).

### Generate OCI's root CA

To handle TLS, OCI uses its own root CA. The root CA certificate is
distributed to all nodes of the cluster. To create the initial root CA,
there's a script that does it all:

```
oci-root-ca-gen
```

At this point, you should be able to browse through OCI's web interface:
```
firefox http://your-ip-address/oci/
```

However, you need a login/pass to get in. There's a shell utility to manage
your usernames. To add a new user, do this:

```
oci-userdb -a mylogin mypassword
```

Passwords are hashed using the PHP password_hash() function with the
BCRYPT algorithm.

Also, OCI is capable of using an external RADIUS server for its
authentication. However, you still need to manually add logins in the db.
The command below inserts a new user that has an entry in the RADIUS server:

```
oci-userdb -r newuser@example.com
```

Note that you also need to configure your RADIUS server address and
shared secret in openstack-cluster-installer.conf.

Note that even though there is an authentication system, it is strongly
advised not to expose OCI to the public internet. The best setup is one
where your provisioning server isn't reachable at all from the outside.

## Installing side services

### ISC-DHCPD

Configure isc-dhcp-server to match your network configuration. Note that
"next-server" must be the address of your puppet-master node (i.e. the DHCP
server that we're currently configuring).

Edit /etc/default/isc-dhcp-server:

```
sed -i 's/INTERFACESv4=.*/INTERFACESv4="eth0"/' /etc/default/isc-dhcp-server
```

Then edit /etc/dhcp/dhcpd.conf:

```
allow booting;
allow bootp;
default-lease-time 600;
max-lease-time 7200;
ddns-update-style none;
authoritative;
ignore-client-uids On;

subnet 192.168.100.0 netmask 255.255.255.0 {
        range 192.168.100.20 192.168.100.80;
        option domain-name "infomaniak.ch";
        option domain-name-servers 9.9.9.9;
        option routers 192.168.100.1;
        option subnet-mask 255.255.255.0;
        option broadcast-address 192.168.100.255;
        next-server 192.168.100.2;
        if exists user-class and option user-class = "iPXE" {
                filename "http://192.168.100.2/oci/ipxe.php";
        } else {
                filename "pxelinux.0";
        }
}
```

Carefully note that 192.168.100.2 must be the address of your OCI server,
as it will be used for serving PXE, TFTP and web content to the slave nodes.
It is of course fine to use another address if that is what your OCI server
uses, so feel free to adapt the above to your liking.

Also, for OCI to allow queries from the DHCP range, you must add your
DHCP subnets to TRUSTED_NETWORKS in openstack-cluster-installer.conf.
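
For example, a quick way to review the current value (the exact list
format is documented in the shipped configuration file; the CIDR in the
comment is just an illustration matching the DHCP subnet above):

```
grep TRUSTED_NETWORKS /etc/openstack-cluster-installer/openstack-cluster-installer.conf
# make sure the value includes your DHCP subnet, e.g. 192.168.100.0/24
```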

### tftpd

Configure tftp-hpa to serve files from OCI:

```
sed -i 's#TFTP_DIRECTORY=.*#TFTP_DIRECTORY="/var/lib/openstack-cluster-installer/tftp"#' /etc/default/tftpd-hpa
```

Then restart tftpd-hpa.
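
For example (restart isc-dhcp-server as well if you just changed its
configuration):

```
systemctl restart tftpd-hpa
systemctl restart isc-dhcp-server
```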

## Getting ready to install servers

### Configuring ssh keys

When setting up, OCI will create a public / private ssh keypair here:

```
/etc/openstack-cluster-installer/id_rsa
```

Once done, it will copy the corresponding id_rsa.pub content into:

```
/etc/openstack-cluster-installer/authorized_keys
```

and will also add all the public keys it finds under
/root/.ssh/authorized_keys to it. Later on, this file will be copied
into the OCI Debian live image, and into all new systems OCI installs.
OCI will then use the private key it generated to log into the
servers, while your keys will also be present so you can log into each
individual server using your own private key. Therefore, it is strongly
advised to customize /etc/openstack-cluster-installer/authorized_keys
*before* you build the OCI Debian live image.
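
For example, a minimal way to add your own workstation key before building
the image (the key filename is just an example):

```
cat ~/.ssh/id_ed25519.pub >>/etc/openstack-cluster-installer/authorized_keys
```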

### Build OCI's live image

```
mkdir -p /root/live-image
cd /root/live-image
openstack-cluster-installer-build-live-image --pxe-server-ip 192.168.100.2 --debian-mirror-addr http://deb.debian.org/debian --debian-security-mirror-addr http://security.debian.org/
cp -auxf /var/lib/openstack-cluster-installer/tftp/* /usr/share/openstack-cluster-installer
cd ..
rm -rf /root/live-image
```

It is possible to use package proxy servers like approx, or local mirrors,
which makes it possible to have your cluster and OCI itself completely
disconnected from the internet.
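
For example, a sketch of the same build command pointed at a local mirror
(the mirror URLs are placeholders):

```
openstack-cluster-installer-build-live-image \
    --pxe-server-ip 192.168.100.2 \
    --debian-mirror-addr http://mirror.example.com/debian \
    --debian-security-mirror-addr http://mirror.example.com/debian-security
```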

### Configure puppet's ENC

Once the puppet-master service is installed, its external node
classifier (ENC) directives must be set, so that OCI acts as the ENC
(which means OCI will define the roles and puppet classes to apply when
installing a new server with puppet):

```
. /usr/share/openstack-pkg-tools/pkgos_func
pkgos_add_directive /etc/puppet/puppet.conf master "external_nodes = /usr/bin/oci-puppet-external-node-classifier" "# Path to enc"
pkgos_inifile set /etc/puppet/puppet.conf master external_nodes /usr/bin/oci-puppet-external-node-classifier
pkgos_add_directive /etc/puppet/puppet.conf master "node_terminus = exec" "# Tell what type of ENC"
pkgos_inifile set /etc/puppet/puppet.conf master node_terminus exec
```

Then restart the puppet-master service.

### Optional: approx

To speed up package downloads, it is highly recommended to install approx
locally on your OCI provisioning server, and to use its address when
setting up servers (the address is set in
/etc/openstack-cluster-installer/openstack-cluster-installer.conf).
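
A minimal approx setup could look like this (the repository names and URLs
in /etc/approx/approx.conf are examples; approx then serves them over HTTP
on port 9999 by default):

```
apt-get install approx
cat >>/etc/approx/approx.conf <<EOF
debian    http://deb.debian.org/debian
security  http://security.debian.org/debian-security
EOF
```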

# Using OCI

## Booting-up servers

Start up a bunch of computers, booting them with PXE. If everything goes
well, they will catch OCI's DHCP and boot OCI's Debian live image. Once a
server is up, an agent runs and reports to OCI's web interface. Just refresh
OCI's web interface, and you will see the machines. You can also use the CLI
tool:

```
# apt-get install openstack-cluster-installer-cli
# ocicli machine-list
serial   ipaddr          memory  status     lastseen             cluster  hostname
2S2JGM2  192.168.100.37  4096    live       2018-09-20 09:22:31  null
2S2JGM3  192.168.100.39  4096    live       2018-09-20 09:22:50  null
```

Note that ocicli can either use a login/password set in OCI's internal db,
or the IP address of the server where ocicli runs can be white-listed in
/etc/openstack-cluster-installer/openstack-cluster-installer.conf.

## Creating Swift regions, locations, networks, roles and clusters

### Before we start

In this documentation, everything is done through the command line using
ocicli. However, absolutely everything can also be done using the web
interface. It is just easier to explain using the CLI, as this avoids
the need to show screenshots of the web interface.

### Creating Swift regions and locations

Before installing the systems on your servers, clusters must be defined.
This starts by setting up Swift regions. In a Swift cluster, there are
zones and regions. When uploading a file to Swift, it is replicated across
N zones (usually 3). If 2 regions are defined, then Swift tries to
replicate objects in both regions.

Under OCI, you must first define Swift regions. To do so, click on
"Swift region" on the web interface, or using ocicli, type:

```
# ocicli swift-region-create datacenter-1
# ocicli swift-region-create datacenter-2
```

Then create locations attached to these regions:

```
# ocicli location-create dc1-zone1 datacenter-1
# ocicli location-create dc1-zone2 datacenter-1
# ocicli location-create dc2-zone1 datacenter-2
```

Later on, when adding a Swift data node to a cluster (data nodes are
the servers that will actually store the Swift data), a location must
be selected.

Once the locations have been defined, it is time to define networks.
Networks are attached to locations as well. The Swift zones and regions
will be related to these locations and regions.

### Creating networks

```
# ocicli network-create dc1-net1 192.168.101.0 24 dc1-zone1 no
```

The above command will create a subnet 192.168.101.0/24, located at
dc1-zone1. Let's create 2 more networks:

```
# ocicli network-create dc1-net2 192.168.102.0 24 dc1-zone2 no
# ocicli network-create dc2-net1 192.168.103.0 24 dc2-zone1 no
```

Next, for the cluster to be reachable, let's create a public network
to which customers will connect:

```
# ocicli network-create pubnet1 203.0.113.0 28 public yes
```

Note that if using a /32, it will be set up on the lo interface of
your controller. The expected setup is to use BGP to route that
public IP to the controller. To do that, it is possible to customize
the ENC and add BGP peering towards your router. See the end of this
documentation for details.

### Creating a new cluster

Let's create a new cluster:

```
# ocicli cluster-create swift01 example.com
```

Now that we have a new cluster, the networks we created can be added to it:

```
# ocicli network-add dc1-net1 swift01 all eth0
# ocicli network-add dc1-net2 swift01 all eth0
# ocicli network-add dc2-net1 swift01 all eth0
# ocicli network-add pubnet1 swift01 all eth0
```

When adding the public network, one IP address will automatically be
reserved for the VIP (Virtual IP). This IP address will later be shared
by the controller nodes to perform HA (High Availability), controlled by
pacemaker / corosync. The principle is: if the controller node hosting
the VIP (assigned to its eth0) becomes unavailable (let's say the server
crashes or the network cable is unplugged), then the VIP is re-assigned
to the eth0 of another controller node of the cluster.

If selecting 2 network interfaces (for example, eth0 and eth1), then
bonding will be used. Note that your network equipment (switches, etc.)
must be configured accordingly (LACP, etc.), and that the setup of this
equipment is out of the scope of this documentation. Consult your
network equipment vendor for more information.

## Enrolling servers in a cluster

Now that we have networks assigned to the cluster, it is time to assign
servers to it. Let's say we have the below output:

```
# ocicli machine-list
serial  ipaddr          memory  status  lastseen             cluster  hostname
C1      192.168.100.20  8192    live    2018-09-19 20:31:57  null
C2      192.168.100.21  8192    live    2018-09-19 20:31:04  null
C3      192.168.100.22  8192    live    2018-09-19 20:31:14  null
C4      192.168.100.23  5120    live    2018-09-19 20:31:08  null
C5      192.168.100.24  5120    live    2018-09-19 20:31:06  null
C6      192.168.100.25  5120    live    2018-09-19 20:31:14  null
C7      192.168.100.26  4096    live    2018-09-19 20:31:18  null
C8      192.168.100.27  4096    live    2018-09-19 20:31:26  null
C9      192.168.100.28  4096    live    2018-09-19 20:30:50  null
CA      192.168.100.29  4096    live    2018-09-19 20:31:00  null
CB      192.168.100.30  4096    live    2018-09-19 20:31:07  null
CC      192.168.100.31  4096    live    2018-09-19 20:31:20  null
CD      192.168.100.32  4096    live    2018-09-19 20:31:28  null
CE      192.168.100.33  4096    live    2018-09-19 20:31:33  null
CF      192.168.100.34  4096    live    2018-09-19 20:31:40  null
D0      192.168.100.35  4096    live    2018-09-19 20:31:47  null
D1      192.168.100.37  4096    live    2018-09-21 20:31:23  null
D2      192.168.100.39  4096    live    2018-09-21 20:31:31  null
```

Then we can enroll machines in the cluster this way:

```
# ocicli machine-add C1 swift01 controller dc1-zone1
# ocicli machine-add C2 swift01 controller dc1-zone2
# ocicli machine-add C3 swift01 controller dc2-zone1
# ocicli machine-add C4 swift01 swiftproxy dc1-zone1
# ocicli machine-add C5 swift01 swiftproxy dc1-zone2
# ocicli machine-add C6 swift01 swiftproxy dc2-zone1
# ocicli machine-add C7 swift01 swiftstore dc1-zone1
# ocicli machine-add C8 swift01 swiftstore dc1-zone2
# ocicli machine-add C9 swift01 swiftstore dc2-zone1
# ocicli machine-add CA swift01 swiftstore dc1-zone1
# ocicli machine-add CB swift01 swiftstore dc1-zone2
# ocicli machine-add CC swift01 swiftstore dc2-zone1
```

As a result, there will be 1 controller, 1 Swift proxy and 2 Swift data
nodes in each zone of our cluster. IP addresses will automatically be
assigned to servers as you add them to the cluster. They aren't shown in
ocicli, but you can check them through the web interface. The result
should look like this:

```
# ocicli machine-list
serial  ipaddr          memory  status  lastseen             cluster  hostname
C1      192.168.100.20  8192    live    2018-09-19 20:31:57  7        swift01-controller-1.example.com
C2      192.168.100.21  8192    live    2018-09-19 20:31:04  7        swift01-controller-2.example.com
C3      192.168.100.22  8192    live    2018-09-19 20:31:14  7        swift01-controller-3.example.com
C4      192.168.100.23  5120    live    2018-09-19 20:31:08  7        swift01-swiftproxy-1.example.com
C5      192.168.100.24  5120    live    2018-09-19 20:31:06  7        swift01-swiftproxy-2.example.com
C6      192.168.100.25  5120    live    2018-09-19 20:31:14  7        swift01-swiftproxy-3.example.com
C7      192.168.100.26  4096    live    2018-09-19 20:31:18  7        swift01-swiftstore-1.example.com
C8      192.168.100.27  4096    live    2018-09-19 20:31:26  7        swift01-swiftstore-2.example.com
C9      192.168.100.28  4096    live    2018-09-19 20:30:50  7        swift01-swiftstore-3.example.com
CA      192.168.100.29  4096    live    2018-09-19 20:31:00  7        swift01-swiftstore-4.example.com
CB      192.168.100.30  4096    live    2018-09-19 20:31:07  7        swift01-swiftstore-5.example.com
CC      192.168.100.31  4096    live    2018-09-19 20:31:20  7        swift01-swiftstore-6.example.com
CD      192.168.100.32  4096    live    2018-09-19 20:31:28  null
CE      192.168.100.33  4096    live    2018-09-19 20:31:33  null
CF      192.168.100.34  4096    live    2018-09-19 20:31:40  null
D0      192.168.100.35  4096    live    2018-09-19 20:31:47  null
D1      192.168.100.37  4096    live    2018-09-21 20:31:23  null
D2      192.168.100.39  4096    live    2018-09-21 20:31:31  null
```

As you can see, hostnames are calculated automatically as well.

## Calculating the Swift ring

Before starting to install servers, the Swift ring must be built.
Simply issue this command:

```
# ocicli swift-calculate-ring swift01
```

Note that it may take a very long time, depending on your cluster size.
This is expected. Just be patient.

## Installing servers

There's no "install the cluster" button (yet) on the web interface or on
the CLI. Instead, servers must be installed one by one:

```
# ocicli machine-install-os C1
# ocicli machine-install-os C2
# ocicli machine-install-os C3
```

It is advised to first install the controller nodes and manually check that
they are installed correctly (for example, check that "openstack user list"
works), then the Swift store nodes, then the Swift proxy nodes. However,
nodes of the same type can be installed all at once. Also, due to the use of
a VIP and corosync/pacemaker, controller nodes *must* be installed roughly
at the same time.
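
For example, once the controllers are verified, the remaining nodes of the
example cluster above could be launched with a simple loop (the serials are
the ones used earlier):

```
for machine in C4 C5 C6 C7 C8 C9 CA CB CC; do
    ocicli machine-install-os ${machine}
done
```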

It is possible to see the last lines of a server's installation log using
the CLI as well:

```
# ocicli machine-install-log C1
```

This will show the logs of the system installation from /var/log/oci,
then once the server has rebooted, it will show the puppet logs from
/var/log/puppet-first-run.
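
For example, to poll the installation progress of a node every 10 seconds:

```
watch -n 10 ocicli machine-install-log C1
```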

## Checking your installation

Log in to a controller node. To do that, find its IP address:

```
# CONTROLLER_IP=$(ocicli machine-list | grep C1 | awk '{print $2}')
# ssh root@${CONTROLLER_IP}
```

Once logged into the controller, you'll see login credentials under
/root/oci-openrc.sh. Source it and try:

```
# . /root/oci-openrc.sh
# openstack user list
```

You can also try Swift:

```
# . /root/oci-openrc.sh
# openstack container create foo
# echo "test" >bar
# openstack object create foo bar
# rm bar
# openstack object delete foo bar
```

## Enabling Swift object encryption

Locally on the Swift store, Swift stores objects in clear form. This
means that anyone with physical access to the data center can pull a hard
drive and access objects from the /srv/node folder.
To mitigate this risk, Swift can encrypt the objects it stores.
The metadata (accounts, containers, etc.) will still be stored in clear
form, but at least the data itself is stored encrypted.

The way this is implemented in OCI is to use Barbican. This is the reason
why Barbican is provisioned by default on the controller nodes. By default,
encryption isn't activated. To activate it, you must first store the key
for object encryption in the Barbican store. It can be done this way:

```
# ENC_KEY=$(openssl rand -hex 32)
# openstack secret store --name swift-encryption-key \
  --payload-content-type=text/plain --algorithm aes \
  --bit-length 256 --mode ctr --secret-type symmetric \
  --payload ${ENC_KEY}
+---------------+--------------------------------------------------------------------------------------------+
| Field         | Value                                                                                      |
+---------------+--------------------------------------------------------------------------------------------+
| Secret href   | https://swift01-api.example.com/keymanager/v1/secrets/6ba8dd62-d752-4144-b803-b32012d707d0 |
| Name          | swift-encryption-key                                                                       |
| Created       | None                                                                                       |
| Status        | None                                                                                       |
| Content types | {'default': 'text/plain'}                                                                  |
| Algorithm     | aes                                                                                        |
| Bit length    | 256                                                                                        |
| Secret type   | symmetric                                                                                  |
| Mode          | ctr                                                                                        |
| Expiration    | None                                                                                       |
+---------------+--------------------------------------------------------------------------------------------+
```

Once that's done, the key ID (here: 6ba8dd62-d752-4144-b803-b32012d707d0)
has to be entered in OCI's web interface, in the cluster definition,
under "Swift encryption key id (blank: no encryption):". After that,
another puppet run is needed on the Swift proxy nodes:

```
root@C1-swift01-swiftproxy-1>_ ~ # OS_CACERT=/etc/ssl/certs/oci-pki-oci-ca-chain.pem puppet agent --test --debug
```

This should enable encryption. Note that the encryption key must be stored
in Barbican under the user swift and project services, so that Swift has
access to it.
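
A minimal sketch of storing the key with that scope, assuming you know the
swift service user's password (the placeholder values below are not
provided by OCI itself, and ${ENC_KEY} is the key generated above):

```
export OS_USERNAME=swift
export OS_PROJECT_NAME=services
export OS_PASSWORD='<swift-service-password>'   # placeholder
openstack secret store --name swift-encryption-key \
  --payload-content-type=text/plain --algorithm aes \
  --bit-length 256 --mode ctr --secret-type symmetric \
  --payload ${ENC_KEY}
```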

## Adding other types of nodes

OCI can handle, by default, the below types of nodes:

- cephmon: Ceph monitor
- cephosd: Ceph data machines
- compute: Nova compute and Neutron DVR nodes
- controller: The OpenStack control plane, running all API and daemons
- swiftproxy: Swift proxy servers
- swiftstore: Swift data machines
- volume: Cinder LVM nodes

It is only mandatory to install 3 controllers; everything else is
optional. There's nothing to configure: OCI will understand what the
user wants depending on which types of nodes are provisioned.
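
For example, adding a compute node to the swift01 cluster from earlier only
takes the same two commands used for the other roles (the serial and
location are taken from the example listing above):

```
ocicli machine-add D1 swift01 compute dc1-zone1
ocicli machine-install-os D1
```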

If cephosd nodes are deployed, then everything will be using Ceph:
- Nova
- Glance
- Cinder

Though even with Ceph, setting up volume nodes will add the LVM
backend capability. With or without volume nodes, if some OSD nodes
are deployed, cinder-volume with the Ceph backend will be installed on
the controller nodes.

Live migration of VMs between compute nodes is only possible when using
Ceph (i.e. if some Ceph OSD nodes are deployed).

Ceph MON nodes are optional. If they aren't deployed, the Ceph MON and
MGR daemons will be installed on the controller nodes.

# Advanced usage
## Customizing the ENC

In /etc/openstack-cluster-installer/hiera, you'll find 2 folders and an
all.yaml file. These allow you to customize the output of OCI's ENC.
For example, if you put:

```
   ntp:
      servers:
         - 0.us.pool.ntp.org iburst
```

in /etc/openstack-cluster-installer/hiera/all.yaml, then all nodes will
be configured with ntp using 0.us.pool.ntp.org to synchronize time.

If we have a swift01 cluster, then the full folder structure is as follows:

```
/etc/openstack-cluster-installer/hiera/roles/controller.yaml
/etc/openstack-cluster-installer/hiera/roles/swiftproxy.yaml
/etc/openstack-cluster-installer/hiera/roles/swiftstore.yaml
/etc/openstack-cluster-installer/hiera/nodes/-hostname-of-your-node-.yaml
/etc/openstack-cluster-installer/hiera/all.yaml
/etc/openstack-cluster-installer/hiera/clusters/swift01/roles/controller.yaml
/etc/openstack-cluster-installer/hiera/clusters/swift01/roles/swiftproxy.yaml
/etc/openstack-cluster-installer/hiera/clusters/swift01/roles/swiftstore.yaml
/etc/openstack-cluster-installer/hiera/clusters/swift01/nodes/-hostname-of-your-node-.yaml
/etc/openstack-cluster-installer/hiera/clusters/swift01/all.yaml

```

## Customizing installed servers at setup time

Sometimes, it is desirable to configure a server at setup time. For example,
it could be needed to configure routing (using BGP) for the virtual IP to be
available at setup time. OCI offers everything needed to enrich the server
configuration at install time, before the puppet agent even starts.

Say you want to configure swift01-controller-1 in your swift01 cluster, add
quagga to it, and add some configuration files. Simply create the folder,
fill content in it, and add an oci-packages-list file:

```
# mkdir -p /var/lib/oci/clusters/swift01/swift01-controller-1.infomaniak.ch/oci-in-target
# cd /var/lib/oci/clusters/swift01/swift01-controller-1.infomaniak.ch
# echo -n "quagga,tmux" >oci-packages-list
# mkdir -p oci-in-target/etc/quagga
# echo "some conf" >oci-in-target/etc/quagga/bgpd.conf
```

When OCI provisions the bare metal server, it checks whether the
oci-packages-list file exists. If it does, the listed packages are added
when installing. Then the oci-in-target content is copied into the target
system.

## Using a BGP VIP

In the same way, you can, for example, decide to have the VIP of your
controllers use BGP routing. To do that, write this in
/etc/openstack-cluster-installer/hiera/roles/controller.yaml:

```
   quagga::bgpd:
      my_asn: 64496
      router_id: 192.0.2.1
      networks4:
         - '192.0.2.0/24'
      peers:
         64497:
            addr4:
               - '192.0.2.2'
            desc: TEST Network
```

Though you may want to do this only for a specific node of a single
cluster of servers, rather than for all of them. In such a case, simply
use this filepath scheme:
/etc/openstack-cluster-installer/hiera/clusters/cloud1/nodes/cloud1-controller-1.example.com.yaml

For all controllers of the cloud1 cluster, use:
/etc/openstack-cluster-installer/hiera/clusters/cloud1/roles/controller.yaml

## Doing a test in OCI's manifests for debugging purposes

If you would like to test a change in OCI's puppet files, edit them
in /usr/share/puppet/modules/oci, then on the master run, for example:

```
# puppet master --compile swift01-controller-1.example.com
# /etc/init.d/puppet-master stop
# /etc/init.d/puppet-master start
```

then on swift01-controller-1.example.com you can run:

```
# OS_CACERT=/etc/ssl/certs/oci-pki-oci-ca-chain.pem puppet agent --test --debug
```

## Customizing files and packages in your servers

If you wish to customize the file contents of your hosts, simply write
any file in, for example:

```
/var/lib/oci/clusters/swift01/swift01-controller-1.example.com/oci-in-target
```

and it will be copied into the server you'll be installing.

In the same way, you can add additional packages to your server by adding
their names to this file:

```
/var/lib/oci/clusters/swift01/swift01-controller-1.example.com/oci-packages-list
```

Packages must be listed on a single line, separated by commas. For example:

```
quagga,bind
```

### Enabling Hiera for the environment

If you need to enable Hiera, you can do it this way:
```
# mkdir -p /etc/puppet/code/environments/production/manifests/
# echo "hiera_include('classes')" > /etc/puppet/code/environments/production/manifests/site.pp
# cat /etc/puppet/code/hiera/common.yaml
---
classes:
  - xxx
...
```

# Once deployment is ready

There are currently a few issues that need to be addressed by hand.
Hopefully, all of these will be automated in the near future. In the
meantime, please contribute fixes if you find out how, or just follow
what's below.

## Fixing-up the controllers

Unfortunately, sometimes there are some scheduling issues in the puppet
apply. If this happens, one can try to relaunch the puppet run:

```
# OS_CACERT=/etc/ssl/certs/oci-pki-oci-ca-chain.pem puppet agent --test --debug 2>&1 | tee /var/log/puppet-run-1
```

Do this on the controller-1 node first, wait until it finishes, then restart
it on the other controller nodes.
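
For example, something along these lines could be run from the OCI server
once controller-1 is done (the hostnames are the ones generated for the
example cluster above):

```
for n in 2 3; do
    ssh root@swift01-controller-${n}.example.com \
        "OS_CACERT=/etc/ssl/certs/oci-pki-oci-ca-chain.pem puppet agent --test --debug 2>&1 | tee /var/log/puppet-run-1"
done
```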

## Adding custom firewall rules

OCI uses puppet-module-puppetlabs-firewall, and flushes iptables on each
run. Therefore, if you need custom firewall rules, you also have to apply
them via puppet. If you want to apply the same firewall rules on all nodes,
simply edit /etc/puppet/code/environments/production/manifests/site.pp
like this:

```
hiera_include('classes')

firewall { '000 allow monitoring network':
  proto       => tcp,
  action      => accept,
  source      => "10.3.50.0/24",
}
```

Note that the firewall rule is prefixed with a number. This is mandatory.
Also, make sure that this number doesn't conflict with an already
existing rule.

What OCI does is: protect the controller's VIP (deny access to it from
the outside), and protect the swiftstore ports for the account, container
and object servers from any query not coming from within the cluster. So
the above will allow a monitoring server from 10.3.50.0/24 to monitor your
swiftstore nodes.

## Setting-up redis cluster

Currently, this is not yet automated:

```
# redis-cli -h 192.168.101.2 --cluster create 192.168.101.2:6379 192.168.101.3:6379 192.168.101.4:6379
```

## Enabling cloudkitty rating

First, enable the hashmap module:

```
cloudkitty module enable hashmap
cloudkitty module set priority hashmap 100
```

Note that a 503 error may simply be ignored; it still works, as "module
list" shows. Now, let's add rating for instances:

```
cloudkitty hashmap group create instance_uptime_flavor
cloudkitty hashmap service create compute
cloudkitty hashmap field create 96a34245-83ae-406b-9621-c4dcd627fb8e flavor
```

The above ID is the one returned by the hashmap service create command.
Then we reuse the ID returned by the field create command for the
--field-id parameter, and the group ID for the -g parameter below:
```
cloudkitty hashmap mapping create --field-id ce85c041-00a9-4a6a-a25d-9ebf028692b6 --value demo-flavor -t flat -g 2a986ce8-60a3-4f09-911e-c9989d875187 0.03
```

## Adding compute nodes

To add the compute node to the cluster and check it's there, on the controller, do:

```
# . oci-openrc
# su nova -s /bin/sh -c "nova-manage cell_v2 discover_hosts"
# openstack hypervisor list
+----+-------------------------------+-----------------+---------------+-------+
| ID | Hypervisor Hostname           | Hypervisor Type | Host IP       | State |
+----+-------------------------------+-----------------+---------------+-------+
|  4 | swift01-compute-1.example.com | QEMU            | 192.168.103.7 | up    |
+----+-------------------------------+-----------------+---------------+-------+
```

There's nothing more to it... :)

## Installing a first OpenStack image

```
wget http://cdimage.debian.org/cdimage/openstack/current-9/debian-9-openstack-amd64.qcow2
openstack image create \
	--container-format bare --disk-format qcow2 \
	--file debian-9-openstack-amd64.qcow2 \
	debian-9-openstack-amd64
```

## Setting-up networking

There are many ways to handle networking in OpenStack. This documentation
only quickly covers one of them, and it is out of the scope of this doc to
explain all of OpenStack networking. However, the reader must know that OCI
sets up compute nodes using DVR (Distributed Virtual Routers), which means
a Neutron router is installed on every compute node. Also, OpenVSwitch is
used, with VXLAN between the compute nodes. Anyway, here's one way to set
up networking. Something like this may do it:

```
# Create external network
openstack network create --external --provider-physical-network external --provider-network-type flat ext-net
openstack subnet create --network ext-net --allocation-pool start=192.168.105.100,end=192.168.105.199 --dns-nameserver 84.16.67.69 --gateway 192.168.105.1 --subnet-range 192.168.105.0/24 --no-dhcp ext-subnet

# Create internal network
openstack network create --share demo-net
openstack subnet create --network demo-net --subnet-range 192.168.200.0/24 --dns-nameserver 84.16.67.69 demo-subnet

# Create router, add it to demo-subnet and set it as gateway
openstack router create demo-router
openstack router add subnet demo-router demo-subnet
openstack router set demo-router --external-gateway ext-net

# Create a few floating IPs
openstack floating ip create ext-net
openstack floating ip create ext-net
openstack floating ip create ext-net
openstack floating ip create ext-net
openstack floating ip create ext-net

# Add rules to the admin's security group to allow ping and ssh
SECURITY_GROUP=$(openstack security group list --project admin --format=csv | q -d , -H 'SELECT ID FROM -')
openstack security group rule create --ingress --protocol tcp --dst-port 22 ${SECURITY_GROUP}
openstack security group rule create --protocol icmp --ingress ${SECURITY_GROUP}
```

## Adding an ssh key

```
openstack keypair create --public-key ~/.ssh/id_rsa.pub demo-keypair
```

## Creating flavor

```
openstack flavor create --ram 2048 --disk 5 --vcpus 1 demo-flavor
openstack flavor create --ram 6144 --disk 20 --vcpus 2 cpu2-ram6-disk20
openstack flavor create --ram 12288 --disk 40 --vcpus 4 cpu4-ram12-disk40
```

## Boot a VM

```
#!/bin/sh

set -e
set -x

NETWORK_ID=$(openstack network list --name demo-net -c ID -f value)
IMAGE_ID=$(openstack image list --name debian-9-openstack-amd64 -c ID -f value)
FLAVOR_ID=$(openstack flavor show demo-flavor -c id -f value)

openstack server create --image ${IMAGE_ID} --flavor ${FLAVOR_ID} \
	--key-name demo-keypair --nic net-id=${NETWORK_ID} --availability-zone nova:swift01-compute-1.infomaniak.ch demo-server
```