Diffstat (limited to 'playbooks/openstack')
-rw-r--r--  playbooks/openstack/README.md                                53
-rw-r--r--  playbooks/openstack/advanced-configuration.md               202
-rw-r--r--  playbooks/openstack/openshift-cluster/provision.yml           3
-rw-r--r--  playbooks/openstack/sample-inventory/group_vars/OSEv3.yml     6
-rw-r--r--  playbooks/openstack/sample-inventory/group_vars/all.yml       9
-rwxr-xr-x  playbooks/openstack/sample-inventory/inventory.py            11
6 files changed, 56 insertions, 228 deletions
diff --git a/playbooks/openstack/README.md b/playbooks/openstack/README.md
index f3fe13530..f567242cd 100644
--- a/playbooks/openstack/README.md
+++ b/playbooks/openstack/README.md
@@ -6,7 +6,7 @@ etc.). The result is an environment ready for OpenShift installation
via [openshift-ansible].
We provide everything necessary to be able to install OpenShift on
-OpenStack (including the DNS and load balancer servers when
+OpenStack (including the load balancer servers when
necessary). In addition, we work on providing integration with the
OpenStack-native services (storage, lbaas, baremetal as a service,
dns, etc.).
@@ -24,7 +24,7 @@ The OpenStack release must be Newton (for Red Hat OpenStack this is
version 10) or newer. It must also satisfy these requirements:
* Heat (Orchestration) must be available
-* The deployment image (CentOS 7 or RHEL 7) must be loaded
+* The deployment image (CentOS 7.4 or RHEL 7) must be loaded
* The deployment flavor must be available to your user
- `m1.medium` / 4GB RAM + 40GB disk should be enough for testing
- look at
@@ -38,18 +38,6 @@ Optional:
* External Neutron network with a floating IP address pool
-## DNS Requirements
-
-OpenShift requires DNS to operate properly. OpenStack supports DNS-as-a-service
-in the form of the Designate project, but the playbooks here don't support it
-yet. Until we do, you will need to provide a DNS solution yourself (or in case
-you are not running Designate when we do).
-
-If your server supports nsupdate, we will use it to add the necessary records.
-
-TODO(shadower): describe how to build a sample DNS server and how to configure
-our playbooks for nsupdate.
-
## Installation
@@ -57,14 +45,13 @@ There are four main parts to the installation:
1. [Preparing Ansible and dependencies](#1-preparing-ansible-and-dependencies)
2. [Configuring the desired OpenStack environment and OpenShift cluster](#2-configuring-the-openstack-environment-and-openshift-cluster)
-3. [Creating the OpenStack resources (VMs, networking, etc.)](#3-creating-the-openstack-resources-vms-networking-etc)
-4. [Installing OpenShift](#4-installing-openshift)
+3. [Creating the OpenStack Resources and Installing OpenShift](#3-creating-the-openstack-resources-and-installing-openshift)
This guide is going to install [OpenShift Origin][origin]
with [CentOS 7][centos7] images with minimal customisation.
-We will create the VMs for running OpenShift, in a new Neutron
-network, assign Floating IP addresses and configure DNS.
+We will create the VMs for running OpenShift in a new Neutron network and
+assign Floating IP addresses.
The OpenShift cluster will have a single Master node that will run
`etcd`, a single Infra node and two App nodes.
@@ -156,14 +143,6 @@ $ vi inventory/group_vars/all.yml
4. Set the `openshift_openstack_default_flavor` to the flavor you want your
OpenShift VMs to use.
- See `openstack flavor list` for the list of available flavors.
-5. Set the `openshift_openstack_dns_nameservers` to the list of the IP addresses
- of the DNS servers used for the **private** address resolution.
-
-**NOTE ON DNS**: at minimum, the OpenShift nodes need to be able to access each
-other by their hostname. OpenStack doesn't provide this by default, so you
-need to provide a DNS server. Put the address of that DNS server in
-`openshift_openstack_dns_nameservers` variable.
-
@@ -191,7 +170,7 @@ the [Sample OpenShift Inventory][sample-openshift-inventory] and
the [advanced configuration][advanced-configuration].
-### 3. Creating the OpenStack resources (VMs, networking, etc.)
+### 3. Creating the OpenStack Resources and Installing OpenShift
We provide an `ansible.cfg` file which has some useful defaults -- you should
copy it to the directory you're going to run `ansible-playbook` from.
@@ -200,13 +179,18 @@ copy it to the directory you're going to run `ansible-playbook` from.
$ cp openshift-ansible/ansible.cfg ansible.cfg
```
-Then run the provisioning playbook -- this will create the OpenStack
+Then run the provision + install playbook -- this will create the OpenStack
resources:
```bash
-$ ansible-playbook --user openshift -i inventory openshift-ansible/playbooks/openstack/openshift-cluster/provision.yaml
+$ ansible-playbook --user openshift -i inventory \
+ openshift-ansible/playbooks/openstack/openshift-cluster/provision_install.yaml \
+ -e openshift_repos_enable_testing=true
```
+Note: the testing repo is intended for development purposes only. Normally,
+`openshift_repos_enable_testing` should not be specified.
+
If you're using multiple inventories, make sure you pass the path to
the right one to `-i`.
@@ -214,15 +198,6 @@ If your SSH private key is not in `~/.ssh/id_rsa` use the `--private-key`
option to specify the correct path.
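
For example, if you keep a separate inventory and SSH key per cluster, the
invocation might look like this (an illustrative sketch -- the inventory and
key paths are placeholders, not files shipped with this repository):

```bash
$ ansible-playbook --user openshift \
    -i ~/clusters/dev/inventory \
    --private-key ~/.ssh/openshift_dev_rsa \
    openshift-ansible/playbooks/openstack/openshift-cluster/provision_install.yaml
```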
-### 4. Installing OpenShift
-
-Run the `byo/config.yml` playbook on top of the OpenStack nodes we have
-prepared.
-
-```bash
-$ ansible-playbook -i inventory openshift-ansible/playbooks/byo/config.yml
-```
-
### Next Steps
@@ -240,7 +215,6 @@ advanced configuration:
* [External Dns][external-dns]
* Multiple Clusters (TODO)
* [Cinder Registry][cinder-registry]
-* [Bastion Node][bastion]
[ansible]: https://www.ansible.com/
@@ -259,4 +233,3 @@ advanced configuration:
[loadbalancer]: ./advanced-configuration.md#multi-master-configuration
[external-dns]: ./advanced-configuration.md#dns-configuration-variables
[cinder-registry]: ./advanced-configuration.md#creating-and-using-a-cinder-volume-for-the-openshift-registry
-[bastion]: ./advanced-configuration.md#configure-static-inventory-and-access-via-a-bastion-node
diff --git a/playbooks/openstack/advanced-configuration.md b/playbooks/openstack/advanced-configuration.md
index 90cc20b98..db2a13d38 100644
--- a/playbooks/openstack/advanced-configuration.md
+++ b/playbooks/openstack/advanced-configuration.md
@@ -23,68 +23,45 @@ There are no additional dependencies for the cluster nodes. Required
configuration steps are done by Heat given a specific user data config
that normally should not be changed.
-## Required galaxy modules
-
-In order to pull in external dependencies for DNS configuration steps,
-the following commads need to be executed:
-
- ansible-galaxy install \
- -r openshift-ansible-contrib/playbooks/provisioning/openstack/galaxy-requirements.yaml \
- -p openshift-ansible-contrib/roles
-
-Alternatively you can install directly from github:
-
- ansible-galaxy install git+https://github.com/redhat-cop/infra-ansible,master \
- -p openshift-ansible-contrib/roles
-
-Notes:
-* This assumes we're in the directory that contains the clonned
-openshift-ansible-contrib repo in its root path.
-* When trying to install a different version, the previous one must be removed first
-(`infra-ansible` directory from [roles](https://github.com/openshift/openshift-ansible-contrib/tree/master/roles)).
-Otherwise, even if there are differences between the two versions, installation of the newer version is skipped.
+## Accessing the OpenShift Cluster
+### Configure DNS
-## Accessing the OpenShift Cluster
+OpenShift requires two public DNS records to function fully. The first one points to
+the master/load balancer and provides the UI/API access. The other one is a
+wildcard domain that resolves app route requests to the infra node. A private DNS
+server and records are not required and not managed here.
-### Use the Cluster DNS
+If you followed the default installation from the README section, there is no
+DNS configured. You should add two entries to the `/etc/hosts` file on the
+Ansible host (where you run Ansible) to do a quick validation. A real
+deployment will, however, require a DNS server with the following entries set.
-In addition to the OpenShift nodes, we created a DNS server with all
-the necessary entries. We will configure your *Ansible host* to use
-this new DNS and talk to the deployed OpenShift.
+First, run the `openstack server list` command and note the floating IP
+addresses of the *master* and *infra* nodes (we will use `10.40.128.130` for
+master and `10.40.128.134` for infra here).
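
For example (an illustrative sketch -- the exact server names depend on your
cluster id and DNS domain):

```bash
$ openstack server list -c Name -c Networks | grep -E 'master|infra'
```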
-First, get the DNS IP address:
+Then add the following entries to your `/etc/hosts`:
-```bash
-$ openstack server show dns-0.openshift.example.com --format value --column addresses
-openshift-ansible-openshift.example.com-net=192.168.99.11, 10.40.128.129
+```
+10.40.128.130 console.openshift.example.com
+10.40.128.134 cakephp-mysql-example-test.apps.openshift.example.com
```
-Note the floating IP address (it's `10.40.128.129` in this case) -- if
-you're not sure, try pinging them both -- it's the one that responds
-to pings.
+This points the cluster domain (as defined in the
+`openshift_master_cluster_public_hostname` Ansible variable in `OSEv3`) to the
+master node and any routes for deployed apps to the infra node.
-Next, edit your `/etc/resolv.conf` as root and put `nameserver DNS_IP` as your
-**first entry**.
+If you deploy another app, it will end up with a different URL (e.g.
+myapp-test.apps.openshift.example.com) and you will need to add that too. This
+is why a real deployment should always run a DNS server where the second
+entry is a wildcard record `*.apps.openshift.example.com`.
-If your `/etc/resolv.conf` currently looks like this:
+This will be sufficient to validate the cluster here.
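
To quickly confirm those entries work from the Ansible host, you can run
something like the following (an illustrative sketch; it assumes the example
hostnames above and uses the master API's `/healthz` endpoint on the default
port 8443 as a simple liveness check):

```bash
# Both names should resolve to the floating IPs added to /etc/hosts
getent hosts console.openshift.example.com
getent hosts cakephp-mysql-example-test.apps.openshift.example.com

# The master API should answer on port 8443 (-k skips certificate validation)
curl -k https://console.openshift.example.com:8443/healthz
```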
-```
-; generated by /usr/sbin/dhclient-script
-search openstacklocal
-nameserver 192.168.0.3
-nameserver 192.168.0.2
-```
+Take a look at the [External DNS](#dns-configuration-variables) section for
+configuring a DNS service.
-Change it to this:
-
-```
-; generated by /usr/sbin/dhclient-script
-search openstacklocal
-nameserver 10.40.128.129
-nameserver 192.168.0.3
-nameserver 192.168.0.2
-```
### Get the `oc` Client
@@ -189,8 +166,8 @@ That sudomain can be set as well by the `openshift_openstack_app_subdomain` vari
the inventory.
The `openstack_<role name>_hostname` is a set of variables used for customising
-hostnames of servers with a given role. When such a variable stays commented,
-default hostname (usually the role name) is used.
+public names of Nova servers provisioned with a given role. When such a
+variable is left commented out, the default value (usually the role name) is
+used.
The `openshift_openstack_dns_nameservers` is a list of DNS servers accessible from all
the created Nova servers. These will provide the internal name resolution for
@@ -205,7 +182,7 @@ When Network Manager is enabled for provisioned cluster nodes, which is
normally the case, you should not change the defaults and always deploy dnsmasq.
`openshift_openstack_external_nsupdate_keys` describes an external authoritative DNS server(s)
-processing dynamic records updates in the public and private cluster views:
+processing dynamic record updates in the public-only cluster view:
openshift_openstack_external_nsupdate_keys:
public:
@@ -213,10 +190,6 @@ processing dynamic records updates in the public and private cluster views:
key_algorithm: 'hmac-md5'
key_name: 'update-key'
server: <public DNS server IP>
- private:
- key_secret: <some nsupdate key 2>
- key_algorithm: 'hmac-sha256'
- server: <public or private DNS server IP>
Here, for the public view section, we specified another key algorithm and
optional `key_name`, which normally defaults to the cluster's DNS domain.
@@ -224,24 +197,6 @@ This just illustrates a compatibility mode with a DNS service deployed
by OpenShift on OSP10 reference architecture, and used in a mixed mode with
another external DNS server.
-Another example defines an external DNS server for the public view
-additionally to the in-stack DNS server used for the private view only:
-
- openshift_openstack_external_nsupdate_keys:
- public:
- key_secret: <some nsupdate key>
- key_algorithm: 'hmac-sha256'
- server: <public DNS server IP>
-
-Here, updates matching the public view will be hitting the given public
-server IP. While updates matching the private view will be sent to the
-auto evaluated in-stack DNS server's **public** IP.
-
-Note, for the in-stack DNS server, private view updates may be sent only
-via the public IP of the server. You can not send updates via the private
-IP yet. This forces the in-stack private server to have a floating IP.
-See also the [security notes](#security-notes)
-
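
As a quick way to check that records pushed via nsupdate are actually visible,
you can query the external server directly (a sketch -- replace the placeholder
with the `server` IP from the key configuration above and use your real cluster
domain):

```bash
# Ask the public DNS server for the console record and an app route record
dig +short console.openshift.example.com @<public DNS server IP>
dig +short myapp-test.apps.openshift.example.com @<public DNS server IP>
```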
## Flannel networking
In order to configure the
@@ -330,14 +285,6 @@ The `openshift_openstack_required_packages` variable also provides a list of the
prerequisite packages to be installed before deploying an OpenShift cluster.
Those are ignored, though, if `manage_packages: False` is set.
-The `openstack_inventory` controls either a static inventory will be created after the
-cluster nodes provisioned on OpenStack cloud. Note, the fully dynamic inventory
-is yet to be supported, so the static inventory will be created anyway.
-
-The `openstack_inventory_path` points the directory to host the generated static inventory.
-It should point to the copied example inventory directory, otherwise ti creates
-a new one for you.
-
## Multi-master configuration
Please refer to the official documentation for the
@@ -347,7 +294,6 @@ variables](https://docs.openshift.com/container-platform/3.6/install_config/inst
in `inventory/group_vars/OSEv3.yml`. For example, given a load balancer node
under the ansible group named `ext_lb`:
- openshift_master_cluster_method: native
openshift_master_cluster_hostname: "{{ groups.ext_lb.0 }}"
openshift_master_cluster_public_hostname: "{{ groups.ext_lb.0 }}"
@@ -386,18 +332,6 @@ be the case for development environments. When turned off, the servers will
be provisioned omitting the ``yum update`` command. This brings security
implications though, and is not recommended for production deployments.
-### DNS servers security options
-
-Aside from `openshift_openstack_node_ingress_cidr` restricting public access to in-stack DNS
-servers, there are following (bind/named specific) DNS security
-options available:
-
- named_public_recursion: 'no'
- named_private_recursion: 'yes'
-
-External DNS servers, which is not included in the 'dns' hosts group,
-are not managed. It is up to you to configure such ones.
-
## Configure the OpenShift parameters
Finally, you need to update the DNS entry in
@@ -540,43 +474,6 @@ You can also run the registry setup playbook directly:
-## Configure static inventory and access via a bastion node
-
-Example inventory variables:
-
- openshift_openstack_use_bastion: true
- openshift_openstack_bastion_ingress_cidr: "{{openshift_openstack_subnet_prefix}}.0/24"
- openstack_private_ssh_key: ~/.ssh/id_rsa
- openstack_inventory: static
- openstack_inventory_path: ../../../../inventory
- openstack_ssh_config_path: /tmp/ssh.config.openshift.ansible.openshift.example.com
-
-The `openshift_openstack_subnet_prefix` is the openstack private network for your cluster.
-And the `openshift_openstack_bastion_ingress_cidr` defines accepted range for SSH connections to nodes
-additionally to the `openshift_openstack_ssh_ingress_cidr`` (see the security notes above).
-
-The SSH config will be stored on the ansible control node by the
-gitven path. Ansible uses it automatically. To access the cluster nodes with
-that ssh config, use the `-F` prefix, f.e.:
-
- ssh -F /tmp/ssh.config.openshift.ansible.openshift.example.com master-0.openshift.example.com echo OK
-
-Note, relative paths will not work for the `openstack_ssh_config_path`, but it
-works for the `openstack_private_ssh_key` and `openstack_inventory_path`. In this
-guide, the latter points to the current directory, where you run ansible commands
-from.
-
-To verify nodes connectivity, use the command:
-
- ansible -v -i inventory/hosts -m ping all
-
-If something is broken, double-check the inventory variables, paths and the
-generated `<openstack_inventory_path>/hosts` and `openstack_ssh_config_path` files.
-
-The `inventory: dynamic` can be used instead to access cluster nodes directly via
-floating IPs. In this mode you can not use a bastion node and should specify
-the dynamic inventory file in your ansible commands , like `-i openstack.py`.
-
## Using Docker on the Ansible host
If you don't want to worry about the dependencies, you can use the
@@ -606,28 +503,6 @@ the playbooks:
ansible-playbook openshift-ansible-contrib/playbooks/provisioning/openstack/provision.yaml
-### Run the playbook
-
-Assuming your OpenStack (Keystone) credentials are in the `keystonerc`
-this is how you stat the provisioning process from your ansible control node:
-
- . keystonerc
- ansible-playbook openshift-ansible-contrib/playbooks/provisioning/openstack/provision.yaml
-
-Note, here you start with an empty inventory. The static inventory will be populated
-with data so you can omit providing additional arguments for future ansible commands.
-
-If bastion enabled, the generates SSH config must be applied for ansible.
-Otherwise, it is auto included by the previous step. In order to execute it
-as a separate playbook, use the following command:
-
- ansible-playbook openshift-ansible-contrib/playbooks/provisioning/openstack/post-provision-openstack.yml
-
-The first infra node then becomes a bastion node as well and proxies access
-for future ansible commands. The post-provision step also configures Satellite,
-if requested, and DNS server, and ensures other OpenShift requirements to be met.
-
-
## Running Custom Post-Provision Actions
A custom playbook can be run like this:
@@ -735,21 +610,6 @@ Once it succeeds, you can install openshift by running:
OpenShift UI may be accessed via the 1st master node FQDN, port 8443.
-When using a bastion, you may want to make an SSH tunnel from your control node
-to access UI on the `https://localhost:8443`, with this inventory variable:
-
- openshift_openstack_ui_ssh_tunnel: True
-
-Note, this requires sudo rights on the ansible control node and an absolute path
-for the `openstack_private_ssh_key`. You should also update the control node's
-`/etc/hosts`:
-
- 127.0.0.1 master-0.openshift.example.com
-
-In order to access UI, the ssh-tunnel service will be created and started on the
-control node. Make sure to remove these changes and the service manually, when not
-needed anymore.
-
## Scale Deployment up/down
### Scaling up
@@ -768,5 +628,3 @@ Usage:
```
ansible-playbook -i <path to inventory> openshift-ansible-contrib/playbooks/provisioning/openstack/scale-up.yaml [-e increment_by=<number>] [-e openshift_ansible_dir=<path to openshift-ansible>]
```
-
-Note: This playbook works only without a bastion node (`openshift_openstack_use_bastion: False`).
diff --git a/playbooks/openstack/openshift-cluster/provision.yml b/playbooks/openstack/openshift-cluster/provision.yml
index 17357a8db..3e295b2c8 100644
--- a/playbooks/openstack/openshift-cluster/provision.yml
+++ b/playbooks/openstack/openshift-cluster/provision.yml
@@ -30,9 +30,6 @@
include: ../../init/facts.yml
-# NOTE(shadower): the (internal) DNS must be functional at this point!!
-# That will have happened in provision.yml if nsupdate was configured.
-
# TODO(shadower): consider splitting this up so people can stop here
# and configure their DNS if they have to.
- name: Populate the DNS entries
diff --git a/playbooks/openstack/sample-inventory/group_vars/OSEv3.yml b/playbooks/openstack/sample-inventory/group_vars/OSEv3.yml
index 1e55adb9e..933117127 100644
--- a/playbooks/openstack/sample-inventory/group_vars/OSEv3.yml
+++ b/playbooks/openstack/sample-inventory/group_vars/OSEv3.yml
@@ -1,12 +1,12 @@
---
+## OpenShift product versions and repos to install from
openshift_deployment_type: origin
+#openshift_repos_enable_testing: true
#openshift_deployment_type: openshift-enterprise
#openshift_release: v3.5
openshift_master_default_subdomain: "apps.{{ openshift_openstack_clusterid }}.{{ openshift_openstack_public_dns_domain }}"
-openshift_master_cluster_method: native
-openshift_master_cluster_hostname: "console.{{ openshift_openstack_clusterid }}.{{ openshift_openstack_public_dns_domain }}"
-openshift_master_cluster_public_hostname: "{{ openshift_master_cluster_hostname }}"
+openshift_master_cluster_public_hostname: "console.{{ openshift_openstack_clusterid }}.{{ openshift_openstack_public_dns_domain }}"
osm_default_node_selector: 'region=primary'
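
Since the public console hostname is now composed from
`openshift_openstack_clusterid` and `openshift_openstack_public_dns_domain`, it
can be useful to inspect the rendered value. A possible sketch (it assumes the
sample inventory lives in `./inventory`, your OpenStack credentials are loaded
so the dynamic inventory can enumerate the servers, and the cluster has already
been provisioned so the `masters` group is populated):

```bash
$ ansible -i inventory masters -m debug \
    -a "var=openshift_master_cluster_public_hostname"
```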
diff --git a/playbooks/openstack/sample-inventory/group_vars/all.yml b/playbooks/openstack/sample-inventory/group_vars/all.yml
index 1019a6b3e..c7afe9a24 100644
--- a/playbooks/openstack/sample-inventory/group_vars/all.yml
+++ b/playbooks/openstack/sample-inventory/group_vars/all.yml
@@ -10,7 +10,6 @@ openshift_openstack_dns_nameservers: []
#openshift_openstack_node_hostname: "app-node"
#openshift_openstack_lb_hostname: "lb"
#openshift_openstack_etcd_hostname: "etcd"
-#openshift_openstack_dns_hostname: "dns"
openshift_openstack_keypair_name: "openshift"
openshift_openstack_external_network_name: "public"
@@ -34,7 +33,6 @@ openshift_openstack_external_network_name: "public"
#openshift_openstack_node_image_name: "centos7"
#openshift_openstack_lb_image_name: "centos7"
#openshift_openstack_etcd_image_name: "centos7"
-#openshift_openstack_dns_image_name: "centos7"
openshift_openstack_default_image_name: "centos7"
openshift_openstack_num_masters: 1
@@ -49,7 +47,6 @@ openshift_openstack_num_nodes: 2
#openshift_openstack_node_flavor: "m1.medium"
#openshift_openstack_lb_flavor: "m1.medium"
#openshift_openstack_etcd_flavor: "m1.medium"
-#openshift_openstack_dns_flavor: "m1.medium"
openshift_openstack_default_flavor: "m1.medium"
# # Numerical index of nodes to remove
@@ -62,7 +59,6 @@ openshift_openstack_default_flavor: "m1.medium"
#openshift_openstack_docker_infra_volume_size: "15"
#openshift_openstack_docker_node_volume_size: "15"
#openshift_openstack_docker_etcd_volume_size: "2"
-#openshift_openstack_docker_dns_volume_size: "1"
#openshift_openstack_docker_lb_volume_size: "5"
openshift_openstack_docker_volume_size: "15"
@@ -93,7 +89,6 @@ openshift_openstack_subnet_prefix: "192.168.99"
# # Roll-your-own DNS
-#openshift_openstack_num_dns: 0
#openshift_openstack_external_nsupdate_keys:
# public:
# key_secret: 'SKqKNdpfk7llKxZ57bbxUnUDobaaJp9t8CjXLJPl+fRI5mPcSBuxTAyvJPa6Y9R7vUg9DwCy/6WTpgLNqnV4Hg=='
@@ -104,10 +99,6 @@ openshift_openstack_subnet_prefix: "192.168.99"
# key_algorithm: 'hmac-md5'
# server: '192.168.1.2'
-# # Customize DNS server security options
-#named_public_recursion: 'no'
-#named_private_recursion: 'yes'
-
# NOTE(shadower): Do not change this value. The Ansible user is currently
# hardcoded to `openshift`.
diff --git a/playbooks/openstack/sample-inventory/inventory.py b/playbooks/openstack/sample-inventory/inventory.py
index 47c56d94d..ad3fd936b 100755
--- a/playbooks/openstack/sample-inventory/inventory.py
+++ b/playbooks/openstack/sample-inventory/inventory.py
@@ -79,10 +79,19 @@ def build_inventory():
public_v4 = server.public_v4 or server.private_v4
if public_v4:
- hostvars['public_v4'] = public_v4
+ hostvars['public_v4'] = server.public_v4
+ hostvars['openshift_public_ip'] = server.public_v4
# TODO(shadower): what about multiple networks?
if server.private_v4:
hostvars['private_v4'] = server.private_v4
+ # NOTE(shadower): Yes, we set both hostname and IP to the private
+ # IP address for each node. OpenStack doesn't resolve nodes by
+ # name at all, so using a hostname here would require an internal
+ # DNS which would complicate the setup and potentially introduce
+ # performance issues.
+ hostvars['openshift_ip'] = server.private_v4
+ hostvars['openshift_hostname'] = server.private_v4
+ hostvars['openshift_public_hostname'] = server.name
node_labels = server.metadata.get('node_labels')
if node_labels: