authorTim Bielawa <tbielawa@redhat.com>2017-04-28 13:44:54 -0400
committerTim Bielawa <tbielawa@redhat.com>2017-06-14 15:17:01 -0400
commite1a91973650a26859d1d02449ac35b1946746392 (patch)
tree234d847f3374b349e531e0d0e0c9ea95e35e5128 /roles/openshift_cfme
parentcf81c53e8b747603ba6599f8c9fbdf50feff4c88 (diff)
First POC of a CFME turnkey solution in openshift-ansible
Diffstat (limited to 'roles/openshift_cfme')
-rw-r--r--  roles/openshift_cfme/README.md | 343
-rw-r--r--  roles/openshift_cfme/defaults/main.yml | 38
-rw-r--r--  roles/openshift_cfme/files/miq-template.yaml | 566
-rw-r--r--  roles/openshift_cfme/files/openshift_cfme.exports | 3
-rw-r--r--  roles/openshift_cfme/handlers/main.yml | 42
-rw-r--r--  roles/openshift_cfme/img/CFMEBasicDeployment.png | bin 0 -> 38316 bytes
-rw-r--r--  roles/openshift_cfme/meta/main.yml | 20
-rw-r--r--  roles/openshift_cfme/tasks/create_pvs.yml | 36
-rw-r--r--  roles/openshift_cfme/tasks/main.yml | 164
-rw-r--r--  roles/openshift_cfme/tasks/uninstall.yml | 43
-rw-r--r--  roles/openshift_cfme/templates/miq-pv-db.yaml.j2 | 13
-rw-r--r--  roles/openshift_cfme/templates/miq-pv-region.yaml.j2 | 13
-rw-r--r--  roles/openshift_cfme/templates/miq-pv-server.yaml.j2 | 13
13 files changed, 1294 insertions, 0 deletions
diff --git a/roles/openshift_cfme/README.md b/roles/openshift_cfme/README.md
new file mode 100644
index 000000000..e983e6f44
--- /dev/null
+++ b/roles/openshift_cfme/README.md
@@ -0,0 +1,343 @@
+# OpenShift-Ansible - CFME Role
+
+# PROOF OF CONCEPT - Alpha Version
+
+This role is based on the work in the upstream
+[manageiq/manageiq-pods](https://github.com/ManageIQ/manageiq-pods)
+project. For additional literature on configuration specific to
+ManageIQ (optional post-installation tasks), visit the project's
+[upstream documentation page](http://manageiq.org/docs/get-started/basic-configuration).
+
+Please submit a
+[new issue](https://github.com/openshift/openshift-ansible/issues/new)
+if you run into bugs with this role or wish to request enhancements.
+
+# Important Notes
+
+This is an early *proof of concept* role to install the Cloud Forms
+Management Engine (ManageIQ) on OpenShift Container Platform (OCP).
+
+* This role is still in **ALPHA STATUS**
+* Many options are still hard-coded (e.g., the NFS setup)
+* Not many configurable options yet
+* **Should** be run on a dedicated cluster
+* **Will not run** on undersized infra
+* The terms *CFME* and *MIQ* / *ManageIQ* are interchangeable
+
+## Requirements
+
+**NOTE:** These requirements are copied from the upstream
+[manageiq/manageiq-pods](https://github.com/ManageIQ/manageiq-pods)
+project.
+
+### Prerequisites
+
+* [OpenShift Origin 1.5](https://docs.openshift.com/container-platform/3.5/welcome/index.html)
+  or
+  [higher](https://docs.openshift.com/container-platform/latest/welcome/index.html)
+  provisioned
+* NFS or other compatible volume provider
+* A cluster-admin user (created by role if required)
+
+### Cluster Sizing
+
+In order to avoid random deployment failures due to resource
+starvation, we recommend a minimum cluster size for a **test**
+environment.
+
+| Type | Size | CPUs | Memory |
+|----------------|---------|----------|----------|
+| Masters | `1+` | `8` | `12GB` |
+| Nodes | `2+` | `4` | `8GB` |
+| PV Storage | `25GB` | `N/A` | `N/A` |
+
+
+![Basic CFME Deployment](img/CFMEBasicDeployment.png)
+
+**CFME has hard requirements for memory. CFME will NOT install if your
+ infrastructure does not meet or exceed the requirements given
+ above. Do not run this playbook without the required memory; you
+ will only waste your time.**
+
+
+### Other sizing considerations
+
+* Recommendations assume MIQ will be the **only application running**
+ on this cluster.
+* Alternatively, you can provision an infrastructure node to run
+ registry/metrics/router/logging pods.
+* Each MIQ application pod will consume at least `3GB` of RAM on initial
+ deployment (blank deployment without providers).
+* RAM consumption ramps up with appliance use; once providers are
+  added, expect resource consumption to grow.
+
+
+### Assumptions
+
+1) You meet/exceed the [cluster sizing](#cluster-sizing) requirements
+2) Your NFS server is on your master host
+3) The NFS storage volume backing your PVs is mounted on `/exports/`
+
+Required directories that NFS will export to back the PVs:
+
+* `/exports/miq-pv0[123]`
+
+If the required directories are not present at install-time, they will
+be created using the recommended permissions per the
+[upstream documentation](https://github.com/ManageIQ/manageiq-pods#make-persistent-volumes-to-host-the-miq-database-and-application-data):
+
+* UID/GID: `root`/`root`
+* Mode: `0775`
+
+**IMPORTANT:** If you are using a separate volume (`/dev/vdX`) for NFS
+ storage, **ensure** it is mounted on `/exports/` **before** running
+ this role.
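+
+If you are preparing the export directories manually, a minimal
+sketch (assuming a hypothetical `/dev/vdb` volume for NFS storage)
+looks like this:
+
+```
+root@master # mount /dev/vdb /exports
+root@master # mkdir -p /exports/miq-pv01 /exports/miq-pv02 /exports/miq-pv03
+root@master # chown root:root /exports/miq-pv0*
+root@master # chmod 0775 /exports/miq-pv0*
+```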
+
+
+
+## Role Variables
+
+Core variables in this role:
+
+| Name | Default value | Description |
+|-------------------------------|---------------|---------------|
+| `openshift_cfme_install_app` | `False` | `True`: Install everything and create a new CFME app, `False`: Just install all of the templates and scaffolding |
+
+
+Variables you may override have defaults defined in
+[defaults/main.yml](defaults/main.yml).
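+
+For example, to have the role create the CFME application in addition
+to the templates and scaffolding, you could set the variable from the
+table above in your inventory (assuming the standard `OSEv3:vars`
+group used by openshift-ansible inventories):
+
+```
+[OSEv3:vars]
+openshift_cfme_install_app=True
+```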
+
+
+# Usage
+
+This section describes the basic usage of this role. All parameters
+will use their [default values](defaults/main.yml).
+
+## Pre-flight Checks
+
+**IMPORTANT:** As documented above in [the prerequisites](#prerequisites),
+ you **must already** have your OCP cluster up and running.
+
+**Optional:** The ManageIQ pod is fairly large (about 1.7 GB), so to
+save some spin-up time post-deployment, you can begin pre-pulling the
+docker image to each of your nodes now:
+
+```
+root@node0x # docker pull docker.io/manageiq/manageiq-pods:app-latest
+```
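+
+To pre-pull the image on all of your nodes at once, an ad-hoc Ansible
+command also works (assuming the standard `nodes` group in your
+inventory):
+
+```
+$ ansible nodes -i <INVENTORY_FILE> -m command -a "docker pull docker.io/manageiq/manageiq-pods:app-latest"
+```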
+
+## Getting Started
+
+1) The entry point playbook to install CFME is located in
+[the BYO playbooks](../../playbooks/byo/openshift-cfme/config.yml)
+directory.
+
+2) Using your existing `hosts` inventory file, run `ansible-playbook`
+with the entry point playbook:
+
+```
+$ ansible-playbook -v -i <INVENTORY_FILE> playbooks/byo/openshift-cfme/config.yml
+```
+
+## Next Steps
+
+When it completes, the playbook will print a status message:
+
+
+```
+TASK [openshift_cfme : Status update] *********************************************************
+ok: [ho.st.na.me] => {
+ "msg": "CFME has been deployed. Note that there will be a delay before it is fully initialized.\n"
+}
+```
+
+Initialization will take several minutes (*possibly 10 or more*,
+depending on your network connection). In the meantime, you can get
+some insight into the deployment process while it runs.
+
+On your master node, switch to the `cfme` project (or whatever you
+named it if you overrode the `openshift_cfme_project` variable) and
+check on the pod states:
+
+```
+[root@cfme-master01 ~]# oc project cfme
+Now using project "cfme" on server "https://10.10.0.100:8443".
+
+[root@cfme-master01 ~]# oc get pod
+NAME READY STATUS RESTARTS AGE
+manageiq-0 0/1 Running 0 14m
+memcached-1-3lk7g 1/1 Running 0 14m
+postgresql-1-12slb 1/1 Running 0 14m
+```
+
+Note how the `manageiq-0` pod says `0/1` under the **READY**
+column. After some time (depending on your network connection) you'll
+be able to `rsh` into the pod to find out more about what's happening
+in real time:
+
+```
+[root@cfme-master01 ~]# oc rsh manageiq-0 bash -l
+```
+
+The `rsh` command opens a shell in your pod, in this case the pod
+called `manageiq-0`. `systemd` manages the services in this pod, so
+we can use the `list-units` command to see what is currently
+running: `# systemctl list-units | grep appliance`.
+
+If you see the `appliance-initialize` service running, this indicates
+that basic setup is still in progress. We can monitor the process with
+the `journalctl` command like so:
+
+
+```
+[root@manageiq-0 vmdb]# journalctl -f -u appliance-initialize.service
+Jun 14 14:55:52 manageiq-0 appliance-initialize.sh[58]: == Checking deployment status ==
+Jun 14 14:55:52 manageiq-0 appliance-initialize.sh[58]: No pre-existing EVM configuration found on region PV
+Jun 14 14:55:52 manageiq-0 appliance-initialize.sh[58]: == Checking for existing data on server PV ==
+Jun 14 14:55:52 manageiq-0 appliance-initialize.sh[58]: == Starting New Deployment ==
+Jun 14 14:55:52 manageiq-0 appliance-initialize.sh[58]: == Applying memcached config ==
+Jun 14 14:55:53 manageiq-0 appliance-initialize.sh[58]: == Initializing Appliance ==
+Jun 14 14:55:57 manageiq-0 appliance-initialize.sh[58]: create encryption key
+Jun 14 14:55:57 manageiq-0 appliance-initialize.sh[58]: configuring external database
+Jun 14 14:55:57 manageiq-0 appliance-initialize.sh[58]: Checking for connections to the database...
+Jun 14 14:56:09 manageiq-0 appliance-initialize.sh[58]: Create region starting
+Jun 14 14:58:15 manageiq-0 appliance-initialize.sh[58]: Create region complete
+Jun 14 14:58:15 manageiq-0 appliance-initialize.sh[58]: == Initializing PV data ==
+Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: == Initializing PV data backup ==
+Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: sending incremental file list
+Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: created directory /persistent/server-deploy/backup/backup_2017_06_14_145816
+Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/
+Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/var/
+Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/var/www/
+Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/var/www/miq/
+Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/var/www/miq/vmdb/
+Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/var/www/miq/vmdb/REGION
+Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/var/www/miq/vmdb/certs/
+Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/var/www/miq/vmdb/certs/v2_key
+Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/var/www/miq/vmdb/config/
+Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: region-data/var/www/miq/vmdb/config/database.yml
+Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: server-data/
+Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: server-data/var/
+Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: server-data/var/www/
+Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: server-data/var/www/miq/
+Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: server-data/var/www/miq/vmdb/
+Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: server-data/var/www/miq/vmdb/GUID
+Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: sent 1330 bytes received 136 bytes 2932.00 bytes/sec
+Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: total size is 770 speedup is 0.53
+Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: == Restoring PV data symlinks ==
+Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: /var/www/miq/vmdb/REGION symlink is already in place, skipping
+Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: /var/www/miq/vmdb/config/database.yml symlink is already in place, skipping
+Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: /var/www/miq/vmdb/certs/v2_key symlink is already in place, skipping
+Jun 14 14:58:16 manageiq-0 appliance-initialize.sh[58]: /var/www/miq/vmdb/log symlink is already in place, skipping
+Jun 14 14:58:28 manageiq-0 systemctl[304]: Removed symlink /etc/systemd/system/multi-user.target.wants/appliance-initialize.service.
+Jun 14 14:58:29 manageiq-0 systemd[1]: Started Initialize Appliance Database.
+```
+
+Most of what we see above is the initial database seeding
+process. This process isn't very quick, so be patient.
+
+At the bottom of the log there is a noteworthy line from `systemctl`:
+`Removed symlink
+/etc/systemd/system/multi-user.target.wants/appliance-initialize.service`. The
+`appliance-initialize` service is no longer marked as enabled,
+indicating that the base application initialization is complete.
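+
+You can confirm this from the same shell; the service should now
+report `disabled`:
+
+```
+[root@manageiq-0 vmdb]# systemctl is-enabled appliance-initialize.service
+disabled
+```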
+
+We're not done yet, though; there are other ancillary services which
+run in this pod to support the application. *Still in the rsh shell*,
+use the `ps` command to monitor for the `httpd` processes
+starting. You will see output similar to the following when that stage
+has completed:
+
+```
+[root@manageiq-0 vmdb]# ps aux | grep http
+root 1941 0.0 0.1 249820 7640 ? Ss 15:02 0:00 /usr/sbin/httpd -DFOREGROUND
+apache 1942 0.0 0.0 250752 6012 ? S 15:02 0:00 /usr/sbin/httpd -DFOREGROUND
+apache 1943 0.0 0.0 250472 5952 ? S 15:02 0:00 /usr/sbin/httpd -DFOREGROUND
+apache 1944 0.0 0.0 250472 5916 ? S 15:02 0:00 /usr/sbin/httpd -DFOREGROUND
+apache 1945 0.0 0.0 250360 5764 ? S 15:02 0:00 /usr/sbin/httpd -DFOREGROUND
+```
+
+You can also broaden your search by looking for any processes with
+`MIQ` in their name:
+
+```
+[root@manageiq-0 vmdb]# ps aux | grep miq
+root 333 27.7 4.2 555884 315916 ? Sl 14:58 3:59 MIQ Server
+root 1976 0.6 4.0 507224 303740 ? SNl 15:02 0:03 MIQ: MiqGenericWorker id: 1, queue: generic
+root 1984 0.6 4.0 507224 304312 ? SNl 15:02 0:03 MIQ: MiqGenericWorker id: 2, queue: generic
+root 1992 0.9 4.0 508252 304888 ? SNl 15:02 0:05 MIQ: MiqPriorityWorker id: 3, queue: generic
+root 2000 0.7 4.0 510308 304696 ? SNl 15:02 0:04 MIQ: MiqPriorityWorker id: 4, queue: generic
+root 2008 1.2 4.0 514000 303612 ? SNl 15:02 0:07 MIQ: MiqScheduleWorker id: 5
+root 2026 0.2 4.0 517504 303644 ? SNl 15:02 0:01 MIQ: MiqEventHandler id: 6, queue: ems
+root 2036 0.2 4.0 518532 303768 ? SNl 15:02 0:01 MIQ: MiqReportingWorker id: 7, queue: reporting
+root 2044 0.2 4.0 519560 303812 ? SNl 15:02 0:01 MIQ: MiqReportingWorker id: 8, queue: reporting
+root 2059 0.2 4.0 528372 303956 ? SNl 15:02 0:01 puma 3.3.0 (tcp://127.0.0.1:5000) [MIQ: Web Server Worker]
+root 2067 0.9 4.0 529664 305716 ? SNl 15:02 0:05 puma 3.3.0 (tcp://127.0.0.1:3000) [MIQ: Web Server Worker]
+root 2075 0.2 4.0 529408 304056 ? SNl 15:02 0:01 puma 3.3.0 (tcp://127.0.0.1:4000) [MIQ: Web Server Worker]
+root 2329 0.0 0.0 10640 972 ? S+ 15:13 0:00 grep --color=auto -i miq
+```
+
+Finally, *still in the rsh shell*, we can test if the application is
+running correctly by requesting the application homepage. If the page
+is available, its title will be `ManageIQ: Login`:
+
+```
+[root@manageiq-0 vmdb]# curl -s -k https://localhost | grep -A2 '<title>'
+<title>
+ManageIQ: Login
+</title>
+```
+
+**Note:** The `-s` flag makes `curl` silent and the `-k` flag tells
+it to ignore errors about untrusted certificates.
+
+
+
+# Additional Upstream Resources
+
+Below are some useful resources from the upstream project
+documentation:
+
+* [Verify Setup Was Successful](https://github.com/ManageIQ/manageiq-pods#verifying-the-setup-was-successful)
+* [POD Access And Routes](https://github.com/ManageIQ/manageiq-pods#pod-access-and-routes)
+* [Troubleshooting](https://github.com/ManageIQ/manageiq-pods#troubleshooting)
+
+
+# Manual Cleanup
+
+At this time uninstallation/cleanup is still a manual process. You
+will have to follow a few steps to fully remove CFME from your
+cluster.
+
+Delete the project:
+
+* `oc delete project cfme`
+
+Delete the PVs:
+
+* `oc delete pv miq-pv01`
+* `oc delete pv miq-pv02`
+* `oc delete pv miq-pv03`
+
+Clean out the old PV data:
+
+* `cd /exports/`
+* `find miq* -type f -delete`
+* `find miq* -type d -delete`
+
+Remove the NFS exports:
+
+* `rm /etc/exports.d/openshift_cfme.exports`
+* `exportfs -ar`
+
+Delete the user:
+
+* `oc delete user cfme`
+
+**NOTE:** The `oc delete project cfme` command will return quickly,
+but the deletion continues in the background. Continue running `oc get
+pods` after you've completed the other tasks to monitor the pod
+termination progress. Likewise, run `oc get project` after the pods
+have disappeared to ensure that the `cfme` project has been terminated
+as well.
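+
+For example, to watch the pods terminate as the project is deleted:
+
+```
+$ oc get pods --namespace=cfme --watch
+```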
diff --git a/roles/openshift_cfme/defaults/main.yml b/roles/openshift_cfme/defaults/main.yml
new file mode 100644
index 000000000..493e1ef68
--- /dev/null
+++ b/roles/openshift_cfme/defaults/main.yml
@@ -0,0 +1,38 @@
+---
+# Namespace for the CFME project
+openshift_cfme_project: cfme
+# Namespace/project description
+openshift_cfme_project_description: ManageIQ - CloudForms Management Engine
+# Basic user assigned the `admin` role for the project
+openshift_cfme_user: cfme
+# Project system account for enabling privileged pods
+openshift_cfme_service_account: "system:serviceaccount:{{ openshift_cfme_project }}:default"
+# All the required exports
+openshift_cfme_pv_exports:
+ - miq-pv01
+ - miq-pv02
+ - miq-pv03
+# PV template files and their created object names
+openshift_cfme_pv_data:
+ - pv_name: miq-pv01
+ pv_template: miq-pv-db.yaml
+ pv_label: CFME DB PV
+ - pv_name: miq-pv02
+ pv_template: miq-pv-region.yaml
+ pv_label: CFME Region PV
+ - pv_name: miq-pv03
+ pv_template: miq-pv-server.yaml
+ pv_label: CFME Server PV
+
+# Tuning parameter to use more than 5 images at once from an ImageStream
+openshift_cfme_maxImagesBulkImportedPerRepository: 100
+# Hostname/IP of the NFS server. Currently defaults to first master
+openshift_cfme_nfs_server: "{{ groups.nfs.0 }}"
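+# Example inventory override (hypothetical host):
+#   openshift_cfme_nfs_server: nfs01.example.com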
+# TODO: Refactor the '_install_app' variable. This is just for testing,
+# but maybe in the future it should control the entire yes/no for CFME.
+#
+# Whether or not the manageiq app should be initialized ('oc new-app
+# --template=manageiq'). If False, everything UP TO 'new-app' is run.
+openshift_cfme_install_app: False
+# Docker image to pull
+openshift_cfme_container_image: "docker.io/manageiq/manageiq-pods:app-latest-fine"
diff --git a/roles/openshift_cfme/files/miq-template.yaml b/roles/openshift_cfme/files/miq-template.yaml
new file mode 100644
index 000000000..8f0d2af38
--- /dev/null
+++ b/roles/openshift_cfme/files/miq-template.yaml
@@ -0,0 +1,566 @@
+---
+path: /tmp/miq-template-out
+data:
+ apiVersion: v1
+ kind: Template
+ labels:
+ template: manageiq
+ metadata:
+ name: manageiq
+ annotations:
+ description: "ManageIQ appliance with persistent storage"
+ tags: "instant-app,manageiq,miq"
+ iconClass: "icon-rails"
+ objects:
+ - apiVersion: v1
+ kind: Secret
+ metadata:
+ name: "${NAME}-secrets"
+ stringData:
+ pg-password: "${DATABASE_PASSWORD}"
+ - apiVersion: v1
+ kind: Service
+ metadata:
+ annotations:
+ description: "Exposes and load balances ManageIQ pods"
+ service.alpha.openshift.io/dependencies: '[{"name":"${DATABASE_SERVICE_NAME}","namespace":"","kind":"Service"},{"name":"${MEMCACHED_SERVICE_NAME}","namespace":"","kind":"Service"}]'
+ name: ${NAME}
+ spec:
+ clusterIP: None
+ ports:
+ - name: http
+ port: 80
+ protocol: TCP
+ targetPort: 80
+ - name: https
+ port: 443
+ protocol: TCP
+ targetPort: 443
+ selector:
+ name: ${NAME}
+ - apiVersion: v1
+ kind: Route
+ metadata:
+ name: ${NAME}
+ spec:
+ host: ${APPLICATION_DOMAIN}
+ port:
+ targetPort: https
+ tls:
+ termination: passthrough
+ to:
+ kind: Service
+ name: ${NAME}
+ - apiVersion: v1
+ kind: ImageStream
+ metadata:
+ name: miq-app
+ annotations:
+ description: "Keeps track of the ManageIQ image changes"
+ spec:
+ dockerImageRepository: "${APPLICATION_IMG_NAME}"
+ - apiVersion: v1
+ kind: ImageStream
+ metadata:
+ name: miq-postgresql
+ annotations:
+ description: "Keeps track of the PostgreSQL image changes"
+ spec:
+ dockerImageRepository: "${POSTGRESQL_IMG_NAME}"
+ - apiVersion: v1
+ kind: ImageStream
+ metadata:
+ name: miq-memcached
+ annotations:
+ description: "Keeps track of the Memcached image changes"
+ spec:
+ dockerImageRepository: "${MEMCACHED_IMG_NAME}"
+ - apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+ name: "${NAME}-${DATABASE_SERVICE_NAME}"
+ spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: ${DATABASE_VOLUME_CAPACITY}
+ - apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+ name: "${NAME}-region"
+ spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: ${APPLICATION_REGION_VOLUME_CAPACITY}
+ - apiVersion: apps/v1beta1
+ kind: "StatefulSet"
+ metadata:
+ name: ${NAME}
+ annotations:
+ description: "Defines how to deploy the ManageIQ appliance"
+ spec:
+ serviceName: "${NAME}"
+ replicas: "${APPLICATION_REPLICA_COUNT}"
+ template:
+ metadata:
+ labels:
+ name: ${NAME}
+ name: ${NAME}
+ spec:
+ containers:
+ - name: manageiq
+ image: "${APPLICATION_IMG_NAME}:${APPLICATION_IMG_TAG}"
+ livenessProbe:
+ tcpSocket:
+ port: 443
+ initialDelaySeconds: 480
+ timeoutSeconds: 3
+ readinessProbe:
+ httpGet:
+ path: /
+ port: 443
+ scheme: HTTPS
+ initialDelaySeconds: 200
+ timeoutSeconds: 3
+ ports:
+ - containerPort: 80
+ protocol: TCP
+ - containerPort: 443
+ protocol: TCP
+ securityContext:
+ privileged: true
+ volumeMounts:
+ -
+ name: "${NAME}-server"
+ mountPath: "/persistent"
+ -
+ name: "${NAME}-region"
+ mountPath: "/persistent-region"
+ env:
+ -
+ name: "APPLICATION_INIT_DELAY"
+ value: "${APPLICATION_INIT_DELAY}"
+ -
+ name: "DATABASE_SERVICE_NAME"
+ value: "${DATABASE_SERVICE_NAME}"
+ -
+ name: "DATABASE_REGION"
+ value: "${DATABASE_REGION}"
+ -
+ name: "MEMCACHED_SERVICE_NAME"
+ value: "${MEMCACHED_SERVICE_NAME}"
+ -
+ name: "POSTGRESQL_USER"
+ value: "${DATABASE_USER}"
+ -
+ name: "POSTGRESQL_PASSWORD"
+ valueFrom:
+ secretKeyRef:
+ name: "${NAME}-secrets"
+ key: "pg-password"
+ -
+ name: "POSTGRESQL_DATABASE"
+ value: "${DATABASE_NAME}"
+ -
+ name: "POSTGRESQL_MAX_CONNECTIONS"
+ value: "${POSTGRESQL_MAX_CONNECTIONS}"
+ -
+ name: "POSTGRESQL_SHARED_BUFFERS"
+ value: "${POSTGRESQL_SHARED_BUFFERS}"
+ resources:
+ requests:
+ memory: "${APPLICATION_MEM_REQ}"
+ cpu: "${APPLICATION_CPU_REQ}"
+ limits:
+ memory: "${APPLICATION_MEM_LIMIT}"
+ lifecycle:
+ preStop:
+ exec:
+ command:
+ - /opt/manageiq/container-scripts/sync-pv-data
+ volumes:
+ -
+ name: "${NAME}-region"
+ persistentVolumeClaim:
+ claimName: ${NAME}-region
+ volumeClaimTemplates:
+ - metadata:
+ name: "${NAME}-server"
+ annotations:
+ # Uncomment this if using dynamic volume provisioning.
+ # https://docs.openshift.org/latest/install_config/persistent_storage/dynamically_provisioning_pvs.html
+ # volume.alpha.kubernetes.io/storage-class: anything
+ spec:
+ accessModes: [ ReadWriteOnce ]
+ resources:
+ requests:
+ storage: "${APPLICATION_VOLUME_CAPACITY}"
+ - apiVersion: v1
+ kind: "Service"
+ metadata:
+ name: "${MEMCACHED_SERVICE_NAME}"
+ annotations:
+ description: "Exposes the memcached server"
+ spec:
+ ports:
+ -
+ name: "memcached"
+ port: 11211
+ targetPort: 11211
+ selector:
+ name: "${MEMCACHED_SERVICE_NAME}"
+ - apiVersion: v1
+ kind: "DeploymentConfig"
+ metadata:
+ name: "${MEMCACHED_SERVICE_NAME}"
+ annotations:
+ description: "Defines how to deploy memcached"
+ spec:
+ strategy:
+ type: "Recreate"
+ triggers:
+ -
+ type: "ImageChange"
+ imageChangeParams:
+ automatic: true
+ containerNames:
+ - "memcached"
+ from:
+ kind: "ImageStreamTag"
+ name: "miq-memcached:${MEMCACHED_IMG_TAG}"
+ -
+ type: "ConfigChange"
+ replicas: 1
+ selector:
+ name: "${MEMCACHED_SERVICE_NAME}"
+ template:
+ metadata:
+ name: "${MEMCACHED_SERVICE_NAME}"
+ labels:
+ name: "${MEMCACHED_SERVICE_NAME}"
+ spec:
+ volumes: []
+ containers:
+ -
+ name: "memcached"
+ image: "${MEMCACHED_IMG_NAME}:${MEMCACHED_IMG_TAG}"
+ ports:
+ -
+ containerPort: 11211
+ readinessProbe:
+ timeoutSeconds: 1
+ initialDelaySeconds: 5
+ tcpSocket:
+ port: 11211
+ livenessProbe:
+ timeoutSeconds: 1
+ initialDelaySeconds: 30
+ tcpSocket:
+ port: 11211
+ volumeMounts: []
+ env:
+ -
+ name: "MEMCACHED_MAX_MEMORY"
+ value: "${MEMCACHED_MAX_MEMORY}"
+ -
+ name: "MEMCACHED_MAX_CONNECTIONS"
+ value: "${MEMCACHED_MAX_CONNECTIONS}"
+ -
+ name: "MEMCACHED_SLAB_PAGE_SIZE"
+ value: "${MEMCACHED_SLAB_PAGE_SIZE}"
+ resources:
+ requests:
+ memory: "${MEMCACHED_MEM_REQ}"
+ cpu: "${MEMCACHED_CPU_REQ}"
+ limits:
+ memory: "${MEMCACHED_MEM_LIMIT}"
+ - apiVersion: v1
+ kind: "Service"
+ metadata:
+ name: "${DATABASE_SERVICE_NAME}"
+ annotations:
+ description: "Exposes the database server"
+ spec:
+ ports:
+ -
+ name: "postgresql"
+ port: 5432
+ targetPort: 5432
+ selector:
+ name: "${DATABASE_SERVICE_NAME}"
+ - apiVersion: v1
+ kind: "DeploymentConfig"
+ metadata:
+ name: "${DATABASE_SERVICE_NAME}"
+ annotations:
+ description: "Defines how to deploy the database"
+ spec:
+ strategy:
+ type: "Recreate"
+ triggers:
+ -
+ type: "ImageChange"
+ imageChangeParams:
+ automatic: true
+ containerNames:
+ - "postgresql"
+ from:
+ kind: "ImageStreamTag"
+ name: "miq-postgresql:${POSTGRESQL_IMG_TAG}"
+ -
+ type: "ConfigChange"
+ replicas: 1
+ selector:
+ name: "${DATABASE_SERVICE_NAME}"
+ template:
+ metadata:
+ name: "${DATABASE_SERVICE_NAME}"
+ labels:
+ name: "${DATABASE_SERVICE_NAME}"
+ spec:
+ volumes:
+ -
+ name: "miq-pgdb-volume"
+ persistentVolumeClaim:
+ claimName: "${NAME}-${DATABASE_SERVICE_NAME}"
+ containers:
+ -
+ name: "postgresql"
+ image: "${POSTGRESQL_IMG_NAME}:${POSTGRESQL_IMG_TAG}"
+ ports:
+ -
+ containerPort: 5432
+ readinessProbe:
+ timeoutSeconds: 1
+ initialDelaySeconds: 15
+ exec:
+ command:
+ - "/bin/sh"
+ - "-i"
+ - "-c"
+ - "psql -h 127.0.0.1 -U ${POSTGRESQL_USER} -q -d ${POSTGRESQL_DATABASE} -c 'SELECT 1'"
+ livenessProbe:
+ timeoutSeconds: 1
+ initialDelaySeconds: 60
+ tcpSocket:
+ port: 5432
+ volumeMounts:
+ -
+ name: "miq-pgdb-volume"
+ mountPath: "/var/lib/pgsql/data"
+ env:
+ -
+ name: "POSTGRESQL_USER"
+ value: "${DATABASE_USER}"
+ -
+ name: "POSTGRESQL_PASSWORD"
+ valueFrom:
+ secretKeyRef:
+ name: "${NAME}-secrets"
+ key: "pg-password"
+ -
+ name: "POSTGRESQL_DATABASE"
+ value: "${DATABASE_NAME}"
+ -
+ name: "POSTGRESQL_MAX_CONNECTIONS"
+ value: "${POSTGRESQL_MAX_CONNECTIONS}"
+ -
+ name: "POSTGRESQL_SHARED_BUFFERS"
+ value: "${POSTGRESQL_SHARED_BUFFERS}"
+ resources:
+ requests:
+ memory: "${POSTGRESQL_MEM_REQ}"
+ cpu: "${POSTGRESQL_CPU_REQ}"
+ limits:
+ memory: "${POSTGRESQL_MEM_LIMIT}"
+
+ parameters:
+ -
+ name: "NAME"
+ displayName: Name
+ required: true
+ description: "The name assigned to all of the frontend objects defined in this template."
+ value: manageiq
+ -
+ name: "DATABASE_SERVICE_NAME"
+ displayName: "PostgreSQL Service Name"
+ required: true
+ description: "The name of the OpenShift Service exposed for the PostgreSQL container."
+ value: "postgresql"
+ -
+ name: "DATABASE_USER"
+ displayName: "PostgreSQL User"
+ required: true
+ description: "PostgreSQL user that will access the database."
+ value: "root"
+ -
+ name: "DATABASE_PASSWORD"
+ displayName: "PostgreSQL Password"
+ required: true
+ description: "Password for the PostgreSQL user."
+ from: "[a-zA-Z0-9]{8}"
+ generate: expression
+ -
+ name: "DATABASE_NAME"
+ required: true
+ displayName: "PostgreSQL Database Name"
+ description: "Name of the PostgreSQL database accessed."
+ value: "vmdb_production"
+ -
+ name: "DATABASE_REGION"
+ required: true
+ displayName: "Application Database Region"
+ description: "Database region that will be used for application."
+ value: "0"
+ -
+ name: "MEMCACHED_SERVICE_NAME"
+ required: true
+ displayName: "Memcached Service Name"
+ description: "The name of the OpenShift Service exposed for the Memcached container."
+ value: "memcached"
+ -
+ name: "MEMCACHED_MAX_MEMORY"
+ displayName: "Memcached Max Memory"
+ description: "Memcached maximum memory for memcached object storage in MB."
+ value: "64"
+ -
+ name: "MEMCACHED_MAX_CONNECTIONS"
+ displayName: "Memcached Max Connections"
+ description: "Memcached maximum number of connections allowed."
+ value: "1024"
+ -
+ name: "MEMCACHED_SLAB_PAGE_SIZE"
+ displayName: "Memcached Slab Page Size"
+ description: "Memcached size of each slab page."
+ value: "1m"
+ -
+ name: "POSTGRESQL_MAX_CONNECTIONS"
+ displayName: "PostgreSQL Max Connections"
+ description: "PostgreSQL maximum number of database connections allowed."
+ value: "100"
+ -
+ name: "POSTGRESQL_SHARED_BUFFERS"
+ displayName: "PostgreSQL Shared Buffer Amount"
+ description: "Amount of memory dedicated for PostgreSQL shared memory buffers."
+ value: "256MB"
+ -
+ name: "APPLICATION_CPU_REQ"
+ displayName: "Application Min CPU Requested"
+ required: true
+ description: "Minimum amount of CPU time the Application container will need (expressed in millicores)."
+ value: "1000m"
+ -
+ name: "POSTGRESQL_CPU_REQ"
+ displayName: "PostgreSQL Min CPU Requested"
+ required: true
+ description: "Minimum amount of CPU time the PostgreSQL container will need (expressed in millicores)."
+ value: "500m"
+ -
+ name: "MEMCACHED_CPU_REQ"
+ displayName: "Memcached Min CPU Requested"
+ required: true
+ description: "Minimum amount of CPU time the Memcached container will need (expressed in millicores)."
+ value: "200m"
+ -
+ name: "APPLICATION_MEM_REQ"
+ displayName: "Application Min RAM Requested"
+ required: true
+ description: "Minimum amount of memory the Application container will need."
+ value: "6144Mi"
+ -
+ name: "POSTGRESQL_MEM_REQ"
+ displayName: "PostgreSQL Min RAM Requested"
+ required: true
+ description: "Minimum amount of memory the PostgreSQL container will need."
+ value: "1024Mi"
+ -
+ name: "MEMCACHED_MEM_REQ"
+ displayName: "Memcached Min RAM Requested"
+ required: true
+ description: "Minimum amount of memory the Memcached container will need."
+ value: "64Mi"
+ -
+ name: "APPLICATION_MEM_LIMIT"
+ displayName: "Application Max RAM Limit"
+ required: true
+ description: "Maximum amount of memory the Application container can consume."
+ value: "16384Mi"
+ -
+ name: "POSTGRESQL_MEM_LIMIT"
+ displayName: "PostgreSQL Max RAM Limit"
+ required: true
+ description: "Maximum amount of memory the PostgreSQL container can consume."
+ value: "8192Mi"
+ -
+ name: "MEMCACHED_MEM_LIMIT"
+ displayName: "Memcached Max RAM Limit"
+ required: true
+ description: "Maximum amount of memory the Memcached container can consume."
+ value: "256Mi"
+ -
+ name: "POSTGRESQL_IMG_NAME"
+ displayName: "PostgreSQL Image Name"
+ description: "This is the PostgreSQL image name requested to deploy."
+ value: "docker.io/manageiq/manageiq-pods"
+ -
+ name: "POSTGRESQL_IMG_TAG"
+ displayName: "PostgreSQL Image Tag"
+ description: "This is the PostgreSQL image tag/version requested to deploy."
+ value: "postgresql-latest-fine"
+ -
+ name: "MEMCACHED_IMG_NAME"
+ displayName: "Memcached Image Name"
+ description: "This is the Memcached image name requested to deploy."
+ value: "docker.io/manageiq/manageiq-pods"
+ -
+ name: "MEMCACHED_IMG_TAG"
+ displayName: "Memcached Image Tag"
+ description: "This is the Memcached image tag/version requested to deploy."
+ value: "memcached-latest-fine"
+ -
+ name: "APPLICATION_IMG_NAME"
+ displayName: "Application Image Name"
+ description: "This is the Application image name requested to deploy."
+ value: "docker.io/manageiq/manageiq-pods"
+ -
+ name: "APPLICATION_IMG_TAG"
+ displayName: "Application Image Tag"
+ description: "This is the Application image tag/version requested to deploy."
+ value: "app-latest-fine"
+ -
+ name: "APPLICATION_DOMAIN"
+ displayName: "Application Hostname"
+ description: "The exposed hostname that will route to the application service, if left blank a value will be defaulted."
+ value: ""
+ -
+ name: "APPLICATION_REPLICA_COUNT"
+ displayName: "Application Replica Count"
+ description: "This is the number of Application replicas requested to deploy."
+ value: "1"
+ -
+ name: "APPLICATION_INIT_DELAY"
+ displayName: "Application Init Delay"
+ required: true
+ description: "Delay in seconds before we attempt to initialize the application."
+ value: "15"
+ -
+ name: "APPLICATION_VOLUME_CAPACITY"
+ displayName: "Application Volume Capacity"
+ required: true
+ description: "Volume space available for application data."
+ value: "5Gi"
+ -
+ name: "APPLICATION_REGION_VOLUME_CAPACITY"
+ displayName: "Application Region Volume Capacity"
+ required: true
+ description: "Volume space available for region application data."
+ value: "5Gi"
+ -
+ name: "DATABASE_VOLUME_CAPACITY"
+ displayName: "Database Volume Capacity"
+ required: true
+ description: "Volume space available for database."
+ value: "15Gi"
diff --git a/roles/openshift_cfme/files/openshift_cfme.exports b/roles/openshift_cfme/files/openshift_cfme.exports
new file mode 100644
index 000000000..5457d41fc
--- /dev/null
+++ b/roles/openshift_cfme/files/openshift_cfme.exports
@@ -0,0 +1,3 @@
+/exports/miq-pv01 *(rw,no_root_squash,no_wdelay)
+/exports/miq-pv02 *(rw,no_root_squash,no_wdelay)
+/exports/miq-pv03 *(rw,no_root_squash,no_wdelay)
diff --git a/roles/openshift_cfme/handlers/main.yml b/roles/openshift_cfme/handlers/main.yml
new file mode 100644
index 000000000..476a5e030
--- /dev/null
+++ b/roles/openshift_cfme/handlers/main.yml
@@ -0,0 +1,42 @@
+---
+######################################################################
+# NOTE: These are duplicated from roles/openshift_master/handlers/main.yml
+#
+# TODO: Use the consolidated 'openshift_handlers' role once it's ready
+# See: https://github.com/openshift/openshift-ansible/pull/4041#discussion_r118770782
+######################################################################
+
+- name: restart master
+ systemd: name={{ openshift.common.service_type }}-master state=restarted
+ when: (openshift.master.ha is not defined or not openshift.master.ha | bool) and (not (master_service_status_changed | default(false) | bool))
+ notify: Verify API Server
+
+- name: restart master api
+ systemd: name={{ openshift.common.service_type }}-master-api state=restarted
+ when: (openshift.master.ha is defined and openshift.master.ha | bool) and (not (master_api_service_status_changed | default(false) | bool)) and openshift.master.cluster_method == 'native'
+ notify: Verify API Server
+
+- name: restart master controllers
+ systemd: name={{ openshift.common.service_type }}-master-controllers state=restarted
+ when: (openshift.master.ha is defined and openshift.master.ha | bool) and (not (master_controllers_service_status_changed | default(false) | bool)) and openshift.master.cluster_method == 'native'
+
+- name: Verify API Server
+ # Using curl here since the uri module requires python-httplib2 and
+ # wait_for port doesn't provide health information.
+ command: >
+ curl --silent --tlsv1.2
+ {% if openshift.common.version_gte_3_2_or_1_2 | bool %}
+ --cacert {{ openshift.common.config_base }}/master/ca-bundle.crt
+ {% else %}
+ --cacert {{ openshift.common.config_base }}/master/ca.crt
+ {% endif %}
+ {{ openshift.master.api_url }}/healthz/ready
+ args:
+ # Disables the following warning:
+ # Consider using get_url or uri module rather than running curl
+ warn: no
+ register: api_available_output
+ until: api_available_output.stdout == 'ok'
+ retries: 120
+ delay: 1
+ changed_when: false
diff --git a/roles/openshift_cfme/img/CFMEBasicDeployment.png b/roles/openshift_cfme/img/CFMEBasicDeployment.png
new file mode 100644
index 000000000..a89c1e325
--- /dev/null
+++ b/roles/openshift_cfme/img/CFMEBasicDeployment.png
Binary files differ
diff --git a/roles/openshift_cfme/meta/main.yml b/roles/openshift_cfme/meta/main.yml
new file mode 100644
index 000000000..9200f2c3c
--- /dev/null
+++ b/roles/openshift_cfme/meta/main.yml
@@ -0,0 +1,20 @@
+---
+galaxy_info:
+ author: Tim Bielawa
+ description: OpenShift CFME (ManageIQ) Deployer
+ company: Red Hat, Inc.
+ license: Apache License, Version 2.0
+ min_ansible_version: 2.2
+ version: 1.0
+ platforms:
+ - name: EL
+ versions:
+ - 7
+ categories:
+ - cloud
+ - system
+dependencies:
+- role: lib_openshift
+- role: lib_utils
+- role: openshift_common
+- role: openshift_master_facts
diff --git a/roles/openshift_cfme/tasks/create_pvs.yml b/roles/openshift_cfme/tasks/create_pvs.yml
new file mode 100644
index 000000000..7fa7d3997
--- /dev/null
+++ b/roles/openshift_cfme/tasks/create_pvs.yml
@@ -0,0 +1,36 @@
+---
+# Check for existence and then conditionally:
+# - evaluate templates
+# - create PVs
+#
+# These tasks idempotently create the required CFME PV objects. Do not
+# call this file directly. This file is intended to be run as an
+# include with a 'with_items' attached to it. Hence the use below
+# of variables like "{{ item.pv_label }}"
+
+- name: "Check if the {{ item.pv_label }} template has been created already"
+ oc_obj:
+ namespace: "{{ openshift_cfme_project }}"
+ state: list
+ kind: pv
+ name: "{{ item.pv_name }}"
+ register: miq_pv_check
+
+# Skip all of this if the PV already exists
+- block:
+ - name: "Ensure the {{ item.pv_label }} template is evaluated"
+ template:
+ src: "{{ item.pv_template }}.j2"
+ dest: "{{ template_dir }}/{{ item.pv_template }}"
+
+ - name: "Ensure {{ item.pv_label }} is created"
+ oc_obj:
+ namespace: "{{ openshift_cfme_project }}"
+ kind: pv
+ name: "{{ item.pv_name }}"
+ state: present
+ delete_after: True
+ files:
+ - "{{ template_dir }}/{{ item.pv_template }}"
+ when:
+ - not miq_pv_check.results.results.0
diff --git a/roles/openshift_cfme/tasks/main.yml b/roles/openshift_cfme/tasks/main.yml
new file mode 100644
index 000000000..a19442a4e
--- /dev/null
+++ b/roles/openshift_cfme/tasks/main.yml
@@ -0,0 +1,164 @@
+---
+######################################################################
+# Users, projects, and privileges
+
+- name: Ensure the CFME user exists
+ oc_user:
+ state: present
+ username: "{{ openshift_cfme_user }}"
+
+- name: Ensure the CFME namespace exists with CFME user as admin
+ oc_project:
+ state: present
+ name: "{{ openshift_cfme_project }}"
+ display_name: "{{ openshift_cfme_project_description }}"
+ admin: "{{ openshift_cfme_user }}"
+
+- name: Ensure the CFME namespace service account is privileged
+ oc_adm_policy_user:
+ namespace: "{{ openshift_cfme_project }}"
+ user: "{{ openshift_cfme_service_account }}"
+ resource_kind: scc
+ resource_name: privileged
+ state: present
+
+######################################################################
+# Service settings
+
+- name: Ensure bulk image import limit is tuned
+ yedit:
+ src: /etc/origin/master/master-config.yaml
+ key: 'imagePolicyConfig.maxImagesBulkImportedPerRepository'
+ value: "{{ openshift_cfme_maxImagesBulkImportedPerRepository | int() }}"
+ state: present
+ backup: True
+ register: master_config_updated
+ notify:
+ - restart master
+
+- meta: flush_handlers
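+
+# After the edit, the relevant stanza in master-config.yaml should
+# look roughly like this (with the default value above):
+#
+#   imagePolicyConfig:
+#     maxImagesBulkImportedPerRepository: 100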
+
+######################################################################
+# NFS
+
+- name: Ensure the /exports/ directory exists
+ file:
+ path: /exports/
+ state: directory
+ mode: 0755
+ owner: root
+ group: root
+
+- name: Ensure the miq-pv0X export directories exist
+ file:
+ path: "/exports/{{ item }}"
+ state: directory
+ mode: 0775
+ owner: root
+ group: root
+ with_items: "{{ openshift_cfme_pv_exports }}"
+
+- name: Ensure the NFS exports for CFME PVs exist
+ copy:
+ src: openshift_cfme.exports
+ dest: /etc/exports.d/openshift_cfme.exports
+ register: nfs_exports_updated
+
+- name: Ensure the NFS export table is refreshed if exports were added
+ command: exportfs -ar
+ when:
+ - nfs_exports_updated.changed
+
+
+######################################################################
+# Create the required CFME PVs. Check out these online docs if you
+# need a refresher on includes looping with items:
+# * http://docs.ansible.com/ansible/playbooks_loops.html#loops-and-includes-in-2-0
+# * http://stackoverflow.com/a/35128533
+#
+# TODO: Handle the case where a PV template is updated in
+# openshift-ansible and the change needs to be landed on the managed
+# cluster.
+
+- include: create_pvs.yml
+ with_items: "{{ openshift_cfme_pv_data }}"
+
+######################################################################
+# CFME App Template
+#
+# Note, this is different from the create_pvs.yml tasks in that the
+# application template does not require any jinja2 evaluation.
+#
+# TODO: Handle the case where the server template is updated in
+# openshift-ansible and the change needs to be landed on the managed
+# cluster.
+
+- name: Check if the CFME Server template has been created already
+ oc_obj:
+ namespace: "{{ openshift_cfme_project }}"
+ state: list
+ kind: template
+ name: manageiq
+ register: miq_server_check
+
+- name: Copy over CFME Server template
+ copy:
+ src: miq-template.yaml
+ dest: "{{ template_dir }}/miq-template.yaml"
+
+- name: Ensure the server template was read from disk
+ debug:
+ var: r_openshift_cfme_miq_template_content
+
+- name: Ensure CFME Server Template exists
+ oc_obj:
+ namespace: "{{ openshift_cfme_project }}"
+ kind: template
+ name: "manageiq"
+ state: present
+ content: "{{ r_openshift_cfme_miq_template_content }}"
+
+######################################################################
+# Let's do this
+
+- name: Ensure the CFME Server is created
+ oc_process:
+ namespace: "{{ openshift_cfme_project }}"
+ template_name: manageiq
+ create: True
+ register: cfme_new_app_process
+ run_once: True
+ when:
+ # User said to install CFME in their inventory
+ - openshift_cfme_install_app | bool
+ # # The server app doesn't exist already
+ # - not miq_server_check.results.results.0
+
+- debug:
+ var: cfme_new_app_process
+
+######################################################################
+# Various cleanup steps
+
+# TODO: Not sure what to do about this right now. Might be able to
+# just delete it? This currently warns about "Unable to find
+# '<TEMP_DIR>' in expected paths."
+- name: Ensure the temporary PV/App templates are erased
+ file:
+ path: "{{ item }}"
+ state: absent
+ with_fileglob:
+ - "{{ template_dir }}/*.yaml"
+
+- name: Ensure the temporary PV/app template directory is erased
+ file:
+ path: "{{ template_dir }}"
+ state: absent
+
+######################################################################
+
+- name: Status update
+ debug:
+ msg: >
+ CFME has been deployed. Note that there will be a delay before
+ it is fully initialized.
diff --git a/roles/openshift_cfme/tasks/uninstall.yml b/roles/openshift_cfme/tasks/uninstall.yml
new file mode 100644
index 000000000..cba734a0e
--- /dev/null
+++ b/roles/openshift_cfme/tasks/uninstall.yml
@@ -0,0 +1,43 @@
+---
+- include_role:
+ name: lib_openshift
+
+- name: Uninstall CFME - ManageIQ
+ debug:
+ msg: Uninstalling CloudForms Management Engine - ManageIQ
+
+- name: Ensure the CFME project is removed
+ oc_project:
+ state: absent
+ name: "{{ openshift_cfme_project }}"
+
+- name: Ensure the CFME template is removed
+ oc_obj:
+ namespace: "{{ openshift_cfme_project }}"
+ state: absent
+ kind: template
+ name: manageiq
+
+- name: Ensure the CFME PVs are removed
+ oc_obj:
+ state: absent
+ all_namespaces: True
+ kind: pv
+ name: "{{ item }}"
+ with_items: "{{ openshift_cfme_pv_exports }}"
+
+- name: Ensure the CFME user is removed
+ oc_user:
+ state: absent
+ username: "{{ openshift_cfme_user }}"
+
+- name: Ensure the CFME NFS Exports are removed
+ file:
+ path: /etc/exports.d/openshift_cfme.exports
+ state: absent
+ register: nfs_exports_removed
+
+- name: Ensure the NFS export table is refreshed if exports were removed
+ command: exportfs -ar
+ when:
+ - nfs_exports_removed.changed
diff --git a/roles/openshift_cfme/templates/miq-pv-db.yaml.j2 b/roles/openshift_cfme/templates/miq-pv-db.yaml.j2
new file mode 100644
index 000000000..b8c3bb277
--- /dev/null
+++ b/roles/openshift_cfme/templates/miq-pv-db.yaml.j2
@@ -0,0 +1,13 @@
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+ name: miq-pv01
+spec:
+ capacity:
+ storage: 15Gi
+ accessModes:
+ - ReadWriteOnce
+ nfs:
+ path: /exports/miq-pv01
+ server: {{ openshift_cfme_nfs_server }}
+ persistentVolumeReclaimPolicy: Retain
diff --git a/roles/openshift_cfme/templates/miq-pv-region.yaml.j2 b/roles/openshift_cfme/templates/miq-pv-region.yaml.j2
new file mode 100644
index 000000000..7218773f0
--- /dev/null
+++ b/roles/openshift_cfme/templates/miq-pv-region.yaml.j2
@@ -0,0 +1,13 @@
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+ name: miq-pv02
+spec:
+ capacity:
+ storage: 5Gi
+ accessModes:
+ - ReadWriteOnce
+ nfs:
+ path: /exports/miq-pv02
+ server: {{ openshift_cfme_nfs_server }}
+ persistentVolumeReclaimPolicy: Retain
diff --git a/roles/openshift_cfme/templates/miq-pv-server.yaml.j2 b/roles/openshift_cfme/templates/miq-pv-server.yaml.j2
new file mode 100644
index 000000000..7b40b6c69
--- /dev/null
+++ b/roles/openshift_cfme/templates/miq-pv-server.yaml.j2
@@ -0,0 +1,13 @@
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+ name: miq-pv03
+spec:
+ capacity:
+ storage: 5Gi
+ accessModes:
+ - ReadWriteOnce
+ nfs:
+ path: /exports/miq-pv03
+ server: {{ openshift_cfme_nfs_server }}
+ persistentVolumeReclaimPolicy: Retain