path: root/roles/openshift_logging_elasticsearch/tasks/restart_es_node.yml
author    OpenShift Merge Robot <openshift-merge-robot@users.noreply.github.com>    2018-02-14 14:28:33 -0800
committer GitHub <noreply@github.com>    2018-02-14 14:28:33 -0800
commit    b62c397f0625b9ff3654347a1777ed2277942712 (patch)
tree      950a36359a9ac5e7d4a0b692ccdaf43e6f106463 /roles/openshift_logging_elasticsearch/tasks/restart_es_node.yml
parent    deb9a793cbb169b964424720f9c3a6ce6b976b09 (diff)
parent    61df593d2047995f25327e54b32956944f413100 (diff)
Merge pull request #7097 from ewolinetz/logging_fresh_lg_cluster_fix
Automatic merge from submit-queue.

Whenever we create a new ES node, ignore health checks; also change the Prometheus password generation for increased secret idempotency.

Addresses https://bugzilla.redhat.com/show_bug.cgi?id=1540099

When the cluster size is greater than 1, the number of nodes required for recovery is also greater than 1, so on a fresh install the cluster will not report as started until that number of nodes is up. Whenever we create a new node, we therefore do not wait for the health check, so the logging playbook can complete and roll out all the updated nodes.

This also addresses Prometheus password generation, so that each rerun of the playbook no longer changes the secret; previously the changed secret triggered a full rollout of the cluster (because the keys/certs were assumed to have changed).
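For context on why a fresh multi-node cluster never reports healthy while only the first node is rolled out: Elasticsearch defers recovery until enough nodes have joined. A minimal illustration of the relevant settings follows; the values are examples only, not the configuration this role actually generates.

    # elasticsearch.yml (example values, not the role's generated config)
    gateway:
      recover_after_nodes: 2     # recovery will not begin with fewer nodes joined
      expected_nodes: 3          # size of the cluster being provisioned
      recover_after_time: 5m     # grace period before recovering with a partial quorum

With settings like these, a single freshly created node can never satisfy the health check on its own, which is why the restart tasks below skip the wait when a new node is being created.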
Diffstat (limited to 'roles/openshift_logging_elasticsearch/tasks/restart_es_node.yml')
-rw-r--r--  roles/openshift_logging_elasticsearch/tasks/restart_es_node.yml | 9
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/roles/openshift_logging_elasticsearch/tasks/restart_es_node.yml b/roles/openshift_logging_elasticsearch/tasks/restart_es_node.yml
index a1e172168..934ab886b 100644
--- a/roles/openshift_logging_elasticsearch/tasks/restart_es_node.yml
+++ b/roles/openshift_logging_elasticsearch/tasks/restart_es_node.yml
@@ -3,7 +3,8 @@
command: >
{{ openshift_client_binary }} rollout latest {{ _es_node }} -n {{ openshift_logging_elasticsearch_namespace }}
-- name: "Waiting for {{ _es_node }} to finish scaling up"
+- when: not _skip_healthcheck | bool
+ name: "Waiting for {{ _es_node }} to finish scaling up"
oc_obj:
state: list
name: "{{ _es_node }}"
@@ -19,12 +20,14 @@
retries: 60
delay: 30
-- name: Gettings name(s) of replica pod(s)
+- when: not _skip_healthcheck | bool
+ name: Gettings name(s) of replica pod(s)
command: >
{{ openshift_client_binary }} get pods -l deploymentconfig={{ _es_node }} -n {{ openshift_logging_elasticsearch_namespace }} -o jsonpath={.items[*].metadata.name}
register: _pods
-- name: "Waiting for ES to be ready for {{ _es_node }}"
+- when: not _skip_healthcheck | bool
+ name: "Waiting for ES to be ready for {{ _es_node }}"
shell: >
{{ openshift_client_binary }} exec "{{ _pod }}" -c elasticsearch -n "{{ openshift_logging_elasticsearch_namespace }}" -- es_cluster_health
with_items: "{{ _pods.stdout.split(' ') }}"
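A hypothetical usage sketch of the new flag (the calling task and variable names below are assumptions for illustration; they do not appear in this diff): the caller includes restart_es_node.yml and sets _skip_healthcheck to true only when the node is newly created, so the rollout proceeds without waiting for cluster health.

    # Example caller (hypothetical, not part of this commit)
    - name: Restart the Elasticsearch node
      include_tasks: restart_es_node.yml
      vars:
        _es_node: "logging-es-data-master-abc123"                        # example DC name
        _skip_healthcheck: "{{ __es_node_is_new | default(false) | bool }}"  # assumed flag

Defaulting the flag to false preserves the old behavior for existing nodes: the health-check tasks still run unless the caller explicitly opts out for a fresh node.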