author    OpenShift Merge Robot <openshift-merge-robot@users.noreply.github.com> 2018-01-10 17:58:44 -0800
committer GitHub <noreply@github.com> 2018-01-10 17:58:44 -0800
commit    693769209936849a6f83c4ef85bda39dabfb8800 (patch)
tree      b3e17a9559c7dea02accb8e75865aad7eee3f764 /playbooks/common/openshift-cluster/upgrades/upgrade_nodes.yml
parent    e45ef801051202f9d79a0dc814d4a3e056b257d2 (diff)
parent    0841917f05cfad2701164edbb271167c277d3300 (diff)
Merge pull request #5080 from sdodson/drain-timeouts
Automatic merge from submit-queue.

Add the ability to specify a timeout for node drain operations. A timeout for draining pods from nodes can be specified to ensure that the upgrade continues even if nodes fail to drain within the allowed time. The default value of 0 waits indefinitely, allowing the admin to investigate the root cause and ensuring that disruption budgets are respected. In practice the `oc adm drain` command eventually errors out anyway, at least in our large online clusters; when that happens a second drain attempt is made, and if that also fails the upgrade is aborted for that node, or for the entire cluster, depending on the defined `openshift_upgrade_nodes_max_fail_percentage`.

- `openshift_upgrade_nodes_drain_timeout=0` is the default and waits until all pods have been drained successfully.
- `openshift_upgrade_nodes_drain_timeout=600` waits 600s before moving on to the tasks that forcefully stop pods, such as stopping docker, node, and openvswitch.
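As a minimal sketch of how these variables might be set, here is a hypothetical inventory excerpt. The `[OSEv3:vars]` group and the specific values are assumptions for illustration; only the `openshift_upgrade_nodes_drain_timeout` and `openshift_upgrade_nodes_max_fail_percentage` variable names come from this change and the existing upgrade playbooks.

```ini
# Hypothetical inventory excerpt; values and group layout are illustrative only.
[OSEv3:vars]
# Wait up to 600s for each node to drain before moving on to the tasks
# that forcefully stop pods (docker, node, openvswitch).
openshift_upgrade_nodes_drain_timeout=600
# Abort the upgrade if more than 10% of nodes fail to upgrade.
openshift_upgrade_nodes_max_fail_percentage=10
```

With the default of 0, the drain task keeps the previous behavior of waiting until all pods have been evicted.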
Diffstat (limited to 'playbooks/common/openshift-cluster/upgrades/upgrade_nodes.yml')
-rw-r--r-- playbooks/common/openshift-cluster/upgrades/upgrade_nodes.yml | 12
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/playbooks/common/openshift-cluster/upgrades/upgrade_nodes.yml b/playbooks/common/openshift-cluster/upgrades/upgrade_nodes.yml
index 464af3ae6..850442b3b 100644
--- a/playbooks/common/openshift-cluster/upgrades/upgrade_nodes.yml
+++ b/playbooks/common/openshift-cluster/upgrades/upgrade_nodes.yml
@@ -33,12 +33,18 @@
- name: Drain Node for Kubelet upgrade
command: >
- {{ hostvars[groups.oo_first_master.0]['first_master_client_binary'] }} adm drain {{ openshift.node.nodename | lower }} --config={{ openshift.common.config_base }}/master/admin.kubeconfig --force --delete-local-data --ignore-daemonsets
+ {{ hostvars[groups.oo_first_master.0]['first_master_client_binary'] }} adm drain {{ openshift.node.nodename | lower }}
+ --config={{ openshift.common.config_base }}/master/admin.kubeconfig
+ --force --delete-local-data --ignore-daemonsets
+ --timeout={{ openshift_upgrade_nodes_drain_timeout | default(0) }}s
delegate_to: "{{ groups.oo_first_master.0 }}"
register: l_upgrade_nodes_drain_result
until: not (l_upgrade_nodes_drain_result is failed)
- retries: 60
- delay: 60
+ retries: "{{ 1 if ( openshift_upgrade_nodes_drain_timeout | default(0) | int ) == 0 else 0 }}"
+ delay: 5
+ failed_when:
+ - l_upgrade_nodes_drain_result is failed
+ - openshift_upgrade_nodes_drain_timeout | default(0) | int == 0
post_tasks:
- import_role: