Update patching playbook to utilize kubernetes.core collection #859
Conversation
This looks awesome @PrymalInstynct! Were you able to test this, and did everything seem fine?
So I did not test the conversion from template to playbooks, but I wrote my own role about a month ago that did the same thing, along with sending notifications to Discord based on the status of the various tasks, so I could patch weekly on a cron and know when the cluster rebooted. The logic is sound in my role and I copy-pasted most of it, but I am happy to test this PR later today/tomorrow.
I just ran this; the first node cordoned and drained fine. The error on the second node is probably not the fault of this playbook, but more Rook being a pain in the ass.
So you will notice I am still running the shell module in my role for the drain task, because I can include the pod_selectors flag to ignore the app. We could keep using the command module for that task until v2.5.0 is published; I will let you make that call.
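For illustration, a minimal sketch of the shell-based drain described here, assuming kubectl is available on the control-plane node and that the pods to skip carry an `app=rook-ceph-osd` label (both are assumptions, not taken from this PR):

```yaml
# Hypothetical shell-based drain; the --pod-selector value is an assumed label.
- name: Drain node of pods, ignoring Rook OSD pods
  ansible.builtin.shell:
    cmd: >-
      kubectl drain {{ inventory_hostname }}
      --ignore-daemonsets
      --delete-emptydir-data
      --force
      --pod-selector='app!=rook-ceph-osd'
  changed_when: true
```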
Makes sense. When I pulled the kubeconfig parameter out, I assumed that a control plane node would automatically know where the kubeconfig file lived.
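For reference, a minimal sketch of pointing the module at the kubeconfig explicitly, assuming a k3s control-plane node where the admin kubeconfig lives at /etc/rancher/k3s/k3s.yaml (an assumption; adjust the path for your distribution):

```yaml
# Sketch only: the kubeconfig path assumes k3s; kubernetes.core modules otherwise
# fall back to ~/.kube/config or the K8S_AUTH_KUBECONFIG environment variable.
- name: Cordon node
  kubernetes.core.k8s_drain:
    name: "{{ inventory_hostname }}"
    kubeconfig: /etc/rancher/k3s/k3s.yaml
    state: cordon
```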
…aml.j2 Co-authored-by: Devin Buhl <[email protected]>
…aml.j2 Co-authored-by: Devin Buhl <[email protected]>
Original Playbook: I ran the original playbook and it has been sitting at the drain task on the first node for 20 minutes. I am using the following command to monitor what is going on with the pods, and here is my output at the moment: …
So I updated the original playbook to include the …. The original playbook ran in ….
Sounds good. Sorry, I just committed a change where you have to resolve the conflicts again (my bad). I forgot I committed a fix to the playbook in my last PR.
No worries, I think it's ready to go now.
…aml.j2 Co-authored-by: Devin Buhl <[email protected]>
Recommend replacing shell and command modules with kubernetes.core.k8s_drain
The kubernetes.core.k8s_drain module is well supported and performs the checks already being done by the command and shell tasks written in this playbook.
In the upcoming kubernetes.core v2.5.0 collection release, pod_selectors and label_selectors will be supported to make the drain process faster and more accurate; see the commented example on lines 43-45 and the upstream change "add ability to filter the list of pods to be drained by a pod label selector".
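A hedged sketch of what the drain task could look like with kubernetes.core.k8s_drain in place of the shell and command modules; the parameter values and the commented pod_selectors entry are illustrative, not taken from the merged playbook:

```yaml
# Illustrative values only; tune the timeouts and the selector for your cluster.
- name: Drain node of pods
  kubernetes.core.k8s_drain:
    name: "{{ inventory_hostname }}"
    kubeconfig: /etc/rancher/k3s/k3s.yaml
    state: drain
    delete_options:
      ignore_daemonsets: true
      delete_emptydir_data: true
      terminate_grace_period: 600
      wait_timeout: 900
    # Requires kubernetes.core >= 2.5.0; the label is an assumed example:
    # pod_selectors:
    #   - app!=rook-ceph-osd
```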
Added a check to determine whether the node requires a reboot, based on installed packages, before executing the reboot task.
NOTE: this is only applicable to Debian-based nodes.
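A minimal sketch of that check, assuming Debian/Ubuntu nodes where the package manager drops /var/run/reboot-required after a kernel or core library update:

```yaml
# Only reboot when the marker file created by Debian/Ubuntu packaging exists.
- name: Check if a reboot is required
  ansible.builtin.stat:
    path: /var/run/reboot-required
  register: reboot_required_file

- name: Reboot node
  ansible.builtin.reboot:
    msg: Rebooting node after patching
    reboot_timeout: 600
  when: reboot_required_file.stat.exists
```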