Support draining DaemonSet pods using sriov devices #840
base: master
Conversation
Thanks for your PR.
To skip the vendor CIs, maintainers can use one of:
Pull Request Test Coverage Report for Build 13370907002 - Details
💛 - Coveralls
pkg/drain/drainer.go
Outdated
// remove pods that are owned by a DaemonSet and use SR-IOV devices
dsPodsList := getDsPodsToRemove(podList)
for _, pod := range dsPodsList {
	err = d.kubeClient.CoreV1().Pods(pod.Namespace).Delete(ctx, pod.Name, metav1.DeleteOptions{})
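The body of getDsPodsToRemove is not shown in the quoted diff; as a minimal sketch of what such a filter could look like (the package name, the SR-IOV resource prefix, and the helper names below are assumptions, not the PR's actual implementation):

package drain

import (
	"strings"

	corev1 "k8s.io/api/core/v1"
)

// getDsPodsToRemove returns pods that are owned by a DaemonSet and request at
// least one SR-IOV device resource. The resource-name prefix used here is only
// an example; the operator derives the real resource names from its policies.
func getDsPodsToRemove(podList *corev1.PodList) []corev1.Pod {
	const sriovResourcePrefix = "openshift.io/" // hypothetical prefix

	var result []corev1.Pod
	for _, pod := range podList.Items {
		if isOwnedByDaemonSet(&pod) && usesSriovResource(&pod, sriovResourcePrefix) {
			result = append(result, pod)
		}
	}
	return result
}

// isOwnedByDaemonSet checks the pod's owner references for a DaemonSet owner.
func isOwnedByDaemonSet(pod *corev1.Pod) bool {
	for _, owner := range pod.OwnerReferences {
		if owner.Kind == "DaemonSet" {
			return true
		}
	}
	return false
}

// usesSriovResource reports whether any container in the pod requests a
// resource whose name starts with the given prefix.
func usesSriovResource(pod *corev1.Pod, prefix string) bool {
	for _, c := range pod.Spec.Containers {
		for name := range c.Resources.Requests {
			if strings.HasPrefix(string(name), prefix) {
				return true
			}
		}
	}
	return false
}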
Shouldn't we block, waiting to ensure that the pod is fully removed before continuing? A pod that's slow to delete might cause a race condition.
Yes, I switched to using the Kubernetes drain helper; it does that :)
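For context, the upstream drain helper (k8s.io/kubectl/pkg/drain) evicts or deletes pods and waits for them to actually be gone before returning, which addresses the race above. A minimal usage sketch, with illustrative field values rather than the PR's actual configuration:

package main

import (
	"context"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/kubectl/pkg/drain"
)

// drainNode cordons the node and then drains it, blocking until every evicted
// or deleted pod is gone (or the timeout expires).
func drainNode(ctx context.Context, client kubernetes.Interface, node *corev1.Node) error {
	helper := &drain.Helper{
		Ctx:                 ctx,
		Client:              client,
		Force:               true, // also delete pods not managed by a controller
		IgnoreAllDaemonSets: true, // DS pods are handled separately in this PR
		DeleteEmptyDirData:  true,
		GracePeriodSeconds:  -1, // use each pod's own grace period
		Timeout:             90 * time.Second,
		Out:                 os.Stdout,
		ErrOut:              os.Stderr,
	}

	if err := drain.RunCordonOrUncordon(helper, node, true); err != nil {
		return err
	}
	// RunNodeDrain waits for the pods it evicts/deletes to be fully removed.
	return drain.RunNodeDrain(helper, node.Name)
}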
Force-pushed from 2140f89 to c20fba0
pkg/drain/drainer.go
Outdated
// on full drain there is no need to try and remove pods that are owned by DaemonSets
// as we are going to reboot the node in any case.
if fullNodeDrain {
Do we care? (Just thinking about how to simplify.) Why not always remove DS pods from the node if they have sriov resources?
I removed this one as requested :)
Force-pushed from c20fba0 to cd2fb78
@@ -2376,6 +2538,18 @@ func waitForPodRunning(p *corev1.Pod) *corev1.Pod {
	return ret
}

func waitForDaemonReady(d *appsv1.DaemonSet) *appsv1.DaemonSet { |
nit: waitForDaemonSetReady
done
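Only the helper's signature appears in the diff; below is a minimal sketch of what waitForDaemonSetReady could do, written with an explicit context, client, and error return rather than the test suite's own conventions (the polling interval and timeout are assumptions):

package main

import (
	"context"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDaemonSetReady polls the DaemonSet until the number of ready pods
// matches the number of desired (scheduled) pods, then returns the fresh object.
func waitForDaemonSetReady(ctx context.Context, c kubernetes.Interface, d *appsv1.DaemonSet) (*appsv1.DaemonSet, error) {
	var latest *appsv1.DaemonSet
	err := wait.PollUntilContextTimeout(ctx, 2*time.Second, 3*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			var getErr error
			latest, getErr = c.AppsV1().DaemonSets(d.Namespace).Get(ctx, d.Name, metav1.GetOptions{})
			if getErr != nil {
				return false, getErr
			}
			return latest.Status.DesiredNumberScheduled > 0 &&
				latest.Status.NumberReady == latest.Status.DesiredNumberScheduled, nil
		})
	return latest, err
}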
reqLogger.Info("drainNode(): Draining failed, retrying", "error", err)
return false, nil

err = d.removeDaemonSetsFromNode(ctx, node.Name)
I'm not entirely clear on how this works. I understand the process of selecting and removing DS pods, but I'm unsure how we prevent the DS controller from restarting pods on the node we intend to drain. Could you clarify?
So the idea is that if the daemon uses VFs, the new pod will get stuck in Pending, so after we finish the configuration the daemon will be able to start.
Why will the DS-managed Pod be in the Pending state?
Is it because of the cordon/taint? I think the fact that the node is cordoned will not prevent DS pods from restarting on it.
Or is it because SR-IOV resources are not exposed on the node? If so, are we sure that SR-IOV resources will not be exposed on the node at the moment we try to kill the DS-managed pod? How do we ensure this?
I don't have a complete picture of how the operator's drain logic works, so I might be missing something obvious, but I think it's worth asking :)
In case of reconfiguration, the DS pod may get back to the Running state if there are still sriov resources on the worker, IMO.
Indeed, cordon does not affect DaemonSet pods (tested in a kind cluster).
Adding a custom taint with effect NoSchedule will prevent DS pods from getting rescheduled.
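For illustration, a minimal client-go sketch of applying such a custom NoSchedule taint (the taint key is hypothetical, and as the rest of the thread shows, the PR later moves away from the taint approach):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// addNoScheduleTaint adds a custom NoSchedule taint to the node so that the
// DaemonSet controller does not reschedule its pods there during maintenance.
func addNoScheduleTaint(ctx context.Context, c kubernetes.Interface, nodeName string) error {
	node, err := c.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}

	taint := corev1.Taint{
		Key:    "example.com/sriov-maintenance", // hypothetical key
		Effect: corev1.TaintEffectNoSchedule,
	}
	for _, t := range node.Spec.Taints {
		if t.Key == taint.Key && t.Effect == taint.Effect {
			return nil // already tainted
		}
	}
	node.Spec.Taints = append(node.Spec.Taints, taint)

	_, err = c.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
	return err
}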
Adding a comment from the community meeting.
Instead of using a taint, which can be harmful for the cluster (if some DaemonSet pods get deleted, they will be stuck until the sriov operator finishes its work),
we agreed to implement in the drain controller the removal of the device plugin label from the node, plus the removal of the running device plugin pod.
@adrianchiris @ykulazhenkov @zeeke let me know what you think before I start coding it, please :)
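A rough sketch of the agreed approach, assuming a device plugin selector label and a pod label selector (both names below are hypothetical, not the operator's actual values):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// disableDevicePluginOnNode removes the device plugin selector label from the
// node and then deletes the running device plugin pod on that node, so the
// DaemonSet controller will not recreate it until the label is restored.
func disableDevicePluginOnNode(ctx context.Context, c kubernetes.Interface, nodeName, namespace string) error {
	// Setting the label to null in a merge patch removes it from the node.
	// The label key is a hypothetical stand-in for the operator's real one.
	patch := []byte(`{"metadata":{"labels":{"example.com/sriov-device-plugin":null}}}`)
	if _, err := c.CoreV1().Nodes().Patch(ctx, nodeName, types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		return err
	}

	// Delete the device plugin pod currently scheduled on this node.
	pods, err := c.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{
		LabelSelector: "app=sriov-device-plugin", // hypothetical selector
		FieldSelector: "spec.nodeName=" + nodeName,
	})
	if err != nil {
		return err
	}
	for _, pod := range pods.Items {
		if err := c.CoreV1().Pods(pod.Namespace).Delete(ctx, pod.Name, metav1.DeleteOptions{}); err != nil {
			return err
		}
	}
	return nil
}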
+1 from me
We roll more responsibilities onto the drain controller.
Maybe it should be the role of the config daemon to ensure the device plugin no longer runs on the node before requesting a drain?
Then put the node label back after the configuration is completed.
Just a different approach to consider. I'm not against the current one.
Force-pushed from cd2fb78 to 2067ad5
With this commit we also take care of removing DaemonSet-owned pods using sriov devices. We only do it when a drain is requested; we don't do it for reboot requests. Signed-off-by: Sebastian Sch <[email protected]>
Force-pushed from 2067ad5 to 68754e7
I checked the code and it looks good. I have one question about the logic. I want to make sure that we are not missing anything.