ci: validate pods and systemd-networkd restart for PRs #1909
Conversation
@@ -17,6 +17,9 @@ do
  echo "Node internal ip: $node_ip"
  privileged_pod=$(kubectl get pods -n kube-system -l app=privileged-daemonset -o wide | grep "$node_name" | awk '{print $1}')
  echo "privileged pod : $privileged_pod"
  if [ "$privileged_pod" == '' ]; then
      exit 1
Did we encounter such a case during testing? If so, can we also add the status of the privileged pod deployment (kubectl describe daemonset privileged-daemonset -n kube-system)?
Sure, I'll add that. Here you can see this run just got stuck in the loop and I had to cancel it manually: https://dev.azure.com/msazure/One/_build/results?buildId=71680459&view=logs&j=4ea62961-c456-50ab-e773-f15fbc744993&t=6637b73f-d7ef-5d5e-d4d4-eb0bbec757cb&s=c689f5d8-16f1-5a52-95fe-f6a4e6a9e7fe
Sorry, I meant that if we fail to get the privileged pod, it would be good to have the status of the daemonset before exiting, since from the pipeline run I only see daemonset.apps/privileged-daemonset created.
Ideally we should wait for the deployment to complete before proceeding, I think.
Ah, makes sense. I made the switch.
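For reference, a minimal sketch of what that could look like (the daemonset name privileged-daemonset and namespace kube-system come from the diff above; the rollout timeout value is an assumption):

# Sketch only: wait for the privileged daemonset to roll out, and dump its status
# before bailing if the pod still cannot be found. The 5m timeout is an assumption.
kubectl rollout status daemonset/privileged-daemonset -n kube-system --timeout=5m
privileged_pod=$(kubectl get pods -n kube-system -l app=privileged-daemonset -o wide | grep "$node_name" | awk '{print $1}')
if [ -z "$privileged_pod" ]; then
    kubectl describe daemonset privileged-daemonset -n kube-system
    exit 1
fi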
kubectl apply -f test/integration/manifests/cilium/cilium-agent/daemonset.yaml
kubectl apply -f test/integration/manifests/cilium/cilium-agent/clusterrole.yaml
kubectl apply -f test/integration/manifests/cilium/cilium-agent/clusterrolebinding.yaml
kubectl apply -f test/integration/manifests/cilium/cilium-agent/serviceaccount.yaml
kubectl apply -f test/integration/manifests/cilium/cilium-operator/deployment.yaml
kubectl apply -f test/integration/manifests/cilium/cilium-operator/serviceaccount.yaml
kubectl apply -f test/integration/manifests/cilium/cilium-operator/clusterrole.yaml
kubectl apply -f test/integration/manifests/cilium/cilium-operator/clusterrolebinding.yaml
kubectl apply works on directories. If these files have ordering constraints, name them with a priority prefix.
Suggested change:
kubectl apply -f test/integration/manifests/cilium/cilium-agent
kubectl apply -f test/integration/manifests/cilium/cilium-operator
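To illustrate the reviewer's point about ordering: kubectl apply -f on a directory processes the files in lexical filename order, so a numeric prefix can enforce the order. The filenames below are hypothetical examples, not the names in this repo:

# Hypothetical layout: prefixes make the serviceaccount and RBAC apply before the daemonset.
# test/integration/manifests/cilium/cilium-agent/
#   00-serviceaccount.yaml
#   01-clusterrole.yaml
#   02-clusterrolebinding.yaml
#   03-daemonset.yaml
kubectl apply -f test/integration/manifests/cilium/cilium-agent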
  creationTimestamp: "2023-04-17T23:00:12Z"
  labels:
    app.kubernetes.io/managed-by: Helm
    helm.toolkit.fluxcd.io/name: cilium-adapter-helmrelease
    helm.toolkit.fluxcd.io/namespace: 643dcea81aff3b00014098f5
  name: cilium-config
  namespace: kube-system
  resourceVersion: "993"
  uid: 31ccec09-0511-4f18-b21a-10cf4033ce06
Got some extra metadata here.
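In other words, the checked-in manifest could likely keep only the identifying fields and drop the cluster-generated ones (a sketch, assuming the rest of the ConfigMap stays unchanged):

metadata:
  name: cilium-config
  namespace: kube-system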
* update script to check cns in memory and add to pr pipeline
* adding stage to both overlay and podsubnet cilium stages
* add exit case if privileged pod is not found
* check status of priv pod
* call ds status before exit
* install cilium ds with kubectl and not helm for systemd-networkd initcontainer patch
* upload cilium ds
* adding files for cilium-agent and cilium-operator deployment
* update cilium ds
* addressing comments
Reason for Change:
Issue Fixed:
Requirements:
Notes:
Verified the script manually and on both the ACN PR and load test pipelines.
Load Test Pipeline run: https://msazure.visualstudio.com/One/_build/results?buildId=71640453&view=logs&j=aafe78ca-85dd-54d8-c96a-840101a5fad4