[Chore] Fix smoke-daemonset test case to work on multi-node cluster. #2412

Merged · 1 commit · Dec 4, 2023
tests/e2e-openshift/monitoring/02-generate-traces.yaml (3 additions, 1 deletion)

@@ -18,8 +18,10 @@ spec:
         - "--otlp-endpoint=cluster-collector-collector-headless:4317"
         - "--otlp-insecure=true"
         - "--rate=1"
-        - "--duration=5s"
+        - "--duration=3m"
         - "--otlp-attributes=telemetrygen=\"traces\""
         - "--otlp-header=telemetrygen=\"traces\""
+        - "--span-duration=1s"
+        - "--workers=1"
         - "traces"
       restartPolicy: Never
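Raising --duration from 5s to 3m keeps telemetrygen emitting long enough for the collector's metrics to accumulate before the metrics check runs. At the configured rate the expected span volume is easy to estimate (a back-of-the-envelope sketch, not part of the test itself):

```shell
# telemetrygen emits spans at --rate spans/sec for --duration;
# with --rate=1 and --duration=3m that works out to roughly:
rate=1
duration_s=$((3 * 60))
echo "approximately $((rate * duration_s)) spans"
```

With one worker at one span per second, the collector should see on the order of 180 spans over the run, which is plenty for the receiver/exporter counters to be non-empty.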
tests/e2e-openshift/monitoring/check_metrics.sh (2 additions, 2 deletions)

@@ -4,8 +4,8 @@ SECRET=$(oc get secret -n openshift-user-workload-monitoring | grep prometheus-u
 TOKEN=$(echo $(oc get secret $SECRET -n openshift-user-workload-monitoring -o json | jq -r '.data.token') | base64 -d)
 THANOS_QUERIER_HOST=$(oc get route thanos-querier -n openshift-monitoring -o json | jq -r '.spec.host')
 
-# Check metrics used in the prometheus rules created for TempoStack. Refer issue https://issues.redhat.com/browse/TRACING-3399 for skipped metrics.
-metrics="otelcol_exporter_enqueue_failed_spans otelcol_exporter_sent_spans otelcol_process_cpu_seconds otelcol_process_memory_rss otelcol_process_runtime_heap_alloc_bytes otelcol_process_runtime_total_alloc_bytes otelcol_process_runtime_total_sys_memory_bytes otelcol_process_uptime otelcol_receiver_accepted_spans otelcol_receiver_refused_spans"
+# Check metrics for the OpenTelemetry collector instance.
+metrics="otelcol_process_uptime otelcol_process_runtime_total_sys_memory_bytes otelcol_process_memory_rss otelcol_exporter_sent_spans otelcol_process_cpu_seconds otelcol_process_runtime_heap_alloc_bytes otelcol_process_runtime_total_alloc_bytes otelcol_receiver_accepted_spans otelcol_receiver_refused_spans"
 
 for metric in $metrics; do
   query="$metric"
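The loop body is collapsed in the diff view, but the shape of the check is: query the Thanos querier for each metric name and fail if no series comes back (the script presumably hits the Prometheus-compatible /api/v1/query endpoint with the bearer token it extracted above). A minimal sketch of the response handling, using a made-up canned response in the shape that API returns:

```shell
# Canned example in the shape of a Prometheus /api/v1/query response
# (the metric name is real; the timestamp and value are made up)
response='{"status":"success","data":{"resultType":"vector","result":[{"metric":{"__name__":"otelcol_process_uptime"},"value":[1701700000,"42"]}]}}'

# Fail the check when the query returned an empty result set
if echo "$response" | grep -q '"result":\[\]'; then
  echo "otelcol_process_uptime: no data"
  exit 1
else
  echo "otelcol_process_uptime: present"
fi
```

An empty result set (`"result":[]`) is how Prometheus reports "no such series", so matching on it distinguishes a missing metric from a present one even though the HTTP status is 200 either way.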
tests/e2e/smoke-daemonset/00-assert.yaml (8 additions, 2 deletions)

@@ -9,6 +9,12 @@ spec:
     maxSurge: 0
     maxUnavailable: 1
 status:
-  numberAvailable: 1
   numberMisscheduled: 0
-  numberReady: 1
+
+---
+# This KUTTL assert uses the check-daemonset.sh script to ensure the number of ready pods in a daemonset matches the desired count, retrying until successful or a timeout occurs. The script is needed as the number of Kubernetes cluster nodes can vary and we cannot statically set desiredNumberScheduled and numberReady in the assert for daemonset status.
+
+apiVersion: kuttl.dev/v1beta1
+kind: TestAssert
+commands:
+  - script: ./tests/e2e/smoke-daemonset/check-daemonset.sh
Review comment from @pavolloffay (Member), Dec 4, 2023:

    Why do we need a script and cannot statically set the desiredNumberScheduled and numberReady?
    (a comment in the code would be nice)

Reply from the PR author (Contributor):

    Added the comment.
tests/e2e/smoke-daemonset/check-daemonset.sh (new file, 15 additions)

@@ -0,0 +1,15 @@
+#!/bin/bash
+
+# Name of the daemonset to check
+DAEMONSET_NAME="daemonset-test-collector"
+
+# Get the desired and ready pod counts for the daemonset
+read DESIRED READY <<< $(kubectl get daemonset -n $NAMESPACE $DAEMONSET_NAME -o custom-columns=:status.desiredNumberScheduled,:status.numberReady --no-headers)
+
+# Check if the desired count matches the ready count
+if [ "$DESIRED" -eq "$READY" ]; then
+  echo "Desired count ($DESIRED) matches the ready count ($READY) for $DAEMONSET_NAME."
+else
+  echo "Desired count ($DESIRED) does not match the ready count ($READY) for $DAEMONSET_NAME."
+  exit 1
+fi
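Note that check-daemonset.sh performs a single comparison; the retry behaviour comes from KUTTL, which re-runs a failing TestAssert until its timeout elapses. Outside KUTTL, the same effect can be approximated with a small poll helper (a sketch with a stubbed check standing in for the kubectl call; the names here are illustrative, not part of the PR):

```shell
#!/bin/bash

# Retry a command until it succeeds or the attempt budget runs out,
# roughly mirroring how KUTTL re-runs a failing assert
poll() {
  local attempts=$1; shift
  local i
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0
    sleep 1
  done
  return 1
}

# Stubbed check that becomes true on the third call, standing in for
# "desired == ready" on a daemonset that is still rolling out
n=0
check() { n=$((n + 1)); [ "$n" -ge 3 ]; }

if poll 5 check; then
  echo "daemonset ready after $n attempts"
else
  echo "daemonset not ready in time"
  exit 1
fi
```

Because `check` runs in the current shell (not a subshell), the counter persists across retries; with a real `kubectl get daemonset` check the state would instead live in the cluster.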