test: prometheus scale down e2e test #448
Conversation
Force-pushed 1494d03 to 953b8cb
@simonpasquier @jan--f Please help review when convenient.
Force-pushed 953b8cb to f0d678a
f.GetResourceWithRetry(t, ms.Name, ms.Namespace, &prom)

assert.Equal(t, prom.Status.Replicas, int32(1))
if err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, framework.DefaultTestTimeout, true, func(ctx context.Context) (bool, error) {
GetResourceWithRetry already polls when getting the Prometheus resource. If we need to retry the Status.Replicas check, I'd propose adding an AssertReplicaStatus(name, namespace, resources, expectedReplicas) helper to framework.go.
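For illustration, a minimal sketch of such a helper could look like the code below. It assumes the framework package exposes a controller-runtime client as f.K8sClient and the DefaultTestTimeout already used in the tests; the helper name, the explicit *testing.T parameter, and the error message are placeholders, not a final signature.

// Sketch only: a replica-status assertion for framework.go.
// Assumed imports: context, testing, time,
//   monv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1",
//   "k8s.io/apimachinery/pkg/types", "k8s.io/apimachinery/pkg/util/wait".
// f.K8sClient (a controller-runtime client) and DefaultTestTimeout are assumptions.
func (f *Framework) AssertReplicaStatus(t *testing.T, name, namespace string, prom *monv1.Prometheus, expectedReplicas int32) {
	t.Helper()
	key := types.NamespacedName{Name: name, Namespace: namespace}

	err := wait.PollUntilContextTimeout(context.Background(), 5*time.Second, DefaultTestTimeout, true,
		func(ctx context.Context) (bool, error) {
			if getErr := f.K8sClient.Get(ctx, key, prom); getErr != nil {
				return false, nil // transient error or resource not ready yet, keep polling
			}
			return prom.Status.Replicas == expectedReplicas, nil
		})
	if err != nil {
		t.Fatalf("Prometheus %s/%s: expected %d replicas in status, got %d",
			namespace, name, expectedReplicas, prom.Status.Replicas)
	}
}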
Force-pushed 93e585a to a35d0d5
resolve merge conflicts
Force-pushed a35d0d5 to 7968633
Force-pushed 7968633 to 5e34e30
if wait.Interrupted(err) {
	t.Fatal(fmt.Errorf("Prometheus was not scaled down"))
}
f.AssertPrometheusReplicaStatus(ms.Name, ms.Namespace, numOfRep)
Suggested change:
- f.AssertPrometheusReplicaStatus(ms.Name, ms.Namespace, numOfRep)
+ f.AssertPrometheusReplicaStatus(ms.Name, ms.Namespace, 0)
Should this test for scale-down to 0?
Sorry, that was a typo. Thanks for your sharp eyes!
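For context, the scale-down step in the e2e test could then read roughly like the sketch below. It assumes the MonitoringStack ms was created earlier in the test, that the Prometheus replica count is set through ms.Spec.PrometheusConfig.Replicas (that field path is an assumption, not taken from this PR), and that f.AssertPrometheusReplicaStatus is the helper added in this PR's assertions.go.

// Illustrative only, not the PR's exact test code.
// Assumed imports: context, testing,
//   stack "github.com/rhobs/observability-operator/pkg/apis/monitoring/v1alpha1" (path assumed),
//   "github.com/rhobs/observability-operator/test/e2e/framework" (path assumed).
func scalePrometheusDownToZero(t *testing.T, f *framework.Framework, ms *stack.MonitoringStack) {
	zero := int32(0)
	if ms.Spec.PrometheusConfig == nil {
		ms.Spec.PrometheusConfig = &stack.PrometheusConfig{}
	}
	ms.Spec.PrometheusConfig.Replicas = &zero
	if err := f.K8sClient.Update(context.Background(), ms); err != nil {
		t.Fatal(err)
	}

	// with the typo fixed, the assertion checks for 0 replicas rather than numOfRep
	f.AssertPrometheusReplicaStatus(ms.Name, ms.Namespace, 0)
}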
test/e2e/framework/assertions.go (outdated)
	return true, nil
}); wait.Interrupted(err) {
	t.Fatal(fmt.Errorf("Prometheus %s/%s was not scaled down to %d", namespace, name, expectedReplicas))
Suggested change:
- t.Fatal(fmt.Errorf("Prometheus %s/%s was not scaled down to %d", namespace, name, expectedReplicas))
+ t.Fatal(fmt.Errorf("Prometheus %s/%s has unexpected number of replicas, got %d, expected %d", namespace, name, prom.Status.Replicas, expectedReplicas))
The fix from PR #454 doesn't work; the Alertmanager_runs_in_HA_mode case failed in https://github.com/rhobs/observability-operator/actions/runs/8660157870/job/23747502573?pr=448. I will provide a new update for the Alertmanager_runs_in_HA_mode case.
Force-pushed f2b0a3c to bd97bc3
Force-pushed bd97bc3 to b8c9740
/retest
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: jan--f, lihongyan1. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/retest
This should fix the test failure COO-79:
monitoring_stack_controller_test.go:585: assertion failed: 0 (prom.Status.Replicas int32) != 1 (int32)
It also fixes COO-84 [QE] Case Alertmanager_runs_in_HA_mode is not stable:
monitoring_stack_controller_test.go:99: statefulsets.apps "alertmanager-alerting" not found
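As a rough illustration only (the actual stabilization is whatever lands in this PR), the flaky check could wait for the Alertmanager statefulset to appear before inspecting it. The statefulset name alertmanager-<stack name> comes from the failure log above; the helper name, the f.K8sClient field, and the import paths are assumptions.

// Hypothetical helper, not the PR's fix.
// Assumed imports: context, testing, time,
//   appsv1 "k8s.io/api/apps/v1", apierrors "k8s.io/apimachinery/pkg/api/errors",
//   "k8s.io/apimachinery/pkg/types", "k8s.io/apimachinery/pkg/util/wait",
//   and the e2e framework package.
func waitForAlertmanagerStatefulSet(t *testing.T, f *framework.Framework, stackName, namespace string) *appsv1.StatefulSet {
	t.Helper()
	sts := appsv1.StatefulSet{}
	key := types.NamespacedName{Name: "alertmanager-" + stackName, Namespace: namespace}

	err := wait.PollUntilContextTimeout(context.Background(), 5*time.Second, framework.DefaultTestTimeout, true,
		func(ctx context.Context) (bool, error) {
			if getErr := f.K8sClient.Get(ctx, key, &sts); getErr != nil {
				if apierrors.IsNotFound(getErr) {
					return false, nil // statefulset not created yet, keep polling
				}
				return false, getErr
			}
			return true, nil
		})
	if err != nil {
		t.Fatalf("statefulset %s/%s did not appear: %v", namespace, key.Name, err)
	}
	return &sts
}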