Compute build revision in KafkaConnectAssemblyOperatorPodSetTest #10113
Conversation
Thanks for the PR. Maybe you should fill in the PR description to explain why this is a good and useful change? The main advantage of the hardcoded value is that it checks that the revision hash is stable and not changing randomly. Having it calculated automatically in the same way as the operator does makes the test dependent on the operator implementation. But maybe more importantly, it also hides potential issues and makes it questionable what value testing it actually adds. |
@scholzj Thank you for your comment, updated the description. |
Force-pushed from 04f9afb to 3f7366b
Yes, it needs to be updated sometimes. But it is important to make sure that the hash is always the same when nothing changes. And one of the ways to test it is to have the hardcoded hash instead of calculating it. That way:
If you do not want to change it … have you tried to just configure the …? Or is there something else you meant with …? |
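The stability argument above can be illustrated with a minimal sketch. Everything here is hypothetical: the helper name `buildRevision` and the truncated-SHA-256 scheme are assumptions for illustration, not the operator's actual hashing implementation.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

public class HardcodedHashSketch {
    // Hypothetical stand-in for the operator's revision computation:
    // a truncated SHA-256 digest of the generated Dockerfile content.
    static String buildRevision(String dockerfile) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(dockerfile.getBytes(StandardCharsets.UTF_8));
        return HexFormat.of().formatHex(digest).substring(0, 16);
    }

    public static void main(String[] args) throws Exception {
        String dockerfile = "FROM my-base-image:latest\n";
        // In the real test the expected value would be a hard-coded literal;
        // it is computed here only to keep the sketch self-contained. A
        // literal fails loudly if the hashing logic changes, which is exactly
        // what the hard-coded value is meant to catch.
        String expected = buildRevision(dockerfile);
        boolean stable = expected.equals(buildRevision(dockerfile));
        System.out.println("stable: " + stable);
    }
}
```

The point of the hard-coded literal is that it pins both the input and the algorithm: a test that recomputes the hash the same way the operator does would silently pass even if the hashing logic changed.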
Force-pushed from 3f7366b to 926e2ce
By the "version bump process" I mean the part when the default Kafka version changes. I had to execute that process on a fork and found that most of the changes can be easily automated (e.g. updating examples), except this one. If the computation is problematic because it uses the same logic as the code itself, then maybe the test could contain a "template" Dockerfile in which the Kafka version can be replaced easily, and generate the expected hash based on that. We would somewhat duplicate the code, but reduce the risk of the code and the test going wrong at the same time. And with this solution we wouldn't need to manually update the hash when the Kafka version changes. Does that sound acceptable? |
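The template-Dockerfile idea described above could look roughly like the following sketch. The placeholder syntax, class name, and image name are all illustrative assumptions, not anything from the Strimzi codebase.

```java
public class TemplateDockerfileSketch {
    public static void main(String[] args) {
        // Illustrative template: only the version placeholder changes on a
        // version bump, so the expected hash could be derived from the
        // rendered text instead of being maintained by hand.
        String template = "FROM some-registry/kafka-base:{{KAFKA_VERSION}}\n";
        String kafkaVersion = "3.7.0"; // would come from the versions file under test
        String rendered = template.replace("{{KAFKA_VERSION}}", kafkaVersion);
        System.out.println(rendered.trim());
    }
}
```

The trade-off named in the comment applies: the template duplicates part of the Dockerfile-generation logic, but an accidental change in either the code or the test is unlikely to be mirrored in the other, so it would still surface as a hash mismatch.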
Right, so I think for that, you can just set the image explicitly:

```java
private static final KafkaConnect CONNECT = new KafkaConnectBuilder()
        .withNewMetadata()
            .withName(NAME)
            .withNamespace(NAMESPACE)
        .endMetadata()
        .withNewSpec()
            .withImage("my-base-image:latest")
            .withReplicas(3)
        .endSpec()
        .build();

private static final KafkaConnectCluster CLUSTER = KafkaConnectCluster.fromCrd(RECONCILIATION, CONNECT, VERSIONS, SHARED_ENV_PROVIDER);
```

That should make it independent of the Kafka version but allow keeping the static value. |
…BuildChangedCluster

By setting the base image, the test won't need an update when the default Kafka version changes.

Signed-off-by: Daniel Urban <[email protected]>
Force-pushed from 926e2ce to b2ded3b
Thank you, made the change. I thought that testing with the default Kafka version was the intention, but if not, this makes sense. |
I think that is more a side-effect of the default base image being based on the default Kafka version, not the intention. I think this change should work well: it makes sure the hash does not change and makes the test independent of the Kafka version. Thanks. |
LGTM. Thanks.
LGTM. Thanks!
Type of change
Description
On each Kafka version change, the KafkaConnectAssemblyOperatorPodSetTest needs a manual update due to a hard-coded hash. This change updates the test to compute the expected hash, meaning that the test won't need a manual update on each version change.
Checklist
Please go through this checklist and make sure all applicable tasks have been done