Bug Description

My root and user certificates are stored in the same Kubernetes Secret. When configuring the tls and authentication properties of a KafkaMirrorMaker2 resource, I cannot use this one secret for both. Doing so results in an error in the operator, because it tries to mount the same secret twice (two volume mounts with the same name).
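For illustration, a minimal sketch of a cluster entry that triggers this; the secret and cluster names here are hypothetical stand-ins, not the exact values from my setup:

    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaMirrorMaker2
    metadata:
      name: kafka-connect-mirrormaker2
    spec:
      # ... other fields omitted ...
      clusters:
        - alias: source
          bootstrapServers: source-kafka-bootstrap:9093
          tls:
            trustedCertificates:
              - secretName: kafka-user-certs   # secret referenced here ...
                certificate: ca.crt
          authentication:
            type: tls
            certificateAndKey:
              secretName: kafka-user-certs     # ... and referenced again here
              certificate: user.crt
              key: user.key

The operator then fails to create the pod with an error like the following: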
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://10.43.0.1:443/api/v1/namespaces/develop/pods. Message: Pod "kafka-connect-mirrormaker2-0" is invalid: [spec.volumes[5].name: Duplicate value: "source-develop-kafka-con-9f60b841", spec.containers[0].volumeMounts[5].mountPath: Invalid value: "/opt/kafka/mm2-certs/source/develop-kafka-connect-kafka-user": must be unique]. Received status: Status(apiVersion=v1, code=422, details=StatusDetails(causes=[StatusCause(field=spec.volumes[5].name, message=Duplicate value: "source-develop-kafka-con-9f60b841", reason=FieldValueDuplicate, additionalProperties={}), StatusCause(field=spec.containers[0].volumeMounts[5].mountPath, message=Invalid value: "/opt/kafka/mm2-certs/source/develop-kafka-connect-kafka-user": must be unique, reason=FieldValueInvalid, additionalProperties={})], group=null, kind=Pod, name=kafka-connect-mirrormaker2-0, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=Pod "kafka-connect-mirrormaker2-0" is invalid: [spec.volumes[5].name: Duplicate value: "source-develop-kafka-con-9f60b841", spec.containers[0].volumeMounts[5].mountPath: Invalid value: "/opt/kafka/mm2-certs/source/develop-kafka-connect-kafka-user": must be unique], metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=Invalid, status=Failure, additionalProperties={}).
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.requestFailure(OperationSupport.java:660) ~[io.fabric8.kubernetes-client-6.8.1.jar:?]
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.requestFailure(OperationSupport.java:640) ~[io.fabric8.kubernetes-client-6.8.1.jar:?]
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.assertResponseCode(OperationSupport.java:589) ~[io.fabric8.kubernetes-client-6.8.1.jar:?]
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.lambda$handleResponse$0(OperationSupport.java:549) ~[io.fabric8.kubernetes-client-6.8.1.jar:?]
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:646) ~[?:?]
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510) ~[?:?]
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2147) ~[?:?]
at io.fabric8.kubernetes.client.http.StandardHttpClient.lambda$completeOrCancel$10(StandardHttpClient.java:140) ~[io.fabric8.kubernetes-client-api-6.8.1.jar:?]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) ~[?:?]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) ~[?:?]
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510) ~[?:?]
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2147) ~[?:?]
at io.fabric8.kubernetes.client.utils.AsyncUtils.lambda$retryWithExponentialBackoff$3(AsyncUtils.java:90) ~[io.fabric8.kubernetes-client-api-6.8.1.jar:?]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) ~[?:?]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) ~[?:?]
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510) ~[?:?]
at java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:614) ~[?:?]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:844) ~[?:?]
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:482) ~[?:?]
... 1 more
Steps to reproduce
1. Create a secret containing the root certificate, user certificate, and user key for Kafka (a sketch follows this list).
2. Reference that one secret in both the tls and authentication properties of a KafkaMirrorMaker2 cluster entry.
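A sketch of such a secret, with a hypothetical name and placeholder values:

    apiVersion: v1
    kind: Secret
    metadata:
      name: kafka-user-certs   # hypothetical; matches the sketch above
    type: Opaque
    data:
      ca.crt: <base64-encoded root certificate>
      user.crt: <base64-encoded user certificate>
      user.key: <base64-encoded user key>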
I guess this is a fair point, but I'm not sure how easy it will be to fix. I will have a look at it. In any case, the workaround is pretty straightforward: just use different secrets for the time being, as sketched below.
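For clarity, a sketch of that workaround: keep the CA certificate and the user credentials in two separate secrets (names hypothetical), so the operator creates two distinct volumes:

    tls:
      trustedCertificates:
        - secretName: kafka-ca-cert        # holds only ca.crt
          certificate: ca.crt
    authentication:
      type: tls
      certificateAndKey:
        secretName: kafka-user-cert        # holds user.crt and user.key
        certificate: user.crt
        key: user.key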
OK, so getting back to it ... I'm struggling a bit to reproduce this, as it seems to be covered. So I think I need some clarification from you ...
What is the exact version of Strimzi you are using? You have both version: 3.8.0 and image: "strimzi/kafka:0.37.0-kafka-3.4.0" in there. These two need to be in sync, and if you really use Strimzi 0.37.0, then version: 3.8.0 will make it fail. Mixing a container image from one version with an operator from another can also lead to many problems.
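Reconstructed from the values quoted above, the conflicting fields would look like this in the KafkaMirrorMaker2 spec:

    spec:
      version: 3.8.0                              # Kafka version requested from the operator
      image: "strimzi/kafka:0.37.0-kafka-3.4.0"   # but this image ships Kafka 3.4.0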
The error refers to a secret develop-kafka-connect-kafka-user, which does not seem to correspond to the KafkaMirrorMaker2 CR you shared. So, can you please share the exact KafkaMirrorMaker2 CR you are actually using? And with it, the actual pod definition (kubectl get strimzipodset kafka-connect-mirrormaker2 -o yaml)?
@scholzj After aligning my Kafka Connect version with the Strimzi version used on the cluster, I no longer seem to be encountering the problem. Thanks for your suggestion.
I tried to anonymize the example I included; that might be why some things do not correspond.
Thanks for your suggestion!
Expected behavior
The volume containing the certificates should be mounted into the pod only once, as sketched below.
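A hypothetical sketch of the deduplicated pod spec fragment, reusing the volume name and mount path from the error above; both the truststore and the keystore material would come from one volume, mounted once:

    spec:
      volumes:
        - name: source-develop-kafka-con-9f60b841
          secret:
            secretName: develop-kafka-connect-kafka-user
      containers:
        - volumeMounts:
            - name: source-develop-kafka-con-9f60b841
              mountPath: /opt/kafka/mm2-certs/source/develop-kafka-connect-kafka-user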
Strimzi version
0.37.0
Kubernetes version
Kubernetes 1.27.11
Installation method
YAML files
Infrastructure
Bare-metal
Configuration files and logs
No response
Additional context
No response