handle headless service of statefulset pods #52
Comments
@rjanovski I agree. I should be able to take a look at this in the next few days. It's bitten me a few times.
@rjanovski can you post your service object here so I can make sure I know what you mean by following the labels (as opposed to selectors)? I want to make sure I can account for your situation and not make assumptions. Thanks. Also, I will need to ensure I respect subdomain if that is specified.
@cjimti I meant the selector labels, yes, but just querying the endpoints of the service seems to be enough (and should work in any namespace/subdomain; I'm using the default). BTW, I'm just using the Kafka Helm chart (https://github.com/bitnami/charts/tree/master/bitnami/kafka), so replicating my scenario is quite simple. Headless service:
and the endpoints (which have
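For reference, a minimal sketch of what a headless service for such a chart and its matching endpoints typically look like. This is illustrative only; the names, labels, and ports here are assumptions, not values copied from the actual Bitnami chart:

```yaml
# Illustrative sketch: a headless Service (clusterIP: None) selecting
# the StatefulSet pods. Names and ports are assumed, not the chart's.
apiVersion: v1
kind: Service
metadata:
  name: kafka-headless
spec:
  clusterIP: None          # headless: DNS returns one record per pod
  selector:
    app: kafka
  ports:
    - name: kafka
      port: 9092
---
# The matching Endpoints object carries one address per pod, each with
# a hostname field; per-pod forwarding would need to map these.
apiVersion: v1
kind: Endpoints
metadata:
  name: kafka-headless
subsets:
  - addresses:
      - ip: 10.1.2.3       # example pod IP
        hostname: kafka-0  # pod name exposed as the DNS hostname
    ports:
      - name: kafka
        port: 9092
```

The `hostname` entries in the endpoints are what make per-pod DNS names like `kafka-0.kafka-headless` resolvable in-cluster.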
@rjanovski please try 1.8.2 when you get a chance. @alloran contributed an update to resolve this. Thanks!
Works great now, thanks!
Somewhere between 1.8.2 and 1.14.0 there has been a regression; this is not working anymore 😢 EDIT: 1.13.2 is the culprit; 1.13.1 is the last working version.
See #133 |
Hey guys, thanks for this cool tool!
I have an issue where a headless service exposing a StatefulSet is not getting mapped well.
It would be great if you could provide special-case handling for this scenario, as it seems everyone is struggling with it.
results in:
but since there are 3 pods, what's really needed is:
pods:
The issue is that there are pod names under the headless service, and these don't get mapped.
It seems not too hard to follow the labels, get the right pods (name and IP), and map them correctly.
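To illustrate the mapping being asked for: StatefulSet pods are named `<statefulset>-<ordinal>`, and each gets a per-pod DNS name of the form `<pod>.<service>`. A small sketch of deriving those names (the `kafka` and `kafka-headless` names are hypothetical, and in-cluster names would additionally carry `.<namespace>.svc`):

```shell
# Sketch of the naming scheme only; "kafka" / "kafka-headless" are
# illustrative names, not taken from a real cluster.
statefulset="kafka"
service="kafka-headless"
replicas=3

hostnames=""
for i in $(seq 0 $((replicas - 1))); do
  # StatefulSet pods are named <statefulset>-<ordinal>
  hostnames="${hostnames}${statefulset}-${i}.${service} "
done
echo "${hostnames}"
```

These are the hostnames a forwarding tool would need to register locally, one entry per pod IP, instead of a single entry for the service.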
Even if just
kubefwd pod
were available, there would be something to work with.
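For anyone landing here, a sketch of the commands involved. Note that `kubefwd svc` is the real subcommand; `kubefwd pod` is the hypothetical one proposed in this issue and does not exist:

```
# Today: forward all services in a namespace (real kubefwd subcommand)
sudo kubefwd svc -n default

# Proposed above (hypothetical, not implemented): forward the individual
# pods behind a headless service so names like kafka-0.kafka-headless
# resolve locally
# sudo kubefwd pod -n default
```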