Feature Request: Specify placement group with Hetzner Autoscaler #5919
Comments
Nice suggestion. Placement Groups have a limit of 10 instances, so using them for all your nodes might become a problem. This can be introduced more nicely in the new JSON format for node group configuration. /area provider/hetzner |
Thanks for the reply! I wasn't aware of the new env variable. Yes, that doesn't seem like a good place to configure it. It's not entirely clear to me how it will interact with the existing variables it's replacing if both are set. So, if we were to add support for placement groups (acknowledging the 10 node limit), the JSON might look like this?
Could a warning be shown in the logs and the setting disabled if a placement group is configured for a node pool that could have more than 10 nodes? |
See autoscaler/cluster-autoscaler/cloudprovider/hetzner/hetzner_node_group.go, lines 416 to 420 at 81eed96.
The JSON should look like this:
{
  "nodeConfigs": {
    "pool-1": {
      "placementGroup": "name or ID here"
    }
  }
}
I would prefer to fail more loudly by just refusing to start. This is how the config validation is done right now; see autoscaler/cluster-autoscaler/cloudprovider/hetzner/hetzner_cloud_provider.go, lines 203 to 215 at 81eed96. |
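A minimal sketch of what such a startup check could look like, assuming a hypothetical per-pool config struct with a PlacementGroup field and a known maximum size per pool; the names are illustrative, not the provider's actual API:

```go
package hetzner

import "fmt"

// maxPlacementGroupSize reflects Hetzner's limit of 10 instances per
// placement group, as mentioned above.
const maxPlacementGroupSize = 10

// NodeConfig is a hypothetical per-pool configuration; only the field
// relevant to this sketch is shown.
type NodeConfig struct {
	PlacementGroup string // name or ID, empty if unset
}

// validatePlacementGroups returns an error (so the autoscaler refuses to
// start) when a pool that can grow beyond the placement group limit has a
// placement group configured.
func validatePlacementGroups(nodeConfigs map[string]NodeConfig, maxPoolSize map[string]int) error {
	for pool, cfg := range nodeConfigs {
		if cfg.PlacementGroup == "" {
			continue
		}
		if maxPoolSize[pool] > maxPlacementGroupSize {
			return fmt.Errorf("node pool %q uses placement group %q but allows up to %d nodes; placement groups support at most %d instances",
				pool, cfg.PlacementGroup, maxPoolSize[pool], maxPlacementGroupSize)
		}
	}
	return nil
}
```

Failing at startup surfaces the misconfiguration immediately instead of letting a later scale-up run into the limit.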
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
Not stale for me. |
@dominic-p are you interested in implementing this? I can help out if you have any questions :) /remove-lifecycle stale |
Thanks for following up. I can take a stab at it. Go's not my language, unfortunately, but the change may be trivial enough for me to muddle through. Can you point me in the right direction (e.g. which file(s) I should start looking at)? |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
Still not stale for me. Still struggling to find the time to work on it. |
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten |
Just checking in on this again. If someone can point me in the right direction, I can take a stab at a PR for this. As I mentioned before, Go isn't a language I know, but it should be pretty straightforward. |
Some steps:
|
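For orientation, here is a rough sketch of the core change, assuming the hcloud-go v2 client and a hypothetical helper that attaches a configured placement group to the server create request; the function and its wiring are illustrative, not the actual PR:

```go
package hetzner

import (
	"context"
	"fmt"

	"github.com/hetznercloud/hcloud-go/v2/hcloud"
)

// attachPlacementGroup resolves a placement group by name or ID and sets it
// on the server create options. placementGroup would come from the per-pool
// "placementGroup" entry in the nodeConfigs JSON discussed above.
func attachPlacementGroup(ctx context.Context, client *hcloud.Client, opts *hcloud.ServerCreateOpts, placementGroup string) error {
	if placementGroup == "" {
		return nil // nothing configured for this pool
	}
	group, _, err := client.PlacementGroup.Get(ctx, placementGroup)
	if err != nil {
		return fmt.Errorf("resolving placement group %q: %w", placementGroup, err)
	}
	if group == nil {
		return fmt.Errorf("placement group %q not found", placementGroup)
	}
	opts.PlacementGroup = group
	return nil
}
```

A caller building the hcloud.ServerCreateOpts for a new node would invoke something like this right before client.Server.Create.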
Thank you so much for taking the time to lay that out for me. I took a shot at a PR. Let me know what you think. As I said, Go is not my language, but I think I was able to muddle through the steps you gave me. |
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /close not-planned |
@k8s-triage-robot: Closing this issue, marking it as "Not Planned". In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. |
Reopening the issue, as PR #6999 is already open and under review. /reopen |
@Shubham82: Reopened this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. |
Which component are you using?:
cluster-autoscaler
Is your feature request designed to solve a problem? If so describe the problem this feature should solve.:
Hetzner Placement Groups allow you to spread your VMs across different physical hardware which decreases the probability that some instances might fail together.
Describe the solution you'd like.:
It would be nice to be able to specify a given placement group for nodes managed by the autoscaler, just like we can currently specify things like the network or SSH key. Maybe a new env variable like HCLOUD_PLACEMENT_GROUP could be introduced.
Describe any alternative solutions you've considered.:
Of course, the status quo is fine. This would just make the cluster a bit more robust.
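For illustration, a minimal sketch of how the proposed HCLOUD_PLACEMENT_GROUP variable might be read at startup, mirroring how other HCLOUD_* settings are picked up from the environment; the variable is hypothetical and does not exist in the provider today:

```go
package hetzner

import "os"

// placementGroupFromEnv returns the placement group name or ID proposed via
// the (hypothetical) HCLOUD_PLACEMENT_GROUP environment variable, and whether
// it was set to a non-empty value.
func placementGroupFromEnv() (string, bool) {
	v, ok := os.LookupEnv("HCLOUD_PLACEMENT_GROUP")
	return v, ok && v != ""
}
```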