
Add support for node pool placement group config #6999

Merged: 5 commits from dominic-p:iss-5919-placement-groups into kubernetes:master on Nov 20, 2024

Conversation

dominic-p
Contributor

What type of PR is this?

/kind feature

What this PR does / why we need it:

This PR adds support for specifying a placement group for a node pool using the HCLOUD_CLUSTER_CONFIG JSON. This is a nice feature to have because it allows you to spread a node pool's VMs over different physical hardware, increasing the overall resilience of the pool.

Which issue(s) this PR fixes:

Fixes #5919

Special notes for your reviewer:

This is the first Go code that I have written, so I apologize if it's a total mess. Also, I'm not sure how to set up a dev/testing environment for this project, so I was not able to actually run this code to test it.

Does this PR introduce a user-facing change?

Add support for specifying node pool placement groups when using HCLOUD_CLUSTER_CONFIG

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:

To specify a placement group, use JSON like this:

{
  "nodeConfigs": {
    "pool-1": {
      "placementGroup": "name or ID here"
    }
  }
}
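
For illustration, here is a minimal Go sketch of how a config with this shape could be deserialized. The ClusterConfig and NodeConfig types below are assumptions made for this example, not the provider's actual types; only the JSON shape above comes from this PR.

package main

import (
	"encoding/json"
	"fmt"
)

// ClusterConfig and NodeConfig are illustrative types for this sketch only;
// the real provider's type names may differ.
type ClusterConfig struct {
	NodeConfigs map[string]NodeConfig `json:"nodeConfigs"`
}

type NodeConfig struct {
	// PlacementGroup holds either a placement group name or its numeric ID.
	PlacementGroup string `json:"placementGroup"`
}

func main() {
	raw := `{"nodeConfigs":{"pool-1":{"placementGroup":"name or ID here"}}}`

	var cfg ClusterConfig
	if err := json.Unmarshal([]byte(raw), &cfg); err != nil {
		panic(err)
	}
	// Prints: name or ID here
	fmt.Println(cfg.NodeConfigs["pool-1"].PlacementGroup)
}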

k8s-ci-robot added the kind/feature, area/cluster-autoscaler, and cncf-cla: yes labels on Jul 3, 2024
k8s-ci-robot added the area/provider/hetzner label on Jul 3, 2024
@k8s-ci-robot
Contributor

Welcome @dominic-p!

It looks like this is your first PR to kubernetes/autoscaler 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/autoscaler has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

k8s-ci-robot added the needs-ok-to-test label on Jul 3, 2024
@k8s-ci-robot
Contributor

Hi @dominic-p. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot added the size/M label on Jul 3, 2024
@dominic-p
Contributor Author

Friendly bump. Is there anything I can do to polish this PR or help with the review process?

@lukasmetzner
Contributor

Hey,

thanks for opening the PR, and sorry for the long wait ^^ It mostly works; there are a few Go syntax issues, and you also need to add the placement group to the server creation request:

hetzner_node_group.go:449

opts := hcloud.ServerCreateOpts{
    ...
    PlacementGroup: n.placementGroup,
}
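
For context, here is a hedged sketch of how the configured placement group (name or ID) might be resolved with hcloud-go before being set on the create request. The resolvePlacementGroup helper and the import path are assumptions for illustration, not the exact code merged in this PR (the autoscaler vendors its own copy of hcloud-go); it assumes hcloud-go's PlacementGroupClient.Get, which accepts either a name or an ID.

package hetzner

import (
	"context"
	"fmt"

	"github.com/hetznercloud/hcloud-go/v2/hcloud"
)

// resolvePlacementGroup is a hypothetical helper: it takes the value from
// nodeConfigs["<pool>"].placementGroup (a name or numeric ID) and resolves it
// to an *hcloud.PlacementGroup that can be assigned to
// hcloud.ServerCreateOpts.PlacementGroup.
func resolvePlacementGroup(ctx context.Context, client *hcloud.Client, idOrName string) (*hcloud.PlacementGroup, error) {
	if idOrName == "" {
		// No placement group configured for this node pool.
		return nil, nil
	}
	// hcloud-go's Get accepts either a numeric ID or a name.
	pg, _, err := client.PlacementGroup.Get(ctx, idOrName)
	if err != nil {
		return nil, fmt.Errorf("looking up placement group %q: %w", idOrName, err)
	}
	if pg == nil {
		return nil, fmt.Errorf("placement group %q not found", idOrName)
	}
	return pg, nil
}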

@dominic-p
Contributor Author

Thank you so much for the review! I believe I have implemented all of your suggested changes. Let me know what you think.

I'm seeing a ton of errors in the CI tests, but they don't seem to be relevant to this PR. It's hard for me to say though.

@apricote
Member

apricote commented Nov 5, 2024

One of the files is not properly formatted, which causes the CI failure in test-and-verify. You can fix it by following the CI output:

Please run hack/update-gofmt.sh to fix the following files:
./cluster-autoscaler/cloudprovider/hetzner/hetzner_cloud_provider.go

(And yes, most of the issues in the output are not relevant to this PR and the important line is very easy to miss)

@dominic-p
Contributor Author

OK, it looks like the tests are passing now. If we do wind up with a placement group that would be too large, the error message is maybe a little less readable now (it will reference placement group IDs instead of names). But this seemed like the cleanest way to solve the problem.

Let me know if you would prefer a different approach.

@apricote
Member

/approve

Thank you very much @dominic-p!

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: apricote, dominic-p

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

k8s-ci-robot added the approved label on Nov 20, 2024
@apricote
Member

/lgtm

k8s-ci-robot added the lgtm label on Nov 20, 2024
k8s-ci-robot merged commit 4c37ff3 into kubernetes:master on Nov 20, 2024 (6 checks passed)
@dominic-p
Contributor Author

You are very welcome, and thank you for being so patient with me as I tried to learn go for this PR. :)

dominic-p deleted the iss-5919-placement-groups branch on Nov 20, 2024