Security group discovery fails when using launchTemplate in AWSNodeTemplate #3435

Closed
BEvgeniyS opened this issue Feb 21, 2023 · 2 comments · Fixed by #3437
Labels
bug Something isn't working

BEvgeniyS commented Feb 21, 2023

Version

Karpenter Version: 0.25.0
Kubernetes Version: v1.23.14-eks-ffeb93d

Expected Behavior

When using launchTemplate, I expect the Karpenter controller to discover the security groups, either by taking them from the launch template itself or by discovering them via the default kubernetes.io/cluster/<clustername> tag. Subnet discovery works as usual.

Karpenter v0.22.1 didn't show this error, but v0.24.0 and v0.25.0 both do.
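
For illustration, the tag-based discovery I'd expect as a fallback would look roughly like the selector below if written out explicitly (a sketch only, using my dev-eks cluster name; it is not part of the manifest I actually apply, since as far as I understand launchTemplate and securityGroupSelector aren't meant to be combined):

securityGroupSelector:
  kubernetes.io/cluster/dev-eks: '*'   # match any value on the default cluster tag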

Actual Behavior

The controller repeatedly shows the following error:

2023-02-21T22:43:08.810Z	ERROR	controller	Reconciler error	{"commit": "beb0a64-dirty", "controller": "awsnodetemplate", "controllerGroup": "karpenter.k8s.aws", "controllerKind": "AWSNodeTemplate", "AWSNodeTemplate": {"name":"awslinux2"}, "namespace": "", "name": "awslinux2", "reconcileID": "2de7cfc8-07f4-4c7e-88a8-98ee45b4ad6e", "error": "describing security groups [], InvalidParameterValue: The filter 'null' is invalid
\tstatus code: 400, request id: e46a6816-a666-4139-851a-7f5c4ed52d65"}

The nodes are being launched, though

Steps to Reproduce the Problem

  1. Install or upgrade to Karpenter 0.24.0 or later
  2. Apply the AWSNodeTemplate manifest shown below

Resource Specs and Logs

Node template I use:

apiVersion: karpenter.k8s.aws/v1alpha1
kind: AWSNodeTemplate
metadata:
  annotations:
    meta.helm.sh/release-name: rokt-karpenter
    meta.helm.sh/release-namespace: karpenter
  labels:
    app.kubernetes.io/managed-by: Helm
  name: awslinux2
spec:
  launchTemplate: NodeLaunchTemplate_xxxxx
  subnetSelector:
    Name: '*-app-*'
    kubernetes.io/cluster/dev-eks: shared

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
Contributor

engedaam commented Feb 22, 2023

Thank you for creating this issue. The error should not be produced, and we are currently working to fix that. I suggest that you move away from using the launchTemplate field in the future, as it is something the team is considering deprecating during v1beta1. We currently have an RFC up for that: #2964
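
For context, a template that leans on Karpenter's own discovery instead of launchTemplate could look roughly like this (an illustrative sketch only, not a drop-in replacement for your launch template; the selector tags mirror the ones in your subnetSelector, and the apiVersion assumes the v1alpha1 API shipped with v0.25.0):

apiVersion: karpenter.k8s.aws/v1alpha1
kind: AWSNodeTemplate
metadata:
  name: awslinux2
spec:
  amiFamily: AL2
  subnetSelector:
    Name: '*-app-*'
    kubernetes.io/cluster/dev-eks: shared
  securityGroupSelector:
    kubernetes.io/cluster/dev-eks: '*'
  # bootstrapping currently handled by the launch template could move into
  # spec.userData, and AMI pinning into spec.amiSelector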

@BEvgeniyS
Author

> Thank you for creating this issue. The error should not be produced, and we are currently working to fix that. I suggest that you move away from using the launchTemplate field in the future, as it is something the team is considering deprecating during v1beta1. We currently have an RFC up for that: #2964

Thanks for the quick reply.

I understand the push to get away from it, but there are many things that we apply dynamically to the launch template (static userdata won't do; it's all constructed from arguments and substitutions within the CFN stack).

In our case we could keep using the same nodes we already use with cluster-autoscaler, and then just migrate to Bottlerocket instead of trying to shoehorn our advanced config into AWSNodeTemplate or setting up an AMI-baking factory. Just my 2c.
