Instance override issue when launching a worker group launch template per az #368
Comments
The problem is in the module code that builds the overrides: currently it applies two instance overrides to the ASG, the second of which defaults to the module's `override_instance_type` value. So for you, you would need to choose a second, different instance type as `override_instance_type`.
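As an illustration of that workaround, a minimal sketch (the per-group `override_instance_type` key and the t3.xlarge choice are assumptions here, not verified against the module source):

worker_groups_launch_template = [
  {
    name          = "t3large-1a"
    instance_type = "t3.large"

    # Hypothetical: pick a second, different type for the override so the ASG
    # does not receive the same instance type twice.
    override_instance_type = "t3.xlarge"

    subnets = "${data.aws_subnet.private_subnet_a.id}"
  },
]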
I think the changes we should make in this module are:
But it would be nice to hear other people's opinions 🙂
I got it running as I wanted; I only had to remove the instance_type override from my config, and that then defaults to t3.large. However, I kind of see this as a bug: that second override in the code should not happen if the instance_type override matches the local default.
I don't think this is possible in Terraform currently, because it requires conditionally setting an …
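(For context, a rough sketch of how the "skip the duplicate override" idea could be expressed once Terraform 0.12 dynamic blocks became available; this is illustrative only, not the module's actual code, and the variable names are assumptions:)

# Inside the launch_template block of mixed_instances_policy:
# emit the second override only when it differs from the primary type.
dynamic "override" {
  for_each = var.override_instance_type != var.instance_type ? [var.override_instance_type] : []

  content {
    instance_type = override.value
  }
}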
Should all be resolved in latest release 🙂
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Launch templates with mixed policies fail when re-using an instance type.
I'm submitting a bug report.
What is the current behavior?
I'm trying to create two worker_groups with launch templates, one per availability zone. The groups are created and I can see them in the console; however, this TF module reports an error when applying the instance overrides:
aws_autoscaling_group.workers_launch_template.1: Error creating AutoScaling Group: ValidationError: Cannot add same instance type override more than once. Remove these duplicates from the request and try again: [t3.large]
status code: 400, request id: xxxx
module.eks.module.eks.aws_autoscaling_group.workers_launch_template[0]: 1 error(s) occurred:
aws_autoscaling_group.workers_launch_template.0: Error creating AutoScaling Group: ValidationError: Cannot add same instance type override more than once. Remove these duplicates from the request and try again: [t3.large]
status code: 400, request id: yyy
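For reference, the failure corresponds to the module generating an ASG whose mixed instances policy contains two identical overrides, roughly like this hand-written sketch (not the module's actual resource; resource names and the placeholder sizes are illustrative):

resource "aws_autoscaling_group" "workers" {
  min_size            = 1
  max_size            = 3
  vpc_zone_identifier = ["${data.aws_subnet.private_subnet_a.id}"]

  mixed_instances_policy {
    launch_template {
      launch_template_specification {
        launch_template_id = "${aws_launch_template.workers.id}"
      }

      # Both overrides resolve to t3.large, which AWS rejects with
      # "Cannot add same instance type override more than once".
      override {
        instance_type = "t3.large"
      }

      override {
        instance_type = "t3.large"
      }
    }
  }
}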
If this is a bug, how to reproduce? Please include a code sample if relevant.
I'm using the following configuration to test spot fleet creation:
worker_group_launch_template_count = "2"

worker_groups_launch_template = [
  {
    spot_instance_pools                      = 3
    asg_desired_capacity                     = 2
    on_demand_base_capacity                  = 0
    on_demand_percentage_above_base_capacity = 0
    autoscaling_enabled                      = 1
    key_name                                 = "${var.key_name}"
    instance_type                            = "t3.large"
    name                                     = "t3large-1a"
    kubelet_extra_args                       = "--node-labels=spot=true --node-labels=role=t3large-1,env=k8s-env,stack=k8s-env,az=a,node-role.kubernetes.io/t3large-1=true,node-role.kubernetes.io/t3large-1a=true"
    subnets                                  = "${data.aws_subnet.private_subnet_a.id}"
  },
  {
    spot_instance_pools                      = 3
    asg_desired_capacity                     = 2
    on_demand_base_capacity                  = 0
    on_demand_percentage_above_base_capacity = 0
    autoscaling_enabled                      = 1
    key_name                                 = "${var.key_name}"
    instance_type                            = "t3.large"
    name                                     = "t3large-1b"
    kubelet_extra_args                       = "--node-labels=spot=true --node-labels=role=t3large-1,env=k8s-env,stack=k8s-env,az=b,node-role.kubernetes.io/t3large-1=true,node-role.kubernetes.io/t3large-1b=true"
    subnets                                  = "${data.aws_subnet.private_subnet_b.id}"
  }
]
What's the expected behavior?
Have two launch templates with the same instance type, one for az=a and one for az=b.
Are you able to fix this problem and submit a PR? Link here if you have already.
Environment details
Any other relevant info