@bradfordcp I noticed you're using version 8.0.0, but the code example above doesn't include a kubernetes provider configuration.
I've deployed 8.0.0 successfully with the following provider config, which is pretty much taken from this portion of the README:
data "aws_eks_cluster" "cluster" {
name = module.eks.cluster_id
}
data "aws_eks_cluster_auth" "cluster" {
name = module.eks.cluster_id
}
provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.cluster.token
load_config_file = false
version = "~> 1.10"
}
v8.0.0 of this module introduced a change to how aws-auth is handled, and it now requires the Kubernetes provider to be configured similarly to the above rather than relying on calling out to kubectl.
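For context, here is a minimal sketch of the kind of module block that provider configuration pairs with. It is not the original poster's code; the cluster name, Kubernetes version, networking references, and node group sizing are illustrative assumptions for a v8.x-style configuration with managed node groups.

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "8.0.0"

  # Assumed names/values for illustration only
  cluster_name    = "eks-cluster"
  cluster_version = "1.14"
  vpc_id          = module.vpc.vpc_id
  subnets         = module.vpc.private_subnets

  # Managed node groups; pushing aws-auth for these is where the original apply failed
  node_groups = {
    workers = {
      desired_capacity = 2
      min_capacity     = 1
      max_capacity     = 3
      instance_type    = "m5.large"
    }
  }
}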
I have issues
When attempting to provision an EKS cluster with managed worker nodes, the apply fails while pushing the aws_auth config map.
I'm submitting a...
What is the current behavior?
After typing yes to approve the plan, I am met with the following message. The VPC and EKS cluster are provisioned, but the nodes do not appear.
If this is a bug, how to reproduce? Please include a code sample if relevant.
What's the expected behavior?
Both the EKS control plane and managed workers are provisioned. The error message should not appear.
Are you able to fix this problem and submit a PR? Link here if you have already.
Potentially? I've found that exporting the KUBECONFIG environment variable and pointing it at the kubeconfig file generated by this module allows me to proceed past the issue.

# Note this syntax is for fish shell
set -x KUBECONFIG kubeconfig_eks-cluster

It's unclear if we can set the path for the kubernetes provider as part of this provisioning and use the generated kubeconfig instead of relying on the default present on the system (which may not be the recently provisioned cluster); one possible shape for that is sketched below.

Environment details
Any other relevant info
This may be related to #488
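To make the workaround above reproducible inside Terraform itself, here is a minimal, untested sketch of pointing the kubernetes provider at the kubeconfig file this module writes out. The filename and location are assumptions based on the module's kubeconfig_<cluster_name> naming, and this block would replace, not supplement, the data-source-based provider block shown earlier.

provider "kubernetes" {
  # Assumed location: the module writes kubeconfig_<cluster_name> into the working directory
  config_path      = "${path.root}/kubeconfig_eks-cluster"
  load_config_file = true
  version          = "~> 1.10"
}

Note that this file only exists after the module has written it, which is one reason the README steers toward the data-source approach shown earlier.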