
Failure to provision managed worker nodes as call to kubernetes provider fails to find current kubeconfig #690

Closed
bradfordcp opened this issue Jan 16, 2020 · 3 comments

@bradfordcp

I have issues

When attempting to provision an EKS cluster with managed worker nodes the apply fails while attempting to push the aws_auth config map.

I'm submitting a...

  • [x] bug report
  • [ ] feature request
  • [ ] support request - read the FAQ first!
  • [ ] kudos, thank you, warm fuzzy

What is the current behavior?

After typing yes to approve the plan, I am met with the following message.

module.eks-cluster.kubernetes_config_map.aws_auth[0]: Creating...

Error: Post http://localhost/api/v1/namespaces/kube-system/configmaps: dial tcp [::1]:80: connect: connection refused

  on .terraform/modules/eks-cluster/terraform-aws-modules-terraform-aws-eks-a9db852/aws_auth.tf line 52, in resource "kubernetes_config_map" "aws_auth":
  52: resource "kubernetes_config_map" "aws_auth" {

The VPC and EKS cluster are provisioned, but the nodes do not appear.

If this is a bug, how to reproduce? Please include a code sample if relevant.

provider "aws" {
  version = "~> 2.44"
  profile = "default"
  region  = "us-east-2"
}

resource "aws_security_group" "elb" {
  name        = "elb"
  description = "ELB based rules allowing all traffic in and out"
  vpc_id      = module.vpc.vpc_id

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = -1
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = -1
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "worker-nodes" {
  name        = "worker-nodes"
  description = "Rules for EKS K8s nodes"
  vpc_id      = module.vpc.vpc_id

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = -1
    cidr_blocks = ["10.0.0.0/16"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = -1
    cidr_blocks = ["0.0.0.0/0"]
  }
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "2.21.0"

  name = "eks-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-2a", "us-east-2b", "us-east-2c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  create_database_subnet_group    = false
  create_elasticache_subnet_group = false
  create_redshift_subnet_group    = false

  enable_dns_hostnames = true
  enable_dns_support   = true

  enable_nat_gateway     = true
  single_nat_gateway     = false
  one_nat_gateway_per_az = true

  enable_elasticloadbalancing_endpoint             = true
  elasticloadbalancing_endpoint_security_group_ids = [aws_security_group.worker-nodes.id]

  customer_gateways                  = {}
  enable_vpn_gateway                 = true
  amazon_side_asn                    = 64512
  propagate_private_route_tables_vgw = true
  propagate_public_route_tables_vgw  = true

  tags = {
    Terraform                           = "true"
    Environment                         = "dev"
    "kubernetes.io/cluster/eks-cluster" = "shared"
  }

  public_subnet_tags = {
    "kubernetes.io/role/elb" = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = "1"
  }
}

module "eks-cluster" {
  source  = "terraform-aws-modules/eks/aws"
  version = "8.0.0"

  cluster_name    = "eks-cluster"
  cluster_version = "1.14"
  subnets         = module.vpc.private_subnets
  vpc_id          = module.vpc.vpc_id

  node_groups = {
    dse-workers = {
      desired_capacity = 3
      k8s_labels = {
        role = "worker"
      }
    }
  }

  tags = {
    Terraform   = "true"
    environment = "dev"
  }
}

What's the expected behavior?

Both the EKS control plane and managed workers are provisioned. The error message should not appear.

Are you able to fix this problem and submit a PR? Link here if you have already.

Potentially? I've found that exporting the KUBECONFIG environment variable and pointing it at the file generated by this module allows me to proceed past the issue.

# Note this syntax is for fish shell
set -x KUBECONFIG kubeconfig_eks-cluster
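
The equivalent in bash or zsh, for anyone not on fish, would be:

export KUBECONFIG=kubeconfig_eks-cluster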

It's unclear if we can set the path for the kubernetes provider as part of this provisioning and use the generated kubeconfig instead of relying on the default kubeconfig present on the system (which may not point at the recently provisioned cluster).
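
For what it's worth, the kubernetes provider does expose a config_path argument, so a sketch along these lines might work (untested; it assumes the module's default write_kubeconfig behaviour and the kubeconfig_eks-cluster filename shown above, and on a first apply the file won't exist until the module has written it):

provider "kubernetes" {
  version          = "~> 1.10"

  # Load the kubeconfig written by the eks module instead of the
  # system default at ~/.kube/config.
  load_config_file = true
  config_path      = "./kubeconfig_eks-cluster"
}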

Environment details

  • Affected module version: 8.0.0
  • OS: Fedora 31 Silverblue
  • Terraform version:
    Terraform v0.12.19
    + provider.aws v2.44.0
    + provider.azurerm v1.38.0
    + provider.google v3.4.0
    + provider.kubernetes v1.10.0
    + provider.local v1.4.0
    + provider.null v2.1.2
    + provider.random v2.2.1
    + provider.template v2.1.2
    

Any other relevant info

This may be related to #488.

@davidalger
Contributor

@bradfordcp I noticed you're using version 8.0.0, but the code example above doesn't include a kubernetes provider configuration.

I've deployed 8.0.0 successfully with the following provider config, which is pretty much taken from this portion of the README:

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
  version                = "~> 1.10"
}

v8.0.0 of this module introduced a change to how the aws-auth config map is handled; it now requires the Kubernetes provider to be configured similarly to the above rather than relying on shelling out to kubectl. (Since your module block is labeled eks-cluster, the references would be module.eks-cluster.cluster_id in your case.)

@bradfordcp
Author

🤦‍♂️ I dug into the kubernetes provider docs, but came up empty. Thanks for the help.

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked this issue as resolved and limited conversation to collaborators Nov 28, 2022