
Inconsistent resource_manager_tags behavior on google_container_cluster #18793


ghost commented Jul 19, 2024

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
  • Please do not leave +1 or me too comments, they generate extra noise for issue followers and do not help prioritize the request.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.
  • If an issue is assigned to a user, that user is claiming responsibility for the issue.
  • Customers working with a Google Technical Account Manager or Customer Engineer can ask them to reach out internally to expedite investigation and resolution of this issue.

Terraform Version & Provider Version(s)

Terraform v1.6.6
on Rocky Linux 8

  • provider registry.terraform.io/hashicorp/google v5.38
  • provider registry.terraform.io/hashicorp/google-beta v5.38

Affected Resource(s)

google_container_cluster

Terraform Configuration

Shows resource_manager_tags = null

resource "google_container_cluster" "kubernetes_cluster" {
  provider = google-beta
  name     = "sample-cluster"
  location = "us-central1"

  node_pool_auto_config {
    // set this on a preexisting cluster to prevent node pool from cycling involuntarily
    resource_manager_tags = null
  }
}

Shows resource_manager_tags = {} (must be pre-existing resource)

resource "google_container_cluster" "kubernetes_cluster" {
  provider = google-beta
  name     = "sample-cluster"
  location = "us-central1"

  node_pool_auto_config {
    // set this on a preexisting cluster to prevent node pool from cycling involuntarily
    resource_manager_tags = null
  }

  fleet { }
}

Debug Output

Since the cluster must already exist before the fleet block is added, and I do not have the authority to create new clusters due to IP address pressure, these are the truncated tails of runs on one of our staging environments (gists will not hold the whole log). The gist is located here

Expected Behavior

The resource_manager_tags map can be overwritten to null, and we have used this to prevent node pools from being recreated until we are ready. We would like the behavior to be consistent between having a fleet block and not having a fleet block (ideally allowing the null value).

Actual Behavior

When resource_manager_tags is overwritten to null, the value is accepted and persisted in the Terraform state until the fleet block is added. Once the fleet block is added to a pre-existing cluster, the state changes from null to {}, which triggers a rebuild of the node pool; this is not desirable.
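
A mitigation we have considered, but not verified against this exact scenario, is telling Terraform to ignore changes to this attribute entirely so the null-to-{} normalization cannot cycle the nodes. Roughly (the attribute path for the nested block is our assumption of how it would be addressed in ignore_changes):

resource "google_container_cluster" "kubernetes_cluster" {
  provider = google-beta
  name     = "sample-cluster"
  location = "us-central1"

  node_pool_auto_config {
    resource_manager_tags = null
  }

  fleet { }

  lifecycle {
    // assumption: ignoring this attribute keeps the provider's null-to-{}
    // normalization out of the plan, so the auto-provisioned nodes are not cycled
    ignore_changes = [node_pool_auto_config[0].resource_manager_tags]
  }
}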

Steps to reproduce

  1. terraform apply the first version of the configuration
  2. terraform plan the second version of the configuration
  3. Inspect the plan output pertaining to resource_manager_tags

Important Factoids

N/A

References

Support for resource_manager_tags was added as part of: #16614

b/356651267

@ghost ghost added the bug label Jul 19, 2024
@github-actions github-actions bot added the forward/review and service/container labels Jul 19, 2024
@ggtisc ggtisc self-assigned this Jul 24, 2024

ggtisc commented Jul 31, 2024

Hi @Register-0!

I'm trying to replicate this issue, but I just obtained a result of updated in-place, which means that Terraform is modifying an existing resource without destroying and recreating it.

  • Is the initial value of resource_manager_tags null or did you assign another value before this?
  • Even though the result was updated in-place, did you notice that after applying the changes the resource was destroyed and replaced?

@ggtisc ggtisc removed the forward/review label Jul 31, 2024

bijou-code commented Jul 31, 2024

  • The initial value is considered null and is then updated to {} upon upgrading the provider from version 4.x to 5.x. We upgraded the provider for a test cluster and noticed that this destroyed and recreated the nodes there. We then set resource_manager_tags = null for a second cluster, which prevented the node recreation, but now we cannot add the fleet block because it causes the resource_manager_tags override to be ignored.
  • The diff looks like this on my end:
# module.cluster.google_container_cluster.kubernetes_cluster will be updated in-place
  ~ resource "google_container_cluster" "kubernetes_cluster" {
        id                                       = "projects/mycompany-nastaging-kubernetes/locations/us-east4/clusters/nastaging-us-east4-cluster"
        name                                     = "nastaging-us-east4-cluster"
        # (31 unchanged attributes hidden)

      + fleet {
          + project = "mycompany-nastaging-kubernetes"
        }

      ~ node_pool_auto_config {
          + resource_manager_tags = {}
        }

        # (25 unchanged blocks hidden)
    }

but after allowing the "update in-place", the nodes are destroyed and recreated. I think this is considered "in place" because the node-pool itself is not recreated, but the nodes are.
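
Presumably (we have not verified this), declaring the empty map ourselves would make the configuration match what the provider stores once the fleet block is present, at the cost of accepting one node cycle during the transition; something like:

resource "google_container_cluster" "kubernetes_cluster" {
  provider = google-beta
  name     = "sample-cluster"
  location = "us-central1"

  node_pool_auto_config {
    // explicitly declare the empty map the provider writes to state once the
    // fleet block is present; assumes one node cycle is acceptable
    resource_manager_tags = {}
  }

  fleet { }
}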


ggtisc commented Jul 31, 2024

It looks like even if Terraform says that it is an update in-place, it is destroying and recreating the resource internally.


I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 23, 2024