
for_each on-destroy provisioner failure #24381

Closed
AshMenhennett opened this issue Mar 16, 2020 · 13 comments
Labels
bug core v0.12 Issues (primarily bugs) reported against v0.12 releases

Comments

@AshMenhennett

AshMenhennett commented Mar 16, 2020

Terraform Version

terraform --version
Terraform v0.12.24

Terraform Configuration Files

Module: modules/example/main.tf:

/**
 * Variables
 */
variable "parent_folder" {
  type = string
}

variable "billing_account" {
  type = string
}

variable "api_services" {
  type = set(string)
  default = [
    "compute.googleapis.com",
    "run.googleapis.com",
    "iam.googleapis.com",
    "iap.googleapis.com",
  ]
}


/**
 * Resources
 */
resource "random_id" "project_identifier_suffix" {
  byte_length = 2
}

resource "google_project" "project" {
  name                = "Testing Project"
  project_id          = format("%s-%s", "testing-project", random_id.project_identifier_suffix.hex)
  billing_account     = var.billing_account
  folder_id           = var.parent_folder
  auto_create_network = true
  skip_delete         = false

  depends_on = [random_id.project_identifier_suffix]
}

resource "google_project_service" "api" {
  for_each                   = var.api_services
  project                    = google_project.project.project_id
  service                    = each.value
  disable_on_destroy         = false
  disable_dependent_services = false

  provisioner "local-exec" {
    interpreter = ["bash", "-c"]
    on_failure  = fail
    when        = destroy
    command     = <<CMD
      echo 'dummy disable api service with retries'
    CMD
  }

  depends_on = [google_project.project]
}

Core Terraform main.tf:

module "example" {
  source = "./modules//example"

  billing_account = "DUMMY"
  parent_folder = "DUMMY"
}

Terragrunt Config (for completeness) terragrunt.hcl:

terraform_version_constraint = "0.12.24"

Debug Output

Linked Gist: https://gist.github.com/AshMenhennett/37936db1f66cc2354ef38afe76e3f808

Expected Behavior

local-exec destroy provisioner should run successfully

Actual Behavior

Error shown in console, provisioner not run.

Error: Invalid for_each argument: The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources that the for_each depends on.
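The error message's own suggestion can be followed with a targeted apply. A minimal sketch, assuming the module and resource addresses from the configuration above:

```shell
# Hypothetical targeted apply: first converge only the resource the
# for_each expression depends on, then run a full apply.
# Addresses are assumptions based on the configuration in this issue.
terraform apply -target=module.example.google_project.project
terraform apply
```

Note this is a workaround, not a fix; the underlying problem is still that the for_each value is not known at plan time.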

Steps to Reproduce

terraform apply # initial apply
terraform apply # apply to update services after removing an api from the list

Additional Context

Running a local-exec on destroy is a temporary workaround for inter-dependencies between Google services that cause consistent failures during pipeline execution. The local-exec is not executed and instead produces the above error when used with for_each.

The behaviour is not reproducible when the configuration is run directly rather than from a child module.

Refactoring to use count works successfully.

The same outcome occurs whether running an explicit destroy or an apply that results in resources being destroyed.
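The count refactor mentioned above can be sketched as follows. This is a hypothetical rewrite of google_project_service.api, not the reporter's actual code; the tolist() conversion of var.api_services is an assumption added to index by count.index:

```hcl
# Hypothetical refactor from for_each to count.
# Converting the set to a list (assumed here) gives a stable index.
locals {
  api_services_list = tolist(var.api_services)
}

resource "google_project_service" "api" {
  count                      = length(local.api_services_list)
  project                    = google_project.project.project_id
  service                    = local.api_services_list[count.index]
  disable_on_destroy         = false
  disable_dependent_services = false

  provisioner "local-exec" {
    interpreter = ["bash", "-c"]
    on_failure  = fail
    when        = destroy
    command     = "echo 'dummy disable api service with retries'"
  }
}
```

A caveat with count is that removing an item from the middle of the list shifts the indexes of later instances, which for_each avoids.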

References

Fix seemingly implemented here: #24163
Similar issue here, which seems to be resolved: #24325
Debug Output: https://gist.github.com/AshMenhennett/37936db1f66cc2354ef38afe76e3f808

@AshMenhennett AshMenhennett changed the title for_each on-destroy provisioner inconsistent failure for_each on-destroy provisioner failure Mar 16, 2020
@jbardin
Member

jbardin commented Mar 17, 2020

Hi @AshMenhennett

Thanks for filing the issue. This looks similar to the linked issues, but I'm not able to reproduce it with the current version of Terraform.

Can you provide the full log and CLI output to help determine what is actually throwing the error in this case? Aside from the resource shown, which is the one being planned, is there anything else in the configuration that may be interacting with this resource?

Thanks!

@jbardin jbardin added the waiting-response An issue/pull request is waiting for a response from the community label Mar 17, 2020
@tlanghals-uturn

tlanghals-uturn commented Mar 27, 2020

This is the same issue as #24139, which has still not been resolved as of 0.12.24 even though it was closed. It's specific to using a destroy-time provisioner when the for_each null_resource exists within a module. This is causing a ton of issues, as we cannot update any of our null resources that were previously created with 0.12.18.

Terraform Version

Terraform v0.12.24
+ provider.null v2.1.2

Actual Behavior

Error: Invalid for_each argument: The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources that the for_each depends on.

Debug Output

https://gist.github.com/tlanghals-uturn/45da25c4de2e8e6cceb531f13566b3cf

@ghost ghost removed waiting-response An issue/pull request is waiting for a response from the community labels Mar 27, 2020
@AshMenhennett
Author

AshMenhennett commented Mar 30, 2020

Hi @jbardin ,

Thank you for your response, and I apologise for the delay in getting back to you!

I have updated the issue details above to reflect a full working case that produces the described errors, along with the DEBUG output. My preference would have been not to change an existing set of details, but significant changes were required, so I updated the issue directly.

This scenario seems unique to a child module that contains an on-destroy provisioner, and is not reproducible running the configuration directly.

Note:
This has been tested on 0.12.20, 0.12.23, and 0.12.24 with the same results.

@AshMenhennett
Author

Hi @tlanghals-uturn ,

Thanks for reaching out and including your details!

I have updated the details of the scenario in which I am encountering this behaviour, if you'd like to take a look above.

@tlanghals-uturn

Hi @jbardin

Any updates on this? It's the same issue as documented in #24139, which was never fixed. This bug was introduced in 0.12.19 or 0.12.20.

Thanks

@alwaysastudent

The issue started in 0.12.19.

@jeremykatz

Merge commit bf65b51 appears to bring a fix for #24139 into the master branch. Unfortunately it isn't part of the v0.12 branch.

The change works for my test case:

resource "null_resource" "resources" {
  for_each = var.instances

  provisioner "local-exec" {
    when = destroy
    on_failure = continue
    command = format("echo content was '%s'", file(each.key))
  }
}

@apparentlymart apparentlymart added the v0.12 Issues (primarily bugs) reported against v0.12 releases label Jun 4, 2020
@erniebilling

+1 for getting this fix merged into the 0.12 branch.

@tlanghals-uturn

The fix for this was merged in 0.12.26, and I've validated that it resolves my issue reported in #24139.

@erniebilling

0.12.26 works for each.key, but still fails for each.value.

@jeremykatz

0.12.26 works for each.key, but still fails for each.value.

Unfortunately it's also unavailable in the 0.13 beta branch.

0.12.25 works using each.value, iirc.

@danieldreier
Contributor

@AshMenhennett I've tested this myself, and confirmed that the reproduction case that @tlanghals-uturn provided for #24139 is fixed in 0.12.26 and 0.13.0. The problem he reported looks identical to the one you reported, so I'm confident this has been fixed.

@jeremykatz and @erniebilling I modified the reproduction case that @tlanghals-uturn provided to:

  provisioner "local-exec" {
    when    = destroy
    command = <<-EOT
    echo "Deleted role on ${each.key} / ${each.value}"
    EOT
  }

In 0.13.0, this now results in:

Initializing modules...
There are some problems with the configuration, described below.

The Terraform configuration must be valid before initialization so that
Terraform can determine which modules and providers need to be installed.

Error: Invalid reference from destroy provisioner

  on tftest/module.tf line 16, in resource "null_resource" "role":
  16:     command = <<-EOT
  17:     echo "Deleted role on ${each.key} / ${each.value}"
  18:     EOT

Destroy-time provisioners and their connection configurations may only
reference attributes of the related resource, via 'self', 'count.index', or
'each.key'.

References to other resources during the destroy phase can cause dependency
cycles and interact poorly with create_before_destroy.

This is the result of reducing the scope of allowable destroy provisioner references in 0.13.0.

So I don't quite understand the comments @jeremykatz and @erniebilling left.

I think this is fixed, so I'm going to resolve it. If you are aware of another problem, please report it and reference this issue. If I made a mistake and am misunderstanding, please let me know ([email protected]) and I'm happy to re-open this.
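For anyone adapting destroy-time provisioners to the 0.13 restriction quoted above (only self, count.index, or each.key may be referenced), a minimal sketch of the usual pattern is to copy each.value into triggers at create time and read it back through self. The resource and trigger names here are assumptions for illustration, not the reporter's code:

```hcl
resource "null_resource" "role" {
  for_each = var.instances

  # Stash the per-instance value at create time so the destroy provisioner
  # can read it later via self.triggers, which 0.13 still allows.
  triggers = {
    role_name = each.value
  }

  provisioner "local-exec" {
    when    = destroy
    command = "echo \"Deleted role on ${each.key} / ${self.triggers.role_name}\""
  }
}
```

Changing a triggers value forces the resource to be replaced, so this also gives the destroy provisioner a chance to run on updates.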

@ghost

ghost commented Jul 17, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Jul 17, 2020