
aws.resourceAwsS3BucketRead: invalid memory address or nil pointer dereference #1336

Closed
icflournoy opened this issue Aug 3, 2017 · 6 comments
Labels
bug Addresses a defect in current functionality. crash Results from or addresses a Terraform crash or kernel panic.

Comments

@icflournoy

Hi there,

I recently updated my Terraform binary from v0.9.9 to v0.10.0. Previously stable plans now crash every time I run terraform refresh.

Terraform Version

$ terraform --version
Terraform v0.10.0

Affected Resource(s)

It looks to me like refreshing the lifecycle configuration for an S3 bucket is causing the fault.

Terraform Configuration Files

This resource lives inside a module, which is required by another module, which is pulled into my root Terraform plan for the environment.

resource "aws_s3_bucket" "bucket" {
  bucket = "${var.prefix}-${var.service_name}-logs"
  acl    = "private"

  lifecycle_rule {
    id      = "expire-all"
    prefix  = "/"
    enabled = true

    expiration {
      days = "${var.expiration_days}"
    }
  }
}

Panic/Debug Output

https://gist.github.com/icflournoy/0ec94596f6a760eb552e1c03f028f014

Expected Behavior

Terraform completes a refresh of the state.

Actual Behavior

During the refresh Terraform crashed.

Steps to Reproduce

Steps required to reproduce the issue:

  1. terraform init
  2. terraform refresh

Important Factoids

Today I refactored my plans and had a stable infrastructure under Terraform v0.9.9. I have since upgraded to the v0.10.0 binary and am doing initial testing, but cannot get past the refresh phase.

I also have a secondary environment that shows the same stacktrace for the same resource, though much further back in the debug/trace log since that environment has more resources. The refresh of those resources then fails with either "unexpected EOF" or "connection is shut down".

@bsiegel

bsiegel commented Aug 3, 2017

I believe this duplicates #1314.

@bflad
Contributor

bflad commented Aug 3, 2017

Pinning the AWS provider to 0.1.1 is working in our environment (until 0.1.4 is released).
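The pin described above can be expressed as a provider version constraint; a minimal sketch, assuming the provider is configured in the root module (the `region` value here is a hypothetical placeholder, not from this thread):

```hcl
# Pin the AWS provider to 0.1.1 until a fixed release ships.
# Terraform 0.10 resolves this constraint during `terraform init`.
provider "aws" {
  version = "= 0.1.1"   # exact-version constraint as a temporary workaround
  region  = "us-east-1" # hypothetical region; adjust for your environment
}
```

After adding the constraint, re-run terraform init so the pinned plugin version is installed before the next refresh.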

@icflournoy
Author

icflournoy commented Aug 3, 2017

I can confirm that when using 0.1.1 I no longer see the issue! (At least for terraform plan.) Onwards with my testing...

@lra lra mentioned this issue Aug 3, 2017
@radeksimko
Member

Hi @icflournoy, thanks for the report.

Marking as duplicate of #1314 which was fixed in #1316 and will be shipped as part of the next release 🔜 .

@radeksimko
Member

Duplicate of #1314

@radeksimko radeksimko marked this as a duplicate of #1314 Aug 4, 2017
@radeksimko radeksimko added bug Addresses a defect in current functionality. crash Results from or addresses a Terraform crash or kernel panic. labels Aug 4, 2017
@ghost

ghost commented Apr 11, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked and limited conversation to collaborators Apr 11, 2020