Update instead of replace google_storage_bucket_object #10488
Comments
@rileykarson any update on this? It does cause us fairly regular pain.
I assigned this to myself a while ago to write up a closure reason, but it turns out to be a little more complicated than anticipated. The facts:
I think a reasonable solution here would be to add a field to the resource that controls the delete behaviour. I'll note that this approach doesn't handle permanent deletion well: we can't tell if we're doing a deletion due to removal from the config. I was assigned here to prepare this for triage, so I'll unassign myself now and it will go through that next week.
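For illustration only, a minimal sketch of what such a per-resource delete-behaviour control could look like; the deletion_policy argument shown here is hypothetical and is not something google_storage_bucket_object supports today:

```hcl
resource "google_storage_bucket_object" "example" {
  name    = "test-object"
  bucket  = google_storage_bucket.bucket_one.name
  content = "hi"

  # Hypothetical argument: leave the old object in place (or overwrite it)
  # instead of deleting it up front when the resource is replaced.
  # This does not exist in the provider; it only illustrates the idea above.
  deletion_policy = "ABANDON"
}
```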
That is... fun. Thanks a lot for all the info. So if I understand correctly, the flow would be:
If that understanding is correct, that flow makes sense to me and would be extremely useful despite the permanent deletion gotcha. It would be nice if there were a way to set it via a single flag on
Ah, not quite: it'd have the same limitations those do, as does any destroy-time provider-level field that's similar. The value needs to be committed to state in an apply because Terraform short-circuits on deletes and does not update the state based on new changes to config (this makes sense for identity fields that get changed, for example, but doesn't work as well for these pseudo-lifecycle fields). In contrast, Terraform Core's

Otherwise, you've got it! It's possible such an approach is overcomplicating things. We could always do a whole-resource update in the update method, which would mean the resource would no longer recreate, although we'd need to detect the metadata-only case in provider code alone. That can be tricky: the information Core gives us is lossy compared to what it had available in plan.
I'm having this issue after applying
I'm not sure how to proceed after we hit this scenario.
@gustavovalverde That is definitely a very different issue; I would recommend you create a separate issue, since problems re-creating buckets are unrelated to problems updating objects within a bucket.
I created a PR on the repo that generates the provider to fix this delete/create issue. I hope it will be approved and will help all of you as it will help me: GoogleCloudPlatform/magic-modules#10038
@rileykarson @BBBmau I think this can be closed now? Maybe something went wrong with the auto-close linking you normally do?
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
Community Note
Description
Please add a new argument for google_storage_bucket_object that lets people tell Terraform to upload a new version of the object once all of its contents are known, without deleting the old object first.
Right now, if the contents of google_storage_bucket_object include output from another Terraform resource or module, and that output is not fully determined until the other resource has been applied, Terraform starts the apply by removing the existing google_storage_bucket_object. It then tries to apply the other resources before it tries to upload the new google_storage_bucket_object. However, if applying those other resources fails, you end up in a state where Terraform never got to uploading the new google_storage_bucket_object, so the object no longer exists until you fix everything, and anything downstream that consumes the google_storage_bucket_object fails because the object doesn't exist.
In our case, we do not want Terraform to touch the google_storage_bucket_object until all of the contents have been determined; given the way the GCS API works, it should simply upload the new object to the same path instead of deleting the old object and uploading a completely new one. That way, if something fails during the current Terraform apply, there is a chance that downstream uses of the object will still work with the older version of the object (we know that's not foolproof, especially if we are updating existing values in the object instead of just adding new ones, but our use case is mainly adding new values, so it would be preferable for us to simply not pick up the new values until they have succeeded instead of having everything downstream break). It actually looks like there might already be a code path in the provider for this; we just haven't been able to figure out how to hit it.
Example of issue
Start with the following.
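A minimal sketch of the kind of starting configuration being described; the bucket name and location here are hypothetical placeholders:

```hcl
resource "google_storage_bucket" "bucket_one" {
  name     = "example-bucket-one"
  location = "US"
}

# The object whose contents we care about; it starts out with static content.
resource "google_storage_bucket_object" "test_object" {
  name    = "test-object"
  bucket  = google_storage_bucket.bucket_one.name
  content = "hi"
}
```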
After ^ has been successfully applied, change the file to the following:
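A sketch of the changed file, assuming the object's content now references a computed attribute of a new google_storage_bucket.bucket_two, so the content is not known until bucket_two has been created:

```hcl
resource "google_storage_bucket" "bucket_one" {
  name     = "example-bucket-one"
  location = "US"
}

# New bucket whose creation may fail (for example, the name is already taken).
resource "google_storage_bucket" "bucket_two" {
  name     = "example-bucket-two"
  location = "US"
}

# The content now depends on a computed attribute of bucket_two, so it is
# unknown at plan time and the object is planned for replacement
# (destroy first, then create).
resource "google_storage_bucket_object" "test_object" {
  name    = "test-object"
  bucket  = google_storage_bucket.bucket_one.name
  content = "hi ${google_storage_bucket.bucket_two.self_link}"
}
```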
That results in the following output and order of operations from the apply:
At the end of the day, test-object does not exist in my bucket, but my desired state would be that test-object still exists with the content "hi" until after I've fixed the configuration so that google_storage_bucket.bucket_two has successfully created and the new content of test-object is known and has been successfully uploaded to GCS.

New or Affected Resource(s)
Potential Terraform Configuration
To handcraft some rough terraform apply output, here is what I am envisioning:
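As a rough stand-in, a sketch of what such a configuration could look like; the replace_strategy argument and its value are purely hypothetical and do not exist in the provider today:

```hcl
resource "google_storage_bucket_object" "test_object" {
  name    = "test-object"
  bucket  = google_storage_bucket.bucket_one.name
  content = "hi ${google_storage_bucket.bucket_two.self_link}"

  # Hypothetical argument: once the new content is fully known, upload it
  # over the existing object at the same path instead of deleting the old
  # object at the start of the apply. Invented for illustration only.
  replace_strategy = "overwrite"
}
```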
References