Provider hits code 429 Create Bucket quota limit API errors when provisioning large numbers of google_storage_bucket resources. #18132
Comments
Hi @SarahFrench! As you can see at this link, version 5.9.0 of the Terraform Google provider doesn't exist. Please check your configuration, Terraform version, and provider version and try again. If you are still having issues after that, please share the updated information, confirming that everything is correct.
Hi @ggtisc - I'm reporting this bug on behalf of a customer. I might have carried over a typo from the internal ticket about the provider version, but that doesn't mean the missing API error handling isn't a real issue.
And 5.9.0 is here: https://releases.hashicorp.com/terraform-provider-google/5.9.0/
Thanks @SarahFrench. Please answer the following:
Other good practices are:
Finally, here is a link on how to work effectively with big data.
I'm afraid I don't have that information from the customer - I just made this GH issue as the standard way to communicate customer issues between HashiCorp and Google. Here's my best effort to answer the questions:
The reproduction info above doesn't involve putting any objects in the GCS buckets - we're just creating a LOT of buckets in a short timeframe and triggering a rate limit. The provider should be updated to have a backoff to avoid that problem.
I don't have that info from the customer. The error message suggests the problem is a rate limit rather than a cap on the total number of buckets. From this page it looks like there isn't a limit on the total number of buckets that can be provisioned, but there is a limit on "Maximum bucket creation and deletion rate per project".
I'm afraid these don't appear to be relevant to the problem. I imagine the solution to this GH issue is something like this: GoogleCloudPlatform/magic-modules#4094
Please be sure of this point, or tell the user to contact us, because the user or the account may have a limitation due to budget or billing setup:
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
Community Note
Terraform Version & Provider Version(s)
Customer reported using v5.9.0 of the provider. No further info available.
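For reference, a version pin like the one reported would normally appear in a required_providers block along these lines (a sketch only; the customer's actual version constraint isn't known):

```hcl
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "5.9.0" # version reported by the customer
    }
  }
}
```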
Affected Resource(s)
google_storage_bucket
Terraform Configuration
main.tf
module/main.tf
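The contents of main.tf and module/main.tf weren't captured in this copy of the issue. Based on the description (a module instantiated with count = 1000 that creates a single google_storage_bucket, plus the sleep-based workaround mentioned under Important Factoids), the reproduction likely looked roughly like the sketch below; the file layout, bucket naming scheme, and sleep duration are assumptions, not the customer's actual code.

```hcl
# main.tf (sketch)
module "bucket" {
  source = "./module"
  count  = 1000

  # hypothetical input used to give each bucket a unique name
  index = count.index
}

# module/main.tf (sketch)
variable "index" {
  type = number
}

resource "google_storage_bucket" "this" {
  # naming scheme and location are assumptions
  name     = "example-bucket-${var.index}"
  location = "US"
}

# One common way to add the sleeps mentioned in the issue; the customer's
# exact mechanism isn't known. Requires the hashicorp/time provider.
resource "time_sleep" "wait" {
  depends_on      = [google_storage_bucket.this]
  create_duration = "5s"
}
```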
Debug Output
The full debug logs are too large to upload to GitHub as an attachment or a Gist; here's a truncated version showing the end of the output: https://gist.github.com/SarahFrench/e6acd4e5ca60344febd7f9ad03be7fa7
Expected Behavior
After terraform apply, 1000 buckets are created using the module above with count=1000
Actual Behavior
Terraform continues applying the plan until it hits a timeout:
The provider is actually experiencing a rate limit issue, but this isn't triggering an error or being handled via a backoff. This means the provider continuously triggers the error and fails to provision all resources until the timeout is reached.
Steps to reproduce
terraform apply
using the config above
Important Factoids
The customer tried to solve the problem with sleeps (see config above) and other approaches like using
-parallelism=1
in the terraform apply command. This didn't help.
References
No response