Match batch size to payload limit in bytes for AWS #3892
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned". In response to this: /close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What would you like to be added:

The ability to configure `aws-batch-change-size` so that it can be set to the maximum number of bytes AWS accepts, rather than a number of records.
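For illustration only, here is how such an option could sit alongside the existing record-count setting. This is a minimal sketch using Go's standard `flag` package; the `aws-batch-change-size-bytes` name is a hypothetical placeholder, not an existing external-dns flag, and external-dns wires up its real flags differently:

```go
// Hypothetical flag wiring for the requested option. The flag name
// aws-batch-change-size-bytes is an assumption for illustration only.
package main

import (
	"flag"
	"fmt"
)

func main() {
	// Existing behaviour: batches are capped by record count.
	batchSize := flag.Int("aws-batch-change-size", 1000,
		"maximum number of changes per Route 53 batch")
	// Requested behaviour: additionally cap batches by payload size in bytes.
	batchBytes := flag.Int("aws-batch-change-size-bytes", 32000,
		"maximum payload size in bytes per Route 53 batch (hypothetical)")
	flag.Parse()
	fmt.Printf("batch limits: %d records, %d bytes\n", *batchSize, *batchBytes)
}
```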
Why is this needed:

The AWS limit on batches is a payload limit in bytes: 32,000 at the time of writing. This doesn't match up with `aws-batch-change-size`, which is a number of records. During scaling events, there are often failures when the AWS Route 53 limit is reached. The most common failing scenario is a batch of 1,000 records whose names are much longer than usual (as in the case of test events). It would be great if external-dns could understand the size of the payload it is about to send and then shape that payload to stay under the limit AWS has set (32,000 at the time of writing).
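A minimal sketch of the byte-aware batching being requested, assuming the aws-sdk-go v1 Route 53 types. The size heuristic (summing record value lengths, the quantity AWS caps at 32,000 characters) and the function names are illustrative assumptions, not the project's actual implementation:

```go
// Sketch only: byte-aware batching for Route 53 change sets. The size
// heuristic and the 32,000 cap reflect the AWS payload limit described
// above; this is not how external-dns batches changes today.
package main

import (
	"fmt"
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/route53"
)

// changeSize estimates a change's payload contribution by summing the
// lengths of its record values.
func changeSize(c *route53.Change) int {
	n := 0
	for _, rr := range c.ResourceRecordSet.ResourceRecords {
		if rr.Value != nil {
			n += len(*rr.Value)
		}
	}
	return n
}

// batchBySize splits changes into batches whose estimated payload stays
// under maxBytes, instead of cutting batches at a fixed record count.
func batchBySize(changes []*route53.Change, maxBytes int) [][]*route53.Change {
	var batches [][]*route53.Change
	var batch []*route53.Change
	size := 0
	for _, c := range changes {
		cs := changeSize(c)
		// Start a new batch when adding this change would exceed the cap.
		if len(batch) > 0 && size+cs > maxBytes {
			batches = append(batches, batch)
			batch, size = nil, 0
		}
		batch = append(batch, c)
		size += cs
	}
	if len(batch) > 0 {
		batches = append(batches, batch)
	}
	return batches
}

func main() {
	mk := func(name, value string) *route53.Change {
		return &route53.Change{
			Action: aws.String(route53.ChangeActionUpsert),
			ResourceRecordSet: &route53.ResourceRecordSet{
				Name: aws.String(name),
				Type: aws.String(route53.RRTypeTxt),
				ResourceRecords: []*route53.ResourceRecord{
					{Value: aws.String(value)},
				},
			},
		}
	}
	// Two large TXT values: together they exceed 32,000 characters, so
	// they split into two batches despite the tiny record count.
	changes := []*route53.Change{
		mk("a.example.com", strings.Repeat("x", 20000)),
		mk("b.example.com", strings.Repeat("y", 20000)),
	}
	for i, b := range batchBySize(changes, 32000) {
		fmt.Printf("batch %d: %d change(s)\n", i, len(b))
	}
}
```

A real implementation would also need to keep the existing record-count cap, since AWS enforces both limits (at most 1,000 ResourceRecord elements per request in addition to the character limit).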