gep: add GEP-3388 HTTP Retry Budget #3488
Conversation
Welcome @ericdbishop!
Hi @ericdbishop. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/ok-to-test
For the API implementation, I've been comparing two approaches: introducing retry budgets as part of HTTPRoute, or implementing them via policy attachment. Retry budgets are the default retry policy for Linkerd and are highly recommended by Envoy when configuring cluster circuit breaker thresholds, so simplicity will be a priority here.
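For reference, here is a minimal sketch of what a retry budget stanza could cover, loosely modeled on Envoy's retry budget (budget_percent, min_retry_concurrency) and the windowed budget Linkerd uses. All type and field names below are hypothetical, not proposed API surface:

```go
// Hypothetical sketch only: the type and field names below are illustrative
// and are not proposed API surface for this GEP.
package example

// RetryBudget caps retries as a share of active request volume rather than
// as a fixed per-request count.
type RetryBudget struct {
	// Percent of active requests that may be consumed by retries,
	// analogous to Envoy's retry_budget.budget_percent.
	Percent *int32 `json:"percent,omitempty"`

	// MinRetryConcurrency allows a small number of concurrent retries even
	// when traffic is low, analogous to Envoy's min_retry_concurrency.
	MinRetryConcurrency *int32 `json:"minRetryConcurrency,omitempty"`

	// Interval over which request volume is measured, similar in spirit to
	// the TTL window on Linkerd's retry budget.
	Interval *string `json:"interval,omitempty"`
}
```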
Thanks @ericdbishop!
…; minor improvements
Minor suggestions, and I think we may have found a path for reconciliation between the slightly different implementations in Envoy and Linkerd, but generally this feels like it's in great shape!
Would appreciate 👀 from @robscott again and @kflynn @howardjohn too so hopefully we can get this merged as provisional by the deadline then move on to API design.
/assign @kflynn
/assign @howardjohn
Co-authored-by: Mike Morris <[email protected]>
Thanks @ericdbishop! Left a couple comments, but otherwise generally LGTM. Do you want to add the API surface in this PR or leave that for a follow up? (Either approach is fine, but API surface needs to be merged by Jan 30)
#### Retry Budget Policy Attachment

While current retry behavior is defined at the routing rule level within HTTPRoute, exposing retry budget configuration as a policy attachment offers some advantages:
I know you're not proposing a specific policy to include this in yet, but I'd argue this is exactly the kind of thing we had in mind for BackendLBPolicy (cc @gcs278)
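To make the suggestion concrete, here is one sketch of how such a stanza might attach through a BackendLBPolicy-style spec. Everything below is hypothetical and does not reflect the current BackendLBPolicy API or any decision in this GEP:

```go
// Hypothetical sketch only: this is not the current BackendLBPolicy API.
package example

// BackendPolicySpecSketch shows a retry budget stanza sitting alongside a
// targetRefs field in a policy that targets a Service.
type BackendPolicySpecSketch struct {
	// TargetRefs selects the Service(s) whose backends this policy governs.
	TargetRefs []LocalPolicyTargetReference `json:"targetRefs"`

	// RetryBudget is the hypothetical stanza from the earlier sketch,
	// repeated here only to keep this snippet self-contained.
	RetryBudget *RetryBudget `json:"retryBudget,omitempty"`
}

// RetryBudget caps retries as a share of active request volume.
type RetryBudget struct {
	Percent             *int32 `json:"percent,omitempty"`
	MinRetryConcurrency *int32 `json:"minRetryConcurrency,omitempty"`
}

// LocalPolicyTargetReference is simplified for the sketch.
type LocalPolicyTargetReference struct {
	Group string `json:"group"`
	Kind  string `json:"kind"`
	Name  string `json:"name"`
}
```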
I have to respectfully disagree with @robscott here: the connection between retries and the "backend load balancer" is pretty tenuous (even in Linkerd where the component that decides which backend gets a given request is called the load balancer 😉).
That does not mean that I think we should have a retry policy and a circuit breaking policy and a timeout policy etc. etc., though. It means that:
a. I remain generally opposed to policy attachment for table-stakes features, and
b. If we have a catchall policy for configuring the way we interact with the backends, let's not call it BackendLBPolicy.
I can draft two proposed implementations (one for a new policy resource, another adding to BackendLBPolicy) in a followup PR to avoid blocking this provisional GEP on bikeshedding this now.
I do have some concerns about messaging/supportability if we start glomming several discrete optional features onto a single *Policy CRD. I suppose it's not much worse than what we already have with the core resources, but it's perhaps simpler for implementations to message "SpecificPolicy with X optional fields is supported" than "for BroadPolicy, X feature is supported with Y optional fields, Z feature is supported with Q optional fields, etc." I also think it becomes more difficult (and more important) to promote subfields to standard channel independently rather than potentially advancing the entire resource at once.
Do you think it would help sidestep bikeshedding to describe the API simply as a stanza with relevant configuration, then have a separate discussion about where that stanza would be included?
My concern here is that the sheer number of resources involved in using Gateway API is overwhelming for many users (especially new ones). If we keep on with a pattern of creating a unique policy for each topic, this problem is only going to get worse. Some of the most successful Kubernetes APIs are the ones that shoved a ton of concepts into a single resource (Service, Pod, etc). Although these APIs are overloaded, they continue to be remarkably popular.
If we have a catchall policy for configuring the way we interact with the backends, let's not call it BackendLBPolicy.
I think we'll need at least two backend policies - one for TLS config, and one for everything else. If you have any ideas for the name for the "everything else" one, I'd be open to them. I personally think BackendLBPolicy is ok, but can be convinced that better names exist.
Longer term, I really like the idea @ptrivedi has in #3539 that would add a new backend-focused resource to the API that could replace Service for many Gateway API users. In that proposal, it's called EndpointSelector, but the general idea would be to disconnect the "frontend" bits of a Service and instead have a resource exclusively focused on the backend bits. In that world, we could replace backend policies with inline fields. Not saying we should start with that for this specific GEP, but trying to provide a vision for a future that doesn't require all these backend policies.
Opened #3573 to continue API design discussion in a followup (still intending to resolve that by January 30th deadline), hoping we can get this merged as provisional as-is.
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: ericdbishop, robscott. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
While current retry behavior is defined at the routing rule level within HTTPRoute, exposing retry budget configuration as a policy attachment offers some advantages:

* Users could define a single policy, targeting a service, that would dynamically configure a retry threshold based on the percentage of active requests across *all routes* destined for that service's backends.
Related to the above: set aside, for the moment, the idea that policy attachment is the only way to extend Service (maybe we go with endpoint Gateways, maybe we wave our magic wand and have a Service extension point, I dunno, just let's set that aside for the moment). What would you want the budgeted-retry configuration to look like in that world? What are the user stories driving that design?
In a magic world where we have extensible "mix-ins" or similar for core resources, I would envision a retry budget could be configured directly per-Service (or per-Gateway with #3539). But because one of the benefits of budgets is their adaptability compared with a static retry count config, a user may still want a common policy for an entire namespace or for all backends in a cluster (which is not currently in scope for this GEP, but could be a future extensibility pattern).
This is a great starting point, @ericdbishop, many thanks for diving into this! 🙂
I left some comments and questions that, fundamentally, get into the user stories driving you to policy attachment. If you don't already know my biases here, well, we can talk about that offline (😉), but I'm asking about the stories because I want to understand what you're seeing people asking for...
@robscott @kflynn @mikemorris Seeking final approval on this so we can focus on the API design in a followup PR.
Thanks @ericdbishop! /lgtm
What type of PR is this?
/kind gep
What this PR does / why we need it:
To seek consensus on the ideal configuration of a "retry budget" in HTTPRoute, allowing application developers to dynamically limit the rate of client-side retries to their service based on a percentage of the active request volume.
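As a rough worked example (illustrative numbers only, loosely following Envoy's retry budget semantics): with a 20% budget and 200 requests currently in flight to a backend, at most about 40 concurrent requests may be retries; a small fixed minimum (such as Envoy's min_retry_concurrency) keeps retries possible when traffic volume is very low.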
Extends #1731
Which issue(s) this PR fixes:
Fixes #3388
Does this PR introduce a user-facing change?: