
concurrency-limit: Share a limit across Services #429

Merged: 2 commits into master on Feb 19, 2020

Conversation

@olix0r (Member) commented Feb 18, 2020

Tower's `ConcurrencyLimit` middleware instantiates a new semaphore for
each service created by the layer. However, in an upcoming proxy change,
I'd like to avoid having a single `Service` instance that all requests
must flow through and, instead, wrap a service created for each
connection. In order to enforce a global concurrency limit, the
middleware needs to be changed to share a semaphore across all services
created by the layer.

This presents no functional change to how the concurrency limit is
applied today.

This is basically copy/pasted from https://github.com/tower-rs/tower/blob/v0.1.x/tower-limit/src/concurrency/service.rs
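
For readers outside the proxy codebase, here is a minimal sketch of the structural change being described: the layer owns the `Arc<Semaphore>` and hands a clone of it to every service it builds, so all of them draw permits from one pool. It assumes current tokio and tower APIs rather than the versions used at the time; the `SharedConcurrencyLimitLayer`/`SharedConcurrencyLimit` names are illustrative, not the crate's actual types, and unlike the real middleware this sketch waits for its permit in `call` rather than `poll_ready`.

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll};

use tokio::sync::{OwnedSemaphorePermit, Semaphore};
use tower::{Layer, Service};

/// The layer holds the semaphore itself, so every service built by
/// `layer()` shares the same limit.
#[derive(Clone)]
pub struct SharedConcurrencyLimitLayer {
    semaphore: Arc<Semaphore>,
}

impl SharedConcurrencyLimitLayer {
    pub fn new(max_in_flight: usize) -> Self {
        Self {
            semaphore: Arc::new(Semaphore::new(max_in_flight)),
        }
    }
}

impl<S> Layer<S> for SharedConcurrencyLimitLayer {
    type Service = SharedConcurrencyLimit<S>;

    fn layer(&self, inner: S) -> Self::Service {
        SharedConcurrencyLimit {
            inner,
            // Every wrapped service gets a handle to the *same* semaphore,
            // so the limit is global across connections/services.
            semaphore: self.semaphore.clone(),
        }
    }
}

#[derive(Clone)]
pub struct SharedConcurrencyLimit<S> {
    inner: S,
    semaphore: Arc<Semaphore>,
}

impl<S, Req> Service<Req> for SharedConcurrencyLimit<S>
where
    S: Service<Req> + Clone + Send + 'static,
    S::Future: Send,
    Req: Send + 'static,
{
    type Response = S::Response;
    type Error = S::Error;
    type Future = Pin<Box<dyn Future<Output = Result<S::Response, S::Error>> + Send>>;

    fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
        // The real middleware acquires the permit here to exert backpressure;
        // this sketch defers to `call` to keep the example short.
        self.inner.poll_ready(cx)
    }

    fn call(&mut self, req: Req) -> Self::Future {
        // Swap in a fresh clone and use the instance that `poll_ready` drove
        // to readiness (the usual pattern when boxing the response future).
        let clone = self.inner.clone();
        let mut inner = std::mem::replace(&mut self.inner, clone);
        let semaphore = self.semaphore.clone();

        Box::pin(async move {
            // Wait for a permit from the shared semaphore and hold it for the
            // duration of the request; this is what enforces the global limit.
            let _permit: OwnedSemaphorePermit = semaphore
                .acquire_owned()
                .await
                .expect("concurrency-limit semaphore should never be closed");
            inner.call(req).await
        })
    }
}
```

With such a layer installed once at proxy construction, each per-connection service shares the same permit pool, which is the behavior the PR needs.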

@olix0r requested a review from a team on February 18, 2020 17:32
@olix0r changed the title from "Reimplement concurrency-limit to share a limit across Services." to "concurrency-limit: Share a limit across Services" on Feb 18, 2020
@kleimkuhler (Contributor) left a comment

Looks good!

@hawkw (Contributor) left a comment

lgtm!

Review comment on linkerd/concurrency-limit/src/lib.rs (outdated):
@@ -0,0 +1,172 @@
//! A layer that limits the number of in-flight requests to inner service.
//!
//! Modified from tower-concurrency-limit so that the Layer holds a semaphore
A contributor commented:

is this a change we want to get into tower eventually? why/why not?

@olix0r (Member, Author) replied:

IDK. Both behaviors are useful. In general, I think tower's impl would be more flexible if the service constructor took a Semaphore, though.
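
For illustration only, that alternative could look like a constructor that accepts the semaphore instead of creating one, leaving the sharing policy to the caller. The types and names below are hypothetical, not tower's actual API.

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

/// Hypothetical: a concurrency-limit service whose constructor takes the
/// semaphore rather than building its own.
#[allow(dead_code)]
pub struct ConcurrencyLimit<S> {
    inner: S,
    semaphore: Arc<Semaphore>,
}

impl<S> ConcurrencyLimit<S> {
    pub fn new(inner: S, semaphore: Arc<Semaphore>) -> Self {
        Self { inner, semaphore }
    }
}

/// The caller decides whether the limit is shared or per-service.
#[allow(dead_code)]
fn example<S: Clone>(svc_a: S, svc_b: S) {
    // Shared limit: both services draw permits from one semaphore.
    let shared = Arc::new(Semaphore::new(10));
    let _a = ConcurrencyLimit::new(svc_a.clone(), shared.clone());
    let _b = ConcurrencyLimit::new(svc_b, shared);

    // Per-service limit: this service gets its own semaphore.
    let _c = ConcurrencyLimit::new(svc_a, Arc::new(Semaphore::new(10)));
}
```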

@olix0r merged commit e2c3271 into master on Feb 19, 2020
@olix0r deleted the ver/shared-concurrency-limit branch on February 19, 2020 16:18
olix0r added a commit to linkerd/linkerd2 that referenced this pull request Feb 19, 2020
This release includes the results from continued profiling & performance
analysis. In addition to modifying internals to prevent unwarranted
memory growth, we've introduced new metrics to aid in debugging and
diagnostics: a new `request_errors_total` metric exposes the number of
requests that receive synthesized responses due to proxy errors; and a
suite of `stack_*` metrics expose proxy internals that can help us
identify unexpected behavior.

---

* trace: update `tracing-subscriber` dependency to 0.2.1 (linkerd/linkerd2-proxy#426)
* Reimplement the Lock middleware with tokio::sync (linkerd/linkerd2-proxy#427)
* Add the request_errors_total metric (linkerd/linkerd2-proxy#417)
* Expose the number of service instances in the proxy (linkerd/linkerd2-proxy#428)
* concurrency-limit: Share a limit across Services (linkerd/linkerd2-proxy#429)
* profiling: add benchmark and profiling scripts (linkerd/linkerd2-proxy#406)
* http-box: Box HTTP payloads via middleware (linkerd/linkerd2-proxy#430)
* lock: Generalize to protect a guarded value (linkerd/linkerd2-proxy#431)