
Add BurstingRateLimiter for simulating super-bursty traffic. #14

Merged
merged 8 commits on Apr 23, 2019

Conversation

oschaaf
Member

@oschaaf oschaaf commented Apr 18, 2019

Introduces a new --burst-size option. Implemented via BurstingRateLimiter, which accumulates acquisitions by the underlying configured RateLimiter, and starts releasing when the burst-size threshold is met. For example, --burst-size 100 --rps 1000 will make the client release 100 requests every 100 milliseconds.

Signed-off-by: Otto van der Schaaf [email protected]

@oschaaf oschaaf changed the title Draft of a wrapper for rate limiter that batches Add BurstingRateLimiter for simulating super-bursty traffic. Apr 21, 2019
@oschaaf oschaaf marked this pull request as ready for review April 21, 2019 20:41
oschaaf added 2 commits April 21, 2019 23:11
Introduces a new `--burst-size` option. Implemented via
`BurstingRateLimiter`, which accumulates acquisitions by the
underlying configured `RateLimiter`, and starts releasing
when the burst-size threshold is met.
For example, `--burst-size 100 --rps 1000` will make the client
release 100 requests every 100 milliseconds.

Signed-off-by: Otto van der Schaaf <[email protected]>
@oschaaf oschaaf force-pushed the batching-rate-limiter branch from 463060a to 4af229d Compare April 21, 2019 21:30
@oschaaf
Member Author

oschaaf commented Apr 21, 2019

Rebased to master and squashed. Has an extra commit to fix the build that broke after the rebase:

  • move to HeaderString::getStringView()

Force-pushed the squashed version, but I'm leaving a note here to make sure I don't forget about the earlier ASAN failure in CI:
https://circleci.com/gh/envoyproxy/nighthawk/209. That seems worth looking into, though it seems unlikely to be related to this PR.

The coverage threshold in here may need tweaking; it may trip because it didn't account for the test origin when I adjusted it earlier.

/assign @htuch

@oschaaf
Member Author

oschaaf commented Apr 22, 2019

It looks like the hack to avoid runtime conflicts in the integration test for the client came back to haunt us :(
That test now flakes with an assert failure: `thread_advancing_time_->isCurrentThreadId(). Details: time should only be advanced on one thread in the context of a test.`

@oschaaf
Member Author

oschaaf commented Apr 22, 2019

Reading the code, there's a comment in BaseIntegrationTest that remarks on a hack where it calls sleep() [1]. I think that is where the time system initializes its thread id for the later checks.
If it didn't do that, we would be fine. Unfortunately, the constructor that accepts a time system ignores whatever we pass in, so we also can't pass in a different time system that skips this check to avoid the assert.

[1] https://github.com/envoyproxy/envoy/blob/b41ba5925a4e93d22a86c6501d63314ccf0d79f3/test/integration/integration.cc#L239

@oschaaf
Member Author

oschaaf commented Apr 22, 2019

I think we should land #15 first, and then rebase this.

@oschaaf
Member Author

oschaaf commented Apr 22, 2019

Note: just merged master into this, which should clean up the diff now that #15 was merged over there.

@htuch
Member

htuch commented Apr 22, 2019

@oschaaf can you merge master again? Thanks

@oschaaf
Member Author

oschaaf commented Apr 22, 2019

@htuch done, merged master

Member

@htuch htuch left a comment

Looks good modulo a few comments.
/wait

@@ -32,7 +32,7 @@ if [ "$VALIDATE_COVERAGE" == "true" ]
then
COVERAGE_VALUE=$(grep -Po '.*lines[.]*: \K(\d|\.)*' "${COVERAGE_SUMMARY}")
# TODO(oschaaf): The target is 97.5%, so up this whenever possible in follow ups.
-COVERAGE_THRESHOLD=96.6
+COVERAGE_THRESHOLD=96.8
Member

Huzzah!

// the system is lagging behind.
while (rate_limiter_->tryAcquireOne()) {
accumulated_++;
if ((accumulated_ % burst_size_) == 0) {
Member

The use of % here together with ++ on both release and acquire is clever; it took me a minute to grok, though. Could you add a comment or two?

Member Author

Done

accumulated_++;
ASSERT(accumulated_ <= burst_size_);
} else {
rate_limiter_->releaseOne();
Member

Is it ever possible to execute this line?

Member Author

No, good point, this didn't make sense. I tightened this up.

* 1. First it will be accumulating acquisitions by forwarding calls to the wrapped
* rate limiter, until the accumulated acquisitions equals the specified burst size.
* 2. Release mode. In this mode, BurstingRateLimiter is in control and will be handling
* acquisition calls (returning true and subtracting from the accumulated total until
Member

It cleverly uses increment to subtract! :)

EXPECT_CALL(unsafe_mock_rate_limiter, tryAcquireOne)
.Times(burst_size)
.WillRepeatedly(Return(true))
.RetiresOnSaturation();
Member

What is RetiresOnSaturation? Is this effectively the same as InSequence?

Member Author

Thanks, InSequence specifies the intent here more clearly and is more compact, so I switched it over to use that.

@oschaaf
Member Author

oschaaf commented Apr 23, 2019

@htuch this is ready for another round

Member

@htuch htuch left a comment

/wait

void BurstingRateLimiter::releaseOne() {
ASSERT(accumulated_ < burst_size_);
ASSERT(previously_releasing_ != absl::nullopt && previously_releasing_ == true);
// The caller wasn't able to put its earlier successful acquisition to good use, so we restore
Member

Oh, this is actually different to what I thought you were doing. I thought you were doing:

  • During acquisition, keep increasing accumulated until % burst_size == 0.
  • During release, keep increasing accumulated until % burst_size == 0.

If you ++ and --, won't you run the risk of getting non-bursty behavior when released tokens get immediately reacquired and a thread spends a long period without requesting a burst?

@@ -41,6 +41,10 @@ SequencerPtr SequencerFactoryImpl::create(Envoy::TimeSource& time_source,
StatisticFactoryImpl statistic_factory(options_);
RateLimiterPtr rate_limiter =
std::make_unique<LinearRateLimiter>(time_source, Frequency(options_.requestsPerSecond()));
const uint64_t burst_size = options_.burstSize();
if (burst_size) {
Member

Nit: prefer burst_size > 0

Member

@htuch htuch left a comment

Clarified points offline, thanks, will merge.

@htuch htuch merged commit 4cd0eff into envoyproxy:master Apr 23, 2019
@oschaaf oschaaf added this to the 0.1 milestone May 6, 2019
2 participants