perf: Adds caching to shard management counts to alleviate O(n) shard scanning #530
Conversation
Fixes confluentinc#60 - upgrade AK to 2.7.0. Adds 2.7.0 as a stable build.
This messes with the shutdown process of the Producer, i.e. committing final transactions, offsets, etc.
…ncy issues under high pressure
refactor: Extract common Reactor and Vert.x parts
Prevents the extension modules from incorrectly inheriting core methods that would be broken if used. Step 1 (new parent): rename.
Remove deprecated test class
Base class refactor - removes the core API from the extension modules
Don't know how this made it through CI. Missing copyright and updated readme.
… see #maxConcurrency. Vert.x concurrency control previously relied on the Vert.x WebClient's per-host concurrency setting. This breaks when multiple hosts are used - the max concurrency can then go beyond the configured setting. This change migrates to the new ExternalEngine system, which controls concurrency properly. Turns off the performance comparison unit test - too brittle for CI.
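Not the project's actual ExternalEngine API, but a minimal Java sketch of the idea under that assumption: the engine owns a single permit pool sized by the max concurrency setting, rather than delegating the cap to per-host HTTP client configuration, so the limit holds across any number of hosts. All names below are illustrative.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Illustrative only: a host-agnostic concurrency cap enforced by the engine itself,
// not by per-host WebClient settings. Class and method names are hypothetical.
class ConcurrencyLimiter {
    private final Semaphore permits;

    ConcurrencyLimiter(int maxConcurrency) {
        this.permits = new Semaphore(maxConcurrency);
    }

    <T> CompletableFuture<T> submit(Supplier<CompletableFuture<T>> task) {
        permits.acquireUninterruptibly();                // block until a global slot is free
        return task.get()
                .whenComplete((result, error) -> permits.release()); // free the slot when the async work ends
    }
}
```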
… test-jar dependency handling bug
…out of order in the mock "partition"
Under unrealistically high load with no-op processing, the broker poller unblocking a partition could cause ProcessingShard to skip forward in its entries and take work out of order. This was discovered while fixing a synthetic high-performance benchmark, after an O(n) algorithm was replaced with an O(1) one, which created the state needed for the race condition to appear. It probably could not happen without that fix, as it is related to the performance of certain parts of the system.
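A minimal Java sketch of the O(n)-to-O(1) change this PR's title describes, assuming the fix keeps a running total of queued work instead of summing every shard on each check; the types and method names are illustrative, not the project's actual ShardManager/WorkManager code.

```java
import java.util.Deque;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.atomic.AtomicLong;

// Sketch only: maintain a cached total of queued work across shards so that reading
// the total is O(1) instead of an O(n) scan over all shards.
class CachedShardCounts<K, W> {
    private final Map<K, Deque<W>> shards = new ConcurrentHashMap<>();
    private final AtomicLong totalSizeOfAllShards = new AtomicLong();

    void add(K shardKey, W work) {
        shards.computeIfAbsent(shardKey, k -> new ConcurrentLinkedDeque<>()).add(work);
        totalSizeOfAllShards.incrementAndGet(); // keep the cache in step with every mutation
    }

    W poll(K shardKey) {
        Deque<W> shard = shards.get(shardKey);
        W work = shard == null ? null : shard.poll();
        if (work != null) {
            totalSizeOfAllShards.decrementAndGet();
        }
        return work;
    }

    // O(1): read the cached counter rather than scanning every shard's size.
    long getTotalSizeOfAllShards() {
        return totalSizeOfAllShards.get();
    }
}
```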
Review comment: notes
Files with resolved review comments (now outdated):
parallel-consumer-core/src/main/java/io/confluent/parallelconsumer/state/ProcessingShard.java (3 threads)
parallel-consumer-core/src/main/java/io/confluent/parallelconsumer/state/ShardManager.java (3 threads)
parallel-consumer-core/src/main/java/io/confluent/parallelconsumer/state/WorkManager.java (1 thread)
parallel-consumer-core/src/test/java/io/confluent/csid/utils/LatchTestUtils.java (1 thread)
@@ -72,12 +72,31 @@ void backPressureShouldPreventTooManyMessagesBeingQueuedForProcessing() throws O
var completes = LongStreamEx.of(numberOfRecords).filter(x -> !blockedOffsets.contains(x)).boxed().toList();
{
    var totalSizeOfAllShards = wm.getTotalSizeOfAllShards();
Review comment: remove
Review comment: ...
Review comment: lgtm
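For context on how a test like the one in the diff above might use the cached total, here is a self-contained, hypothetical JUnit 5 sketch; the record count, blocking rule, and limit are invented for illustration, and only the getTotalSizeOfAllShards() name comes from the diff itself (where it is read from the WorkManager).

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.stream.LongStream;
import org.junit.jupiter.api.Test;

// Hypothetical sketch of the back-pressure assertion; not the project's real test fixtures.
class BackPressureSketchTest {

    @Test
    void backPressureShouldCapQueuedWork() {
        long numberOfRecords = 1_000;
        long maxQueuedAllowed = 100;

        // Records that are not blocked would complete; every 10th offset stays queued.
        var completes = LongStream.range(0, numberOfRecords)
                .filter(offset -> offset % 10 != 0)
                .boxed()
                .toList();

        // Stand-in for wm.getTotalSizeOfAllShards(): the work still queued on the shards.
        long totalSizeOfAllShards = numberOfRecords - completes.size();

        assertTrue(totalSizeOfAllShards <= maxQueuedAllowed,
                "back pressure should keep queued work at or under " + maxQueuedAllowed);
    }
}
```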
…e out of order processing (confluentinc#534) Under unrealistically high load with no-op processing, the broker poller unblocking a partition could cause ProcessingShard to skip forward in its entries and take work out of order. This was discovered while fixing a synthetic high-performance benchmark, after PR#530 (an O(n) algorithm fixed to O(1)) created the state needed for the race condition to appear. It probably could not happen without that fix, as it is related to the performance of certain parts of the system.
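A minimal Java sketch of the ordering rule that fix protects, under the assumption that work must be taken from a shard strictly in entry order and that the scan stops at the first unavailable entry rather than skipping past it; the types and method names are illustrative, not the project's actual ProcessingShard code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.NavigableMap;
import java.util.function.Predicate;

// Sketch only: walk the shard's entries in ascending offset order and never leapfrog
// a blocked entry, so work cannot be taken out of order.
class InOrderShardScan {

    static <W> List<W> takeAvailableInOrder(NavigableMap<Long, W> entries,
                                            Predicate<W> isAvailable,
                                            int max) {
        List<W> taken = new ArrayList<>();
        for (W work : entries.values()) {       // NavigableMap iterates lowest offset first
            if (taken.size() >= max) {
                break;                          // respect the batch limit
            }
            if (!isAvailable.test(work)) {
                break;                          // stop at the first unavailable entry
            }
            taken.add(work);
        }
        return taken;
    }
}
```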
…counts

# Conflicts:
# parallel-consumer-core/src/main/java/io/confluent/parallelconsumer/state/PartitionState.java
# parallel-consumer-core/src/main/java/io/confluent/parallelconsumer/state/ProcessingShard.java