
Combine the read access for replication config #9849

Merged: 2 commits into apache:master from snleee:clean-up-replication, Nov 24, 2022

Conversation

snleee
Contributor

@snleee snleee commented Nov 23, 2022

Currently, we have two separate configurations for replication: offline and HLC tables read from `replication`, while LLC tables read from `replicasPerPartition`. This PR combines the read access for the replication config into a single path.
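The unified read path could look roughly like the following. This is a hypothetical Java sketch based only on the PR description; the class, field, and method names (`TableConfigSketch`, `_usesLlc`, etc.) are assumptions for illustration, not Pinot's actual `TableConfig` code.

```java
// Sketch: a single getReplication() accessor that hides the split between
// the `replication` config (offline / HLC) and `replicasPerPartition` (LLC).
public class TableConfigSketch {
    enum TableType { OFFLINE, REALTIME }

    private final TableType _tableType;
    private final String _replication;          // read by OFFLINE and HLC tables
    private final String _replicasPerPartition; // read by LLC tables
    private final boolean _usesLlc;             // whether the real-time table uses LLC

    TableConfigSketch(TableType tableType, String replication,
                      String replicasPerPartition, boolean usesLlc) {
        _tableType = tableType;
        _replication = replication;
        _replicasPerPartition = replicasPerPartition;
        _usesLlc = usesLlc;
    }

    /** Single read path: LLC real-time tables fall back to replicasPerPartition. */
    int getReplication() {
        if (_tableType == TableType.REALTIME && _usesLlc && _replicasPerPartition != null) {
            return Integer.parseInt(_replicasPerPartition);
        }
        return Integer.parseInt(_replication);
    }

    public static void main(String[] args) {
        TableConfigSketch offline =
                new TableConfigSketch(TableType.OFFLINE, "3", null, false);
        TableConfigSketch llcRealtime =
                new TableConfigSketch(TableType.REALTIME, "3", "5", true);
        System.out.println(offline.getReplication());     // 3
        System.out.println(llcRealtime.getReplication()); // 5
    }
}
```

With an accessor like this, callers (segment assignment, ideal-state building, status checks) no longer need to know which raw config key applies to their table type.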

@snleee
Contributor Author

snleee commented Nov 23, 2022

#8804

@snleee snleee force-pushed the clean-up-replication branch from a15a690 to 291aee5 Compare November 23, 2022 06:02
@snleee snleee force-pushed the clean-up-replication branch 2 times, most recently from 17d555d to db6366c Compare November 23, 2022 06:10
@codecov-commenter

codecov-commenter commented Nov 23, 2022

Codecov Report

Merging #9849 (9aa1a7a) into master (d62a867) will increase coverage by 0.01%.
The diff coverage is 67.74%.

@@             Coverage Diff              @@
##             master    #9849      +/-   ##
============================================
+ Coverage     70.36%   70.38%   +0.01%     
+ Complexity     5469     5013     -456     
============================================
  Files          1972     1972              
  Lines        105675   105687      +12     
  Branches      15989    15988       -1     
============================================
+ Hits          74361    74386      +25     
+ Misses        26116    26103      -13     
  Partials       5198     5198              
Flag         | Coverage Δ
integration1 | 25.26% <25.80%> (+0.04%) ⬆️
integration2 | 24.65% <29.03%> (-0.07%) ⬇️
unittests1   | 67.73% <61.11%> (-0.02%) ⬇️
unittests2   | 15.76% <32.25%> (+0.02%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files                                        | Coverage Δ
...mmon/assignment/InstanceAssignmentConfigUtils.java | 14.63% <0.00%> (ø)
...oller/api/resources/PinotTableRestletResource.java | 55.31% <0.00%> (+0.14%) ⬆️
...e/assignment/segment/OfflineSegmentAssignment.java | 93.93% <ø> (-0.18%) ⬇️
.../assignment/segment/RealtimeSegmentAssignment.java | 92.39% <ø> (-0.09%) ⬇️
...he/pinot/segment/local/utils/TableConfigUtils.java | 68.53% <0.00%> (+0.38%) ⬆️
...ig/table/SegmentsValidationAndRetentionConfig.java | 96.07% <ø> (ø)
...roller/helix/core/PinotTableIdealStateBuilder.java | 88.00% <50.00%> (+6.98%) ⬆️
...org/apache/pinot/spi/config/table/TableConfig.java | 65.65% <78.57%> (+2.12%) ⬆️
...e/pinot/controller/helix/SegmentStatusChecker.java | 87.35% <100.00%> (-0.15%) ⬇️
...ntroller/helix/core/PinotHelixResourceManager.java | 71.15% <100.00%> (-0.17%) ⬇️
... and 41 more


@snleee snleee force-pushed the clean-up-replication branch from db6366c to 072ff53 Compare November 23, 2022 07:40
@snleee snleee force-pushed the clean-up-replication branch from 072ff53 to 58e56da Compare November 23, 2022 07:42
Contributor

@Jackie-Jiang Jackie-Jiang left a comment


LGTM otherwise

@snleee snleee force-pushed the clean-up-replication branch from 14cde3e to 9aa1a7a Compare November 24, 2022 03:54
@snleee snleee merged commit 555e5a0 into apache:master Nov 24, 2022
@snleee snleee deleted the clean-up-replication branch November 24, 2022 06:38
3 participants