
Segment replicas leading to split query views #7850

Open
mapshen opened this issue Dec 1, 2021 · 12 comments

Comments

@mapshen

mapshen commented Dec 1, 2021

We run Pinot 0.8.0. When setting up a realtime table consuming from a single-partition Kafka topic, we set replicasPerPartition to 2, which means there are two consuming segments running on two separate Pinot servers. However, when you take one server down, wait for a while, and then bring it back up, a query can still hit either of the two consuming segments even though one is lagging behind, leading to inconsistent/incorrect query results. As a user, we expect the query to get routed to the consuming segment that has the newer data.

Steps to reproduce:

  1. Set up a realtime table with replicasPerPartition set to 2. Also set realtime.segment.flush.threshold.time to something like 12h to make sure there is no segment flush during testing (see the config sketch after this list).
  2. Have the 2 replica segments, distributed on Host A and Host B respectively, consume for 5 minutes.
  3. Stop the Pinot server on Host A. Wait for 5 minutes and then start it again.
  4. Wait for both consuming segments to come back online.
  5. Repeatedly run a PQL query like select * from <table> order by <columnA> desc, which scans all segments, in the UI, and you will see numDocsScanned alternate as the query gets routed to different consuming segments.
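
For reference, a heavily abbreviated sketch of the relevant parts of such a realtime table config; other required stream, consumer, and schema settings are omitted and the values are illustrative, so treat this as an assumption rather than a complete, valid config:

```json
{
  "tableName": "myTable_REALTIME",
  "tableType": "REALTIME",
  "segmentsConfig": {
    "replicasPerPartition": "2"
  },
  "tableIndexConfig": {
    "streamConfigs": {
      "streamType": "kafka",
      "stream.kafka.topic.name": "my_single_partition_topic",
      "realtime.segment.flush.threshold.time": "12h"
    }
  }
}
```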
@Jackie-Jiang
Contributor

I believe this can be addressed with #7274

@sajjad-moradi What's the current state of the development of the feature? I saw a PR was merged, but the issue is still open.

@sajjad-moradi
Contributor

@mapshen you can check #7267 and #7753. They basically make sure that consumption has caught up to the latest offset in the stream before query execution is enabled.

@sajjad-moradi What's the current state of the development of the feature? I saw a PR was merged, but the issue is still open.

I just closed the issue.

@mapshen
Author

mapshen commented Dec 2, 2021

@sajjad-moradi this applies to consuming segments only, correct?

When we consume from the earliest offset, there might be segments completed along the way, and those will be available for querying once finished?

@sajjad-moradi
Contributor

That's correct. The completed segments will be available for querying. Basically, when a consuming segment completes, the last ingested offset (+1) is used as the starting offset of the next consuming segment. If there's a server restart, the start offset of the consuming segment, which is written in the segment ZK metadata, is used as the starting point for consumption. The mentioned PRs basically disable querying after startup and let the consuming segments catch up to the latest stream offset.
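
For illustration, a minimal sketch of that hand-off in Java; the class and method names are hypothetical and do not correspond to Pinot's actual implementation:

```java
// Hypothetical illustration of the offset hand-off described above; not Pinot's actual classes.
public class OffsetHandOffSketch {

  // When a consuming segment commits, the next consuming segment starts right after
  // the last ingested offset of the completed segment.
  static long startOffsetForNextSegment(long lastIngestedOffset) {
    return lastIngestedOffset + 1;
  }

  // When a restarted server recreates a consuming segment, consumption resumes from the
  // start offset recorded for that segment (conceptually, the value in its ZK metadata),
  // so nothing covered by already-committed segments is lost.
  static long resumeOffsetAfterRestart(long startOffsetFromZkMetadata) {
    return startOffsetFromZkMetadata;
  }
}
```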

@mapshen
Author

mapshen commented Dec 2, 2021

@sajjad-moradi It seems the latest offset is only fetched once, when the segment starts to consume. However, in cases where it takes quite a while for a segment to catch up, couldn't this approach still lead to split views? What is the reason for avoiding "chasing a moving target"?

@sajjad-moradi
Contributor

We have seen some use cases where the stream has a bursty traffic pattern. Let's say we fetched the latest stream offset on every status call. If, every time the status check happens, there are new events available on the stream that have not yet been ingested/processed, then the status checker declares that consumption is not caught up even though only a few messages are left. That would prevent the data on this server from being used for querying, which wouldn't be desirable.

@sajjad-moradi
Contributor

@mapshen if the behaviour of the offset-based status checker is still not desirable for your use case, you can disable it and use the consumption catch-up wait time instead. If you do that, at startup time, the server will not serve queries until the wait time is over.
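
For reference, the server-level settings being discussed look roughly like the snippet below. The exact key names are an assumption recalled from this era of Pinot and may differ across versions, so verify them against the docs for your release:

```properties
# Assumed key names; confirm against your Pinot version before using.
# Offset-based consumption status checker (the feature from #7267 / #7753).
pinot.server.starter.enableRealtimeOffsetBasedConsumptionStatusChecker=true
# Fixed catch-up wait: the server does not serve queries until this much time
# has elapsed after startup.
pinot.server.starter.realtimeConsumptionCatchupWaitMs=60000
```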

@sajjad-moradi
Contributor

@mcvsubbu did we have any other reason for not chasing the moving target for consumption?

@mapshen
Author

mapshen commented Dec 9, 2021

@mapshen if the behaviour of the offset-based status checker is still not desirable for your use case, you can disable it and use the consumption catch-up wait time instead. If you do that, at startup time, the server will not serve queries until the wait time is over.

The consumption catch-up wait time leads to the split views described at the beginning, doesn't it? That's why we are having these conversations.

There seems to be a way to solve the problem with the offset-based status checker: instead of continuously fetching the latest stream offset and chasing a moving target, it could fetch the latest offset again only once it has reached the last fetched offset, and keep doing so until the two converge before returning ServiceStatus.GOOD.

What do you think?
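
For illustration, a minimal Java sketch of the convergence check proposed above. The StreamOffsetReader interface and its methods are hypothetical stand-ins for the real consumer and stream metadata APIs; this is not Pinot's actual status checker:

```java
// Hypothetical sketch of the proposed convergence-based status check; names are illustrative.
public class ConvergingOffsetStatusChecker {

  // Hypothetical stand-in for the real consumer / stream metadata access.
  public interface StreamOffsetReader {
    long getCurrentIngestedOffset();
    long getLatestStreamOffset();
  }

  private long targetOffset = -1; // latest stream offset fetched so far; -1 before the first fetch

  // Returns true (i.e. ServiceStatus.GOOD) only once consumption has converged with the stream head.
  public boolean isCaughtUp(StreamOffsetReader reader) {
    long ingested = reader.getCurrentIngestedOffset();
    if (targetOffset < 0 || ingested >= targetOffset) {
      // Re-fetch the latest offset only after the previous target has been reached,
      // rather than chasing a moving target on every status call.
      long latest = reader.getLatestStreamOffset();
      if (ingested >= latest) {
        return true; // converged: nothing new arrived since the last target was reached
      }
      targetOffset = latest;
    }
    return false;
  }
}
```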

@sajjad-moradi
Contributor

The consumption catch-up wait time leads to the split views described at the beginning, doesn't it? That's why we are having these conversations.

Not really. The server doesn't serve queries until the consumption catch-up wait is over, so all queries go to the other server, which has not been restarted and has ingested the latest stream events.

@mapshen
Author

mapshen commented Dec 22, 2021

The consumption catch-up wait time leads to the split views described at the beginning, doesn't it? That's why we are having these conversations.

Not really. The server doesn't serve queries until the consumption catch-up wait is over, so all queries go to the other server, which has not been restarted and has ingested the latest stream events.

It does lead to split views in our experiments.

As you mentioned in #7274, the time threshold doesn't guarantee the server will be all caught up when the wait is over.

Realtime consumption status checker is used at startup time in Pinot Server to define a catch-up period in which no query execution happens and stream consumers try to catch up to the latest messages available in different streams. The current realtime consumption status checker defines a time threshold for startup consumption. When the wait time after Pinot Server starts up passes that time threshold, the status checker returns ServiceStatus.GOOD.

@mapshen
Author

mapshen commented Jan 7, 2022

The issue also manifests when a segment is flushed and built. Syncing the built segment to the non-committer servers may take some time, and the committer server starts consuming the next segment first, so you will see different results when the query is routed to different servers until the non-committer servers catch up.
