Segment replicas leading to split query views #7850
Comments
I believe this can be addressed with #7274.
@sajjad-moradi What's the current state of the development of the feature? I saw a PR was merged, but the issue is still open.
@mapshen you can check #7267 and #7753. They basically make sure that consumption is caught up to the latest offset in the stream before enabling query execution.
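Roughly, the idea is the following, a minimal sketch with illustrative names (ConsumingSegmentView, fetchLatestStreamOffset), not Pinot's actual classes: capture the latest stream offset per consuming segment once at startup, and only report the server as ready for queries after every segment has consumed past that captured offset.

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only, not Pinot's implementation.
class OffsetBasedStatusChecker {
  private final Map<String, Long> targetOffsets = new HashMap<>();

  // Called once at server startup: record the current stream head per segment.
  void captureTargets(Collection<ConsumingSegmentView> segments) {
    for (ConsumingSegmentView seg : segments) {
      targetOffsets.put(seg.name(), seg.fetchLatestStreamOffset());
    }
  }

  // Query execution stays disabled until this returns true.
  boolean isCaughtUp(Collection<ConsumingSegmentView> segments) {
    return segments.stream()
        .allMatch(seg -> seg.currentOffset() >= targetOffsets.getOrDefault(seg.name(), Long.MAX_VALUE));
  }

  // Hypothetical view of a consuming segment; Pinot's real classes differ.
  interface ConsumingSegmentView {
    String name();
    long currentOffset();
    long fetchLatestStreamOffset();
  }
}
```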
I just closed the issue.
@sajjad-moradi this applies to consuming segments only, correct? When we consume from the earliest offset, there might be segments completed along the way, and those will be available for querying once finished?
That's correct. The completed segments will be available for querying. Basically, when a consuming segment completes, the last ingested offset (+1) is used as the starting offset of the next consuming segment. If there's a server restart, the start offset of the consuming segment, which is written in the segment ZK metadata, is used as the starting point for consumption. The mentioned PRs basically disable querying after startup and let the consuming segments catch up to the latest stream offset.
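As a rough illustration of that offset handoff (field and class names here are assumptions, not Pinot's actual segment ZK metadata schema):

```java
// Illustrative sketch of the offset handoff described above; the real Pinot
// metadata classes and field names differ.
record ConsumingSegmentZkMetadata(long startOffset) {}

class OffsetHandoff {
  // When a consuming segment commits, the next consuming segment starts
  // right after the last ingested offset, so no events are skipped or
  // ingested twice.
  static long nextSegmentStartOffset(long lastIngestedOffset) {
    return lastIngestedOffset + 1;
  }

  // After a server restart, consumption resumes from the start offset that
  // was written into the consuming segment's ZK metadata when it was created.
  static long resumeOffsetAfterRestart(ConsumingSegmentZkMetadata zkMetadata) {
    return zkMetadata.startOffset();
  }
}
```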
@sajjad-moradi It seems the latest offset is only fetched once, when the segment starts to consume. However, in cases where it takes quite a while for a segment to catch up, this approach could still lead to split views? What is the reason for avoiding "chasing a moving target"?
We have seen some use cases where the stream has a bursty traffic pattern. Say we fetched the latest stream offset on every status call: if every time the status check happens there are new events on the stream that have not yet been ingested/processed, the status checker declares that consumption is not caught up, even though only a few messages are left. That would prevent this server from serving queries, which wouldn't be desirable.
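To make that concrete, a tiny sketch (illustrative names, not Pinot's checker) of the "moving target" behaviour being avoided:

```java
// Hypothetical stream accessor used in these sketches.
interface StreamReader {
  long fetchLatestOffset();
}

// Illustrative only: a check that re-fetches the stream head on every call.
// Under bursty traffic there are almost always a few new, not-yet-ingested
// events at check time, so this never reports "caught up" even when the
// server is only a handful of messages behind.
class MovingTargetChecker {
  boolean isCaughtUp(StreamReader stream, long currentOffset) {
    long latestOffset = stream.fetchLatestOffset(); // target moves on every call
    return currentOffset >= latestOffset;
  }
}
```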
@mapshen if the behaviour of the offset-based status checker is still not desirable for your use case, you can disable it and use the consumption catch-up wait time instead. If you do that, at startup time the server will not serve queries until the wait time is over.
@mcvsubbu did we have any other reason for not chasing the moving target for consumption? |
The consumption catch-up wait time leads to the split views described at the beginning, doesn't it? That's why we are having these conversations. There seems to be a way to solve the problem with the offset-based status checker: instead of continuously fetching the latest stream offset and chasing the moving target, it only fetches the latest offset again once it has reached the last fetched offset, and keeps doing so until they converge before returning ServiceStatus.GOOD. What do you think?
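A sketch of that convergence idea (again with illustrative names; StreamReader is the same hypothetical interface as in the earlier sketch): fetch the stream head once, wait for consumption to reach it, then re-fetch, and report caught up only when the consumed offset and the freshly fetched head converge.

```java
// Sketch of the proposed check, not an actual Pinot implementation.
class ConvergingStatusChecker {
  private long targetOffset = -1;

  boolean isCaughtUp(StreamReader stream, long currentOffset) {
    if (targetOffset < 0) {
      targetOffset = stream.fetchLatestOffset(); // fetched once, not on every call
    }
    if (currentOffset < targetOffset) {
      return false; // still behind the last fetched target
    }
    // Target reached: fetch the head once more to see whether new events arrived.
    long newTarget = stream.fetchLatestOffset();
    if (newTarget <= currentOffset) {
      return true; // converged; the caller can report ServiceStatus.GOOD
    }
    targetOffset = newTarget; // new data arrived meanwhile; keep catching up
    return false;
  }
}
```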
Not really. The server doesn't serve queries until the consumption catch-up wait is over. So all queries go to the other server, which has not been restarted and has ingested the latest stream events.
It does lead to split views in our experiments. As you mentioned in #7274, the time threshold doesn't guarantee the server will be fully caught up when the wait is over.
The issue also manifests when a segment is flushed and built. Since syncing the built segment to the non-committer servers may take some time while the committer server starts consuming first, you will see different results as the query is routed to different servers until the non-committer servers catch up.
We run Pinot 0.8.0. When setting up a realtime table consuming from a single-partition Kafka topic, we set `replicasPerPartition` to 2, which means there are two consuming segments running on two separate Pinot servers. However, when you take one server down, wait for a while and then bring it back up, your query could still hit either of the two consuming segments although one is lagging behind, hence leading to inconsistent/incorrect query results. As a user, we expect the query to get routed to the consuming segment that has newer data.

Steps to reproduce:
1. Set up a realtime table with `replicasPerPartition` set to 2. Also set `realtime.segment.flush.threshold.time` to something like `12h` to make sure there is no segment flush during testing.
2. Take one server down, wait for a while, and then bring it back up.
3. Run `select * from <table> order by <columnA> desc` (which scans all segments) in the UI repeatedly, and you will see that `numDocsScanned` alternates as the query gets routed to different consuming segments.