Only trim logs to min persisted LSN across all known nodes #1781
tillrohrmann added a commit to tillrohrmann/restate that referenced this issue on Aug 2, 2024:
Until we can share partition processor snapshots between Restate nodes (e.g. by fetching them from S3), we can only trim the log once all known nodes have reached the trim point. Otherwise, we risk that a currently unavailable node still needs log entries that have already been trimmed. One crucial assumption is that no new nodes join the cluster once the first log trimming has happened; supporting joining nodes also requires the sharing of partition processor snapshots. This fixes restatedev#1781.
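The rule described in the commit message can be sketched as follows. This is a minimal illustration, not Restate's actual implementation; the `Lsn` alias, the node-name keys, and `safe_trim_point` are hypothetical stand-ins for the real types:

```rust
use std::collections::HashMap;

/// Hypothetical stand-in for a log sequence number; Restate's real `Lsn` type differs.
type Lsn = u64;

/// Safe trim point: the minimum persisted LSN across all known nodes.
/// Returns `None` if there are no known nodes, in which case we must not trim.
fn safe_trim_point(persisted_lsns: &HashMap<String, Lsn>) -> Option<Lsn> {
    persisted_lsns.values().copied().min()
}

fn main() {
    let mut persisted = HashMap::new();
    persisted.insert("node-1".to_owned(), 120);
    persisted.insert("node-2".to_owned(), 95); // lagging (possibly unavailable) node
    persisted.insert("node-3".to_owned(), 130);

    // The log may only be trimmed up to LSN 95; trimming further would
    // strand node-2, which still needs the entries above its persisted LSN.
    assert_eq!(safe_trim_point(&persisted), Some(95));
}
```

Taking the minimum rather than, say, a quorum is what makes the scheme safe without snapshots: any node, however far behind, can still replay everything above its own persisted LSN.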
tillrohrmann added a commit to tillrohrmann/restate that referenced this issue on Aug 2, 2024 (same commit message as above).
tillrohrmann added a commit to tillrohrmann/restate that referenced this issue on Aug 6, 2024 (same commit message as above).
Until we have support for creating a state snapshot and making that snapshot accessible to all nodes, we must not trim the log if any of the known nodes lags behind. If we wanted to support new nodes joining the cluster at any point in time, then we could never trim the log at all, because a new node has to replay the log from the beginning for a given partition processor.
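The last sentence follows directly from the min-based trim rule: a freshly joined node without a snapshot effectively sits at LSN 0, which pins the trim point at 0. A small sketch under the same assumptions as above (illustrative names, not Restate's API):

```rust
use std::collections::BTreeMap;

/// Hypothetical stand-in for a log sequence number.
type Lsn = u64;

/// Trim point as the minimum persisted LSN over all known nodes;
/// with no nodes we conservatively report 0 (trim nothing).
fn trim_point(persisted: &BTreeMap<&str, Lsn>) -> Lsn {
    persisted.values().copied().min().unwrap_or(0)
}

fn main() {
    let mut persisted = BTreeMap::from([("node-1", 120), ("node-2", 130)]);
    assert_eq!(trim_point(&persisted), 120);

    // A node that joins without a snapshot must replay from the start,
    // i.e. it has persisted LSN 0 ...
    persisted.insert("node-3", 0);
    // ... so the log can never be trimmed until that node catches up.
    assert_eq!(trim_point(&persisted), 0);
}
```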