How to read entire log? #378
-
Hello, in the documentation there is a hard-coded message count of 100, at which point it stops reading. I have a situation where I want to read the log from start to end, then stop listening. I could implement the documentation example, replacing the hardcoded count with some end-of-stream check. Would this be the correct approach? It wouldn't work if the stream was empty, and I don't really want to rely on timeouts to figure out whether I'm at the end of the stream either. Ideally I would like to read the stream metadata somehow to get the latest offset, without actually having to consume the stream.
-
Hi @CoenraadS
There is no way to know when you have reached the end of the stream. If you don't have messages for some time, you can consider that the end of the stream. We will think about providing stats about that (not sure we will add it).
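A minimal, language-agnostic sketch of that idle-timeout idea (plain Python rather than the .NET client; the queue here only stands in for wherever the consumer's message handler delivers messages):

```python
import queue

def drain_until_idle(messages: queue.Queue, idle_seconds: float) -> list:
    """Keep consuming until no message arrives for `idle_seconds`,
    then treat the stream as fully read."""
    received = []
    while True:
        try:
            received.append(messages.get(timeout=idle_seconds))
        except queue.Empty:
            # No message for a while: assume we reached the end.
            return received
```

In the real client, messages arrive through the consumer's message callback; the same "reset a timer on every message, stop when it fires" logic applies.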
-
Thanks for the ideas. I'm aware it's a moving target; unfortunately it's a
requirement I have to fulfill: a client must be able to download all
messages present at the time of connection, over a potentially bad network
(hence I'm trying to avoid timeouts).
I will try idea 2, it sounds doable
…On Tue, May 28, 2024, 6:13 PM Arnaud Cogoluègnes ***@***.***> wrote:
Note the "last offset" of a stream is a moving target, unless the stream
is read-only (no application publishing to it).
There are different ways to read the whole stream:
- Use the StreamStats command and read the committedChunkId value. This
is a good enough approximation of the end of the stream at a given time.
However precise it is, the value is stale as soon as you receive it if
messages are still being published to the stream.
- Create a consumer with the LAST offset specification and calculate
the last offset from the chunk information (the chunk ID and the number
of messages in the chunk). This requires using the low-level API of
the client library. Then you can create a consumer and stop consuming when
you reach that "last" offset.
- Create a consumer with the FIRST offset specification, process
messages, and use a timeout mechanism to stop the consumer after no
messages have been received for x seconds (10 or 20 seconds should be
appropriate).
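To make the second option concrete, here is a hedged sketch of the arithmetic in plain Python (not the .NET client API), assuming, as the text suggests, that a chunk ID is the offset of the first message in that chunk:

```python
def last_offset_in_chunk(chunk_id: int, num_messages: int) -> int:
    """Assuming the chunk ID is the offset of the chunk's first message,
    the last offset it contains is chunk_id + num_messages - 1."""
    return chunk_id + num_messages - 1

def consume_until(target_offset: int, messages):
    """Process (offset, payload) pairs and stop once target_offset is reached."""
    out = []
    for offset, payload in messages:
        out.append(payload)
        if offset >= target_offset:
            break
    return out
```

With the real client you would read the chunk ID and message count from the chunk delivered to a LAST-offset consumer, compute the target, then run a second consumer from FIRST and stop at that offset.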