Describe the bug
Tracking snapshots are written at the start of each new segment. If the max segment size is very small (smaller than the tracking snapshot), osiris_log will enter a loop where it continuously reaches the max size just by writing the snapshot (or even the header).
Reproduction steps
...
Expected behavior
Only user and tracking delta chunks should count toward the max segment size.
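The expected behavior above can be illustrated with a minimal sketch. This is hypothetical pseudologic, not the actual osiris_log code: it simulates why a max segment size smaller than the tracking snapshot causes an endless roll loop, and how counting only user/tracking-delta bytes toward the limit avoids it. All names and sizes here are made up for illustration.

```python
SNAPSHOT_SIZE = 1200     # bytes written at the start of every new segment (assumed)
MAX_SEGMENT_SIZE = 1000  # a pathologically small max segment size

def write_chunks(chunks, count_snapshot_toward_limit, max_rolls=10):
    """Simulate writing chunks; return how many segment rolls were triggered."""
    rolls = 0
    size = SNAPSHOT_SIZE  # every new segment starts with a tracking snapshot
    tracked = size if count_snapshot_toward_limit else 0
    for chunk in chunks:
        # roll to a new segment whenever the tracked size hits the limit
        while tracked >= MAX_SEGMENT_SIZE:
            rolls += 1
            if rolls >= max_rolls:
                # runaway: each roll re-writes the snapshot, which alone
                # already exceeds the limit, triggering the next roll
                return rolls
            size = SNAPSHOT_SIZE
            tracked = size if count_snapshot_toward_limit else 0
        size += chunk
        tracked += chunk
    return rolls

# Buggy behavior: snapshot bytes count toward the limit, so the loop never ends.
assert write_chunks([20], count_snapshot_toward_limit=True) == 10

# Proposed behavior: only user/tracking-delta bytes count, so one small
# message never forces a roll.
assert write_chunks([20], count_snapshot_toward_limit=False) == 0
```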
I confirm that the same problem happened to me as well, and I will add some information that could help with debugging.
In my app's integration tests, I verified that I can submit a small message and consume it. The message was only about 20 bytes, so I set the max segment size to 1000 bytes and retention to 10 seconds, because I do not really need to keep those messages in automated tests.
The issue did not occur consistently for me, but it happens regardless of whether I submitted the message to the stream directly through the RabbitMQ web management API or through a client app (ruling out a loop in either of those).
Segments were produced infinitely and, strangely, this stopped only if I submitted another message. If I did not submit a message, it kept creating thousands of these files until the RabbitMQ instance ran out of memory and the whole Docker container crashed with it. It could then not be restarted, because the segments were loaded into memory on startup, crashing it immediately again (this is probably a separate issue), and the data files had to be deleted manually from the mnesia/stream folder to be able to start RabbitMQ.
I attached a sample segment file that is about 1200 bytes in size, which is larger than the max segment size: 00000000000000000874.segment.zip
The solution proposed by @kjnilsson sounds reasonable.
Or at least add a note to the documentation that the max segment size should be at least 1 MB or so. I noticed some sort of metadata chunk that was about 10,000 characters, so I'm not sure how large these metadata chunks can get.
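As a workaround along those lines, a test setup can keep the segment size well above any snapshot or metadata chunk while still using short retention. A minimal sketch using pika's queue-declare arguments for a RabbitMQ stream; the 1 MB floor is only the suggestion above, not a documented minimum, and the stream name is hypothetical:

```python
# Stream arguments for queue_declare; x-stream-max-segment-size-bytes and
# x-max-age are standard RabbitMQ stream arguments.
STREAM_ARGS = {
    "x-queue-type": "stream",
    "x-stream-max-segment-size-bytes": 10_000_000,  # stay far above snapshot size
    "x-max-age": "10s",                             # short retention for tests
}

# With an open pika channel this would be:
# channel.queue_declare(queue="test-stream", durable=True, arguments=STREAM_ARGS)

# sanity check: segment size respects the suggested 1 MB floor
assert STREAM_ARGS["x-stream-max-segment-size-bytes"] >= 1_000_000
```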