
A tracking snapshot chunk should not trigger retention eval #170

Open
kjnilsson opened this issue Nov 1, 2024 · 1 comment
Labels
bug Something isn't working

Comments

@kjnilsson
Contributor

Describe the bug

Tracking snapshots are written at the start of each new segment. If the max segment size is very small, smaller than the tracking snapshot itself, osiris_log will enter a loop where it continuously reaches the max size just by writing the snapshot (or even just the header).
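
For illustration, here is a minimal, self-contained sketch of the feedback loop (hypothetical names and sizes; not the actual osiris_log code): when the snapshot written on opening a segment is already larger than the max segment size, every roll-over immediately triggers the next one.

```python
# Hypothetical model of the runaway roll-over; not the actual osiris_log code.
MAX_SEGMENT_SIZE = 1000        # bytes, as configured
TRACKING_SNAPSHOT_SIZE = 1200  # bytes, larger than the max segment size

def open_new_segment() -> int:
    """Opening a segment writes the tracking snapshot first; return its size."""
    return TRACKING_SNAPSHOT_SIZE

segment_count = 1
size = open_new_segment()
# The max-size check runs after every chunk, including the snapshot itself,
# so each freshly opened segment immediately rolls over into the next one.
while size >= MAX_SEGMENT_SIZE and segment_count < 5:  # capped for this demo
    segment_count += 1
    size = open_new_segment()
print(f"created {segment_count} segments without a single user chunk")
```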

Reproduction steps

...

Expected behavior

Only user and tracking delta chunks should trigger the max segment size evaluation.
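
A minimal sketch of the proposed gate, assuming the three chunk types named in this issue (identifiers are illustrative, not osiris's actual constants):

```python
# Illustrative chunk types; osiris's real constants and values may differ.
CHUNK_USER = "user"
CHUNK_TRACKING_DELTA = "tracking_delta"
CHUNK_TRACKING_SNAPSHOT = "tracking_snapshot"

# Only these chunk types may trigger a roll-over / retention evaluation.
TRIGGERING_TYPES = {CHUNK_USER, CHUNK_TRACKING_DELTA}

def should_roll_over(chunk_type: str, segment_size: int, max_size: int) -> bool:
    """A snapshot (or bare header) written at segment start never rolls over."""
    return chunk_type in TRIGGERING_TYPES and segment_size >= max_size
```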


@kjnilsson added the bug label on Nov 1, 2024
@DominikFirla

I can confirm the same problem happened to me as well, and I will add some information that could help with debugging.

In my app's integration tests, I tested that I can publish a small message and consume it. The message was only about 20 bytes, so I set the max segment size to 1000 bytes and the retention to 10 seconds, because I do not really need to keep those messages around in automated tests.
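
For reference, a stream with these limits could be declared like this with pika; the queue name is hypothetical, while the `x-*` arguments are standard RabbitMQ stream arguments mirroring the setup above (depending on the server version, such a tiny segment size may or may not be validated):

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Stream queues must be durable; the tiny segment size reproduces the issue.
channel.queue_declare(
    queue="test-stream",  # hypothetical name
    durable=True,
    arguments={
        "x-queue-type": "stream",
        "x-stream-max-segment-size-bytes": 1000,  # far below snapshot size
        "x-max-age": "10s",                       # 10-second retention
    },
)
```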

The issue did not occur consistently for me, but it happens regardless of whether I publish the message to the stream directly through the RabbitMQ web management API or through a client app (ruling out that some loop happened in either of those).

Segments were produced infinitely, and strangely this stopped only when I published another message. If I did not publish a message, it kept creating thousands of these files until the RabbitMQ instance ran out of memory and the whole Docker container crashed with it. It could then not be restarted, because the segments would be loaded into memory on startup and crash it immediately again (this is probably a separate issue), and the data files had to be deleted manually from the mnesia/stream folder to be able to start RabbitMQ.

I attached a sample segment file that is about 1200 bytes in size, which is larger than the max segment size.
00000000000000000874.segment.zip

The solution proposed by @kjnilsson sounds reasonable.
Or at least add a note to the documentation that the max segment size should be at least 1 MB or so. I noticed some sort of metadata chunk that was about 10,000 characters, so I'm not sure how large these metadata chunks can get.
