Restoring chunks with same metadata #2698
Comments
Hi. I couldn't reproduce it. Are there any other steps?
In my example, the chunks are still on the stage (buffer.b* filenames) when Fluentd restarts. I think that makes a big difference, since the behavior of the resume function in buf_file differs according to chunk.state: fluentd/lib/fluent/plugin/buf_file.rb, Line 163 in e1c8ed5
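For context, here is a minimal sketch of that distinction (this is not the actual fluentd source; restore_chunk is a hypothetical helper standing in for FileChunk creation): staged chunks land in a Hash keyed by metadata, while queued chunks land in an Array.

```ruby
# Hedged sketch of buf_file's resume on restart, assuming staged chunks
# come from buffer.b* files and queued chunks from buffer.q* files.
def resume
  stage = {}
  queue = []
  Dir.glob(File.join(@path, 'buffer.*')).each do |path|
    chunk = restore_chunk(path)      # hypothetical helper for this sketch
    if chunk.state == :staged
      stage[chunk.metadata] = chunk  # same-metadata chunks overwrite each other
    else
      queue << chunk                 # an Array, so queued chunks never collide
    end
  end
  [stage, queue]
end
```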
@ganmacs Any news? The problem is still there. To give you context, the Fluentd service (dockerized) is reloaded weekly due to log rotation, and some orphan chunks may remain because of this problem. The use of
Sorry for the delay.
Describe the bug
When Fluentd restarts, it restores all chunks stored in buffer files and intends to flush them, but only the last restored chunk is actually flushed.
To Reproduce
(100 buffer files are created)
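For illustration, a hedged sketch of a buffer configuration that would accumulate many staged file-buffer chunks (every path and value here is an assumption, not taken from the report):

```
<match app.**>
  @type file
  path /var/log/fluentd/output
  <buffer>
    @type file
    path /var/log/fluentd/buffer
    chunk_limit_records 10   # small limit, so chunks pile up quickly
    flush_interval 300s      # long enough that staged chunks survive a restart
  </buffer>
</match>
```

With a low chunk_limit_records, each batch of records splits into a new staged chunk that shares its metadata with the previous one.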
Expected behavior
All chunks are restored and flushed when Fluentd restarts
Your Environment
NAME="Alpine Linux"
VERSION_ID=3.9.4
Your Configuration
Your Error Log
Before the Fluentd reload, all chunks are flushed correctly, but at restart only the last restored chunk is flushed. If we reload as many times as there are remaining chunks, they all eventually get flushed.
Additional context
When chunk_limit_records is reached, a new chunk is created, but it has the same metadata as the previous one. If those chunks have not been flushed before a restart, then when Fluentd resumes, the @stage map, which is keyed by metadata, only keeps the last chunk restored for each key:
fluentd/lib/fluent/plugin/buffer.rb, Line 185 in e1c8ed5
fluentd/lib/fluent/plugin/buf_file.rb, Line 165 in e1c8ed5
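A standalone illustration of that collision, assuming a metadata Struct shaped like fluentd's Buffer::Metadata (the values are made up): because Struct instances with equal members hash equally, 100 restored chunks that share metadata collapse to a single entry in the stage Hash.

```ruby
# Identical metadata from 100 restored staged chunks collapses to one entry.
Metadata = Struct.new(:timekey, :tag, :variables)

stage = {}
100.times do |i|
  meta = Metadata.new(0, 'app.log', nil)  # same metadata for every chunk
  stage[meta] = "chunk_#{i}"              # each assignment overwrites the last
end

puts stage.size        # => 1
puts stage.values.last # => "chunk_99" (only the last restored chunk survives)
```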
Maybe we could add information in the metadata indicating that a chunk was created because chunk_limit_records was reached, so that each of these chunks gets unique metadata? A sketch of that idea follows.
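As a hedged sketch of that proposal (the seq field is hypothetical, not part of fluentd's metadata today), a per-split sequence number would make each chunk's key unique, so nothing gets overwritten on resume:

```ruby
# Hypothetical seq field, bumped each time chunk_limit_records splits a chunk.
Metadata = Struct.new(:timekey, :tag, :variables, :seq)

stage = {}
100.times do |i|
  stage[Metadata.new(0, 'app.log', nil, i)] = "chunk_#{i}"
end

puts stage.size  # => 100, so every restored chunk would be enqueued and flushed
```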