Failed to flush the buffer, and the file buffer directory is filled with bad logs. #2534
This is already resolved since v1.4.0: https://github.com/fluent/fluentd/blob/master/CHANGELOG.md#enhancements-2
@repeatedly Thanks for the reply! So do I just need to upgrade fluentd to v1.4.0 and add the secondary section?
@repeatedly Hi, thanks for your suggestion. After adding `<secondary>` to my buffer configuration, fluentd works well, but I did not find the secondary files' directory.

```
<match **>
  @id elasticsearch
  @type elasticsearch
  @log_level info
  type_name flatten
  include_tag_key true
  host elasticsearch-logging
  port 9200
  logstash_format true
  logstash_prefix flatten-only-secondary
  flatten_hashes true
  flatten_hashes_separator _
  #index_name kubernetes-%Y%m%d-1
  <buffer>
    @type file
    path /var/log/fluentd-buffers/kubernetes.system.buffer
    flush_mode interval
    timekey 1h
    retry_type exponential_backoff
    flush_thread_count 4
    flush_interval 5s
    #retry_forever
    retry_timeout 1h
    retry_max_interval 30
    chunk_limit_size 2M
    queue_limit_length 100
    overflow_action block
  </buffer>
  # Secondary
  <secondary>
    @type secondary_file
    directory /tmp/fluentd
    basename bad-chunk-${chunk_id}
  </secondary>
</match>
```
@repeatedly Hi, after adding the secondary tag, the logs that could not be pushed to Elasticsearch are moved to the secondary path. But could I manually re-push these logs to Elasticsearch later?
Maybe you can do such a thing by:
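For what it's worth, one possible approach (a sketch only, not necessarily what the maintainer suggested): if the secondary files contain newline-delimited JSON events, they could be re-sent through Elasticsearch's `_bulk` API. The host, port, index prefix, and `bad-chunk-*` file pattern below come from the configuration earlier in this thread; `build_bulk` and `repush` are hypothetical helper names.

```python
import glob
import json
import urllib.request

def build_bulk(events, index):
    """Build an Elasticsearch _bulk request body (NDJSON) from a list of events."""
    lines = []
    for event in events:
        lines.append(json.dumps({"index": {"_index": index}}))  # action line
        lines.append(json.dumps(event))                         # document line
    return "\n".join(lines) + "\n"

def repush(directory="/tmp/fluentd", index="flatten-only-secondary",
           es_url="http://elasticsearch-logging:9200/_bulk"):
    """Re-send events from secondary files, assuming one JSON object per line."""
    for path in glob.glob(f"{directory}/bad-chunk-*"):
        with open(path, encoding="utf-8", errors="replace") as f:
            events = [json.loads(line) for line in f if line.strip()]
        if not events:
            continue
        req = urllib.request.Request(
            es_url,
            data=build_bulk(events, index).encode("utf-8"),
            headers={"Content-Type": "application/x-ndjson"},
        )
        urllib.request.urlopen(req)
```

Note that this assumes the secondary files are valid line-delimited JSON; if a chunk is corrupted (as in this issue), `json.loads` will raise and the file would need manual inspection first.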
Describe the bug
Hi, I want to use Fluentd to collect logs from my Kubernetes worker nodes and send them to Elasticsearch. At first everything was fine, but some files under fd-agent's file buffer directory contain characters that cannot be decoded; viewing them with cat in MobaXterm looks like the output below:
So Fluentd could not send them to ES.
I was under the impression that Fluentd would move these bad chunks to a tmp directory, but that does not happen. A large number of unprocessed logs take up a lot of disk space.
To Reproduce
- `unexpected error while checking flushed chunks` issue before the fluentd pod restarted because of OOM.
- `failed to flush the buffer` errors logged by the two OOMKilled fluentd pods before that.

Expected behavior
I expect these bad chunks to be moved to a tmp directory; they take up nearly 920M in the buffer directory.
Your Environment
- fluentd 1.2.4
Your Configuration
My fluentd configuration and pod YAML are like https://github.com/kubernetes/kubernetes/blob/a5fcaa87f6bd06af87c9e10eff2a8c539f571cf8/cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml and https://github.com/kubernetes/kubernetes/blob/a5fcaa87f6bd06af87c9e10eff2a8c539f571cf8/cluster/addons/fluentd-elasticsearch/fluentd-es-configmap.yaml
I just changed the output section as shown above.
Your Error Log
Additional context
There are thousands of *.log and *.log.meta files in fluentd file buffer directory.
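To quantify the buildup described above, a quick sketch for counting the file-buffer chunk files (`buffer_usage` is a hypothetical helper; the default path comes from the buffer configuration in this thread):

```shell
# Count fluentd file-buffer chunk files (data and metadata).
# buffer_usage is a hypothetical helper; the default path is from the config above.
buffer_usage() {
    dir="${1:-/var/log/fluentd-buffers}"
    # *.log are chunk data files, *.log.meta their metadata companions
    find "$dir" \( -name '*.log' -o -name '*.log.meta' \) 2>/dev/null | wc -l
}

buffer_usage "$@"
```

Running `du -sh /var/log/fluentd-buffers` alongside shows the total disk usage (the ~920M mentioned earlier).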