[Filebeat] cloudtrail bulk index error #16293
Pinging @elastic/siem (Team:SIEM)
@brummelbaer Thank you for reporting this. The ingest pipeline has 5 scripts in it. Would it be accurate to say that you had more than 20,000 individual CloudTrail events being processed within 5 minutes? If so, I think I can rewrite the pipeline so we only have 1 script execution per CloudTrail event. That would at least get you up to the 100000/5m limit you tried.
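One way to collapse the per-field scripts into a single script execution is to drive one Painless script processor from `params`. A minimal sketch (the field list and the assumption that `ctx.aws.cloudtrail` already exists are illustrative, not the actual pipeline change; changing `params` does not trigger recompilation, which is why the error message suggests "scripts with parameters"):

```json
{
  "script": {
    "lang": "painless",
    "params": {
      "fields": {
        "serviceEventDetails": "service_event_details",
        "requestParameters": "request_parameters",
        "responseElements": "response_elements"
      }
    },
    "source": "for (entry in params.fields.entrySet()) { def v = ctx.json[entry.getKey()]; if (v != null) { ctx.aws.cloudtrail[entry.getValue()] = v.toString(); } }"
  }
}
```

Because the script source is now constant, it compiles once and stays cached regardless of how many fields it handles.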
@brummelbaer we had one other idea. It could be that there are enough scripts running that they are being evicted from the cache and have to be recompiled. Could you try setting
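For reference, the cache-related script settings live in `elasticsearch.yml`. A sketch with illustrative values, not tuned recommendations:

```yaml
# elasticsearch.yml -- illustrative values, tune for your cluster

# Let more compiled scripts stay cached before eviction
# (static node setting; default is 100 in 7.x)
script.cache.max_size: 300

# Raise the compilation circuit-breaker limit
# (the error above shows the 7.x default of 75/5m)
script.max_compilations_rate: 1000/5m
```

Note that `script.cache.max_size` is a static node setting and requires a restart, while `script.max_compilations_rate` can also be changed dynamically via the cluster settings API.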
@leehinman Thank you for your quick reply and your help! I'm currently in a "demo" environment where I'm facing spikes of around 1k events in 5 minutes, but not 20k. That is why I was confused that my settings had no effect. Greetings & Thanks brummelbaer
Hi
I'm facing an issue with the new Filebeat module for AWS CloudTrail. I set it up according to the guidelines in the documentation (output directly to Elasticsearch) and it mostly works, but not as expected. The module can index single events and small batches, but when it tries to index many events at once it always produces a bulk index error, shown below. I tried setting "script.max_compilations_rate" to higher values (1000/5m and 100000/5m), but I still get the same error. Is this a problem with my configuration, or is this a known bug?
Any help is appreciated!
Greetings
brummelbaer
Cluster information
elasticsearch version: 7.5.2
filebeat version: 7.7.0 (branch 7.x)
filebeat.yml

```yaml
filebeat.modules:
  - module: aws
    s3access:
      enabled: false
    elb:
      enabled: false
    vpcflow:
      enabled: false
    cloudtrail:
      enabled: true
      var.queue_url: https://sqs.eu-central-1.amazonaws.com/queue-url
      var.shared_credential_file: /path/to/credential/file
```
debug error

```
2020-02-13T07:25:02.555Z DEBUG [elasticsearch] elasticsearch/client.go:523 Bulk item insert failed (i=46, status=500): {"type":"illegal_state_exception","reason":"pipeline with id [filebeat-7.7.0-aws-cloudtrail-pipeline] could not be loaded, caused by [ElasticsearchParseException[Error updating pipeline with id [filebeat-7.7.0-aws-cloudtrail-pipeline]]; nested: GeneralScriptException[Failed to compile inline script [if (ctx.json.serviceEventDetails != null) {\n ctx.aws.cloudtrail.service_event_details = ctx.json.serviceEventDetails.toString();\n}\n] using lang [painless]]; nested: CircuitBreakingException[[script] Too many dynamic script compilations within, max: [75/5m]; please use indexed, or scripts with parameters instead; this limit can be changed by the [script.max_compilations_rate] setting];; GeneralScriptException[Failed to compile inline script [if (ctx.json.serviceEventDetails != null) {\n ctx.aws.cloudtrail.service_event_details = ctx.json.serviceEventDetails.toString();\n}\n] using lang [painless]]; nested: CircuitBreakingException[[script] Too many dynamic script compilations within, max: [75/5m]; please use indexed, or scripts with parameters instead; this limit can be changed by the [script.max_compilations_rate] setting];; org.elasticsearch.common.breaker.CircuitBreakingException: [script] Too many dynamic script compilations within, max: [75/5m]; please use indexed, or scripts with parameters instead; this limit can be changed by the [script.max_compilations_rate] setting]"}
```
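The error message itself points at `script.max_compilations_rate`, which is a dynamic cluster setting in 7.x, so it can also be raised without a restart. A sketch against a local cluster (the value is illustrative):

```shell
# Raise the script compilation circuit-breaker limit at runtime;
# use "persistent" instead of "transient" to survive a full cluster restart
curl -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"transient": {"script.max_compilations_rate": "1000/5m"}}'
```

As discussed above, though, raising the rate may only mask cache eviction: if more distinct scripts are in rotation than the script cache holds, they keep getting recompiled no matter how high the rate limit is.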