
[Filebeat] cloudtrail bulk index error #16293

Closed · brummelbaer opened this issue on Feb 13, 2020 · 4 comments
Labels: Filebeat

@brummelbaer
Hi

I'm facing an issue with the new Filebeat module for aws-cloudtrail. I set it up according to the guidelines in the documentation (output directly to Elasticsearch) and it mostly works, but not as expected. The module indexes single events and small batches fine, but as soon as it tries to index a larger number of events it always produces the bulk index error shown below. I tried raising "script.max_compilations_rate" (to 1000/5m and 100000/5m), but I still get the same error. Is this a problem with my configuration or is this a known bug?

Any help is appreciated!

Greetings
brummelbaer

Cluster information

elasticsearch version: 7.5.2
filebeat version: 7.7.0 (branch 7.x)

filebeat.yml

filebeat.modules:
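
For context, a minimal aws-cloudtrail module configuration along the lines of the documentation would look roughly like this (a sketch only; the SQS queue URL and credential profile below are placeholders, not the actual values used here):

```yaml
filebeat.modules:
  - module: aws
    cloudtrail:
      enabled: true
      # SQS queue receiving S3 notifications for the CloudTrail bucket (placeholder URL)
      var.queue_url: https://sqs.us-east-1.amazonaws.com/123456789012/cloudtrail-events
      # AWS credentials profile to use (placeholder)
      var.credential_profile_name: default

output.elasticsearch:
  hosts: ["localhost:9200"]
```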

debug error

2020-02-13T07:25:02.555Z DEBUG [elasticsearch] elasticsearch/client.go:523 Bulk item insert failed (i=46, status=500): {"type":"illegal_state_exception","reason":"pipeline with id [filebeat-7.7.0-aws-cloudtrail-pipeline] could not be loaded, caused by [ElasticsearchParseException[Error updating pipeline with id [filebeat-7.7.0-aws-cloudtrail-pipeline]]; nested: GeneralScriptException[Failed to compile inline script [if (ctx.json.serviceEventDetails != null) {\n ctx.aws.cloudtrail.service_event_details = ctx.json.serviceEventDetails.toString();\n}\n] using lang [painless]]; nested: CircuitBreakingException[[script] Too many dynamic script compilations within, max: [75/5m]; please use indexed, or scripts with parameters instead; this limit can be changed by the [script.max_compilations_rate] setting];; GeneralScriptException[Failed to compile inline script [if (ctx.json.serviceEventDetails != null) {\n ctx.aws.cloudtrail.service_event_details = ctx.json.serviceEventDetails.toString();\n}\n] using lang [painless]]; nested: CircuitBreakingException[[script] Too many dynamic script compilations within, max: [75/5m]; please use indexed, or scripts with parameters instead; this limit can be changed by the [script.max_compilations_rate] setting];; org.elasticsearch.common.breaker.CircuitBreakingException: [script] Too many dynamic script compilations within, max: [75/5m]; please use indexed, or scripts with parameters instead; this limit can be changed by the [script.max_compilations_rate] setting]"}
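For reference, script.max_compilations_rate named in the error is a dynamic cluster setting, so a value like the 1000/5m mentioned above can be applied without a restart, roughly like this (a sketch assuming Elasticsearch is reachable on localhost:9200):

```sh
# Raise the dynamic script compilation limit via the cluster settings API
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "transient": {
    "script.max_compilations_rate": "1000/5m"
  }
}'
```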

@brummelbaer changed the title from "filebeat cloudtrail bulk index error" to "[Filebeat] cloudtrail bulk index error" on Feb 13, 2020
@elasticmachine
Collaborator

Pinging @elastic/siem (Team:SIEM)

@leehinman
Contributor

@brummelbaer Thank you for reporting this.

The ingest pipeline has 5 scripts in it. Would it be accurate to say that more than 20,000 individual CloudTrail events were being processed within 5 minutes? If so, I think I can rewrite the pipeline so that we only have 1 script execution per CloudTrail event. That would at least get you up to the 100000/5m you tried.
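Just to illustrate the idea (not the actual pipeline change): the per-field scripts could be folded into a single script processor, so each event hits the compilation cache once instead of five times. Field names other than service_event_details are assumptions here.

```json
{
  "script": {
    "lang": "painless",
    "source": "if (ctx.json.serviceEventDetails != null) { ctx.aws.cloudtrail.service_event_details = ctx.json.serviceEventDetails.toString(); } if (ctx.json.requestParameters != null) { ctx.aws.cloudtrail.request_parameters = ctx.json.requestParameters.toString(); } if (ctx.json.responseElements != null) { ctx.aws.cloudtrail.response_elements = ctx.json.responseElements.toString(); }"
  }
}
```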

@leehinman
Contributor

@brummelbaer we had one other idea. It could be that enough scripts are running that they are being evicted from the cache and have to be recompiled. Could you try setting

script.cache.max_size to 500 (the default is 100)?

https://www.elastic.co/guide/en/elasticsearch/reference/master/modules-scripting-using.html#modules-scripting-using-caching
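In 7.x, script.cache.max_size is a static node setting, so it goes in elasticsearch.yml on each node and takes effect after a restart, roughly (a sketch):

```yaml
# elasticsearch.yml
# Allow more compiled scripts to stay cached so ingest pipeline scripts are not recompiled
script.cache.max_size: 500
```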

@brummelbaer
Author

@leehinman Thank you for your quick reply and your help!

I'm currently in a "demo" environment where I'm seeing spikes of around 1k events in 5 minutes, but not 20k. That is why I was confused about why my settings would not work.
Setting script.cache.max_size to 500, however, worked fine for now. I'm able to index all events into my cluster. I will keep monitoring this behaviour to see if it holds up in larger environments.

Greetings & Thanks

brummelbaer
