S3 input and large buckets #14
It looks like the plugin loops through every object in the bucket before processing them. As you add objects, this list grows and takes longer to loop through. I need to do a bit more testing on this theory, but in processing a ton of files (>80K) that's what I seemed to see. I can't quite tell if it queues them all up, or if it processes them while looping through them. Amazon's API calls limit you to 1K objects at a time, but some of the libraries abstract this and add paging, such that a loop will go through everything. It would be nice to have the option of limiting how many objects it processes at a time.
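To illustrate the paging behavior described above, here is a rough sketch in plain Ruby (the `FakeBucket` class and its names are hypothetical stand-ins, not the plugin's or the SDK's actual API) of how an auto-paging list abstraction turns one innocent-looking loop into many API calls:

```ruby
# Hypothetical stand-in for an S3 bucket listing API that returns at most
# 1,000 keys per call, like Amazon's ListObjects.
class FakeBucket
  PAGE_SIZE = 1000

  def initialize(keys)
    @keys = keys
    @api_calls = 0
  end

  attr_reader :api_calls

  # Auto-paging iterator: it keeps issuing list calls until the bucket is
  # exhausted, so a plain loop silently walks the whole bucket.
  def each_object
    marker = 0
    loop do
      @api_calls += 1
      page = @keys[marker, PAGE_SIZE]
      break if page.nil? || page.empty?
      page.each { |key| yield key }
      marker += PAGE_SIZE
    end
  end
end

bucket = FakeBucket.new((1..80_000).map { |i| "log-#{i}.gz" })
seen = 0
bucket.each_object { |_key| seen += 1 }
# 80,000 keys at 1,000 per page means 80+ list calls before the loop ends.
```

With a `max_objects`-style limit, the loop could stop after the first page or two instead of paging through everything on every run.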
list_new_files runs through the bucket looking for keys that match the prefix and don't match the excludes, storing results in the sincedb. If you move objects to another bucket or prefix after they have been processed, this should speed up run times, since the list to run through and check would be much smaller. Not a solution, but a workaround at least.
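A simplified model of that scan (this is an illustrative sketch, not the plugin's actual implementation) shows why moving processed objects helps: every key still gets examined, even ones the sincedb will reject.

```ruby
require 'time'

# Simplified model of what a list_new_files-style scan has to do on every
# run: walk every key, keep those matching the prefix, drop excludes, and
# skip anything at or before the sincedb timestamp.
def list_new_files(objects, prefix:, exclude: nil, sincedb_time:)
  objects.select do |key, mtime|
    key.start_with?(prefix) &&
      (exclude.nil? || key !~ exclude) &&
      mtime > sincedb_time
  end.keys
end

objects = {
  'logs/app-1.gz'  => Time.parse('2015-01-01 10:00'),
  'logs/app-2.gz'  => Time.parse('2015-01-02 10:00'),
  'logs/tmp/x.tmp' => Time.parse('2015-01-02 11:00'),
  'other/app.gz'   => Time.parse('2015-01-02 12:00')
}

new_files = list_new_files(objects,
                           prefix: 'logs/',
                           exclude: /\.tmp$/,
                           sincedb_time: Time.parse('2015-01-01 12:00'))
# Only logs/app-2.gz survives all three filters, but every key was still
# examined -- moving processed objects elsewhere shrinks that scan.
```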
The root problem is that in ruby-aws-sdk, if you iterate through a bucket, checking
@lexelby I didn't know aws-sdk was doing a round trip when requesting that. Concerning adding the proxy support, this is an easy fix: add the option to our base AWS mixin, https://github.com/logstash-plugins/logstash-mixin-aws. I've looked quickly at the aws-sdk and how they use the proxy option.
Oh, that mixin looks perfect. I see that the SQS input uses it, for example. Here's the upstream bug, which they claim is fixed in a more recent version than logstash ships with: aws/aws-sdk-ruby#734
This is related, I believe: aws/aws-sdk-ruby#588. Since this plugin uses aws-sdk < 2, I think we're SOL until it's upgraded. I'll see about it if I have some time here, but there is a ticket open to get the mixin updated too.
I found a workaround, for my use-case at least. A couple, really. First, there's a pull request around somewhere for a fog-based S3 input, called s3fog. Like its author, I wanted to use the S3 input to pull CloudTrail logs into my ELK stack. I ended up using this: https://bitbucket.org/atlassianlabs/cloudtrailimporter. It's designed to skip logstash, which I think is kind of limiting, so I hacked on it: http://github.com/lexelby/cloudtrail-logstash/. Works quite nicely. Set up the SNS/SQS stuff as per https://github.com/AppliedTrust/traildash. I dumped traildash because I couldn't figure out how to build the darned thing.
Well, the switchover to v2 of the SDK was quick, but I can't seem to install the updated plugin locally for testing. :( This didn't help either: elastic/logstash#2779
If anyone else wants to give testing a shot, check out my fork over here: https://github.com/DanielRedOak/logstash-input-s3. Spec tests pass, but I haven't gotten to updating the integration tests.
PR submitted, so this can be closed if/when merged: #25
I want to send ELB logs to an S3 bucket. ELB logs for different services will be in different directories of my main log bucket. When I tried to put that in my S3 input conf, I am not getting any logs. Here is my S3 input conf file: But if I set a full file path as the prefix, then I can view the logs in Kibana (example: elb/production-XXXX/AWSLogs/XXXXXX/elasticloadbalancing/us-east-1/2016/02/24/). But I want to send logs from all subdirectories of my bucket.
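For reference, a minimal S3 input of the kind described might look like the following (the bucket name is a placeholder, and this is a sketch, not the poster's actual conf; note that the plugin's `prefix` is matched as a literal key prefix, not a glob, so a short parent prefix should cover all keys beneath it):

```
input {
  s3 {
    bucket => "my-main-log-bucket"   # placeholder bucket name
    prefix => "elb/"                 # literal key prefix, not a glob
    region => "us-east-1"
    type   => "elb"
  }
}
```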
@nowshad-amin Did you find a workaround for this issue? Using the version with the patch from @DanielRedOak worked like a charm.
The corresponding PR for this issue was merged, and I've updated to logstash 5.0 and s3-input 3.1.1, but I'm still seeing slower than expected processing times for S3 access logs. This could perhaps be because Logstash isn't fully utilizing the available CPU (hovering around 10-20%). Take this with a pinch of salt, as I'm running everything on localhost as an orchestrated ELK stack, and my Mac's CPU & network utilization are both pretty low. Any ideas?
I tried upping the pipeline workers & batch size, but didn't notice a huge increase in utilization. Probably just rookie mistakes combined with input size and runtime environment.
+1
Any updates on speeding this up?
migrated from: https://logstash.jira.com/browse/LOGSTASH-2125
The S3 input takes a long time until the first logfile is processed:
Running it with
shows me that the bucket is used. As soon as I start logstash I see via tcpdump that there is a lot of traffic between the host and s3 going on.
Now, that bucket currently has 4451 .gz files just in the root folder. Subfolders have even more files.
If I now create another bucket and put only one of the log files in it, I can see that this logfile is downloaded and processed more or less immediately.
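A rough back-of-envelope consistent with that observation (the latency figure is an illustrative assumption, not a measurement, and the per-key round trip is the v1 SDK behavior discussed earlier in this thread):

```ruby
# Rough cost model for the listing phase that runs before any processing.
root_objects        = 4451   # .gz files in the bucket root (from the report)
keys_per_list_call  = 1000   # S3 ListObjects page size
per_request_latency = 0.05   # assumed 50 ms per HTTP round trip

# Paging through the root alone takes a handful of list calls...
list_calls = (root_objects / keys_per_list_call.to_f).ceil

# ...but if each key also costs one metadata round trip (the v1 SDK
# behavior described above), the pre-processing phase issues thousands
# of requests before the first file is downloaded.
requests_before_first_file = list_calls + root_objects
estimated_seconds = requests_before_first_file * per_request_latency
# Roughly 4,456 requests and several minutes of pure request latency,
# which would explain heavy S3 traffic with no files processed yet.
```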