Cat endpoint for ingest pipelines #31954
Comments
Pinging @elastic/es-core-infra
We are discussing adding stats, such as counts of processed/failed ingest documents, to the existing stats APIs. Given that the configured names are not generally useful from an operator's point of view, I'm going to close this in favor of that future work.
As the use of ingest pipelines picks up with Beats modules, so does the need to eventually purge older, unused, and unwanted pipelines. Having an easy way to list them would help. Consider an older cluster that was created on 5.6 and is now running 7.0. If it accepts data from the Filebeat Nginx module, it would have these pipelines:
That is 662 lines of JSON in the Dev Console. Several of these pipelines need to be removed, and most, if not all, may be unknown to the admin anyway. And this is just one Filebeat module; other modules ship with their own pipelines, so some management is eventually needed here (a separate issue is probably warranted for Beats). Could we give this feature request another consideration? cc @ruflin
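In the absence of a `_cat` endpoint, one workaround is to pull the full `GET _ingest/pipeline` response and reduce it to just the pipeline IDs. The sketch below assumes the standard response shape (a JSON object keyed by pipeline ID); the sample pipeline names and bodies are invented for illustration.

```python
import json

# Illustrative stand-in for the body of GET _ingest/pipeline;
# real responses include full processor definitions per pipeline.
sample_response = json.loads("""
{
  "filebeat-6.5.0-nginx-access-default": {"description": "parse Nginx access logs", "processors": []},
  "filebeat-6.5.0-nginx-error-pipeline": {"description": "parse Nginx error logs", "processors": []},
  "xpack_monitoring_6": {"description": "monitoring pipeline", "processors": []}
}
""")

def pipeline_names(response: dict) -> list[str]:
    """The top-level keys of the response are the pipeline IDs."""
    return sorted(response)

for name in pipeline_names(sample_response):
    print(name)
```

This turns hundreds of lines of JSON into a short, reviewable list an operator can compare against the modules actually in use.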
A […] When there is no pipeline defined, […] Note that […]
Pinging @elastic/es-core-features (:Core/Features/Ingest)
Note that a "summary" option was added to the existing […]
That sounds great! For the record, I ended up using the […]
Note that I had only commented above; I am not the original poster.
This would be a great feature for us in our environment. Being able to keep track of certain pipelines would help us expand as we create new pipelines that reference old ones.
Describe the feature:
It'd be great if there was a `_cat` endpoint that showed defined pipelines. It might mean we need to save a bit more info into cluster state with each pipeline, perhaps things like created and updated times. I don't know if there's more metric-based info planned for these, but things like processed/failed events (for example) might also be useful.
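As a rough sketch of what such a `_cat`-style listing could look like, the snippet below formats per-pipeline metadata into the whitespace-aligned columns typical of cat APIs. The created times and processor counts are invented for illustration; created/updated times in particular are not fields Elasticsearch stores today (the request above is precisely to start recording them).

```python
# Hypothetical data: what cluster state might hold per pipeline if
# created times were recorded alongside the pipeline definition.
pipelines = {
    "filebeat-6.5.0-nginx-access-default": {"created": "2018-11-15T09:00:00Z", "processors": 4},
    "xpack_monitoring_6": {"created": "2019-02-01T12:30:00Z", "processors": 1},
}

def cat_pipelines(pipelines: dict) -> str:
    """Render a cat-style table: a header row plus one row per pipeline."""
    header = f"{'name':<40} {'created':<22} {'processors':>10}"
    rows = [
        f"{name:<40} {meta['created']:<22} {meta['processors']:>10}"
        for name, meta in sorted(pipelines.items())
    ]
    return "\n".join([header, *rows])

print(cat_pipelines(pipelines))
```

The fixed-width columns mirror the existing cat endpoints, which favor operator-readable text over nested JSON.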