Low/No GC API #6517
Comments
@shashapat have you seen this: #6469? We've recently released a much more memory-friendly option for most of the SDK implementation.
@jkwatson Thanks a ton! This is actually exactly what I was looking for. Are there docs on how to use this, and where in the documentation should I look for this API?
You can see how to enable it here: https://github.com/open-telemetry/opentelemetry-java/tree/main/sdk-extensions/autoconfigure#exporters
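For reference, with autoconfigure the low-allocation path is toggled via a system property or environment variable. The exact key has varied across SDK versions and was experimental at the time, so treat the property name below as an assumption and verify it against the linked README:

```properties
# Assumed/version-dependent key: enables the reusable-data ("low allocation")
# serialization path in the OTLP exporters via autoconfigure.
otel.java.experimental.exporter.memory_mode=reusable_data
```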
Ah, so if I understand this correctly, there is no low-allocation exporter for traces, right?
It looks like it is supported. I think our docs might need to be updated to reflect that: https://github.com/open-telemetry/opentelemetry-java/blob/main/exporters/otlp/all/src/main/java/io/opentelemetry/exporter/otlp/http/trace/OtlpHttpSpanExporter.java#L85
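Programmatically, the option referenced in that source line can be set on the exporter builder. A hedged sketch (the `setMemoryMode` builder method and the `MemoryMode.REUSABLE_DATA` constant are inferred from the linked source and may be experimental in your SDK version; the endpoint is a placeholder):

```java
import io.opentelemetry.exporter.otlp.http.trace.OtlpHttpSpanExporter;
import io.opentelemetry.sdk.common.export.MemoryMode;

// Sketch: configure the OTLP HTTP span exporter to reuse serialization
// state between exports instead of allocating fresh objects on each export.
OtlpHttpSpanExporter exporter =
    OtlpHttpSpanExporter.builder()
        .setMemoryMode(MemoryMode.REUSABLE_DATA) // low-allocation path
        .setEndpoint("http://localhost:4318/v1/traces") // placeholder endpoint
        .build();
```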
Oh that's perfect, thanks! I will close this issue then. Appreciate the help.
Hi! Thank you for all the amazing work being done on this project.
I cannot seem to find the answer to my question in the docs, so if it is there, please point me to it and feel free to close the issue.
One of the applications I am working on would love to use this package to export traces and metrics, but it is extremely performance-sensitive. To that end, I was wondering if there is an API for exporting said traces and metrics in a manner that causes little or no GC pressure. In essence, I was wondering if there is some sort of mutable API that would let me allocate my objects up front for creating traces and metrics, causing no allocations during the normal running of the program.
Thanks for your help in advance!
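The general technique the question describes, preallocating mutable carriers up front and reusing them on the hot path, can be sketched in plain Java. All names here are illustrative and are not part of the OpenTelemetry API:

```java
import java.util.ArrayDeque;

// Illustrative only: a tiny fixed-size pool of reusable, mutable measurement
// objects, so the steady-state hot path performs no allocations.
final class MeasurementPool {
    static final class Measurement {
        String name;
        long value;
        void reset() { name = null; value = 0; }
    }

    private final ArrayDeque<Measurement> free = new ArrayDeque<>();

    MeasurementPool(int size) {
        for (int i = 0; i < size; i++) free.push(new Measurement()); // preallocate
    }

    Measurement acquire() {
        Measurement m = free.poll();
        return m != null ? m : new Measurement(); // allocate only if exhausted
    }

    void release(Measurement m) {
        m.reset();      // scrub state before returning to the pool
        free.push(m);
    }
}
```

The OTLP exporters' reusable-data memory mode applies the same idea internally, so application code does not normally need to build this by hand.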