
FiberId$Runtime instances are never released and continuously accumulate #537

Closed
Gregory-Berkman-Imprivata opened this issue Jul 28, 2023 · 3 comments


@Gregory-Berkman-Imprivata

Mentioned in #513

We are running into a very similar performance problem. We are not using streaming, but we notice that as we receive gRPC requests over time, the number of FiberId$Runtime instances continuously increases and never drops. Eventually performance degrades significantly and Kubernetes kills the node.

#515 was supposed to fix this issue, but I tested the new release candidate (RC6) of 0.6.0 and the issue is still present.

/service $ jmap -histo 1 | grep FiberId
 num     #instances         #bytes  class name (module)
  10:        109775        3512800  zio.FiberId$Runtime

As you can see, the number of instances and the bytes they occupy keep increasing.

@ghostdogpr
Contributor

I tried exposing a simple gRPC server with RC6 and I am not able to reproduce this issue, so maybe it has another cause. Could you provide a reproducer?

In my case I can see the number of zio.FiberId$Runtime instances increasing, but looking at a memory dump, they are all unreachable from GC roots, and if I force a GC they all disappear immediately. They also disappear if I keep making client calls.
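
For reference, jmap can restrict the histogram to GC-reachable objects with the :live option, which forces a full GC before counting; re-running the command from the report above this way shows whether the instances are actually retained:

/service $ jmap -histo:live 1 | grep FiberId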

@Gregory-Berkman-Imprivata
Author

I tested this out and I now believe this is an issue with the version of zio-telemetry we were using. The zio-telemetry library provides two options for its Tracing environment, live and propagating; we were using propagating. When I set the environment type to live, we started to see the GC clean up the FiberId.Runtime instances. Sorry for any inconvenience; this issue does not seem to be caused by zio-grpc.
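
For anyone hitting the same symptom, the change amounts to which Tracing layer gets wired into the environment. Below is a minimal sketch, assuming the zio-telemetry 3.x package layout; the upstream layers that provide the OpenTelemetry tracer and context storage are represented by a placeholder, since they differ by version:

import zio._
import zio.telemetry.opentelemetry.tracing.Tracing

object TracingWiring {
  // Placeholder for whatever supplies the OpenTelemetry tracer / context
  // storage in your application; not part of zio-telemetry's API.
  val otelLayers: ZLayer[Any, Throwable, Any] = ???

  // Before: the propagating Tracing environment, under which the
  // FiberId$Runtime instances kept accumulating in our service:
  //   val tracing = otelLayers >>> Tracing.propagating
  //
  // After: the live Tracing environment, after which the instances became
  // unreachable and were collected normally:
  //   val tracing = otelLayers >>> Tracing.live
}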

@thesamet
Contributor

thesamet commented Aug 1, 2023

Thanks @Gregory-Berkman-Imprivata for closing the loop on this, and @ghostdogpr for attempting to reproduce it.
