help request: 3.2.1 memory leak #10618
Comments
Hey @wklken I have noticed a similar issue and was able to remove it by forcing the consumer label in the Prometheus exporter to an empty string. I use the following in my Dockerfile to patch the issue:

```dockerfile
# Patch https://github.com/apache/apisix/blob/3.7.0/apisix/plugins/prometheus/exporter.lua#L228 to avoid per-consumer metrics.
RUN sed -i \
    -e 's/ctx.consumer_name or ""/""/g' \
    /usr/local/apisix/apisix/plugins/prometheus/exporter.lua
```

Hope this helps.
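For context, here is a minimal sketch (not APISIX's actual exporter code; the dict, metric, and variable names are illustrative assumptions) of why dropping the consumer label helps: in nginx-lua-prometheus every distinct combination of label values becomes its own stored series, so a per-consumer label multiplies the number of keys kept in the shared dict.

```lua
-- Illustrative sketch only: shows how label cardinality drives the number of
-- stored series in nginx-lua-prometheus. Names are assumptions, not APISIX code.
local prometheus = require("prometheus").init("prometheus-metrics")

local requests = prometheus:counter("http_requests_total",
                                    "Number of HTTP requests",
                                    {"route", "consumer"})

-- Hypothetical values standing in for ctx.route_id / ctx.consumer_name.
local route_id, consumer_name = "route-42", "consumer-a"

-- With the consumer label populated: one series per (route, consumer) pair,
-- so the key space grows with routes x consumers.
requests:inc(1, {route_id, consumer_name})

-- After the sed patch the consumer value is always "", so all consumers of a
-- route collapse into a single series and the key space stops growing with
-- the number of consumers.
requests:inc(1, {route_id, ""})
```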
Thanks @boekkooi-lengoo, we have already patched some settings to disable the official metrics, which caused 100% CPU when too many records were present; currently only the bandwidth metric is left. I'm not certain whether the increasing memory usage is caused by the Prometheus plugin or not, nor do I understand why it is consuming so much memory.
@wklken have you solved your problem?
Please check whether the memory leak happens in Lua or in C.
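One rough way to approach that (a sketch under the assumption that the Prometheus shared dict is named `prometheus-metrics`; adapt the names to your setup) is to compare each worker's Lua VM heap with the process RSS. If the Lua heap stays flat while RSS keeps climbing, the growth is likely on the C side (shared dicts, allocator fragmentation) rather than in Lua.

```lua
-- Rough diagnostic sketch, e.g. for a debug/test location; not an official
-- APISIX API. collectgarbage("count") reports the Lua VM heap of the current
-- worker in KB; the process RSS (from `ps` or container metrics) also covers
-- nginx shared dicts and other C-side allocations.
local function report_memory()
    local lua_kb = collectgarbage("count")          -- Lua heap of this worker
    local dict = ngx.shared["prometheus-metrics"]   -- dict name is an assumption
    ngx.say(("worker %d lua heap: %.1f KB"):format(ngx.worker.id(), lua_kb))
    if dict then
        -- free_space()/capacity() require ngx_lua >= 0.10.11
        ngx.say(("prometheus dict: %d of %d bytes free")
            :format(dict:free_space(), dict:capacity()))
    end
end

report_memory()
```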
You can assign the issue to me, I will follow it.
@wklken I think this will help you.
@Vacant2333 we don't use service discovery in production.
Thanks @theweakgod, I will check that. (apisix 3.2.1 uses the
It does seem to be one of the reasons. Is it possible to test this possibility (upgrade
Has the problem been solved?
@wklken Is there any progress on this issue?
It cannot be verified in production, and we don't have an equivalent environment to verify with for the time being. We need to find a way to reproduce production-like traffic and apply load for a while (I may not have time to deal with this in the near future; I will update this issue after verification).
@wklken Has the problem been solved?
A clue: this issue is especially likely to appear on the image-upload and file-upload endpoints.
[memory usage chart] We did a rolling update to another release, and the memory didn't increase after about 1 week. @theweakgod I still can't reproduce the memory increase on my own cluster yet; I will try again later.
👌
It needs a huge number of metrics.
But from the provided chart: if the pull request "bugfix: limit lookup table size" is effective, the memory usage should not exceed 5.59 GB and should only keep increasing for no more than 7 days.
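For readers following along, here is a simplified sketch of the general shape of the problem that fix addresses (an assumption for illustration, not the library's real implementation): a per-worker lookup cache of full metric names that, uncapped, grows with every new label combination, which is why memory would plateau once the cap is hit or all combinations have been seen.

```lua
-- Simplified sketch of an unbounded per-worker lookup cache and the kind of
-- cap the fix introduces. Not the library's real implementation.
local LOOKUP_MAX = 1000
local lookup, lookup_size = {}, 0

local function full_name(metric, label_values)
    local key = metric .. "{" .. table.concat(label_values, ",") .. "}"
    if lookup[key] then
        return lookup[key]
    end
    -- Without a limit this table grows with every new label combination and is
    -- never evicted; with the fix it is reset once it reaches LOOKUP_MAX and
    -- entries are simply recomputed afterwards.
    if lookup_size >= LOOKUP_MAX then
        lookup, lookup_size = {}, 0
    end
    lookup[key] = key
    lookup_size = lookup_size + 1
    return key
end

-- With ~45,000 routes and several label combinations each, an uncapped cache
-- keeps growing until every combination has been seen at least once.
print(full_name("apisix_http_status", {"200", "route-42", "service-1"}))
```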
This issue has been marked as stale due to 350 days of inactivity. It will be closed in 2 weeks if no further activity occurs. If this issue is still relevant, please simply write any comment. Even if closed, you can still revive the issue at any time or discuss it on the [email protected] list. Thank you for your contributions. |
This issue has been closed due to lack of activity. If you think that is incorrect, or the issue requires additional review, you can revive the issue at any time. |
Description
After being deployed online for 2 weeks, we rescheduled the pods and then got the chart below: memory grew from 3.7 GB to 6 GB.
We have no ext-plugins.
About 45,000 routes.
I suspect it is caused by the prometheus plugin; once all routes have received traffic, shouldn't the set of keys in Prometheus be stable?
Is there any tool to analyze this? We don't have XRay.
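One low-tech check without XRay (a sketch, assuming the Prometheus shared dict is named `prometheus-metrics`): count the keys in the shared dict over time. If the count keeps growing long after all ~45,000 routes have received traffic, the metric key space itself is the likely culprit.

```lua
-- Sketch only; the dict name is an assumption. Note that get_keys() locks the
-- dict while it runs, so avoid calling it frequently on a busy instance.
local dict = ngx.shared["prometheus-metrics"]
if dict then
    local keys = dict:get_keys(0)   -- 0 = return all keys, not just the first 1024
    ngx.say("prometheus shared dict keys: ", #keys)
    ngx.say("free space: ", dict:free_space(), " bytes")
else
    ngx.say("prometheus shared dict not found")
end
```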
Environment
- APISIX version (run `apisix version`): 3.2.1
- OS (run `uname -a`):
- OpenResty / Nginx version (run `openresty -V` or `nginx -V`): openresty/1.21.4.1
- Server info (run `curl http://127.0.0.1:9090/v1/server_info`):
- LuaRocks version (run `luarocks --version`):