No objects ever released by the GC, potential memory leak? #4649
Comments
Are you using |
What uvicorn version are you using? Do you have a health check that sends a TCP ping? If the answers are "not the latest" and "yes", then bump uvicorn to the latest release. |
Is the application running in a Docker container? Inside a container, Python sees the memory and CPUs of the host, not the container's limits, which can prevent the GC from actually running. Similar problems have occurred in my application before; I solved them with reference to this issue: #596 (comment) |
I have solved this issue with the following settings:
|
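The actual settings aren't included in this thread. For illustration only, manual GC tuning in Python generally looks like the sketch below; the threshold values are hypothetical, not the commenter's configuration:

```python
# Hypothetical illustration only; not the settings referenced in the comment above.
import gc

# Lower thresholds make collections run more often (trading CPU for memory);
# the default is (700, 10, 10).
gc.set_threshold(500, 5, 5)

# An explicit full collection can also be triggered at a convenient point,
# e.g. periodically from a background task.
gc.collect()
```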
Running on Docker. I didn't have memory leak issues with fastapi 0.65.2 and uvicorn 0.14.0 in my project before. I then did a binary search of different fastapi versions (using uvicorn 0.17.6) to see where the memory leaks first appear. |
0.69.0 was the release that introduced AnyIO in FastAPI. Release notes: https://fastapi.tiangolo.com/release-notes/#0690 |
I tested using uvicorn 0.17.6 and both FastAPI 0.68.2 and 0.75.0. On 0.68.2, memory usage settled on 358 MB after 1M requests, and on 0.75.0, it was 359 MB. Is there something surprising about these results? |
I can't say exactly: my container is limited to 512 MiB and the base consumption of my app was already ~220 MiB, so an additional 350 MiB that then settles would be well within what I observe. It's just that prior to 0.69.0 I don't see any sharp memory increase at all: |
Can anybody else reproduce these results? |
How do we go about this? The issue is marked as a question, but the memory leak is definitely blocking me from updating FastAPI. Should I open a new ticket as a "problem"? |
To start with... people need to reply to @agronholm's question. |
I definitely have this same memory behaviour in some of my more complex services, i.e. memory utilization just keeps climbing and seemingly nothing is ever released, but I haven't been able to reduce it to a simple service that displays the same memory behaviour. |
Not sure if directly related, but I detected a leak when saving objects to the request state. The following code will retain the large array in memory even after the request was handled:
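The snippet itself isn't included in this thread. The sketch below reconstructs the kind of handler being described, based on the Starlette rewrite shown later in the thread; the route path and list size are assumptions:

```python
# Reconstruction for illustration; the commenter's exact code is not shown in this thread.
from fastapi import FastAPI, Request

app = FastAPI()


@app.get("/")
async def homepage(request: Request):
    # A large object stored on the per-request state; according to the report,
    # this memory is retained even after the request has been handled.
    request.state.test = [x for x in range(999999)]
    return {"hello": "world"}
```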
A working workaround is to null the state attribute at the end of the request. The complete example with a test script can be found here: I'll note in addition that I tried to run this code with older versions of FastAPI and got the same results (even going as far back as 0.65.2, as was suggested in an earlier note), hence I'm not sure it's directly related. |
In my case where I'm seeing it, I'm attaching a Kafka producer to the request.app variable. My question then is: how do I create a Kafka producer on startup that's accessible to endpoints without causing this leak issue? I want to avoid creating a new Kafka producer on every single request, because that's really inefficient: starting up a Kafka producer takes some time. |
@Atheuz I wouldn't be so sure that your use case creates a memory leak. In the case I'm showing, a new object is created for every request, which is what grows the memory usage with each request. As long as you stick to the same object you should be fine. |
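For reference, a common way to do what's being asked is to create one producer at startup and share it via application state. The sketch below assumes the aiokafka client; the broker address, topic, and route are placeholders:

```python
# Sketch only: one long-lived producer shared via app.state, assuming aiokafka.
from aiokafka import AIOKafkaProducer
from fastapi import FastAPI, Request

app = FastAPI()


@app.on_event("startup")
async def startup() -> None:
    # Created once per process, not per request.
    app.state.producer = AIOKafkaProducer(bootstrap_servers="localhost:9092")
    await app.state.producer.start()


@app.on_event("shutdown")
async def shutdown() -> None:
    await app.state.producer.stop()


@app.post("/events")
async def publish(request: Request):
    # Reuse the same producer object on every request.
    await request.app.state.producer.send_and_wait("events", b"payload")
    return {"status": "queued"}
```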
Well, that is indeed strange behaviour. I found that it is not just FastAPI; this actually manifests itself in Starlette directly. I rewrote your main.py to test this:

```python
from starlette.applications import Starlette
from starlette.responses import JSONResponse
from starlette.routing import Route
from starlette.requests import Request


async def homepage(request: Request):
    request.state.test = [x for x in range(999999)]
    return JSONResponse({'hello': 'world'})


app = Starlette(routes=[
    Route('/', homepage),
])
```

And this goes crazy on the memory as well. When assigning the big object to a random variable instead, the memory usage remains normal. I would recommend raising this in the Starlette repo; fundamentally, the fix must be implemented in that code base anyway. |
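For contrast, a minimal sketch of the variant mentioned above, where the big object goes into a local variable rather than request.state; per the comment, memory usage stays normal in that case:

```python
# Contrast variant: the big list is a local variable, not stored on request.state;
# per the comment above, memory usage remains normal in this case.
from starlette.requests import Request
from starlette.responses import JSONResponse


async def homepage_local(request: Request):
    test = [x for x in range(999999)]  # local; eligible for collection after return
    return JSONResponse({'hello': 'world', 'items': len(test)})
```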
Cheers @JarroVGIT. Started a discussion there. Used your example, hope you don't mind. |
The memory leak in uvicorn is probably not the cause of my issue though. First of all, it only happens with FastAPI >=0.69.0, and I also had apps where that happens where I don't even use |
@agronholm @Kludex

Env Setting

Server

```python
from fastapi import FastAPI, APIRouter
from starlette.middleware.base import (
    BaseHTTPMiddleware,
    RequestResponseEndpoint,
)
from starlette.requests import Request


class Middleware(BaseHTTPMiddleware):
    async def dispatch(self, req: Request, call_next: RequestResponseEndpoint):
        return await call_next(req)


router = APIRouter()


@router.get("/_ping")
async def ping():
    return "pong"


app = FastAPI()
app.include_router(router)
for _ in range(3):
    app.add_middleware(Middleware)


if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="127.0.0.1", port=14000)
```

Client

```python
import requests
import multiprocessing as mp


def do():
    while 1:
        rsp = requests.get("http://127.0.0.1:14000/_ping")
        print(rsp.status_code)


for _ in range(20):
    p = mp.Process(target=do, daemon=True)
    p.start()

import time

time.sleep(1000000)
```
|
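As a side note, instead of eyeballing top, the server process's resident memory can be polled while the client processes hammer /_ping. A small sketch, assuming psutil is installed and the server PID is passed on the command line:

```python
# Optional monitoring helper (sketch): poll a process's RSS, assuming psutil is available.
import sys
import time

import psutil


def watch(pid: int, interval: float = 5.0) -> None:
    proc = psutil.Process(pid)
    while True:
        rss_mib = proc.memory_info().rss / (1024 * 1024)
        print(f"pid={pid} rss={rss_mib:.1f} MiB")
        time.sleep(interval)


if __name__ == "__main__":
    watch(int(sys.argv[1]))
```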
Would you mind bumping |
@Kludex Thanks for the reply! |
I cannot reproduce the leak. Can you share your results and tooling? |
It would also help to test if you can reproduce the problem on Starlette alone. |
This is the Dockerfile that can reproduce the leak:

```dockerfile
FROM continuumio/anaconda3:2019.07

SHELL ["/bin/bash", "--login", "-c"]

RUN apt update && \
    apt install -y procps \
    vim

RUN pip install fastapi==0.81.0 \
    uvicorn==0.18.3

WORKDIR /home/root/leak
COPY client.py client.py
COPY server.py server.py
```

Run the following commands:

```bash
docker build -t leak-debug:latest -f Dockerfile .
docker run -it leak-debug:latest bash

# in container
nohup python server.py &
nohup python client.py &
top
```

The memory goes to 1GB in about 3 minutes. |
@agronholm Thanks. Here is the Starlette-only version of server.py:

```python
# server.py
from starlette.applications import Starlette
from starlette.middleware import Middleware
from starlette.routing import Route
from starlette.middleware.base import (
    BaseHTTPMiddleware,
    RequestResponseEndpoint,
)
from starlette.requests import Request
from starlette.responses import PlainTextResponse


class TestMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, req: Request, call_next: RequestResponseEndpoint):
        return await call_next(req)


async def ping(request):
    return PlainTextResponse("pong")


app = Starlette(
    routes=[Route("/_ping", endpoint=ping)],
    middleware=[Middleware(TestMiddleware)] * 3,
)


if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="127.0.0.1", port=14000)
```
|
That Dockerfile won't build for me:
I tried using the official |
@agronholm

```dockerfile
FROM python:3.7.12

RUN pip install fastapi==0.81.0 \
    uvicorn==0.18.3 \
    requests

WORKDIR /home/root/leak
COPY client.py client.py
COPY server.py server.py
```
|
I can reproduce it on Python 3.7.13, but it's not reproducible from 3.8+. Notes:
I won't spend more time on this issue. My recommendation is to bump your Python version. In any case, this issue doesn't belong to FastAPI. |
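If it helps, that recommendation can be made explicit at startup. A sketch only; the 3.8 cutoff reflects the reproduction notes above, not any official FastAPI requirement:

```python
# Fail fast on interpreters where the leak was reported as reproducible (< 3.8).
# This mirrors the recommendation above; it is not an official FastAPI check.
import sys

if sys.version_info < (3, 8):
    raise RuntimeError(
        f"Python 3.8+ recommended; running {sys.version.split()[0]}"
    )
```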
@Kludex |
Hi, is there any workaround or solution to avoid this? It's happening on the latest versions of the libs. |
Can you prove it with a reproducible code sample? |
Also, what version of Python are you running? |
This issue was moved to a discussion.
You can continue the conversation there. Go to discussion →
First Check
Commit to Help
Example Code
Description
Use the minimal example provided in the documentation and call the API 1M times. You will see that the memory usage keeps piling up but never goes down; the GC can't free any objects. It's very noticeable once you have a real use case, like a file upload, that DoS'es your service.
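For reference, a loop like the one described can be driven with a few lines of Python. This is a hypothetical sketch against a locally running instance of the minimal example; the URL and request count are placeholders:

```python
# Hypothetical load loop: hit a locally running app many times and watch its memory externally.
import requests

URL = "http://127.0.0.1:8000/"  # placeholder for the minimal example's address

for i in range(1_000_000):
    requests.get(URL)
    if i % 50_000 == 0:
        print(f"{i} requests sent")
```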
Here are some examples from a real service in k8s, via Lens metrics:
Operating System
Linux, macOS
Operating System Details
No response
FastAPI Version
0.74.1
Python Version
Python 3.10.1
Additional Context
No response