Memory leak in client when using python wrapper to send request to service #822
Do you mind explaining why you think there is a memory leak? |
This problem was found in our project; I need time to separate out test code to reproduce it. As you can see, there is a 1 MB data member in the response data struct, and each time I receive a response from the server I see about 1 MB more memory used by the client node, so memory usage keeps increasing as the test continues. |
Thanks for the explanation! It would be great if you could provide a self-contained example that could be used for stress-testing. |
Hi, I have listed the key information below; I think it is enough to reproduce the problem easily. 1. Request/Response data struct:
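(The original definition is not reproduced above; a minimal sketch with hypothetical names could look like the following, the key point being the roughly 1 MB byte array in the response.)

```
# BigPayload.srv (hypothetical interface name)
int64 request_id
---
uint8[1048576] payload  # ~1 MB byte array in the response
```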
2. Service code (C++):
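(The service source is not shown above either; a minimal do-nothing C++ service along these lines, assuming the hypothetical BigPayload interface above lives in a package named big_payload_interfaces, reproduces the setup.)

```cpp
// Minimal C++ service: receive the request, do nothing, send the response.
// Assumes the hypothetical big_payload_interfaces/srv/BigPayload interface above.
#include <memory>

#include "rclcpp/rclcpp.hpp"
#include "big_payload_interfaces/srv/big_payload.hpp"

using BigPayload = big_payload_interfaces::srv::BigPayload;

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  auto node = rclcpp::Node::make_shared("big_payload_server");
  auto service = node->create_service<BigPayload>(
    "big_payload",
    [](const std::shared_ptr<BigPayload::Request> /*request*/,
       std::shared_ptr<BigPayload::Response> /*response*/) {
      // Do nothing: the ~1 MB response is returned with default contents.
    });
  rclcpp::spin(node);
  rclcpp::shutdown();
  return 0;
}
```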
3. Client code (Python):
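(And a minimal rclpy client, again assuming the hypothetical interface above, that calls the service in a loop; every received response carries the ~1 MB payload.)

```python
# Minimal rclpy client calling the service in a loop.
# Assumes the hypothetical big_payload_interfaces/srv/BigPayload interface above.
import rclpy
from rclpy.node import Node

from big_payload_interfaces.srv import BigPayload


def main():
    rclpy.init()
    node = Node('big_payload_client')
    client = node.create_client(BigPayload, 'big_payload')
    while not client.wait_for_service(timeout_sec=1.0):
        node.get_logger().info('waiting for service...')

    request = BigPayload.Request()
    while rclpy.ok():
        future = client.call_async(request)
        rclpy.spin_until_future_complete(node, future)
        future.result()  # the ~1 MB response is received on every iteration

    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```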
BTW, the Python wrapper's performance is very slow when your data struct includes a big byte array, as in the above example, but that is another problem. |
Yeah, that's a known problem: ros2/rosidl_python#134 (Edit: looks like you're aware of this already) I will try the example that you provided. Thanks! |
Hi, have you reproduced the problem? |
Problem confirmed: in the process space there are a lot of heap memory areas mapped, and as long as the client/service keep running, virtual/physical memory increases, under a colcon environment.
CC: @Barry-Xu-2018 @iuhilnehc-ynos, could you take a look if you have time? I guess this is a memory leak, if I am not mistaken... |
I can confirm the reported issue as well using the sample program linked above. For discussion/debugging convenience, I've produced a couple of charts that show what's happening. Use this script to reproduce said charts when demonstrating a fix. I won't have the bandwidth to return to this issue for a while, but from a cursory overview it does look like a memory leak in the client. |
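The referenced script is not included here. As a rough, hypothetical substitute, a small psutil-based sampler like the one below can log the client process's RSS over time and produce a similar chart:

```python
# Hypothetical RSS sampler (not the script referenced above).
# Usage: python3 sample_rss.py <pid-of-client-process>
import sys
import time

import psutil


def sample_rss(pid: int, interval_s: float = 1.0) -> None:
    """Print the resident set size of the given process once per interval."""
    proc = psutil.Process(pid)
    start = time.monotonic()
    while proc.is_running():
        rss_mib = proc.memory_info().rss / (1024 * 1024)
        print(f"{time.monotonic() - start:8.1f}s  RSS = {rss_mib:8.1f} MiB", flush=True)
        time.sleep(interval_s)


if __name__ == "__main__":
    sample_rss(int(sys.argv[1]))
```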
I'd like to share something about this issue: convert_to_py(void * raw_ros_message) does not own the raw_ros_message. In Service::service_take_request:
```cpp
auto taken_request = create_from_py(pyrequest_type); // allocate a buffer, owned by a unique_ptr
...
result_tuple[0] = convert_to_py(taken_request.get(), pyrequest_type); // converts, but does not take ownership
taken_request.release(); // Delete this line, because this function has the responsibility to deallocate the buffer
...
```
|
@llapx you are right. For reference, I believe replacing taken_request.release(); with just taken_request.reset(); would also work. |
You don't need to do reset() manually; just let the unique_ptr clean up with its deleter when it goes out of scope. Refer to rclpy/rclpy/src/rclpy/action_client.cpp, lines 102 to 121, at 691e4fb.
|
@iuhilnehc-ynos In my understanding, the release() function transfers ownership of the buffer from the unique_ptr to the caller, right? But the Python wrapper never receives that raw pointer, so the buffer never has a chance to be freed. Correct me if I am wrong! |
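For illustration only (this is not the actual rclpy code), the difference between calling release() and letting the unique_ptr clean up is:

```cpp
// Generic illustration of the ownership problem, not the rclpy implementation.
#include <memory>

struct Response {
  char payload[1024 * 1024];  // stand-in for the ~1 MB response field
};

void leaky_take() {
  auto msg = std::make_unique<Response>();
  // ... convert / copy msg's contents into another object ...
  Response * raw = msg.release();  // the unique_ptr gives up ownership without freeing
  (void)raw;                       // nobody deletes raw, so ~1 MB leaks on every call
}

void correct_take() {
  auto msg = std::make_unique<Response>();
  // ... convert / copy msg's contents into another object ...
}  // the unique_ptr's deleter runs here and frees the buffer

int main() {
  leaky_take();
  correct_take();
}
```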
Either of you, can you make a PR against this issue? Let's review and fix the problem in the mainline. |
Okay, I see #828, one step behind... 😢 Sorry!
I will go ahead and close this.
I found this problem in the latest Galactic release. It is simple to reproduce: write a simple service (C++) and a client (Python), and the memory leak will definitely happen. It does not happen when using C++ for the client.
You can write a simple service that just receives the request, does nothing, and sends a response back to the client.