Continuous SUBSCRIBE causes memory consumption to rise #181
Thanks for reporting this! It's really the kind of issue I would like to get solved inside Mosca. I think this happens because the connections are buffering data to be sent, and those buffers are never cleared correctly. Can you replicate it using just MQTT.js and upload a gist, so I can try solving it here? Thanks. I think it may be related to #170, i.e. we are no longer disconnecting clients for unauthorized subscribes.
One more question: which version of node.js are you running?
Node version is v0.10.28.
The code that plays the cracker role is like this: https://gist.github.com/mocheng/d5d7aa532c12c8ac8b24
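For context, a minimal sketch of that kind of storm script (this is not the actual gist; the host, port, topic, and interval are illustrative, and it uses the current MQTT.js `connect` API):

```js
// Sketch: hammer a local Mosca broker with SUBSCRIBE packets.
var mqtt = require('mqtt');

var client = mqtt.connect('mqtt://localhost:1883');

client.on('connect', function () {
  // Keep issuing SUBSCRIBEs; if the broker buffers SUBACKs faster than the
  // socket drains them, its memory usage keeps growing.
  setInterval(function () {
    client.subscribe('some/topic');
  }, 1);
});

client.on('error', function (err) {
  console.error('client error:', err.message);
  client.end();
});
```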
This might be due to a problem in our application code. For easy debugging, we have the code below to hook MQTT.js:

```js
var mqtt_lib = require(path_to_mqtt);

function log_receiving_packets() {
  // Monkey-patch MqttConnection's _parsePayload to log every received packet.
  var parsePayload = mqtt_lib.MqttConnection.prototype._parsePayload;
  mqtt_lib.MqttConnection.prototype._parsePayload = function(parse) {
    logger.mosca.info('@@@receive message: ' + this.packet.cmd);
    var result = parsePayload.call(this, parse);
    logger.mosca.info(result);
    return result;
  };
}
```

After removing the lines with the logging calls, the memory problem goes away. I'm not clear about the reason, but it seems there is some closure memory leak here.
This really depends on where you put that code. It does not look problematic by itself, but it really depends on where you create that closure. If it creates a reference loop between objects, you might screw the garbage collector. Anyway, monkey-patching libraries is usually a very bad idea. You can get the same feature by hooking into the events Mosca already emits. Let me know if we can consider this solved.
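A minimal sketch of that approach, assuming the standard Mosca server events (`published`, `subscribed`, `ready`); settings and log lines are illustrative:

```js
// Sketch: log incoming traffic via Mosca's own events instead of
// monkey-patching MQTT.js internals.
var mosca = require('mosca');

var server = new mosca.Server({ port: 1883 });

server.on('published', function (packet, client) {
  console.log('PUBLISH on', packet.topic, 'from', client ? client.id : 'internal');
});

server.on('subscribed', function (topic, client) {
  console.log('client', client.id, 'subscribed to', topic);
});

server.on('ready', function () {
  console.log('Mosca broker is up');
});
```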
After removing the code that leads to the leak, I ran another script storming Mosca with PUBLISH packets, as in https://gist.github.com/mocheng/d5d7aa532c12c8ac8b24#file-storm-with-publish . Then the Mosca server's memory consumption rises rapidly. I guess this is expected; the Mosca server is not supposed to handle a limitless flood of PUBLISH requests, right?
It should rise rapidly as data arrives, but it should stabilize and eventually drop when the pressure is relieved. Or does the memory consumption lead to a crash?
Plus, the current version of Mosca cannot handle backpressure over MQTT, but it will. |
It stabilizes and drops back to an OK level after the pressure is relieved, though it takes about 15 minutes. Works as expected.
Anyway, it's a backpressure issue. Data is arriving too fast for Mosca; it should slow the client down and eventually disconnect the offending one. Unfortunately the 'bug' is in MQTT.js, so I'll try solving that first, and then we'll see.
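For illustration, this is the generic Node.js backpressure pattern being described: stop reading from a connection whose outgoing buffer is not draining, and drop the client past a hard limit. This is not Mosca's actual code; the threshold and helper are illustrative.

```js
var net = require('net');

var HARD_LIMIT = 1024 * 1024; // 1 MiB of queued, unsent data

net.createServer(function (socket) {
  socket.on('data', function (chunk) {
    var payload = buildResponse(chunk);   // hypothetical helper
    var flushed = socket.write(payload);  // false => write buffer is full

    if (socket.bufferSize > HARD_LIMIT) {
      socket.destroy();                   // disconnect the offending client
    } else if (!flushed) {
      socket.pause();                     // stop reading until the buffer drains
      socket.once('drain', function () {
        socket.resume();
      });
    }
  });
}).listen(1883);

function buildResponse(chunk) {
  return chunk; // placeholder echo
}
```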
This happens in production. A bug on our client side exposed the problem.
When a client fails to subscribe to a topic, it just keeps reconnecting and subscribing again. However, there is a bug in the client-side application: every time it reconnects, it sends two more SUBSCRIBE packets. That is, the Mosca server gets a lot of CONNECT packets, each followed by a bunch of SUBSCRIBE commands.
This causes memory consumption to rise, as shown in the image below:
We reproduced this issue and took some heap dumps. There are a lot of `WriteReq` objects created during the SUBSCRIBE storm. Something wrong in `stream`? Even though this is caused by a client-side bug, it could be a weak point for a malicious attack.
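A sketch of how such heap snapshots can be captured with the `heapdump` module (assuming it is installed; the path and interval are illustrative): load the resulting `.heapsnapshot` files in Chrome DevTools and compare them to see the growing `WriteReq` count.

```js
var heapdump = require('heapdump');

// Write a snapshot every 30 seconds while the broker is under the SUBSCRIBE storm.
setInterval(function () {
  heapdump.writeSnapshot('/tmp/mosca-' + Date.now() + '.heapsnapshot', function (err, filename) {
    if (err) {
      console.error('snapshot failed:', err);
    } else {
      console.log('snapshot written to', filename);
    }
  });
}, 30000);
```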