Duplication of some forwardOfflineMessages? #271

I see my client occasionally receives an offline message multiple times (within 60-70 milliseconds); it seems Mosca is forwarding a message twice!?
I can't find a clue looking at the code @mcollina

Comments
An offline message is resent if Mosca did not receive a PUBACK. Can you check that your client is sending it? |
Yes, it is sending, and my onDelivered event, which is called on receiving a PUBACK, is called 2-3 times within 120 ms. (It only happens for clients on mobile data (slow) networks.) |
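For reference, one way to check this on the client side is to log outgoing PUBACK packets. A minimal sketch, assuming an MQTT.js client; the broker URL, client id, and topic are placeholders, and 'packetsend' is an MQTT.js event, not something Mosca provides:

```js
// log every PUBACK the client sends, with its messageId and a timestamp,
// so duplicates within the same ~120 ms window become visible
const mqtt = require('mqtt');

const client = mqtt.connect('mqtt://localhost:1883', {
  clientId: 'my-client',   // placeholder id
  clean: false,            // persistent session, so offline messages are queued
  keepalive: 120
});

client.subscribe('some/topic', { qos: 1 });

client.on('packetsend', (packet) => {
  if (packet.cmd === 'puback') {
    console.log(Date.now(), 'PUBACK sent for messageId', packet.messageId);
  }
});

client.on('message', (topic, payload, packet) => {
  console.log(Date.now(), 'received messageId', packet.messageId, 'on', topic);
});
```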
It's resent only because Mosca sees the device as disconnected and reconnects it. There is no automatic retry (there was in an old version). What's the keepalive of the device? |
keepAlive = 120s |
Check in the logs whether it sees the device as disconnected. I found out that on mobile networks the best keepalive is 30 minutes. |
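For reference, the keepalive is a client-side connect option, expressed in seconds. A minimal sketch assuming an MQTT.js client, using the 30-minute value suggested above; the URL and client id are placeholders:

```js
const mqtt = require('mqtt');

const client = mqtt.connect('mqtt://localhost:1883', {
  clientId: 'mobile-client',  // placeholder id
  clean: false,               // keep the session so offline messages are stored
  keepalive: 1800             // 30 minutes, in seconds
});
```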
Even with a 15-minute keepalive, I saw a single offline message forwarded 6 times (6 PUBACKs) after reconnect. (The time between disconnect and reconnect was about 2 hours.) |
There is no code that does retransmissions; messages are forwarded only after a connect. What Mosca backend are you using? Which client? |
I see, @mcollina, I just can't figure out what is happening... |
Could it be the client sending multiple PUBACKs for a single message packet forwarded by Mosca!? |
Do you have any way to check if that's the case? Turn Mosca's logs to debug. |
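A minimal sketch of enabling debug logs, assuming the `logger` option from Mosca's README; the exact level value ('debug' vs. a numeric level) may depend on the Mosca version:

```js
const mosca = require('mosca');

const server = new mosca.Server({
  port: 1883,
  logger: { level: 'debug' }  // assumed option shape; raises verbosity so forwarding activity shows up
});

server.on('ready', () => {
  console.log('Mosca broker is up with debug logging');
});
```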
It shows that a single messageId is forwarded multiple times by Mosca. |
Are you sure it's the same problem? It doesn't seem related to timing or retransmission at all. You are getting the same message multiple times, but within the same timeframe (at connection!). |
Can you please add the logs where that message is stored? Probably something bad happens there, as I see no flaw in the code. Here is the relevant logic: https://github.com/mcollina/mosca/blob/master/lib/persistence/redis.js#L333-L396 Which version of Redis are you using? |
I see the same behavior with our Objective-C MQTT client as well, and I'm seeing it more frequently lately. It has been happening many times this week (I can't find out why; there are no changes in my code that could relate to this, so it must be something in our overall configuration/setup). I will try to check the Redis keys when messages are stored... that should explain why Mosca is forwarding a single packet multiple times to a single client id. A question: can multiple subscriptions of the same clientId to a single topic produce this, given that our clients are accidentally re-subscribing on reconnects? |
Can you please check if it is related to the cluster? I don't think this is related to multiple subscriptions: see https://github.com/mcollina/mosca/blob/master/lib/persistence/redis.js#L246-L264. Also, the Redis key where subscriptions are stored is shown there. |
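For reference, a rough way to inspect what is stored for a client, assuming a local Redis and the `ioredis` module; the key pattern below is a placeholder — the actual key names and prefixes are the ones used in the linked redis.js:

```js
const Redis = require('ioredis');

async function dumpKeys(pattern) {
  const redis = new Redis();               // assumes Redis on localhost:6379
  const keys = await redis.keys(pattern);  // KEYS is fine for debugging, not for production
  for (const key of keys) {
    const type = await redis.type(key);
    console.log(key, '->', type);
  }
  await redis.quit();
}

// placeholder pattern: substitute your client id / the prefixes from redis.js
dumpKeys('*my-client-id*').catch(console.error);
```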
good suspect! thank you @mcollina |
It is a storePacket issue when using a cluster: all the other processes add duplicate sub-matchers when storeSubscriptions is called for a client: https://github.com/mcollina/mosca/blob/master/lib/persistence/redis.js#L107 Then, for each packet, each process matches multiple items for the client: https://github.com/mcollina/mosca/blob/master/lib/persistence/redis.js#L327 If you think this is a valid workaround, I can create a PR. |
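For reference, a minimal sketch of the suspected mechanism using the `qlobber` module; whether Mosca's Redis persistence builds its sub-matchers exactly this way is an assumption based on the linked redis.js:

```js
const Qlobber = require('qlobber').Qlobber;

const matcher = new Qlobber({ separator: '/', wildcard_one: '+', wildcard_some: '#' });

// if every clustered process replays storeSubscriptions for the same client,
// the same subscription gets added more than once...
matcher.add('some/topic', 'client-1');
matcher.add('some/topic', 'client-1');

// ...so a single stored packet matches the client multiple times,
// and is forwarded once per match
console.log(matcher.match('some/topic')); // [ 'client-1', 'client-1' ]
```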
All clustered instances of Mosca need to have different ids: https://github.com/mcollina/mosca/blob/master/lib/server.js#L154. Probably in your case they all have the same one, and this bad behavior happens. Anyway, send a PR that fixes it :D. |
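For reference, a minimal sketch of giving each clustered process its own broker id, assuming the `id` option read in the linked server.js and the Redis persistence factory from Mosca's README; `'broker-' + process.pid` is just one way to get a per-process value:

```js
const mosca = require('mosca');

const server = new mosca.Server({
  port: 1883,
  id: 'broker-' + process.pid,                     // unique per process, assuming the id option
  persistence: { factory: mosca.persistence.Redis }
});

server.on('ready', () => {
  console.log('broker ready with id', server.id);
});
```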
Hmmm! I remember that back when I was using the process id in my Mosca instance ids, the issue was not happening :)) That was probably the change that made the problem pop up. I changed the Mosca id to my machine's unique name so that I have a unified id across processes. Is it recommended to use separate, per-process Mosca ids? |
You have to use separate, per-process Mosca ids. It might be worth guarding the system against this, as bad things happen when uniqueness is not respected. That id is used internally for some things, not just for logging. I'm more inclined toward guarding against id collisions (crashing if there is an id collision) than supporting this use case, as it might lead to other issues I cannot foresee right now. This comment is definitely wrong: https://github.com/mcollina/mosca/blob/master/lib/server.js#L153 |
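For reference, a minimal sketch of the "crash on id collision" idea, using a shared Redis and the `ioredis` module; the key name, the registration scheme, and the overall approach are illustrative assumptions, not Mosca's actual behavior:

```js
const Redis = require('ioredis');

async function claimBrokerId(id) {
  const redis = new Redis();  // assumes the same Redis that all brokers share
  // SET ... NX only succeeds if no other running broker registered this id
  const ok = await redis.set('mosca:broker-id:' + id, String(process.pid), 'NX');
  await redis.quit();
  if (ok !== 'OK') {
    throw new Error('Broker id collision: "' + id + '" is already in use');
  }
}

claimBrokerId('broker-' + process.pid).catch((err) => {
  console.error(err.message);
  process.exit(1);  // fail fast instead of running with a duplicate id
});
```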
Then I should roll back my commit and use separate ids per process for now!? |
That should be the best approach, yes. This needs to be handled anyway; we might want to have something for the logs, and a non-overridable, per-process identifier. What do you think? |
If you mean the broker id should be set internally by the broker, I agree. |
@mcollina I'm confirming that rolling back my change, and using different broker ids is regenerating the issue! |
What's the status of this, @mcollina? I think we can merge this in. |