token.Wait() does not wait for completion #563
I'm not sure I'd go quite that far. I'm guessing that it was intended at some point in the past (possibly when persistence was initially introduced), and changing it now is likely to break existing usages. When this library was originally created, tokens were seen as a good idea, but that is no longer the case (which is why the v5 client is a complete rewrite). This behaviour does make some sense: what should matter to the user is that messages will be delivered. Whether that means they have been transmitted, or stored on the client itself for transmission in the future, should not really be the user's concern (because the library automates this). I accept that this can be confusing, but returning an error would also be confusing (because it could be taken to indicate that the message will not be sent, when in fact it will be resent when the client reconnects). In your case you are attempting to work around a buggy broker, with a slightly unusual requirement (connecting to deliver one message). There are a few options:
Yes, but that is not happening at the moment. The library is saying the message has been delivered, yet no message has been sent.
Well, the error should not be returned when the client reconnects, but rather when the message fails to be delivered (or `.Wait()` should hang forever if it can never be delivered).
Yeah, and if that is the case then we could document the behaviour instead, as it was quite difficult to understand, at least for me.
Perhaps, but the broker is working as intended (as I intended it :D). I managed to make a working solution by handling reconnects myself.
Thanks for the pull request - I'll review shortly.
The challenge here is that the library has heaps of options, which makes documenting this kind of thing fairly complicated. In your situation you are using the memstore; this is only active while the application is running. The main library appears to have been written with the assumption that the store is persistent, which is why no error is returned when the connection is lost.
Unfortunately this exposes one of the issues with the whole token approach. A frequent usage pattern is to kick off a goroutine to monitor the token, which means that leaving the token hanging is likely to result in goroutine leaks, which we really want to avoid! So, as written, the token needs to complete (either with an error or without), and neither option is great in this situation. I would add that the token does not need to complete when the client will initiate a reconnect (and it would be nice if an error was only returned when no reconnect would be attempted). Looking at the code further, the situation is more complex than I had thought: in many situations (where auto reconnection is not used), on loss of connection the tokens will just hang. I'm also not sure how much work to put into this; I'd rather put the available time (not much at the moment) into adding support for persistence in the v5 client (which does not use tokens, and is a lot simpler to work with because of this!).
Alright, I see. If I understand you correctly, it seems like the design of the library (with persistence) makes it very tricky to know whether messages have been delivered or not. Based on my rather limited understanding of the library, perhaps this could be outsourced to the library user by notifying the user of all
Yeah, for sure. I was almost about to suggest we add another option that disables the filestore :)
Not much I'd say. It is clear this is a very tricky situation and it is manageable by simply handling reconnects manually. I think this issue and the added caveat in the README is more than enough.
Personally I haven't spent time figuring out whether migrating to MQTT v5 is always worthwhile; I guess I should look into that. That said, given what you just said, it sounds like persistence is very tricky. Is it worth adding that complexity to the v5 client also?
Kind of. The issue is that if persistence (using a persistent store, not the memstore) is enabled, then the message should be delivered eventually (even after a restart). If an error is returned, then some users may assume this means they need to resend their message, which would lead to duplicates.
Persistence is part of the MQTT spec, so it's something that a full client implementation needs; without it you cannot trust the QoS 1+ guarantees. In many use cases it really simplifies things. For example, I generally use MQTT as a transport for messages from a variety of monitoring systems; I want to be able to hand a message to the client and trust that it will be delivered eventually. If the library does not support this then users need to implement it themselves and, given the complexity, there are likely to be bugs in their implementations. It seems better to have an implementation within the library so that we all benefit from each other's efforts to improve it (I've put a lot of work into persistence on the v3 client because it was something I needed). The plan with the v5 client is to implement persistence as an add-on so that the complexity is kept out of the main library. Unfortunately, coming up with a design that achieves this is taking a lot longer than I hoped!
The v5 client now offers session persistence (if you grab the master branch; we hope to make a release this month). I'm going to leave this issue open; if anyone wants to work on a solution to the issues mentioned, they are welcome to have a go (but it's probably best to discuss proposed solutions first).
I'm trying to use this library in a test. What I want to do is send some messages and then exit. My broker will disconnect the mqtt connection if I try to send messages too early so I want to retry until I know the message is delivered so I can exit when that is done.
The issue is that `token.Wait()` doesn't always wait until the message is actually delivered. Reading through the issues, especially this comment from @MattBrittan, it seems like this is "intended" behavior. I'm using paho.mqtt.golang v1.3.5.
Here is some code to reproduce the issue:
and the logs:
My understanding of what is happening is this:
- `c.claimID(token, details.messageId)` is called here, resulting in the original token being considered complete
- `c.Disconnect(250)` is called as `token.Wait()` completed successfully

To solve my immediate issue I will try to disable auto-reconnect and handle reconnects myself.
Would it not make sense to have `token.Wait()` wait until the message is actually delivered, regardless of reconnects? At least documenting the caveat would be nice.