Low-volume consumers regularly replay old partition offsets #17
Comments
Hmm, I don't see this happening locally. At least not trivially:

- initial run: 5 Kafka messages from a core transaction
- waited 5+ seconds, then aborted
- spun up the node again
- 5 more transactions
- immediately quit the node and restarted
- the 5 previous messages are reconsumed
- 5 more messages
- waited 5+ seconds, then aborted/restarted
I tried killing the node with
@rwdaigle Can you link me to a Librato view showing an event?
A week is a bit too far out :-D This seems like a relevant point? https://metrics.librato.com/s/spaces/356792?duration=473&end_time=1487944831
I don't know when that app was restarted (is that midnight to Heroku?), but it looks like all those errors consumed at that time are old. So the theory is that the app was restarted and then consumed all those errors again. So that was about 1am GMT?
Right, that's my understanding as well. At least from this issue so far 😄
I wonder if this is an issue with having a consumer group with a lot of topics and partitions. Since the offset is committed per partition, maybe things can break down?
Maybe. I think I've seen it to some degree in Keyster, which has two topics, each with 32 partitions.
When you dig into potential issues with the underlying client and find your own issue talking about how these things fit together 😀
I'm actually noticing replay of events in low-volume topics now too; any advice?
@rawkode I think that will happen any time the messages in the topic that tracks the offsets (which is internal to Kafka) expire before the messages themselves. There may be a configuration setting for this in the Kafka broker. Alternatively, if the last offset is recommitted, I believe that refreshes it with the broker and you won't get the replay. I've not tested this, however. This would get more complicated across server restarts, of course. 😉
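
For reference, the interplay described here comes down to two retention clocks: the broker's committed-offset retention (`offsets.retention.minutes`) and the topic's own message retention (`retention.ms` / `log.retention.hours`). If a low-volume group goes longer than the offset-retention window without committing, its committed offset expires, and the next restart falls back to the reset policy. Below is a minimal config sketch; the broker keys are standard Kafka settings, while the consumer-side keys are brod group-coordinator options, and it is only an assumption that the Kaffe version in use passes them through.

```elixir
# Sketch only: illustrates the two retention clocks discussed above.
#
# Broker side (server.properties), shown as comments since it is not Elixir:
#   offsets.retention.minutes=10080   # how long a committed offset survives without a fresh commit
#   log.retention.hours=168           # how long the messages themselves survive
#
# Consumer side: brod's group coordinator commits on an interval and can ask the
# broker to retain the committed offset for longer. Whether Kaffe forwards these
# keys to :brod_group_coordinator is an assumption; verify against the Kaffe/brod
# versions actually in use.
import Config

config :kaffe,
  consumer: [
    endpoints: [kafka: 9092],
    topics: ["error-topic"],                    # hypothetical low-volume topic
    consumer_group: "kafkacat-consumer",        # the canary group from this issue
    message_handler: MessageProcessor,
    offset_commit_interval_seconds: 5,          # brod's default commit cadence
    offset_retention_seconds: 60 * 60 * 24 * 7  # ask the broker to keep the offset for a week
  ]
```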
We're experiencing a recurring issue with low-volume consumers (like those for error topics) where the partition offset isn't being regularly ack'd back to the brokers, causing the same messages to be replayed. While some offset replay is expected with higher-volume topics, I don't think it should be happening with low-volume ones as well.
Consider the following evidence:
Every time the canary (kafkacat-consumer) dyno restarts, it reads in a few messages. That's suspect, given that our error message production (from other dynos) is not that even.
Consider also that at each of these dyno restarts, the partition offsets of the messages received by the consumer don't always increase! See partition29 as an example, where it drifts sideways for two measurements (which also should never happen), then up, then down.
This tells me that Kaffe consumers for low-volume topics don't ack their offsets back to the broker frequently enough, so when they're restarted they start back on an offset that was already received. I know we've looked at this in the past, and I believe you said the partition is ack'd every 5s, but I have reason to believe that's not the case. It appears to be more volume-based than anything (once every x messages?).
We should make sure low-volume consumption behaves more predictably.
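
One way to check the volume-versus-time question empirically is to compare, across a restart, the offset the broker has committed for the group against the partition's log-end offset: if the committed offset lags well behind after a quiet period, commits are not landing on the 5s interval. Here is a rough diagnostic sketch (run from `iex`, using the brod client that Kaffe drives underneath); the function arities, topic name, and group name are assumptions to check against the brod version in use.

```elixir
# Rough diagnostic sketch: compare the broker-side committed offset for the
# consumer group with the partition's log-end offset. The topic name is
# hypothetical and the brod calls assume a brod 3.x-style API.
hosts = [{"localhost", 9092}]
topic = "error-topic"        # hypothetical low-volume topic
group = "kafkacat-consumer"  # the canary consumer group mentioned above
partition = 29               # the partition called out in the graphs

# Latest offset the broker holds for the partition (log-end offset).
{:ok, log_end} = :brod.resolve_offset(hosts, topic, partition, :latest)

# Offsets the broker has committed for the group. If the entry for this
# partition is missing or stale after a quiet period, a restart replays from
# wherever the reset policy points, which matches the behaviour reported here.
{:ok, committed} = :brod.fetch_committed_offsets(hosts, [], group)

IO.inspect({log_end, committed}, label: "log-end vs committed")
```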