Do election in order based on failed primary rank to avoid voting conflicts #1018
base: unstable
Conversation
When multiple primary nodes fail simultaneously, the cluster cannot recover within the default effective time (the data_age limit). The main reason is that the votes among the multiple replica nodes are not ranked, which causes too many epoch conflicts.
Therefore, we introduce a ranking based on the failed primary node name. A new failed_primary_rank variable is added; it holds the rank of this (myself) instance within the list of all failed primaries. The rank is used during failover: the failover election packets are sent in order based on it, which effectively avoids the voting conflicts.
Signed-off-by: Binbin <[email protected]>
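To make the mechanism concrete, the rank can be pictured as a count of how many failed primaries have a name that sorts before my own primary's name. Below is a minimal sketch of such a helper; the name clusterGetFailedPrimaryRank and the memcmp comparison come from the patch, while the dict iteration and the flag checks used to skip nodes that are not failed primaries are assumptions for illustration and may not match the actual tree.

static int clusterGetFailedPrimaryRank(void) {
    int rank = 0;
    dictIterator *di = dictGetSafeIterator(server.cluster->nodes);
    dictEntry *de;

    while ((de = dictNext(di)) != NULL) {
        clusterNode *node = dictGetVal(de);

        /* Skip my own primary and anything that is not a failed primary.
         * The flag names below are assumed from the Redis-era cluster
         * code and may differ in the current source tree. */
        if (node == myself->replicaof) continue;
        if (!(node->flags & CLUSTER_NODE_MASTER)) continue;
        if (!(node->flags & CLUSTER_NODE_FAIL)) continue;

        /* Failed primaries whose name sorts before my primary's name get
         * an earlier election slot, so my own rank increases. */
        if (memcmp(node->name, myself->replicaof->name, CLUSTER_NAMELEN) < 0) rank++;
    }
    dictReleaseIterator(di);
    return rank;
}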
@@ -64,3 +64,34 @@ start_cluster 3 4 {tags {external:skip cluster} overrides {cluster-ping-interval
}
} ;# start_cluster
This test may be time-consuming. It basically cannot pass before the patch, but can pass locally after the patch.
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

@@            Coverage Diff             @@
##           unstable    #1018      +/-   ##
============================================
+ Coverage     70.61%    70.62%    +0.01%
============================================
  Files           114       114
  Lines         61664     61694       +30
============================================
+ Hits          43541     43571       +30
  Misses        18123     18123
LGTM overall. I like this idea. Thanks @enjoy-binbin!
        continue;
    }

    if (memcmp(node->name, myself->replicaof->name, CLUSTER_NAMELEN) < 0) rank++;
Does it make sense to sort by shard_id? replicaof is not as reliable/up-to-date as shard_id. There is chain replication, and there is still the replicaof cycle.
Seems to make sense, I'll think about it later.
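If the ordering were switched to shard IDs as suggested, the change might be as small as comparing shard_id fields instead of primary names. A rough sketch, assuming the cluster node struct's shard_id field (CLUSTER_NAMELEN bytes, shared by every node of a shard) is populated for all known nodes:

    /* Rank by the failed shard's id rather than by the primary node name,
     * so chain replication or a stale replicaof pointer does not affect
     * the ordering. Illustrative only; not part of the patch. */
    if (memcmp(node->shard_id, myself->shard_id, CLUSTER_NAMELEN) < 0) rank++;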
     * Specifically 0.5 second * rank. This way those failed primaries will be
     * elected in rank to avoid the vote conflicts. */
    server.cluster->failover_failed_primary_rank = clusterGetFailedPrimaryRank();
    server.cluster->failover_auth_time += server.cluster->failover_failed_primary_rank * 500;
Curious - how did you arrive at 500? Given that CLUSTERMSG_TYPE_FAILOVER_AUTH_REQUEST is broadcast and answered pretty much right away, unless the voter is busy, I would think the network round trip time between any two nodes should be significantly less than 50 ms for all deployments. I wonder if we could tighten it up a bit to like 250 or 200?
This 500 is just an empirical value taken from the existing code here. I usually assume that one election round can be completed within 500 ms to 1 s. Yes, I think the numbers may be adjustable, but I haven't experimented with it.
    server.cluster->failover_auth_time = mstime() +
        500 + /* Fixed delay of 500 milliseconds, let FAIL msg propagate. */
        random() % 500; /* Random delay between 0 and 500 milliseconds. */
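As a concrete, purely illustrative reading of those numbers: if three primaries fail at once and their names sort A < B < C, the replicas of A, B and C get ranks 0, 1 and 2, so on top of the existing 500 ms fixed delay plus the 0-500 ms jitter their elections start roughly in the [500, 1000) ms, [1000, 1500) ms and [1500, 2000) ms windows, and each shard tends to claim its own epoch instead of colliding with the others.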