The current implementation of this Elasticsearch charm protects against the "split brain" problem by dynamically setting the minimum master nodes parameter. However, if the administrator chooses to deploy exactly two nodes, a network partition can still lead to a split brain because each node will find it has quorum (1 minimum master node). How do we protect against this problem? Should we explicitly document this in the README? Is there a better way?
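For illustration, here is a minimal sketch (not the charm's actual code; the function and parameter names are hypothetical) of why an effective minimum master nodes value of 1 lets both halves of a two-node partition elect a master:

```python
# Minimal sketch (not the charm's code): why minimum_master_nodes == 1
# allows split brain in a two-node cluster after a network partition.

def can_elect_master(reachable_master_eligible: int, minimum_master_nodes: int) -> bool:
    """A partition side can elect a master if it still sees enough
    master-eligible nodes to satisfy minimum_master_nodes."""
    return reachable_master_eligible >= minimum_master_nodes

# Two-node cluster where the effective minimum_master_nodes is 1
# (as described above): after a partition, each node sees only itself.
side_a = can_elect_master(reachable_master_eligible=1, minimum_master_nodes=1)
side_b = can_elect_master(reachable_master_eligible=1, minimum_master_nodes=1)
print(side_a and side_b)  # True -> both sides elect a master: split brain
```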
The normal rule for this is floor(N/2)+1, which guarantees a majority. So for 2 nodes, both would have to be together: floor(2/2)+1 = 2.
That is why people recommend always going to odd-sized configurations. Even-sized ones still have the chance to fail but don't have the extra redundancy to keep going. (If either node fails you shouldn't proceed, because you don't know whether you have split brain.)
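A short sketch of that rule (the function name is illustrative, not taken from the charm):

```python
import math

def majority_quorum(node_count: int) -> int:
    """Majority quorum for N master-eligible nodes: floor(N/2) + 1."""
    return math.floor(node_count / 2) + 1

# floor(2/2) + 1 == 2: a two-node cluster needs both nodes to elect a
# master, so it cannot tolerate any failure or partition.
# floor(3/2) + 1 == 2: a three-node cluster keeps quorum if one node
# fails, which is why odd-sized clusters are recommended.
for n in (2, 3, 5):
    print(n, majority_quorum(n))
```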