Cassandra nodes become unreachable to each other #171
Maintainer replies:

> I think you need a … Under the section "For separate machines (ie, two VMs …"

> I don't really see anything we can change in the image to make this easier, unfortunately. The best I can recommend from here is to try the Docker Community Forums, the Docker Community Slack, or Stack Overflow for further help setting up and configuring a cluster.

> (Additionally, …
I have three Elassandra nodes running in Docker containers.
The containers were created like this:
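The actual creation commands were not captured above. For reference only, a three-node Elassandra cluster on separate hosts is typically started something like the sketch below; the image tag, container names, network mode, and environment variables are assumptions, not the reporter's actual commands.

```shell
# Hypothetical sketch of a multi-host Elassandra setup (not the reporter's commands).
# Node 1 (on host 10.0.0.1), acting as the gossip seed:
docker run -d --name elassandra-1 --net host \
  -e CASSANDRA_SEEDS=10.0.0.1 \
  -e CASSANDRA_BROADCAST_ADDRESS=10.0.0.1 \
  strapdata/elassandra

# Node 2 (on host 10.0.0.2), pointing at the same seed:
docker run -d --name elassandra-2 --net host \
  -e CASSANDRA_SEEDS=10.0.0.1 \
  -e CASSANDRA_BROADCAST_ADDRESS=10.0.0.2 \
  strapdata/elassandra

# Node 3 (on host 10.0.0.3), likewise:
docker run -d --name elassandra-3 --net host \
  -e CASSANDRA_SEEDS=10.0.0.1 \
  -e CASSANDRA_BROADCAST_ADDRESS=10.0.0.3 \
  strapdata/elassandra
```

If the nodes sit behind NAT or a bridge network instead of `--net host`, the broadcast address each node gossips must still be reachable by the others, which is a common cause of the symptom described below.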
The cluster had been working fine for a couple of days since it was created; Elasticsearch and Cassandra were both perfect.
Now, however, all Cassandra nodes have become unreachable to each other:
`nodetool status` on all nodes looks like this,
where the only UN (Up/Normal) entry is the current host, 10.0.0.1.
The same pattern appears on all the other nodes.
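The original output was not preserved; the symptom described would render roughly like this (addresses from the report, everything else illustrative):

```
Datacenter: DC1
===============
Status=Up/Down | State=Normal/Leaving/Joining/Moving
--  Address   Load      Tokens  Owns  Host ID  Rack
UN  10.0.0.1  …         …       …     …        …
DN  10.0.0.2  …         …       …     …        …
DN  10.0.0.3  …         …       …     …        …
```

Each node sees itself as UN and every peer as DN (Down/Normal).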
`nodetool describecluster` on 10.0.0.1 looks like this:
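Again the output is missing; `nodetool describecluster` in this state typically reports the unreachable peers under a separate schema-version bucket, roughly like this (cluster name, snitch, and UUID are illustrative):

```
Cluster Information:
    Name: elassandra
    Snitch: org.apache.cassandra.locator.GossipingPropertyFileSnitch
    Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
    Schema versions:
        e5abf1a0-…: [10.0.0.1]
        UNREACHABLE: [10.0.0.2, 10.0.0.3]
```

The `UNREACHABLE` bucket is consistent with the schema disagreement mentioned at the end of the report.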
When attached to the first node, it just keeps repeating these log messages:
After a while, when one of the nodes is restarted:
Tried so far:
- Restarting all containers at the same time
- Restarting all containers one after another
- Restarting Cassandra inside each container with `service cassandra restart`
- `nodetool disablegossip`, then enabling it again
- `nodetool repair`, which failed with: `Repair command #1 failed with error Endpoint not alive: /10.0.0.2`
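The recovery attempts above can be sketched as shell commands; the container names are hypothetical, but the `nodetool` subcommands are standard Cassandra tooling.

```shell
# Restart all containers one after another (names assumed):
for c in elassandra-1 elassandra-2 elassandra-3; do
  docker restart "$c"
done

# Restart the Cassandra service inside a container:
docker exec elassandra-1 service cassandra restart

# Bounce gossip on a node:
docker exec elassandra-1 nodetool disablegossip
docker exec elassandra-1 nodetool enablegossip

# Attempt a repair (this is the step that failed with
# "Endpoint not alive: /10.0.0.2" in the report):
docker exec elassandra-1 nodetool repair
```

Repair requires all replica endpoints to be alive, so it cannot succeed while the nodes mark each other as down; gossip connectivity has to be restored first.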
It seems that the nodes' schema versions have diverged, but I still don't understand why they mark each other as down.