Be smarter about initial reconfigure #78
Comments
@dcosson Yeah, this has been a somewhat annoying bug for us as well. We mitigated it slightly at Yelp by reducing the amount of ZooKeeper connecting we do and using the state_file, so that as soon as a watcher gets information we can quickly enable the backend over the stat socket, but this just shortens the window of sad. How are you thinking about implementing this? The watchers all live in separate threads, but perhaps we can communicate back to the main thread.
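One way the "communicate back to the main thread" idea could look, as a hypothetical sketch (none of these names are Synapse's actual API): each watcher thread pushes a ready signal onto a shared `Queue`, and the main thread blocks until every watcher has checked in once.

```ruby
require 'thread'

# Illustrative watcher names; in Synapse these would come from the config.
watcher_names = %w[service_a service_b]
ready_queue = Queue.new

watcher_names.each do |name|
  Thread.new do
    # ... discover initial backends (zookeeper, ec2 tags, etc.) ...
    sleep 0.05 # stand-in for discovery latency
    ready_queue << name
  end
end

# Main thread blocks here until every watcher has reported in once,
# then it is safe to do the initial reconfigure.
checked_in = []
checked_in << ready_queue.pop until checked_in.size == watcher_names.size
```

`Queue#pop` blocks, so the main thread sleeps rather than busy-waiting while watchers start up.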
Now that I'm looking at it, the ZooKeeper watchers do set up before returning, because they don't use a Thread.new in the start method (they rely on the zk-ruby gem's async callbacks to multithread themselves). @dcosson I imagine you're using the EC2 tag watcher or some such?
Oops, didn't mean to close; sorry about that.
Synapse is designed to rewrite the haproxy config and reload haproxy when it starts (set here).
This means that any time Synapse is restarted, there is a window between when it starts and when the watchers first register during which the defaults are used (and if there are no defaults, haproxy will return 503s).
I can see why you would want an initial reconfigure, so that any time you restart Synapse you know any changes unrelated to backends will get picked up, but it seems like it should be smarter and wait until all watchers have checked in once before doing the initial reconfiguration. I'm happy to submit a PR if this sounds reasonable.
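A minimal sketch of the proposed behavior (the class and method names here are assumptions for illustration, not Synapse's real interface): each watcher exposes a thread-safe ready flag, and the initial reconfigure is deferred until every watcher reports it has fetched backends at least once, with a timeout so a dead watcher can't block startup forever.

```ruby
require 'thread'

# Stand-in for a Synapse watcher; the real watchers would set this
# flag the first time they successfully fetch backends.
class FakeWatcher
  def initialize
    @ready = false
    @mutex = Mutex.new
  end

  def ready?
    @mutex.synchronize { @ready }
  end

  def mark_ready!
    @mutex.synchronize { @ready = true }
  end
end

# Returns true if all watchers checked in before the deadline; the caller
# would then write the haproxy config and reload. On timeout it returns
# false, falling back to today's behavior of reconfiguring with defaults.
def wait_for_initial_checkin(watchers, timeout: 5)
  deadline = Time.now + timeout
  sleep 0.01 until watchers.all?(&:ready?) || Time.now > deadline
  watchers.all?(&:ready?)
end

watchers = [FakeWatcher.new, FakeWatcher.new]
watchers.each { |w| Thread.new { sleep 0.05; w.mark_ready! } }
all_ready = wait_for_initial_checkin(watchers)
```

The timeout fallback matters: without it, a watcher that never connects (e.g. an unreachable ZooKeeper ensemble) would leave haproxy with a stale config indefinitely instead of merely a wrong one.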