Allow registrations to be manifested on the file system and add 'use_previous_backends' option #115
Conversation
* This option defaults to true: if there are no default servers and a watcher reports no backends, then use the previous backends that we already know about.
* Factor out code that updates '@backends' into the watcher base class.
Add 'use_previous_backends' option.
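A minimal sketch of the fallback described above, using assumed names rather than the PR's actual watcher code:

```ruby
# Illustrative sketch only; the class and option names are assumptions.
# Idea: if a watcher reports no backends and there are no default servers,
# keep the previously known backends instead of wiping them out.
class ExampleWatcher
  attr_reader :backends

  def initialize(default_servers: [], use_previous_backends: true)
    @default_servers = default_servers
    @use_previous_backends = use_previous_backends
    @backends = []
  end

  def update_backends(new_backends)
    if new_backends.empty? && @default_servers.empty? && @use_previous_backends
      warn 'no backends reported; keeping previous backends'
    else
      @backends = new_backends
    end
  end
end

w = ExampleWatcher.new
w.update_backends([{ 'host' => 'host1', 'port' => 1234 }])
w.update_backends([])   # empty report: w.backends still contains host1
```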
If the configuration specifies a file_output key, synapse will manifest and update registrations on the filesystem in an atomic way. This is useful for applications that do not wish to communicate with service backends via haproxy.
Allow registrations to be manifested on the file system
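The atomic update mentioned above is the classic write-then-rename pattern; a sketch under that assumption (the directory layout and file names are made up, not the PR's actual file_output code):

```ruby
require 'json'
require 'fileutils'

# Write a service's backends to disk atomically: write a temp file in the same
# directory, then rename it over the destination, so a reader never sees a
# half-written registration file.
def write_registrations(dir, service_name, backends)
  FileUtils.mkdir_p(dir)
  path = File.join(dir, "#{service_name}.json")
  tmp_path = "#{path}.tmp.#{Process.pid}"
  File.write(tmp_path, JSON.pretty_generate(backends))
  File.rename(tmp_path, path)   # rename is atomic within a single filesystem
end

write_registrations('/run/synapse/services', 'my_service',
                    [{ 'host' => 'host1', 'port' => 1234 }])
```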
Note that at this time only the Yelp fork of haproxy has support for redispatching on every retry.
Allow the 'allredisp' option to be passed to haproxy.
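For illustration, a haproxy section that turns this on might look roughly like the following (shown here as a Ruby hash; the real synapse config is JSON, and the exact placement of the option is an assumption):

```ruby
# Hypothetical haproxy section of a synapse config, expressed as a Ruby hash.
haproxy_section = {
  'haproxy' => {
    'reload_command'   => 'service haproxy reload',
    'config_file_path' => '/etc/haproxy/haproxy.cfg',
    'defaults' => [
      'retries 3',
      'option allredisp',   # redispatch on every retry; Yelp fork of haproxy only
    ],
  },
}
```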
Explicitly deduplicate registrations
To give some context on the allredisp option: http://marc.info/?l=haproxy&m=142740715826651&w=2

To give some context on the deduplication: we currently restart nerve gracefully (i.e. without de-registering a whole box) by starting two nerves and relying on synapse to deduplicate.
Hi Joseph, first of all, thanks for the PR! Secondly, I have a question on deduplication. How does it work with process managers that control forking? Specifically, I've had issues with running synapse via Supervisord (#94) where it would not restart when a change in backends is detected, unless I manually specify to run 4 processes of Synapse in Supervisord.
My deduplication comment was not the clearest. The deduping logic just refers to synapse ignoring duplicated service backends, e.g. (host1, 1234), (host1, 1234), (host2, 1234) => (host1, 1234), (host2, 1234). I would want to run some tests since supervisord is not the setup we use (we use a cron job that restarts synapse whenever our synapse config file changes), but the deduplication is implemented by synapse filtering out duplicate backends by the tuple of (host, port, name?), so I imagine that all of the forked synapse instances would deduplicate in the same way. To be honest, I am not sure without digging into #94 more.
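As a concrete sketch of that deduplication (not synapse's actual code), collapsing backends that are identical by their (host, port, name) tuple:

```ruby
# Collapse duplicate backends by their identifying tuple, so two nerve
# processes announcing the same (host, port) do not register it twice.
def deduplicate(backends)
  backends.uniq { |b| [b['host'], b['port'], b['name']] }
end

backends = [
  { 'host' => 'host1', 'port' => 1234 },
  { 'host' => 'host1', 'port' => 1234 },
  { 'host' => 'host2', 'port' => 1234 },
]
p deduplicate(backends)
# => [{"host"=>"host1", "port"=>1234}, {"host"=>"host2", "port"=>1234}]
```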
attr_reader :opts, :name

def initialize(opts)
  super()
not sure what this is calling out to, this class does not inherit from anything
Indeed it does not, I'll fix that up
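A minimal sketch of the fix being discussed (the class name is hypothetical): with no explicit parent class the super() call reaches nothing useful, so the constructor can simply set its own state.

```ruby
# Hypothetical class name; not the PR's actual code.
class FileOutputGenerator
  attr_reader :opts, :name

  def initialize(opts)
    # no super() here: this class does not inherit from anything
    @opts = opts
    @name = 'file_output'
  end
end
```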
Done with first pass, I left a couple of questions/comments. Mostly, it looks great!
Add rate limiter.
This reverts commit 136d1fe.
Increase HAProxy restart interval
ZooKeeper connection pooling.
We can allow socket updates to go through any time the config is updated, but restarts must be rate limited for stability.
Rate limit restarts but not stats socket updates
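A sketch of that behavior with assumed method names (the option keys mirror synapse's haproxy section, but the rate-limiting logic here is an illustration, not the PR's implementation):

```ruby
# Stats socket updates go out on every config change; full restarts are spaced
# out by a minimum interval, and a pending restart is deferred rather than dropped.
class RestartRateLimiter
  def initialize(restart_interval: 10)
    @restart_interval = restart_interval
    @last_restart = Time.at(0)
  end

  def handle_config_change(opts)
    update_stats_socket if opts['do_socket']   # cheap: allowed on every change
    return unless opts['do_reloads']

    wait = @restart_interval - (Time.now - @last_restart)
    sleep(wait) if wait > 0
    restart_haproxy
    @last_restart = Time.now
  end

  private

  def update_stats_socket; puts 'updating backends via stats socket'; end
  def restart_haproxy;     puts 'restarting haproxy';                 end
end
```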
Add state file
Sorry that I've just been adding a ton more commits to this "de-fork" pull request. We ended up hitting some serious reliability problems when we turned on some new multi-datacenter functionality and had to focus on fixing that up. I'll be getting this ready for merge ASAP.
Fix bug in caching logic
Add support for the weight key added in nerve
We are experiencing some very slow updates in production and I think it may be due to the connection pooling; try :per_callback threading to see if that helps. Also fixes default values for do_writes, do_reloads, do_socket.
Try out :per_callback threads and get more debug information
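For reference, :per_callback here refers to the 'zk' gem's :thread option; a sketch of the shape of that change (the connection string and path are placeholders):

```ruby
require 'zk'

# With thread: :per_callback, each registered watcher block gets its own
# event-delivery thread instead of sharing a single one, so one slow callback
# cannot hold up every other watcher.
zk = ZK.new('localhost:2181', thread: :per_callback)

zk.register('/nerve/services/my_service/instances') do |event|
  # re-read the children here and rebuild the backend list
end
```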
This fixes a bug where we could have a session expiry but not fail pings, because the pooling code would not actually tear down the connection.
Turns out it's important to handle session disconnects correctly
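A sketch of the failure mode above using the 'zk' gem's connection-state hooks (the connection string is a placeholder and reconnect! is a stand-in, not the PR's code):

```ruby
require 'zk'

# On session expiry the old handle must be torn down for real and a fresh
# session established; quietly reusing a pooled connection keeps the dead
# session around, so pings never fail the way they should.
def reconnect!(old_zk)
  old_zk.close!
  ZK.new('localhost:2181')
end

zk = ZK.new('localhost:2181')
zk.on_expired_session { zk = reconnect!(zk) }
```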
We're going to take care of this in #130.
This pull request pulls in the Yelp fork of synapse. It accomplishes:
* Allowing registrations to be manifested on the file system
* Adding a 'use_previous_backends' option
Both of these changes are functioning in production at Yelp.