Swaps out the redis implementation for a different shard #48
Conversation
…ms to stay closed making specs fail.
The CI failures are because Ameba doesn't support Crystal < 1.4.0. Ameba isn't necessary here, so I can either take it back out, or we can lock Cable to Crystal 1.4.0+ too. (This is what we did with Lucky.)
@jgaskins if you're poking around this, you'll see that even though the specs pass, they throw all of those IO::Error exceptions (https://github.com/cable-cr/cable/runs/7380811910?check_suite_focus=true#step:6:7), if that helps any.
…he shutdown process
…e redis pinger doesn't leak memory and can be closed properly
@@ -24,6 +24,9 @@ module Cable
  # Use a single connection
  getter redis_subscribe = Redis::Connection.new(URI.parse(Cable.settings.url))
  getter fiber_channel = ::Channel({String, String}).new
  getter pinger : Cable::RedisPinger do
    Cable::RedisPinger.new(self)
  end
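A side note on the diff above: Crystal's getter macro with a block defines a lazily-initialized, memoized getter, so the pinger is only created the first time it is accessed. Roughly, and only as an illustrative sketch of the macro expansion:

@pinger : Cable::RedisPinger?

def pinger : Cable::RedisPinger
  if (value = @pinger).nil?
    # Built on first access, then cached in the instance variable.
    @pinger = Cable::RedisPinger.new(self)
  else
    value
  end
end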
💯
Good news: the memory leak is fixed. Bad news: there's still some issue where, when either the Docker container swaps or Redis disconnects, it requires a full reboot to fix. It didn't seem to happen before 6b11799. It felt like it was handling autoreconnect just fine, but less than 24 hours after deploying that change I had to reboot the whole machine. To add to this, it looks like sending messages is fine, but it's on the broadcast back where they fail. It's definitely something in Cable, since I was getting the same issue on the master branch. It's something that happens over time, so it's not immediately reproducible. Could it be related to how the Cluster is handled?
The folks at EatApp had a Redis disconnect issue related to pub/sub too, tied to message size. Does any of the discussion in jgaskins/redis#7 help?
Hey @jwoertink @jgaskins, I'm going to create another PR with just the error-handling stuff on its own, since we added some error-rate-handling logic which has been super useful. Although I love the idea of putting the @jgaskins Redis shard in here (we use it too), shall we consider an adapter pattern long term? There are some differences in the way each Redis shard handles types. For example:

def fetch_statuses
  return unless (vals = redis.hgetall(cache_key))
  # With this shard, HGETALL comes back as a flat Array(Redis::Value) of
  # alternating field/value pairs rather than a Hash.
  vals = vals.as(Array(Redis::Value))
  if vals.try(&.empty?)
    nil
  else
    data = {} of String => Int64 | Int32
    vals.each_with_index do |val, idx|
      # Even indexes are field names; the following odd index holds the value.
      if idx.even?
        data[val.to_s] = vals[idx + 1].to_s.to_i
      end
    end
    data
  end
end

It was mildly painful but worth it, since we have our own version of the cable shard. But for others, upgrading could result in substantial breaking changes, which maybe just needs to happen anyway to make this more stable. I was also toying with the idea of a Postgres pub/sub backend a while back, when Redis was giving me a massive headache. Rails supports a few other backends; we could have one for each of the Redis shards as a v1.
I definitely want to add a Postgres backend (#49), but a pluggable Redis backend gets a little tricky. We ran into this issue with Mosquito. The thing is, we can't include both Redis shards in the same project at the same time. So in order for Cable to have a generic pluggable Redis interface, we'd need some sort of mock-redis object as a "default", and then we'd have to maintain 2 additional shards (i.e. …). I agree with the type handling. There's this issue jgaskins/redis#32 which I think will make that a lot nicer. I appreciate the info @mjeffrey18! It sounds like you're running into a lot of the same stuff we are, so that's a huge help 🙏
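For what it's worth, a hypothetical sketch of what such a pluggable backend interface could look like; the names are illustrative only and this is not part of Cable today:

# Each concrete backend (one per Redis shard, or a Postgres LISTEN/NOTIFY
# implementation) would live in its own shard, so only one Redis client
# library ends up in an application's dependency graph.
abstract class Cable::Backend
  abstract def publish(channel : String, message : String)
  abstract def subscribe(channel : String)
  abstract def unsubscribe(channel : String)
  abstract def ping
  abstract def close
end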
rescue e : IO::Error
  Cable::Logger.error { "#{e.class.name} Exception: #{e.message} -> #{self.class.name}#call { socket.on_message(message) }" }
  # Redis may have some error, restart Cable
  socket.close(HTTP::WebSocket::CloseCode::NormalClosure, "Farewell")
In #54, @mjeffrey18 added the socket.close, but this rescue branch doesn't exist on master. Should we be closing the socket here too? If so, which CloseCode should be used?
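For illustration, one hedged option would be to keep the same rescue shape as the diff above but signal an abnormal shutdown, so well-behaved clients know the session did not end cleanly and can try to reconnect:

rescue e : IO::Error
  Cable::Logger.error { "#{e.class.name} Exception: #{e.message}" }
  # 1011 (InternalServerError) indicates the server hit an unexpected condition,
  # unlike NormalClosure, which tells the client the session ended cleanly.
  socket.close(HTTP::WebSocket::CloseCode::InternalServerError, "Redis connection lost")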
This PR is no longer necessary. The current implementation allows for choosing which Redis shard you want to use.
Fixes #46
This PR swaps out the Redis implementation for a new shard. With the new shard, the monkey patch is no longer required, and it brings several stability improvements: better connection pooling and support for things like Redis Cluster. It also includes a new RedisPinger to ensure the Redis connection stays alive.
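For context, a minimal sketch of what a periodic pinger can look like in Crystal; this is illustrative only, the actual Cable::RedisPinger in this PR may differ, and it assumes the connection responds to a ping command:

class PingerSketch
  def initialize(@connection : Redis::Connection, @interval : Time::Span = 15.seconds)
    @running = true
  end

  def start
    spawn do
      while @running
        sleep @interval
        begin
          # Keep the subscribe connection from idling out.
          @connection.ping
        rescue ex : IO::Error
          Cable::Logger.error { "Pinger failed: #{ex.message}" }
        end
      end
    end
  end

  def stop
    @running = false
  end
end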
The main breaking change is that some of the settings have been removed. These settings will need to be removed from your config after upgrading: