Adjust to new redlock behaviour #78

Open
nduitz wants to merge 5 commits into main from adjust-to-new-redlock-behaviour

Conversation


@nduitz commented Jun 25, 2024

There have been some changes to the behaviour of redis-client and redlock-rb that lead to different behaviour in this gem.

Before leandromoreira/redlock-rb#110 was merged, redlock would return false if there was an error with the connection; now it raises the error.
Previously this behaviour could be handled by configuring on_conflict. Now the handling should be more fine-grained.

This adds an on_error option, which can be either nil (the error is re-raised) or a proc (the proc is evaluated).

This allows users to handle connection errors and reduce the number of jobs lost because of them.
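
As a rough usage sketch (the job class and the re-enqueue behaviour are hypothetical; the option name and proc arguments follow this PR's handle_error snippet below, not a released version of the gem):

class ProcessOrderJob < ApplicationJob
  # Hypothetical job illustrating the proposed on_error option.
  unique :until_executed,
         lock_ttl: 10.minutes,
         on_conflict: :log,
         # nil (the default) re-raises connection errors as before;
         # a proc is called instead, so the job is not silently lost.
         on_error: ->(job, resource:, error:) do
           Rails.logger.warn("Lock error for #{resource}: #{error.message}")
           job.enqueue(wait: 5.seconds)
         end

  def perform(order_id)
    # ...
  end
end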

@sharshenov (Member) left a comment

Hey @nduitz
Thank you for the PR. I'm happy to accept it. But at the same time it would be nice to preserve compatibility with older versions of Ruby.

def handle_error(resource:, on_error:, error:)
  raise error unless on_error

  on_error.call(job, resource:, error:)
end

@sharshenov (Member)

This syntax is not available in Rubies pre-3.1
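
For reference, a version that also runs on Rubies before 3.1 would pass the keyword arguments explicitly (a sketch of the same method, not the final diff):

def handle_error(resource:, on_error:, error:)
  raise error unless on_error

  # Explicit `resource: resource, error: error` instead of the
  # 3.1-only shorthand `resource:, error:`.
  on_error.call(job, resource: resource, error: error)
end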

@nduitz (Author)

Thanks for the fast feedback :) will change.

You've been fast; in the meantime I've changed the naming a bit and added support for a global config.
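
For illustration, the global config mentioned here might look roughly like this (the ActiveJob::Uniqueness.configure constant and the option name are assumptions on my part; the final naming isn't shown in this thread):

# config/initializers/active_job_uniqueness.rb
# Sketch only: the option name below is a placeholder, not this PR's final API.
ActiveJob::Uniqueness.configure do |config|
  config.on_redis_connection_error = ->(job, resource:, error:) do
    Rails.logger.warn("Redlock connection error for #{resource}: #{error.message}")
  end
end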

@nduitz force-pushed the adjust-to-new-redlock-behaviour branch from eeb46cb to 19cbeb6 on June 25, 2024 at 14:16
@nduitz force-pushed the adjust-to-new-redlock-behaviour branch from 7b1bf6a to 805bd1c on June 25, 2024 at 14:45
@nduitz force-pushed the adjust-to-new-redlock-behaviour branch from 805bd1c to 97de8fa on June 25, 2024 at 14:50
@nduitz (Author) commented Jul 3, 2024

I've been running this PR in production for some days now.
I added some logging around that callback, but it stays silent, so this might not be an issue at all. Even so, this PR might still be a feature worth adding.

Most of the errors we encounter come from delete_lock, so I will try to investigate that if I find the time; maybe there is something we can do about it.
redlock's strategy is to simply fire and forget the unlock attempt, rescuing every error that might occur. I'm not sure that approach fits here, though, since redlock is more about process-based locking; trying the same might lead to a deadlock situation.
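
For context, the fire-and-forget approach described above would look roughly like this (receiver and method names are placeholders, not the gem's or redlock's actual code):

# Placeholder sketch of "fire and forget" unlocking.
def delete_lock
  lock_manager.delete_lock(lock_key)
rescue RedisClient::ConnectionError => error
  # Swallow the error; the Redis key still expires via its TTL,
  # which is why redlock can afford to ignore unlock failures.
  logger.warn("Could not delete lock #{lock_key}: #{error.message}")
end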

@louiswdavis

I think we've also been encountering this issue after doing an upgrade, with a job using the unique config unique :until_executed, lock_ttl: 10.minutes, on_conflict: :log.

Is the gem going to include these changes in the near future?

@rmaspero

I think we are experiencing a similar issue related to the redis-client and redlock-rb changes. @nduitz are you still running this branch?

@sharshenov are we able to merge or get an official fix?

@nduitz (Author) commented Oct 22, 2024

Hey, yes I am still running the branch.
But as I said before, this PR doesn't change anything about the situation. We still see loads of RedisClient::ConnectionError, which are in fact triggered by delete_lock.

Since we never experienced any real issues, I suspect this arises from a race condition where the lock has already been deleted, but I did not investigate it further.
