# Unexpected Behavior of `ShouldBeUniqueUntilProcessing` and `WithoutOverlap` in Jobs #48013

## Comments
> I believe this may be a niche use case where both …

> Thank you for reporting this issue! As Laravel is an open source project, we rely on the community to help us diagnose and fix issues, as it is not possible to research and fix every issue reported to us via GitHub. If possible, please make a pull request fixing the issue you have described, along with corresponding tests. All pull requests are promptly reviewed by the Laravel team. Thank you!

> Same here, I found this unexpected behavior in my project last week too.

> Could you elaborate on your reasoning? I'm trying to understand whether we should reevaluate if we need to be using both. From our perspective, using …

> Closing this issue because it's inactive, already solved, old, or no longer relevant. Feel free to open a new issue if you're still experiencing this.

> @crynobone when you have time, could you reply to my comment above please?
### Laravel Version

10.18.0

### PHP Version

8.2.3

### Database Driver & Version

No response
### Description

A job that is released back to the queue (e.g. by `WithoutOverlap`) does not acquire the `UniqueLock` and therefore does not prevent new jobs implementing the `ShouldBeUnique` interface from being added to the queue.
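For background, here is a simplified sketch of the dispatch-time uniqueness check. This is not the framework's exact internals, and the lock key format is an assumption; it only illustrates why a released job no longer blocks new dispatches:

```php
use App\Jobs\TestJob;
use Illuminate\Support\Facades\Cache;

// Rough sketch: when a ShouldBeUnique job is dispatched, a cache lock keyed
// on the job class is acquired, and the dispatch is skipped if the lock is
// already held.
$lock = Cache::lock('laravel_unique_job:'.TestJob::class, 0);

if ($lock->get()) {
    // Push the job to the queue. For ShouldBeUniqueUntilProcessing this lock
    // is released as soon as a worker starts the job, so a later release()
    // from the overlap middleware re-queues the payload without reacquiring
    // the lock, and nothing is left to block the dispatch of a new job.
}
```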
#### Context

We have a job that runs for ~10 seconds, goes through the content of our app, and updates something in a cache. This job is triggered by various event listeners and therefore does not run at a regular interval.

Because the job updates the cache for all content, there only ever needs to be one job in the queue to keep the cache up to date. This is why we updated our job to implement the `ShouldBeUnique` interface. However, this can result in an outdated cache, because no new job is pushed to the queue while a job is already being processed.

That's why we wanted to update the job to implement the `ShouldBeUniqueUntilProcessing` interface and the `WithoutOverlap` middleware. That way, a new job would be added to the queue if one is already being processed, but no two jobs would be processed simultaneously, because that could lead to issues. Instead, they would be released back to the queue.

It seems that `ShouldBeUniqueUntilProcessing` and `WithoutOverlap` do not work together as we expect them to.

#### Details
##### Expected flow

1. `TestJob 1` is dispatched and added to the queue
2. `TestJob 1` is being processed by worker 1
3. `TestJob 2` is dispatched and added to the queue
4. `TestJob 2` is popped by worker 2 and released back to the queue
5. `TestJob 3` is dispatched and should be discarded, because the released `TestJob 2` is already in the queue
6. `TestJob 2` is popped by a worker and processed (when `TestJob 1` has finished processing)

##### Actual flow

1. `TestJob 1` is dispatched and added to the queue
2. `TestJob 1` is being processed by worker 1
3. `TestJob 2` is dispatched and added to the queue
4. `TestJob 2` is popped by worker 2 and released back to the queue
5. `TestJob 3` is dispatched and added to the queue
6. `TestJob 3` is being processed by worker 2
7. `TestJob 2` is popped by worker 1 and processed (when `TestJob 1` has finished processing)

#### Question
Is the actual behaviour intended and our expectation wrong, or is this an actual issue?
### Steps To Reproduce

**Our `TestJob`**
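The collapsed snippet was not captured in this export. Below is a minimal reconstruction based on the description above: the 8-second release delay and the ~10-second runtime come from the text, while the class body, `uniqueId`, and lock key are assumptions. Note that the middleware class shipped with Laravel is named `WithoutOverlapping`:

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldBeUniqueUntilProcessing;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\Middleware\WithoutOverlapping;

class TestJob implements ShouldQueue, ShouldBeUniqueUntilProcessing
{
    use Dispatchable, InteractsWithQueue, Queueable;

    // Release an overlapping job back onto the queue and retry it 8 seconds later.
    public function middleware(): array
    {
        return [(new WithoutOverlapping($this->uniqueId()))->releaseAfter(8)];
    }

    public function uniqueId(): string
    {
        return 'test-job';
    }

    public function handle(): void
    {
        // Simulate the ~10 second cache rebuild described above.
        sleep(10);
    }
}
```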
**Additional event listeners to debug job processing**
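This snippet was also collapsed; a plausible equivalent uses Laravel's queue events to log each lifecycle step. The log wording is illustrative, and the `JobReleased` event is assumed to be available in this Laravel version:

```php
use Illuminate\Queue\Events\JobProcessed;
use Illuminate\Queue\Events\JobProcessing;
use Illuminate\Queue\Events\JobReleased;
use Illuminate\Support\Facades\Event;
use Illuminate\Support\Facades\Log;

// e.g. in AppServiceProvider::boot()
Event::listen(JobProcessing::class, fn ($event) => Log::debug('Processing: '.$event->job->resolveName()));
Event::listen(JobProcessed::class, fn ($event) => Log::debug('Processed: '.$event->job->resolveName()));
Event::listen(JobReleased::class, fn ($event) => Log::debug('Released: '.$event->job->resolveName()));
```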
**Test**
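The test itself was collapsed as well; a sketch of the reproduction, assuming two queue workers are running in separate terminals:

```php
use App\Jobs\TestJob;

// Run two workers first, each in its own terminal:
//   php artisan queue:work
//   php artisan queue:work

TestJob::dispatch();  // TestJob 1: picked up by worker 1, runs ~10s
sleep(2);
TestJob::dispatch();  // TestJob 2: popped by worker 2, released back for 8s
sleep(2);
TestJob::dispatch();  // TestJob 3: should be discarded, but is queued and
                      // then processed by worker 2
```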
Issue: Job 2 is released back to the queue. Job 3, however, is still added to the queue, even though job 2 is now back in the queue (to be processed after 8 seconds).
**Log**