I was a little surprised to see that the default for `poll_retries` is 16, which means it generally takes over a second to give up on creating a Lockfile if one already exists. Why isn't it zero? Or, more precisely, why is this polling step even necessary? Isn't that the purpose of the `:retries` argument?
The strategy lockfile uses is not arbitrary; it was tuned over years of running on various shared file systems in production environments. In summary: shared file systems lie, cache inodes, and generally cannot be trusted. The strategy is a punctuated sawtooth pattern: 'try hard in rapid succession', back off incrementally (become more patient), but eventually become impatient again...
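To make the shape of that pattern concrete, here is a minimal sketch of a "punctuated sawtooth" delay schedule. The method name and parameters are illustrative, not lockfile's actual internals: within each cycle the delay grows, then it resets to a short delay so the process becomes impatient again.

```ruby
# Hypothetical sketch of a punctuated sawtooth retry schedule.
# Within a cycle, delays back off exponentially (become more patient);
# at the start of each new cycle they reset to the short base delay
# (become impatient again). Not lockfile's real implementation.
def sawtooth_delays(cycles: 3, tries_per_cycle: 4, base: 0.08)
  delays = []
  cycles.times do
    tries_per_cycle.times { |i| delays << base * (2**i) } # back off within the cycle
    # after the cycle ends, the next iteration resets to `base`
  end
  delays
end

p sawtooth_delays
```

Summed, those twelve delays come to roughly a second of waiting, which is consistent with the observation in the question that 16 poll retries take over a second before giving up.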
It's very important to have multiple tries and a random access pattern on large distributed file systems, which are the only reason you'd use lockfile in the first place. Otherwise you'd just use `DATA.flock(File::LOCK_EX | File::LOCK_NB)`.
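For comparison, here is a minimal sketch of that simple alternative on a local file system. With `File::LOCK_NB`, `File#flock` returns 0 when the exclusive lock is acquired and `false` when another open file description already holds it, so contention is detected in a single call with no polling. The path used here is illustrative.

```ruby
require 'tmpdir'

# Illustrative lock file path in a fresh temp directory.
path = File.join(Dir.mktmpdir, 'demo.lock')

f1 = File.open(path, File::CREAT | File::WRONLY)
f2 = File.open(path, File::CREAT | File::WRONLY)

# First handle acquires the exclusive lock; non-blocking flock
# returns 0 on success.
r1 = f1.flock(File::LOCK_EX | File::LOCK_NB)

# Second handle (a separate open file description) finds the lock
# held and gets false back immediately instead of blocking.
r2 = f2.flock(File::LOCK_EX | File::LOCK_NB)

p r1 # => 0
p r2 # => false
```

This works fine locally, but `flock` semantics over NFS and other shared file systems are exactly the territory where, as noted above, the file system "lies" -- hence lockfile's retry strategy.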