Implement PaddedSpinLock #190

Open · wants to merge 6 commits into base: v1.10.2+RAI
50 changes: 40 additions & 10 deletions base/locks-mt.jl
@@ -2,7 +2,7 @@

import .Base: unsafe_convert, lock, trylock, unlock, islocked, wait, notify, AbstractLock

export SpinLock
export AbstractSpinLock, PaddedSpinLock, SpinLock

# Important Note: these low-level primitives defined here
# are typically not for general usage
@@ -12,33 +12,63 @@ export SpinLock
##########################################

"""
SpinLock()
abstract type AbstractSpinLock <: AbstractLock end

Create a non-reentrant, test-and-test-and-set spin lock.
Non-reentrant, test-and-test-and-set spin locks.
Recursive use will result in a deadlock.
This kind of lock should only be used around code that takes little time
to execute and does not block (e.g. perform I/O).
In general, [`ReentrantLock`](@ref) should be used instead.

Each [`lock`](@ref) must be matched with an [`unlock`](@ref).
If [`!islocked(lck::SpinLock)`](@ref islocked) holds, [`trylock(lck)`](@ref trylock)
If [`!islocked(lck::AbstractSpinLock)`](@ref islocked) holds, [`trylock(lck)`](@ref trylock)
succeeds unless there are other tasks attempting to hold the lock "at the same time."

Test-and-test-and-set spin locks are quickest up to about 30 contending
threads. If you have more contention than that, different
synchronization approaches should be considered.
"""
mutable struct SpinLock <: AbstractLock
abstract type AbstractSpinLock <: AbstractLock end

"""
SpinLock() <: AbstractSpinLock

A `SpinLock` is not padded, so adjacent locks may suffer from false sharing.
See [`PaddedSpinLock`](@ref) for a padded variant.
"""
mutable struct SpinLock <: AbstractSpinLock
# Unlike `PaddedSpinLock` below, this struct is not padded against false sharing.
@atomic owned::Int
SpinLock() = new(0)
end
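
A minimal usage sketch of the lock/unlock discipline described above (not part of this diff; the counter and loop bound are illustrative only):

```julia
using Base.Threads: SpinLock, @threads

const lk = SpinLock()
total = 0

@threads for _ in 1:1_000
    lock(lk)                  # every lock() ...
    try
        global total += 1     # keep the critical section short and non-blocking
    finally
        unlock(lk)            # ... must be matched with an unlock()
    end
end
```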

# TODO: Use CPUID to figure out the cache line size? Meanwhile, this is correct for most
# processors.
const CACHE_LINE_SIZE = 64
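
One possible way to address the TODO above, sketched under the assumption of Linux's sysfs layout (this is not what the diff does; it keeps the constant at 64):

```julia
# Sketch: probe the cache line size at runtime, falling back to 64 bytes.
function detect_cache_line_size()
    path = "/sys/devices/system/cpu/cpu0/cache/index0/coherency_line_size"
    isfile(path) || return 64    # non-Linux or unexpected sysfs layout
    try
        return parse(Int, strip(read(path, String)))
    catch
        return 64                # unreadable or malformed entry
    end
end
```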

"""
PaddedSpinLock() <: AbstractSpinLock

PaddedSpinLocks are padded so that each is guaranteed to be on its own cache line, to avoid
false sharing.
"""
mutable struct PaddedSpinLock <: AbstractSpinLock
# We make this much larger than necessary to minimize false-sharing.

# Strictly speaking, this is a little bit larger than it needs to be. For a 64-byte
# cache line, this results in the size being 120 bytes. Because these objects are
# 16-byte aligned, it would be enough if `PaddedSpinLock` was 112 bytes (with 48 bytes
# in the before padding and 56 bytes in the after padding).
_padding_before::NTuple{max(0, CACHE_LINE_SIZE - sizeof(Int)), UInt8}
@atomic owned::Int
_padding_after::NTuple{max(0, CACHE_LINE_SIZE - sizeof(Int)), UInt8}
PaddedSpinLock() = new(0)
end
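
A quick layout check, assuming a 64-bit platform and the 64-byte `CACHE_LINE_SIZE` above; the expected values follow from the padding comment in this struct:

```julia
using Base.Threads: SpinLock, PaddedSpinLock

@show sizeof(SpinLock)        # expected 8: just the Int `owned` field
@show sizeof(PaddedSpinLock)  # expected 120: 56 + 8 + 56, as explained above
```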

# Note: this cannot assert that the lock is held by the correct thread, because we do not
# track which thread locked it. Users beware.
Base.assert_havelock(l::SpinLock) = islocked(l) ? nothing : Base.concurrency_violation()
Base.assert_havelock(l::AbstractSpinLock) = islocked(l) ? nothing : Base.concurrency_violation()

function lock(l::SpinLock)
function lock(l::AbstractSpinLock)
while true
if @inline trylock(l)
return
@@ -49,7 +79,7 @@ function lock(l::SpinLock)
end
end

function trylock(l::SpinLock)
function trylock(l::AbstractSpinLock)
if l.owned == 0
GC.disable_finalizers()
p = @atomicswap :acquire l.owned = 1
@@ -61,7 +91,7 @@ function trylock(l::SpinLock)
return false
end

function unlock(l::SpinLock)
function unlock(l::AbstractSpinLock)
if (@atomicswap :release l.owned = 0) == 0
error("unlock count must match lock count")
end
@@ -70,6 +100,6 @@ function unlock(l::SpinLock)
return
end

function islocked(l::SpinLock)
function islocked(l::AbstractSpinLock)
return (@atomic :monotonic l.owned) != 0
end
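
To show where the padding is intended to help, here is a hedged sketch (not from the PR) in which each worker repeatedly takes its own lock; plain `SpinLock`s allocated back to back could end up with their hot `owned` words on one cache line, while `PaddedSpinLock` keeps them apart:

```julia
using Base.Threads

# One padded lock per thread; consecutive heap allocations are likely adjacent,
# which is exactly the situation padding against false sharing targets.
locks = [PaddedSpinLock() for _ in 1:Threads.nthreads()]

Threads.@threads for i in 1:Threads.nthreads()
    lk = locks[i]
    for _ in 1:100_000
        lock(lk)      # uncontended: each worker only ever touches its own lock
        unlock(lk)
    end
end
```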
8 changes: 8 additions & 0 deletions doc/src/base/multi-threading.md
@@ -68,6 +68,14 @@ Base.@threadcall

These building blocks are used to create the regular synchronization objects.

```@docs
Base.Threads.AbstractSpinLock
```

```@docs
Base.Threads.SpinLock
```

```@docs
Base.Threads.PaddedSpinLock
```
23 changes: 23 additions & 0 deletions test/threads_exec.jl
@@ -53,6 +53,29 @@ if threadpoolsize() > 1
end
end

@testset "basic padded spin lock check" begin
if threadpoolsize() > 1
let lk = PaddedSpinLock()
c1 = Base.Event()
c2 = Base.Event()
@test trylock(lk)
@test !trylock(lk)
# t1 waits in lock(lk) until the main task unlocks below; its final trylock should then succeed.
t1 = Threads.@spawn (notify(c1); lock(lk); unlock(lk); trylock(lk))
# t2 calls trylock while the main task still holds lk, so it should return false.
t2 = Threads.@spawn (notify(c2); trylock(lk))
Libc.systemsleep(0.1) # block our thread from scheduling for a bit
wait(c1)
wait(c2)
@test !fetch(t2)
@test istaskdone(t2)
@test !istaskdone(t1)
unlock(lk)
@test fetch(t1)
@test istaskdone(t1)
end
end
end


# threading constructs

let a = zeros(Int, 2 * threadpoolsize())
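
To run the new testset locally, the standard test harness should pick it up via the threads target; the exact call below is an assumption about the usual workflow rather than something this PR specifies:

```julia
# Sketch, from a Julia built off this branch (test-target name assumed):
Base.runtests(["threads"]; ncores = 4)
```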