LLVM supports instructions which are well-defined in the presence of threads and asynchronous signals.
The atomic instructions are designed specifically to provide readable IR and optimized code generation for constructs such as the C11/C++11 atomic types, Java-style volatile and shared variables, and gcc-compatible __sync_* builtins.
Atomic and volatile in the IR are orthogonal; “volatile” is the C/C++ volatile, which ensures that every volatile load and store happens and is performed in the stated order. A couple examples: if a SequentiallyConsistent store is immediately followed by another SequentiallyConsistent store to the same address, the first store can be erased. This transformation is not allowed for a pair of volatile stores. On the other hand, a non-volatile non-atomic load can be moved across a volatile load freely, but not an Acquire load.
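A minimal IR sketch (the pointer %p is hypothetical); atomic and volatile can appear separately or together on the same operation:

  %v1 = load atomic i32, ptr %p seq_cst, align 4           ; atomic, not volatile
  %v2 = load volatile i32, ptr %p, align 4                 ; volatile, not atomic
  %v3 = load atomic volatile i32, ptr %p seq_cst, align 4  ; both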
This document is intended as a guide for anyone writing a frontend for LLVM, or working on LLVM optimization passes, who needs to deal with instructions with special semantics in the presence of concurrency. It is not intended to be a precise specification of those semantics; the details can get extremely complicated and unreadable, and are not usually necessary.
The basic 'load' and 'store' allow a variety of optimizations, but can lead to undefined results in a concurrent environment; see NotAtomic. This section specifically goes into the one optimizer restriction which applies in concurrent environments, which gets a bit more of an extended description because any optimization dealing with stores needs to be aware of it.
From the optimizer’s point of view, the rule is that if there are not any instructions with atomic ordering involved, concurrency does not matter, with one exception: if a variable might be visible to another thread or signal handler, a store cannot be inserted along a path where it might not execute otherwise. Take the following example:
/* C code, for readability; run through clang -O2 -S -emit-llvm to get
   equivalent IR */
int x;
void f(int* a) {
  for (int i = 0; i < 100; i++) {
    if (a[i])
      x += 1;
  }
}
The following is equivalent in non-concurrent situations:
int x;
void f(int* a) {
  int xtemp = x;
  for (int i = 0; i < 100; i++) {
    if (a[i])
      xtemp += 1;
  }
  x = xtemp;
}
However, LLVM is not allowed to transform the former to the latter: it could indirectly introduce undefined behavior if another thread can access x at the same time. (This example is particularly of interest because before the concurrency model was implemented, LLVM would perform this transformation.)
Note that speculative loads are allowed; a load which is part of a race returns undef, but does not have undefined behavior.
For cases where simple loads and stores are not sufficient, LLVM provides various atomic instructions. The exact guarantees provided depend on the ordering; see Atomic orderings.
load atomic and store atomic provide the same basic functionality as non-atomic loads and stores, but provide additional guarantees in situations where threads and signals are involved.
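A sketch, with hypothetical pointers %p and %q:

  %val = load atomic i32, ptr %p acquire, align 4   ; atomic load, Acquire ordering
  store atomic i32 %val, ptr %q release, align 4    ; atomic store, Release ordering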
cmpxchg and atomicrmw are essentially like an atomic load followed by an atomic store (where the store is conditional for cmpxchg), but no other memory operation can happen on any thread between the load and store.
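A sketch of both, again with a hypothetical pointer %p:

  ; Replace *%p with 1 only if it currently holds 0; yields { old value, success }.
  %pair = cmpxchg ptr %p, i32 0, i32 1 seq_cst seq_cst
  %old = extractvalue { i32, i1 } %pair, 0
  ; Atomically add 1 to *%p, yielding the value it held beforehand.
  %prev = atomicrmw add ptr %p, i32 1 seq_cst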
A fence provides Acquire and/or Release ordering which is not part of another operation; it is normally used along with Monotonic memory operations. A Monotonic load followed by an Acquire fence is roughly equivalent to an Acquire load, and a Monotonic store following a Release fence is roughly equivalent to a Release store. SequentiallyConsistent fences behave as both an Acquire and a Release fence, and offer some additional complicated guarantees, see the C++11 standard for details.
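A sketch of the first equivalence:

  ; Together, roughly equivalent to "%v = load atomic i32, ptr %p acquire, align 4".
  %v = load atomic i32, ptr %p monotonic, align 4
  fence acquire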
Frontends generating atomic instructions generally need to be aware of the target to some degree; atomic instructions are guaranteed to be lock-free, and therefore an instruction which is wider than the target natively supports can be impossible to generate.
In order to achieve a balance between performance and necessary guarantees, there are six levels of atomicity. They are listed below in order of strength; each level includes all the guarantees of the weaker levels, with the exception that Acquire and Release are siblings, neither subsuming the other. (See also LangRef Ordering.) The sketch after the list shows how each ordering is spelled in the IR.
NotAtomic is the obvious: an ordinary load or store which is not atomic. (This isn't really a level of atomicity, but is listed here for comparison.) If there is a race on a given memory location, loads from that location return undef.
Unordered is the lowest level of atomicity. It essentially guarantees that races produce somewhat sane results instead of having undefined behavior. It also guarantees the operation to be lock-free, so it does not depend on the data being part of a special atomic structure or depend on a separate per-process global lock. Note that code generation will fail for unsupported atomic operations; if you need such an operation, use explicit locking.
Monotonic is the weakest level of atomicity that can be used in synchronization primitives, although it does not provide any general synchronization. It essentially guarantees that if you take all the operations affecting a specific address, a consistent ordering exists.
Acquire provides a barrier of the sort necessary to acquire a lock to access other memory with normal loads and stores.
Release is similar to Acquire, but with a barrier of the sort necessary to release a lock.
AcquireRelease (acq_rel in IR) provides both an Acquire and a Release barrier (for fences and operations which both read and write memory).
SequentiallyConsistent (seq_cst in IR) provides Acquire semantics for loads and Release semantics for stores. Additionally, it guarantees that a total ordering exists between all SequentiallyConsistent operations.
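As spelled in the IR (a sketch using a hypothetical pointer %p; each ordering is only valid on certain operations):

  %n = load i32, ptr %p, align 4                    ; NotAtomic
  %a = load atomic i32, ptr %p unordered, align 4   ; Unordered
  %b = load atomic i32, ptr %p monotonic, align 4   ; Monotonic
  %c = load atomic i32, ptr %p acquire, align 4     ; Acquire
  store atomic i32 %c, ptr %p release, align 4      ; Release
  %d = atomicrmw add ptr %p, i32 1 acq_rel          ; AcquireRelease
  %e = load atomic i32, ptr %p seq_cst, align 4     ; SequentiallyConsistent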
To support optimizing around atomic operations, make sure you query the right predicates; everything should work if that is done. For example, isSimple() holds for a load or store which is neither volatile nor atomic, and isUnordered() holds for one which is not volatile and at most Unordered. If your pass should optimize some atomic operations (Unordered operations in particular), make sure it doesn't replace an atomic load or store with a non-atomic operation.
Some examples of how optimizations interact with the various kinds of atomic operations: Unordered loads and stores can largely be optimized like ordinary ones (for instance, an Unordered store can be dead-store-eliminated), Monotonic and stronger operations must conservatively be treated as both a read and a write of their location, and an atomic load from a constant address can be constant-folded because it cannot participate in a race.
Atomic operations are represented in the SelectionDAG with ATOMIC_* opcodes. On architectures which use barrier instructions for all atomic ordering (like ARM), appropriate fences can be emitted by the AtomicExpand Codegen pass if setInsertFencesForAtomic() was used.
The MachineMemOperand for all atomic operations is currently marked as volatile; this is not correct in the IR sense of volatile, but CodeGen handles anything marked volatile very conservatively. This should get fixed at some point.
One very important property of the atomic operations is that if your backend supports any inline lock-free atomic operations of a given size, you should support ALL operations of that size in a lock-free manner.
When the target implements atomic cmpxchg or LL/SC instructions (as most do), this is trivial: all the other operations can be implemented on top of those primitives. However, on many older CPUs (e.g. ARMv5, SparcV8, Intel 80386) there are atomic load and store instructions, but no cmpxchg or LL/SC. It would be invalid to implement atomic load with the native instruction while implementing cmpxchg as a library call that takes a mutex, so atomic load must also expand to a library call on such architectures; by using the same mutex, it remains atomic with respect to a simultaneous cmpxchg.
AtomicExpandPass can help with that: it will expand all atomic operations to the proper __atomic_* libcalls for any size above the maximum set by setMaxAtomicSizeInBitsSupported (which defaults to 0).
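For example, a sketch assuming a 64-bit target that supports nothing wider than 64 bits inline: a seq_cst i128 load is rewritten by the pass into a call such as the following, where the constant 5 encodes seq_cst in the libcall ABI:

  ; Before: %wide = load atomic i128, ptr %p seq_cst, align 16
  %tmp = alloca i128, align 16
  call void @__atomic_load(i64 16, ptr %p, ptr %tmp, i32 5)
  %wide = load i128, ptr %tmp, align 16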
On x86, all atomic loads generate a MOV. SequentiallyConsistent stores generate an XCHG, other stores generate a MOV. SequentiallyConsistent fences generate an MFENCE, other fences do not cause any code to be generated. cmpxchg uses the LOCK CMPXCHG instruction. atomicrmw xchg uses XCHG, atomicrmw add and atomicrmw sub use XADD, and all other atomicrmw operations generate a loop with LOCK CMPXCHG. Depending on the users of the result, some atomicrmw operations can be translated into operations like LOCK AND, but that does not work in general.
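A sketch of those mappings, with the x86 sequence each operation becomes noted alongside (%p and %q are hypothetical):

  %v = load atomic i32, ptr %p seq_cst, align 4       ; MOV
  store atomic i32 %v, ptr %q seq_cst, align 4        ; XCHG
  store atomic i32 %v, ptr %q release, align 4        ; MOV
  fence seq_cst                                       ; MFENCE
  %pr = cmpxchg ptr %q, i32 0, i32 1 seq_cst seq_cst  ; LOCK CMPXCHG
  %s = atomicrmw add ptr %q, i32 1 seq_cst            ; LOCK XADD
  %m = atomicrmw max ptr %q, i32 1 seq_cst            ; loop with LOCK CMPXCHG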
On ARM (before v8), MIPS, and many other RISC architectures, Acquire, Release, and SequentiallyConsistent semantics require barrier instructions for every such operation. Loads and stores generate normal instructions. cmpxchg and atomicrmw can be represented using a loop with LL/SC-style instructions which take some sort of exclusive lock on a cache line (LDREX and STREX on ARM, etc.).
It is often easiest for backends to use AtomicExpandPass to lower some of the atomic constructs. Here are some lowerings it can do:
cmpxchg -> loop with load-linked/store-conditional, by overriding shouldExpandAtomicCmpXchgInIR(), emitLoadLinked(), and emitStoreConditional()
large loads/stores -> ll-sc/cmpxchg, by overriding shouldExpandAtomicStoreInIR()/shouldExpandAtomicLoadInIR()
strong atomic accesses -> monotonic accesses plus fences, by overriding shouldInsertFencesForAtomic(), emitLeadingFence(), and emitTrailingFence()
atomic rmw -> loop with cmpxchg or load-linked/store-conditional, by overriding expandAtomicRMWInIR()
expansion to __atomic_* libcalls for unsupported sizes
For an example of all of these, look at the ARM backend.
There are two kinds of atomic library calls that are generated by LLVM. Please note that both sets of library functions somewhat confusingly share the names of builtin functions defined by clang. Despite this, the library functions are not directly related to the builtins: it is not the case that __atomic_* builtins lower to __atomic_* library calls and __sync_* builtins lower to __sync_* library calls.
The first set of library functions are named __atomic_*. This set has been “standardized” by GCC, and is described below. (See also GCC’s documentation)
LLVM’s AtomicExpandPass will translate atomic operations on data sizes above MaxAtomicSizeInBitsSupported into calls to these functions.
There are four generic functions, which can be called with data of any size or alignment:
void __atomic_load(size_t size, void *ptr, void *ret, int ordering)
void __atomic_store(size_t size, void *ptr, void *val, int ordering)
void __atomic_exchange(size_t size, void *ptr, void *val, void *ret, int ordering)
bool __atomic_compare_exchange(size_t size, void *ptr, void *expected, void *desired, int success_order, int failure_order)
There are also size-specialized versions of the above functions, which can only be used with naturally-aligned pointers of the appropriate size. In the signatures below, “N” is one of 1, 2, 4, 8, and 16, and “iN” is the appropriate integer type of that size; if no such integer type exists, the specialization cannot be used:
iN __atomic_load_N(iN *ptr, int ordering)
void __atomic_store_N(iN *ptr, iN val, int ordering)
iN __atomic_exchange_N(iN *ptr, iN val, int ordering)
bool __atomic_compare_exchange_N(iN *ptr, iN *expected, iN desired, int success_order, int failure_order)
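For instance, a sketch of what a seq_cst i64 load becomes on a target whose MaxAtomicSizeInBitsSupported is below 64 (5 again encodes seq_cst):

  ; Before: %v = load atomic i64, ptr %p seq_cst, align 8
  %v = call i64 @__atomic_load_8(ptr %p, i32 5)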
Finally there are some read-modify-write functions, which are only available in the size-specific variants (any other sizes use a __atomic_compare_exchange loop):
iN __atomic_fetch_add_N(iN *ptr, iN val, int ordering)
iN __atomic_fetch_sub_N(iN *ptr, iN val, int ordering)
iN __atomic_fetch_and_N(iN *ptr, iN val, int ordering)
iN __atomic_fetch_or_N(iN *ptr, iN val, int ordering)
iN __atomic_fetch_xor_N(iN *ptr, iN val, int ordering)
iN __atomic_fetch_nand_N(iN *ptr, iN val, int ordering)
This set of library functions has some interesting implementation requirements to take note of: they support all sizes and alignments, including those which cannot be implemented natively on any existing hardware, so certain sizes and alignments must be implemented with a mutex. As a consequence, these functions cannot be shipped in a statically linked compiler-support library; the lock state has to be shared amongst all the DSOs loaded in the program, so the functions must live in a shared library used by all of them (GCC's libatomic is such a library).
Note that it’s possible to write an entirely target-independent implementation of these library functions by using the compiler atomic builtins themselves to implement the operations on naturally-aligned pointers of supported sizes, and a generic mutex implementation otherwise.
Some targets or OS/target combinations can support lock-free atomics, but for various reasons, it is not practical to emit the instructions inline.
There are two typical examples of this.
Some CPUs support multiple instruction sets which can be switched back and forth on function-call boundaries. For example, MIPS supports the MIPS16 ISA, which has a smaller instruction encoding than the usual MIPS32 ISA. ARM, similarly, has the Thumb ISA. In MIPS16 and earlier versions of Thumb, the atomic instructions are not encodable. However, those instructions are available via a call to a function compiled with the longer encoding.
Additionally, a few OS/target pairs provide kernel-supported lock-free atomics. ARM/Linux is an example of this: the kernel provides a function which on older CPUs contains a “magically-restartable” atomic sequence (which looks atomic so long as there’s only one CPU), and contains actual atomic instructions on newer multicore models. This sort of functionality can typically be provided on any architecture, if all CPUs which are missing atomic compare-and-swap support are uniprocessor (no SMP). This is almost always the case. The only common architecture without that property is SPARC – SPARCV8 SMP systems were common, yet it doesn’t support any sort of compare-and-swap operation.
In either of these cases, the Target in LLVM can claim support for atomics of an appropriate size, and then implement some subset of the operations via libcalls to a __sync_* function. Such functions must not use locks in their implementation, because unlike the __atomic_* routines used by AtomicExpandPass, these may be mixed-and-matched with native instructions by the target lowering.
Further, these routines do not need to be shared, as they are stateless. So, there is no issue with having multiple copies included in one binary. Thus, typically these routines are implemented by the statically-linked compiler runtime support library.
LLVM will emit a call to an appropriate __sync_* routine if the target ISelLowering code has set the corresponding ATOMIC_CMPXCHG, ATOMIC_SWAP, or ATOMIC_LOAD_* operation to "Expand", and if it has opted into the availability of those library functions via a call to initSyncLibcalls().
The full set of functions that may be called by LLVM is (for N being 1, 2, 4, 8, or 16):
iN __sync_val_compare_and_swap_N(iN *ptr, iN expected, iN desired)
iN __sync_lock_test_and_set_N(iN *ptr, iN val)
iN __sync_fetch_and_add_N(iN *ptr, iN val)
iN __sync_fetch_and_sub_N(iN *ptr, iN val)
iN __sync_fetch_and_and_N(iN *ptr, iN val)
iN __sync_fetch_and_or_N(iN *ptr, iN val)
iN __sync_fetch_and_xor_N(iN *ptr, iN val)
iN __sync_fetch_and_nand_N(iN *ptr, iN val)
iN __sync_fetch_and_max_N(iN *ptr, iN val)
iN __sync_fetch_and_umax_N(iN *ptr, iN val)
iN __sync_fetch_and_min_N(iN *ptr, iN val)
iN __sync_fetch_and_umin_N(iN *ptr, iN val)
This list doesn’t include any function for atomic load or store; all known architectures support atomic loads and stores directly (possibly by emitting a fence on either side of a normal load or store.)
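For example, a sketch for a hypothetical target which has marked 32-bit ATOMIC_SWAP as Expand and opted into these libcalls:

  ; Before lowering: %old = atomicrmw xchg ptr %p, i32 %v seq_cst
  %old = call i32 @__sync_lock_test_and_set_4(ptr %p, i32 %v)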
There is also, somewhat separately, the possibility of lowering ATOMIC_FENCE to __sync_synchronize(). This may or may not happen independently of all the above, controlled purely by setOperationAction(ISD::ATOMIC_FENCE, ...).
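A sketch, for a hypothetical target which marks ISD::ATOMIC_FENCE as Expand:

  ; Before lowering: fence seq_cst
  call void @__sync_synchronize()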