std::atomic_thread_fence

Defined in header <atomic>
extern "C" void atomic_thread_fence( std::memory_order order ) noexcept;
(since C++11)

Establishes memory synchronization ordering of non-atomic and relaxed atomic accesses, as instructed by order, without an associated atomic operation. Note, however, that at least one atomic operation is required to set up the synchronization, as described below.


Fence-atomic synchronization

A release fence F in thread A synchronizes-with an atomic acquire operation Y in thread B, if

  • there exists an atomic store X (with any memory order),
  • Y reads the value written by X (or the value that would be written by the release sequence headed by X if X were a release operation),
  • F is sequenced-before X in thread A.

In this case, all non-atomic and relaxed atomic stores that are sequenced-before F in thread A will happen-before all non-atomic and relaxed atomic loads from the same locations made in thread B after Y.
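A minimal sketch of this pattern (the names payload, ready, producer, and consumer are illustrative, not part of any library):

#include <atomic>
#include <cassert>
#include <thread>
 
int payload = 0;                  // non-atomic data
std::atomic<bool> ready{false};
 
void producer() // thread A
{
    payload = 42;                                         // non-atomic store
    std::atomic_thread_fence(std::memory_order_release);  // release fence F
    ready.store(true, std::memory_order_relaxed);         // atomic store X (any memory order)
}
 
void consumer() // thread B
{
    if (ready.load(std::memory_order_acquire)) // atomic acquire operation Y
        assert(payload == 42); // F synchronizes-with Y, so the store to payload happens-before this load
}
 
int main()
{
    std::thread a(producer), b(consumer);
    a.join();
    b.join();
}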

Atomic-fence synchronization

An atomic release operation X in thread A synchronizes-with an acquire fence F in thread B, if

  • there exists an atomic read Y (with any memory order),
  • Y reads the value written by X (or by the release sequence headed by X),
  • Y is sequenced-before F in thread B.

In this case, all non-atomic and relaxed atomic stores that are sequenced-before X in thread A will happen-before all non-atomic and relaxed atomic loads from the same locations made in thread B after F.
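The mailbox example in the Example section below uses this pattern; a minimal sketch (again with illustrative names) looks like this:

#include <atomic>
#include <cassert>
#include <thread>
 
int payload = 0;                  // non-atomic data
std::atomic<bool> ready{false};
 
void producer() // thread A
{
    payload = 42;
    ready.store(true, std::memory_order_release); // atomic release operation X
}
 
void consumer() // thread B
{
    if (ready.load(std::memory_order_relaxed))    // atomic read Y (any memory order)
    {
        std::atomic_thread_fence(std::memory_order_acquire); // acquire fence F
        assert(payload == 42); // X synchronizes-with F, so the store to payload happens-before this load
    }
}
 
int main()
{
    std::thread a(producer), b(consumer);
    a.join();
    b.join();
}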

Fence-fence synchronization

A release fence FA in thread A synchronizes-with an acquire fence FB in thread B, if

  • there exists an atomic object M,
  • there exists an atomic write X (with any memory order) that modifies M in thread A,
  • FA is sequenced-before X in thread A,
  • there exists an atomic read Y (with any memory order) in thread B,
  • Y reads the value written by X (or the value that would be written by the release sequence headed by X if X were a release operation),
  • Y is sequenced-before FB in thread B.

In this case, all non-atomic and relaxed atomic stores that are sequenced-before FA in thread A will happen-before all non-atomic and relaxed atomic loads from the same locations made in thread B after FB.

Parameters

order - the memory ordering executed by this fence

Return value

(none)

Notes

On x86 (including x86-64), atomic_thread_fence functions issue no CPU instructions and only affect compile-time code motion, except for std::atomic_thread_fence(std::memory_order_seq_cst).
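The difference can be seen in the classic store-buffering pattern: only a sequentially consistent fence rules out the outcome in which both threads read 0, and that is exactly the store-then-load reordering x86 otherwise permits. A sketch with illustrative names:

#include <atomic>
 
std::atomic<int> x{0}, y{0};
int r1, r2;
 
void thread1()
{
    x.store(1, std::memory_order_relaxed);
    std::atomic_thread_fence(std::memory_order_seq_cst); // full fence; emits an instruction on x86
    r1 = y.load(std::memory_order_relaxed);
}
 
void thread2()
{
    y.store(1, std::memory_order_relaxed);
    std::atomic_thread_fence(std::memory_order_seq_cst); // full fence; emits an instruction on x86
    r2 = x.load(std::memory_order_relaxed);
}
 
// With the seq_cst fences, r1 == 0 && r2 == 0 cannot happen: the fences are
// totally ordered, and the thread whose fence comes later is guaranteed to
// observe the store made before the earlier fence.
// With acquire/release fences (or no fences), that outcome is allowed.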

atomic_thread_fence imposes stronger synchronization constraints than an atomic store operation with the same std::memory_order. While an atomic store-release operation prevents all preceding reads and writes from moving past the store-release, an atomic_thread_fence with std::memory_order_release ordering prevents all preceding reads and writes from moving past all subsequent stores.

Fence-fence synchronization can be used to add synchronization to a sequence of several relaxed atomic operations, for example:

#include <atomic>
#include <string>
 
// Global
std::string computation(int);
void print(std::string);
 
std::atomic<int> arr[3] = {-1, -1, -1};
std::string data[1000]; // non-atomic data
 
// Thread A, compute 3 values.
void ThreadA(int v0, int v1, int v2)
{
//  assume 0 <= v0, v1, v2 < 1000
    data[v0] = computation(v0);
    data[v1] = computation(v1);
    data[v2] = computation(v2);
    std::atomic_thread_fence(std::memory_order_release);
    std::atomic_store_explicit(&arr[0], v0, std::memory_order_relaxed);
    std::atomic_store_explicit(&arr[1], v1, std::memory_order_relaxed);
    std::atomic_store_explicit(&arr[2], v2, std::memory_order_relaxed);
}
 
// Thread B, prints between 0 and 3 values already computed.
void ThreadB()
{
    int v0 = std::atomic_load_explicit(&arr[0], std::memory_order_relaxed);
    int v1 = std::atomic_load_explicit(&arr[1], std::memory_order_relaxed);
    int v2 = std::atomic_load_explicit(&arr[2], std::memory_order_relaxed);
    std::atomic_thread_fence(std::memory_order_acquire);
//  Some or all of v0, v1, v2 might still be -1.
//  Otherwise it is safe to read the non-atomic data because of the fences:
    if (v0 != -1)
        print(data[v0]);
    if (v1 != -1)
        print(data[v1]);
    if (v2 != -1)
        print(data[v2]);
}

Example

Scan an array of mailboxes, and process only the ones intended for us, without unnecessary synchronization. This example uses atomic-fence synchronization.

const int num_mailboxes = 32;
std::atomic<int> mailbox_receiver[num_mailboxes];
std::string mailbox_data[num_mailboxes];
 
// The writer threads update non-atomic shared data 
// and then update mailbox_receiver[i] as follows:
mailbox_data[i] = ...;
std::atomic_store_explicit(&mailbox_receiver[i], receiver_id, std::memory_order_release);
 
// The reader thread needs to check all mailbox_receiver[i], but only needs to synchronize with one.
for (int i = 0; i < num_mailboxes; ++i)
    if (std::atomic_load_explicit(&mailbox_receiver[i],
        std::memory_order_relaxed) == my_id)
    {
        // synchronize with just one writer
        std::atomic_thread_fence(std::memory_order_acquire);
        // guaranteed to observe everything done in the writer thread
        // before the atomic_store_explicit()
        do_work(mailbox_data[i]);
    }

See also

memory_order (C++11)
defines memory ordering constraints for the given atomic operation
(enum)
atomic_signal_fence (C++11)
fence between a thread and a signal handler executed in the same thread
(function)
C documentation for atomic_thread_fence