                         ============================
                         LINUX KERNEL MEMORY BARRIERS
                         ============================

By: David Howells <dhowells@redhat.com>
    Paul E. McKenney <paulmck@linux.vnet.ibm.com>

Contents:

 (*) Abstract memory access model.

     - Device operations.
     - Guarantees.

 (*) What are memory barriers?

     - Varieties of memory barrier.
     - What may not be assumed about memory barriers?
     - Data dependency barriers.
     - Control dependencies.
     - SMP barrier pairing.
     - Examples of memory barrier sequences.
     - Read memory barriers vs load speculation.
     - Transitivity.

 (*) Explicit kernel barriers.

     - Compiler barrier.
     - CPU memory barriers.
     - MMIO write barrier.

 (*) Implicit kernel memory barriers.

     - Locking functions.
     - Interrupt disabling functions.
     - Sleep and wake-up functions.
     - Miscellaneous functions.

 (*) Inter-CPU locking barrier effects.

     - Locks vs memory accesses.
     - Locks vs I/O accesses.

 (*) Where are memory barriers needed?

     - Interprocessor interaction.
     - Atomic operations.
     - Accessing devices.
     - Interrupts.

 (*) Kernel I/O barrier effects.

 (*) Assumed minimum execution ordering model.

 (*) The effects of the CPU cache.

     - Cache coherency.
     - Cache coherency vs DMA.
     - Cache coherency vs MMIO.

 (*) The things CPUs get up to.

     - And then there's the Alpha.

 (*) Example uses.

     - Circular buffers.

 (*) References.


============================
ABSTRACT MEMORY ACCESS MODEL
============================

Consider the following abstract model of the system:

                            :                :
                            :                :
                            :                :
                +-------+   :   +--------+   :   +-------+
                |       |   :   |        |   :   |       |
                |       |   :   |        |   :   |       |
                | CPU 1 |<----->| Memory |<----->| CPU 2 |
                |       |   :   |        |   :   |       |
                |       |   :   |        |   :   |       |
                +-------+   :   +--------+   :   +-------+
                    ^       :       ^        :       ^
                    |       :       |        :       |
                    |       :       |        :       |
                    |       :       v        :       |
                    |       :   +--------+   :       |
                    |       :   |        |   :       |
                    |       :   |        |   :       |
                    +---------->| Device |<----------+
                            :   |        |   :
                            :   |        |   :
                            :   +--------+   :
                            :                :

Each CPU executes a program that generates memory access operations.  In the
abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
perform the memory operations in any order it likes, provided program causality
appears to be maintained.  Similarly, the compiler may also arrange the
instructions it emits in any order it likes, provided it doesn't affect the
apparent operation of the program.

So in the above diagram, the effects of the memory operations performed by a
CPU are perceived by the rest of the system as the operations cross the
interface between the CPU and the rest of the system (the dotted lines).


For example, consider the following sequence of events:

        CPU 1           CPU 2
        =============== ===============
        { A == 1; B == 2 }
        A = 3;          x = B;
        B = 4;          y = A;

The set of accesses as seen by the memory system in the middle can be arranged
in 24 different combinations:

        STORE A=3,      STORE B=4,      y=LOAD A->3,    x=LOAD B->4
        STORE A=3,      STORE B=4,      x=LOAD B->4,    y=LOAD A->3
        STORE A=3,      y=LOAD A->3,    STORE B=4,      x=LOAD B->4
        STORE A=3,      y=LOAD A->3,    x=LOAD B->2,    STORE B=4
        STORE A=3,      x=LOAD B->2,    STORE B=4,      y=LOAD A->3
        STORE A=3,      x=LOAD B->2,    y=LOAD A->3,    STORE B=4
        STORE B=4,      STORE A=3,      y=LOAD A->3,    x=LOAD B->4
        STORE B=4, ...
        ...

and can thus result in four different combinations of values:

        x == 2, y == 1
        x == 2, y == 3
        x == 4, y == 1
        x == 4, y == 3


Furthermore, the stores committed by a CPU to the memory system may not be
perceived by the loads made by another CPU in the same order as the stores were
committed.


As a further example, consider this sequence of events:

        CPU 1           CPU 2
        =============== ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;          Q = P;
        P = &B;         D = *Q;

There is an obvious data dependency here, as the value loaded into D depends on
the address retrieved from P by CPU 2.  At the end of the sequence, any of the
following results are possible:

        (Q == &A) and (D == 1)
        (Q == &B) and (D == 2)
        (Q == &B) and (D == 4)

Note that CPU 2 will never try to load C into D because the CPU will load P
into Q before issuing the load of *Q.


DEVICE OPERATIONS
-----------------

Some devices present their control interfaces as collections of memory
locations, but the order in which the control registers are accessed is very
important.  For instance, imagine an ethernet card with a set of internal
registers that are accessed through an address port register (A) and a data
port register (D).  To read internal register 5, the following code might then
be used:

        *A = 5;
        x = *D;

but this might show up as either of the following two sequences:

        STORE *A = 5, x = LOAD *D
        x = LOAD *D, STORE *A = 5

the second of which will almost certainly result in a malfunction, since it
sets the address _after_ attempting to read the register.

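In the Linux kernel such accesses would normally go through the kernel's MMIO
accessors rather than plain pointer dereferences; these are covered in the
"Kernel I/O barrier effects" section.  As a rough sketch (the device structure
and register offsets here are made up for illustration):

        writel(5, dev->regs + REG_ADDR_PORT);   /* select internal register 5 */
        x = readl(dev->regs + REG_DATA_PORT);   /* then read its value */

The accessors are intended to keep accesses to the same device in program
order, which is exactly the property this example needs.
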

GUARANTEES
----------

There are some minimal guarantees that may be expected of a CPU:

 (*) On any given CPU, dependent memory accesses will be issued in order, with
     respect to itself.  This means that for:

        Q = READ_ONCE(P); smp_read_barrier_depends(); D = READ_ONCE(*Q);

     the CPU will issue the following memory operations:

        Q = LOAD P, D = LOAD *Q

     and always in that order.  On most systems, smp_read_barrier_depends()
     does nothing, but it is required for DEC Alpha.  The READ_ONCE()
     is required to prevent compiler mischief.  Please note that you
     should normally use something like rcu_dereference() instead of
     open-coding smp_read_barrier_depends().

 (*) Overlapping loads and stores within a particular CPU will appear to be
     ordered within that CPU.  This means that for:

        a = READ_ONCE(*X); WRITE_ONCE(*X, b);

     the CPU will only issue the following sequence of memory operations:

        a = LOAD *X, STORE *X = b

     And for:

        WRITE_ONCE(*X, c); d = READ_ONCE(*X);

     the CPU will only issue:

        STORE *X = c, d = LOAD *X

     (Loads and stores overlap if they are targeted at overlapping pieces of
     memory.)

And there are a number of things that _must_ or _must_not_ be assumed:

 (*) It _must_not_ be assumed that the compiler will do what you want
     with memory references that are not protected by READ_ONCE() and
     WRITE_ONCE().  Without them, the compiler is within its rights to
     do all sorts of "creative" transformations, which are covered in
     the COMPILER BARRIER section.

 (*) It _must_not_ be assumed that independent loads and stores will be issued
     in the order given.  This means that for:

        X = *A; Y = *B; *D = Z;

     we may get any of the following sequences:

        X = LOAD *A,  Y = LOAD *B,  STORE *D = Z
        X = LOAD *A,  STORE *D = Z, Y = LOAD *B
        Y = LOAD *B,  X = LOAD *A,  STORE *D = Z
        Y = LOAD *B,  STORE *D = Z, X = LOAD *A
        STORE *D = Z, X = LOAD *A,  Y = LOAD *B
        STORE *D = Z, Y = LOAD *B,  X = LOAD *A

 (*) It _must_ be assumed that overlapping memory accesses may be merged or
     discarded.  This means that for:

        X = *A; Y = *(A + 4);

     we may get any one of the following sequences:

        X = LOAD *A; Y = LOAD *(A + 4);
        Y = LOAD *(A + 4); X = LOAD *A;
        {X, Y} = LOAD {*A, *(A + 4)};

     And for:

        *A = X; *(A + 4) = Y;

     we may get any of:

        STORE *A = X; STORE *(A + 4) = Y;
        STORE *(A + 4) = Y; STORE *A = X;
        STORE {*A, *(A + 4)} = {X, Y};

And there are anti-guarantees:

 (*) These guarantees do not apply to bitfields, because compilers often
     generate code to modify these using non-atomic read-modify-write
     sequences.  Do not attempt to use bitfields to synchronize parallel
     algorithms.

 (*) Even in cases where bitfields are protected by locks, all fields
     in a given bitfield must be protected by one lock.  If two fields
     in a given bitfield are protected by different locks, the compiler's
     non-atomic read-modify-write sequences can cause an update to one
     field to corrupt the value of an adjacent field (see the sketch
     following this list).

 (*) These guarantees apply only to properly aligned and sized scalar
     variables.  "Properly sized" currently means variables that are
     the same size as "char", "short", "int" and "long".  "Properly
     aligned" means the natural alignment, thus no constraints for
     "char", two-byte alignment for "short", four-byte alignment for
     "int", and either four-byte or eight-byte alignment for "long",
     on 32-bit and 64-bit systems, respectively.  Note that these
     guarantees were introduced into the C11 standard, so beware when
     using older pre-C11 compilers (for example, gcc 4.6).  The portion
     of the standard containing this guarantee is Section 3.14, which
     defines "memory location" as follows:

        memory location
                either an object of scalar type, or a maximal sequence
                of adjacent bit-fields all having nonzero width

                NOTE 1: Two threads of execution can update and access
                separate memory locations without interfering with
                each other.

                NOTE 2: A bit-field and an adjacent non-bit-field member
                are in separate memory locations. The same applies
                to two bit-fields, if one is declared inside a nested
                structure declaration and the other is not, or if the two
                are separated by a zero-length bit-field declaration,
                or if they are separated by a non-bit-field member
                declaration. It is not safe to concurrently update two
                bit-fields in the same structure if all members declared
                between them are also bit-fields, no matter what the
                sizes of those intervening bit-fields happen to be.

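As an illustration of the bitfield anti-guarantees above, consider this sketch
(the field names and locks are hypothetical):

        struct foo {
                int a:4;        /* protected by foo_a_lock */
                int b:4;        /* protected by foo_b_lock -- BUG! */
        };

An update to a compiles to a non-atomic read-modify-write of the word
containing both fields, so it can race with an update to b and silently undo
it, even though each CPU holds the lock for "its" field.  Making a and b full
"int"s (separate memory locations) removes the problem.
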

=========================
WHAT ARE MEMORY BARRIERS?
=========================

As can be seen above, independent memory operations are effectively performed
in random order, but this can be a problem for CPU-CPU interaction and for I/O.
What is required is some way of intervening to instruct the compiler and the
CPU to restrict the order.

Memory barriers are such interventions.  They impose a perceived partial
ordering over the memory operations on either side of the barrier.

Such enforcement is important because the CPUs and other devices in a system
can use a variety of tricks to improve performance, including reordering,
deferral and combination of memory operations; speculative loads; speculative
branch prediction and various types of caching.  Memory barriers are used to
override or suppress these tricks, allowing the code to sanely control the
interaction of multiple CPUs and/or devices.


VARIETIES OF MEMORY BARRIER
---------------------------

Memory barriers come in four basic varieties:

 (1) Write (or store) memory barriers.

     A write memory barrier gives a guarantee that all the STORE operations
     specified before the barrier will appear to happen before all the STORE
     operations specified after the barrier with respect to the other
     components of the system.

     A write barrier is a partial ordering on stores only; it is not required
     to have any effect on loads.

     A CPU can be viewed as committing a sequence of store operations to the
     memory system as time progresses.  All stores before a write barrier will
     occur in the sequence _before_ all the stores after the write barrier.

     [!] Note that write barriers should normally be paired with read or data
     dependency barriers; see the "SMP barrier pairing" subsection.


 (2) Data dependency barriers.

     A data dependency barrier is a weaker form of read barrier.  In the case
     where two loads are performed such that the second depends on the result
     of the first (eg: the first load retrieves the address to which the second
     load will be directed), a data dependency barrier would be required to
     make sure that the target of the second load is updated before the address
     obtained by the first load is accessed.

     A data dependency barrier is a partial ordering on interdependent loads
     only; it is not required to have any effect on stores, independent loads
     or overlapping loads.

     As mentioned in (1), the other CPUs in the system can be viewed as
     committing sequences of stores to the memory system that the CPU being
     considered can then perceive.  A data dependency barrier issued by the CPU
     under consideration guarantees that for any load preceding it, if that
     load touches one of a sequence of stores from another CPU, then by the
     time the barrier completes, the effects of all the stores prior to that
     touched by the load will be perceptible to any loads issued after the data
     dependency barrier.

     See the "Examples of memory barrier sequences" subsection for diagrams
     showing the ordering constraints.

     [!] Note that the first load really has to have a _data_ dependency and
     not a control dependency.  If the address for the second load is dependent
     on the first load, but the dependency is through a conditional rather than
     actually loading the address itself, then it's a _control_ dependency and
     a full read barrier or better is required.  See the "Control dependencies"
     subsection for more information.

     [!] Note that data dependency barriers should normally be paired with
     write barriers; see the "SMP barrier pairing" subsection.


 (3) Read (or load) memory barriers.

     A read barrier is a data dependency barrier plus a guarantee that all the
     LOAD operations specified before the barrier will appear to happen before
     all the LOAD operations specified after the barrier with respect to the
     other components of the system.

     A read barrier is a partial ordering on loads only; it is not required to
     have any effect on stores.

     Read memory barriers imply data dependency barriers, and so can substitute
     for them.

     [!] Note that read barriers should normally be paired with write barriers;
     see the "SMP barrier pairing" subsection.


 (4) General memory barriers.

     A general memory barrier gives a guarantee that all the LOAD and STORE
     operations specified before the barrier will appear to happen before all
     the LOAD and STORE operations specified after the barrier with respect to
     the other components of the system.

     A general memory barrier is a partial ordering over both loads and stores.

     General memory barriers imply both read and write memory barriers, and so
     can substitute for either.


And a couple of implicit varieties:

 (5) ACQUIRE operations.

     This acts as a one-way permeable barrier.  It guarantees that all memory
     operations after the ACQUIRE operation will appear to happen after the
     ACQUIRE operation with respect to the other components of the system.
     ACQUIRE operations include LOCK operations and smp_load_acquire()
     operations.

     Memory operations that occur before an ACQUIRE operation may appear to
     happen after it completes.

     An ACQUIRE operation should almost always be paired with a RELEASE
     operation.


 (6) RELEASE operations.

     This also acts as a one-way permeable barrier.  It guarantees that all
     memory operations before the RELEASE operation will appear to happen
     before the RELEASE operation with respect to the other components of the
     system.  RELEASE operations include UNLOCK operations and
     smp_store_release() operations.

     Memory operations that occur after a RELEASE operation may appear to
     happen before it completes.

     The use of ACQUIRE and RELEASE operations generally precludes the need
     for other sorts of memory barrier (but note the exceptions mentioned in
     the subsection "MMIO write barrier").  In addition, a RELEASE+ACQUIRE
     pair is -not- guaranteed to act as a full memory barrier.  However, after
     an ACQUIRE on a given variable, all memory accesses preceding any prior
     RELEASE on that same variable are guaranteed to be visible.  In other
     words, within a given variable's critical section, all accesses of all
     previous critical sections for that variable are guaranteed to have
     completed.

     This means that ACQUIRE acts as a minimal "acquire" operation and
     RELEASE acts as a minimal "release" operation.

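As a sketch of how ACQUIRE and RELEASE pair in practice, consider handing a
payload from one CPU to another (the variable names are made up for
illustration):

        int payload;            /* the data being handed over */
        int flag = 0;           /* the synchronization variable */

        CPU 1                           CPU 2
        =============================   =============================
        WRITE_ONCE(payload, 42);
        smp_store_release(&flag, 1);    while (!smp_load_acquire(&flag))
                                                cpu_relax();
                                        r1 = READ_ONCE(payload);

Once CPU 2 observes flag == 1, the RELEASE/ACQUIRE pairing guarantees that it
also observes payload == 42, so r1 == 42.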
 467
Memory barriers are only required where there's a possibility of interaction
between two CPUs or between a CPU and a device.  If it can be guaranteed that
there won't be any such interaction in any particular piece of code, then
memory barriers are unnecessary in that piece of code.


Note that these are the _minimum_ guarantees.  Different architectures may give
more substantial guarantees, but they may _not_ be relied upon outside of arch
specific code.


WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
----------------------------------------------

There are certain things that the Linux kernel memory barriers do not guarantee:

 (*) There is no guarantee that any of the memory accesses specified before a
     memory barrier will be _complete_ by the completion of a memory barrier
     instruction; the barrier can be considered to draw a line in that CPU's
     access queue that accesses of the appropriate type may not cross.

 (*) There is no guarantee that issuing a memory barrier on one CPU will have
     any direct effect on another CPU or any other hardware in the system.  The
     indirect effect will be the order in which the second CPU sees the effects
     of the first CPU's accesses occur, but see the next point:

 (*) There is no guarantee that a CPU will see the correct order of effects
     from a second CPU's accesses, even _if_ the second CPU uses a memory
     barrier, unless the first CPU _also_ uses a matching memory barrier (see
     the subsection on "SMP Barrier Pairing").

 (*) There is no guarantee that some intervening piece of off-the-CPU
     hardware[*] will not reorder the memory accesses.  CPU cache coherency
     mechanisms should propagate the indirect effects of a memory barrier
     between CPUs, but might not do so in order.

        [*] For information on bus mastering DMA and coherency please read:

            Documentation/PCI/pci.txt
            Documentation/DMA-API-HOWTO.txt
            Documentation/DMA-API.txt


DATA DEPENDENCY BARRIERS
------------------------

The usage requirements of data dependency barriers are a little subtle, and
it's not always obvious that they're needed.  To illustrate, consider the
following sequence of events:

        CPU 1                 CPU 2
        ===============       ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;
        <write barrier>
        WRITE_ONCE(P, &B);
                              Q = READ_ONCE(P);
                              D = *Q;

There's a clear data dependency here, and it would seem that by the end of the
sequence, Q must be either &A or &B, and that:

        (Q == &A) implies (D == 1)
        (Q == &B) implies (D == 4)

But!  CPU 2's perception of P may be updated _before_ its perception of B, thus
leading to the following situation:

        (Q == &B) and (D == 2) ????

Whilst this may seem like a failure of coherency or causality maintenance, it
isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
Alpha).

To deal with this, a data dependency barrier or better must be inserted
between the address load and the data load:

        CPU 1                 CPU 2
        ===============       ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;
        <write barrier>
        WRITE_ONCE(P, &B);
                              Q = READ_ONCE(P);
                              <data dependency barrier>
                              D = *Q;

This enforces the occurrence of one of the two implications, and prevents the
third possibility from arising.

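In kernel code this barrier would not normally be open-coded, but as a sketch,
CPU 2's side of the example corresponds to:

        Q = READ_ONCE(P);
        smp_read_barrier_depends();     /* a no-op everywhere except Alpha */
        D = READ_ONCE(*Q);

and would more usually be expressed with rcu_dereference(), which folds the
barrier in (see the RCU example later in this subsection).
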
A data dependency barrier must also order against dependent writes:

        CPU 1                 CPU 2
        ===============       ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;
        <write barrier>
        WRITE_ONCE(P, &B);
                              Q = READ_ONCE(P);
                              <data dependency barrier>
                              *Q = 5;

The data dependency barrier must order the read into Q with the store
into *Q.  This prohibits this outcome:

        (Q == &B) && (B == 4)

Please note that this pattern should be rare.  After all, the whole point
of dependency ordering is to -prevent- writes to the data structure, along
with the expensive cache misses associated with those writes.  This pattern
can be used to record rare error conditions and the like, and the ordering
prevents such records from being lost.


[!] Note that this extremely counterintuitive situation arises most easily on
machines with split caches, so that, for example, one cache bank processes
even-numbered cache lines and the other bank processes odd-numbered cache
lines.  The pointer P might be stored in an odd-numbered cache line, and the
variable B might be stored in an even-numbered cache line.  Then, if the
even-numbered bank of the reading CPU's cache is extremely busy while the
odd-numbered bank is idle, one can see the new value of the pointer P (&B),
but the old value of the variable B (2).


The data dependency barrier is very important to the RCU system,
for example.  See rcu_assign_pointer() and rcu_dereference() in
include/linux/rcupdate.h.  This permits the current target of an RCU'd
pointer to be replaced with a new modified target, without the replacement
target appearing to be incompletely initialised.

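A minimal sketch of that RCU publish/subscribe pattern (the structure, gp, and
do_something_with() are made up for illustration):

        /* Updater */
        p = kmalloc(sizeof(*p), GFP_KERNEL);
        p->a = 1;
        rcu_assign_pointer(gp, p);      /* provides the write barrier */

        /* Reader */
        rcu_read_lock();
        q = rcu_dereference(gp);        /* provides the dependency barrier */
        if (q)
                do_something_with(q->a);  /* guaranteed to see q->a == 1 */
        rcu_read_unlock();
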
See also the subsection on "Cache Coherency" for a more thorough example.


CONTROL DEPENDENCIES
--------------------

A load-load control dependency requires a full read memory barrier, not
simply a data dependency barrier, to make it work correctly.  Consider the
following bit of code:

        q = READ_ONCE(a);
        if (q) {
                <data dependency barrier>  /* BUG: No data dependency!!! */
                p = READ_ONCE(b);
        }

This will not have the desired effect because there is no actual data
dependency, but rather a control dependency that the CPU may short-circuit
by attempting to predict the outcome in advance, so that other CPUs see
the load from b as having happened before the load from a.  In such a
case what's actually required is:

        q = READ_ONCE(a);
        if (q) {
                <read barrier>
                p = READ_ONCE(b);
        }

However, stores are not speculated.  This means that ordering -is- provided
for load-store control dependencies, as in the following example:

        q = READ_ONCE(a);
        if (q) {
                WRITE_ONCE(b, p);
        }

Control dependencies pair normally with other types of barriers.  That
said, please note that READ_ONCE() is not optional!  Without the
READ_ONCE(), the compiler might combine the load from 'a' with other
loads from 'a', and the store to 'b' with other stores to 'b', with
possible highly counterintuitive effects on ordering.

Worse yet, if the compiler is able to prove (say) that the value of
variable 'a' is always non-zero, it would be well within its rights
to optimize the original example by eliminating the "if" statement
as follows:

        q = a;
        b = p;  /* BUG: Compiler and CPU can both reorder!!! */

So don't leave out the READ_ONCE().

It is tempting to try to enforce ordering on identical stores on both
branches of the "if" statement as follows:

        q = READ_ONCE(a);
        if (q) {
                barrier();
                WRITE_ONCE(b, p);
                do_something();
        } else {
                barrier();
                WRITE_ONCE(b, p);
                do_something_else();
        }

Unfortunately, current compilers will transform this as follows at high
optimization levels:

        q = READ_ONCE(a);
        barrier();
        WRITE_ONCE(b, p);  /* BUG: No ordering vs. load from a!!! */
        if (q) {
                /* WRITE_ONCE(b, p); -- moved up, BUG!!! */
                do_something();
        } else {
                /* WRITE_ONCE(b, p); -- moved up, BUG!!! */
                do_something_else();
        }

Now there is no conditional between the load from 'a' and the store to
'b', which means that the CPU is within its rights to reorder them:
the conditional is absolutely required, and must be present in the
assembly code even after all compiler optimizations have been applied.
Therefore, if you need ordering in this example, you need explicit
memory barriers, for example, smp_store_release():

        q = READ_ONCE(a);
        if (q) {
                smp_store_release(&b, p);
                do_something();
        } else {
                smp_store_release(&b, p);
                do_something_else();
        }

In contrast, without explicit memory barriers, two-legged-if control
ordering is guaranteed only when the stores differ, for example:

        q = READ_ONCE(a);
        if (q) {
                WRITE_ONCE(b, p);
                do_something();
        } else {
                WRITE_ONCE(b, r);
                do_something_else();
        }

The initial READ_ONCE() is still required to prevent the compiler from
proving the value of 'a'.

In addition, you need to be careful what you do with the local variable 'q',
otherwise the compiler might be able to guess the value and again remove
the needed conditional.  For example:

        q = READ_ONCE(a);
        if (q % MAX) {
                WRITE_ONCE(b, p);
                do_something();
        } else {
                WRITE_ONCE(b, r);
                do_something_else();
        }

If MAX is defined to be 1, then the compiler knows that (q % MAX) is
equal to zero, in which case the compiler is within its rights to
transform the above code into the following:

        q = READ_ONCE(a);
        WRITE_ONCE(b, p);
        do_something_else();

Given this transformation, the CPU is not required to respect the ordering
between the load from variable 'a' and the store to variable 'b'.  It is
tempting to add a barrier(), but this does not help.  The conditional
is gone, and the barrier won't bring it back.  Therefore, if you are
relying on this ordering, you should make sure that MAX is greater than
one, perhaps as follows:

        q = READ_ONCE(a);
        BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
        if (q % MAX) {
                WRITE_ONCE(b, p);
                do_something();
        } else {
                WRITE_ONCE(b, r);
                do_something_else();
        }

Please note once again that the stores to 'b' differ.  If they were
identical, as noted earlier, the compiler could pull this store outside
of the 'if' statement.

You must also be careful not to rely too much on boolean short-circuit
evaluation.  Consider this example:

        q = READ_ONCE(a);
        if (q || 1 > 0)
                WRITE_ONCE(b, 1);

Because the first condition cannot fault and the second condition is
always true, the compiler can transform this example as follows, defeating
the control dependency:

        q = READ_ONCE(a);
        WRITE_ONCE(b, 1);

This example underscores the need to ensure that the compiler cannot
out-guess your code.  More generally, although READ_ONCE() does force
the compiler to actually emit code for a given load, it does not force
the compiler to use the results.

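For example, in the following sketch the load from 'a' is emitted, but its
value feeds nothing, so neither a data nor a control dependency is formed and
no ordering results:

        (void)READ_ONCE(a);     /* load emitted, value discarded */
        WRITE_ONCE(b, 1);       /* CPU may reorder this before the load */
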
Finally, control dependencies do -not- provide transitivity.  This is
demonstrated by two related examples, with the initial values of
x and y both being zero:

        CPU 0                     CPU 1
        =======================   =======================
        r1 = READ_ONCE(x);        r2 = READ_ONCE(y);
        if (r1 > 0)               if (r2 > 0)
          WRITE_ONCE(y, 1);         WRITE_ONCE(x, 1);

        assert(!(r1 == 1 && r2 == 1));

The above two-CPU example will never trigger the assert().  However,
if control dependencies guaranteed transitivity (which they do not),
then adding the following CPU would guarantee a related assertion:

        CPU 2
        =====================
        WRITE_ONCE(x, 2);

        assert(!(r1 == 2 && r2 == 1 && x == 2)); /* FAILS!!! */

But because control dependencies do -not- provide transitivity, the above
assertion can fail after the combined three-CPU example completes.  If you
need the three-CPU example to provide ordering, you will need smp_mb()
between the loads and stores in the CPU 0 and CPU 1 code fragments,
that is, just before or just after the "if" statements; a sketch appears
below.  Furthermore, the original two-CPU example is very fragile and
should be avoided.

These two examples are the LB and WWC litmus tests from this paper:
http://www.cl.cam.ac.uk/users/pes20/ppc-supplemental/test6.pdf and this
site: https://www.cl.cam.ac.uk/~pes20/ppcmem/index.html.

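The smp_mb() fix mentioned above would look something like this sketch:

        CPU 0                     CPU 1
        =======================   =======================
        r1 = READ_ONCE(x);        r2 = READ_ONCE(y);
        smp_mb();                 smp_mb();
        if (r1 > 0)               if (r2 > 0)
          WRITE_ONCE(y, 1);         WRITE_ONCE(x, 1);

With the full barriers in place, the combined three-CPU assertion above can
no longer fail.
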
In summary:

  (*) Control dependencies can order prior loads against later stores.
      However, they do -not- guarantee any other sort of ordering:
      Not prior loads against later loads, nor prior stores against
      later anything.  If you need these other forms of ordering,
      use smp_rmb(), smp_wmb(), or, in the case of prior stores and
      later loads, smp_mb().

  (*) If both legs of the "if" statement begin with identical stores to
      the same variable, then those stores must be ordered, either by
      preceding both of them with smp_mb() or by using smp_store_release()
      to carry out the stores.  Please note that it is -not- sufficient
      to use barrier() at the beginning of each leg of the "if" statement,
      as optimizing compilers do not necessarily respect barrier()
      in this case.

  (*) Control dependencies require at least one run-time conditional
      between the prior load and the subsequent store, and this
      conditional must involve the prior load.  If the compiler is able
      to optimize the conditional away, it will have also optimized
      away the ordering.  Careful use of READ_ONCE() and WRITE_ONCE()
      can help to preserve the needed conditional.

  (*) Control dependencies require that the compiler avoid reordering the
      dependency into nonexistence.  Careful use of READ_ONCE() or
      atomic{,64}_read() can help to preserve your control dependency.
      Please see the COMPILER BARRIER section for more information.

  (*) Control dependencies pair normally with other types of barriers.

  (*) Control dependencies do -not- provide transitivity.  If you
      need transitivity, use smp_mb().


SMP BARRIER PAIRING
-------------------

When dealing with CPU-CPU interactions, certain types of memory barrier should
always be paired.  A lack of appropriate pairing is almost certainly an error.

General barriers pair with each other, though they also pair with most
other types of barriers, albeit without transitivity.  An acquire barrier
pairs with a release barrier, but both may also pair with other barriers,
including of course general barriers.  A write barrier pairs with a data
dependency barrier, a control dependency, an acquire barrier, a release
barrier, a read barrier, or a general barrier.  Similarly a read barrier,
control dependency, or a data dependency barrier pairs with a write
barrier, an acquire barrier, a release barrier, or a general barrier:

        CPU 1                 CPU 2
        ===============       ===============
        WRITE_ONCE(a, 1);
        <write barrier>
        WRITE_ONCE(b, 2);     x = READ_ONCE(b);
                              <read barrier>
                              y = READ_ONCE(a);

Or:

        CPU 1                 CPU 2
        ===============       ===============================
        a = 1;
        <write barrier>
        WRITE_ONCE(b, &a);    x = READ_ONCE(b);
                              <data dependency barrier>
                              y = *x;

Or even:

        CPU 1                 CPU 2
        ===============       ===============================
        r1 = READ_ONCE(y);
        <general barrier>
        WRITE_ONCE(x, 1);     if (r2 = READ_ONCE(x)) {
                                 <implicit control dependency>
                                 WRITE_ONCE(y, 1);
                              }

        assert(r1 == 0 || r2 == 0);

Basically, the read barrier always has to be there, even though it can be of
the "weaker" type.

[!] Note that the stores before the write barrier would normally be expected to
match the loads after the read barrier or the data dependency barrier, and vice
versa:

        CPU 1                               CPU 2
        ===================                 ===================
        WRITE_ONCE(a, 1);    }----   --->{  v = READ_ONCE(c);
        WRITE_ONCE(b, 2);    }    \ /    {  w = READ_ONCE(d);
        <write barrier>            \        <read barrier>
        WRITE_ONCE(c, 3);    }    / \    {  x = READ_ONCE(a);
        WRITE_ONCE(d, 4);    }----   --->{  y = READ_ONCE(b);

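In kernel code, the first of the pairings above would be written with
smp_wmb() and smp_rmb(); roughly:

        CPU 1                 CPU 2
        ===============       ===============
        WRITE_ONCE(a, 1);
        smp_wmb();            x = READ_ONCE(b);
        WRITE_ONCE(b, 2);     smp_rmb();
                              y = READ_ONCE(a);

If x turns out to be 2, then y is guaranteed to be 1.
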

EXAMPLES OF MEMORY BARRIER SEQUENCES
------------------------------------

Firstly, write barriers act as partial orderings on store operations.
Consider the following sequence of events:

        CPU 1
        =======================
        STORE A = 1
        STORE B = 2
        STORE C = 3
        <write barrier>
        STORE D = 4
        STORE E = 5

This sequence of events is committed to the memory coherence system in an order
that the rest of the system might perceive as the unordered set of { STORE A,
STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
}:

        +-------+       :      :
        |       |       +------+
        |       |------>| C=3  |     }     /\
        |       |  :    +------+     }-----  \  -----> Events perceptible to
        |       |  :    | A=1  |     }        \/       the rest of the system
        |       |  :    +------+     }
        | CPU 1 |  :    | B=2  |     }
        |       |       +------+     }
        |       |   wwwwwwwwwwwwwwww }   <--- At this point the write barrier
        |       |       +------+     }        requires all stores prior to the
        |       |  :    | E=5  |     }        barrier to be committed before
        |       |  :    +------+     }        further stores may take place
        |       |------>| D=4  |     }
        |       |       +------+
        +-------+       :      :
                           |
                           | Sequence in which stores are committed to the
                           | memory system by CPU 1
                           V


Secondly, data dependency barriers act as partial orderings on data-dependent
loads.  Consider the following sequence of events:

        CPU 1                   CPU 2
        ======================= =======================
                { B = 7; X = 9; Y = 8; C = &Y }
        STORE A = 1
        STORE B = 2
        <write barrier>
        STORE C = &B            LOAD X
        STORE D = 4             LOAD C (gets &B)
                                LOAD *C (reads B)

Without intervention, CPU 2 may perceive the events on CPU 1 in some
effectively random order, despite the write barrier issued by CPU 1:

        +-------+       :      :                :       :
        |       |       +------+                +-------+  | Sequence of update
        |       |------>| B=2  |-----       --->| Y->8  |  | of perception on
        |       |  :    +------+     \          +-------+  | CPU 2
        | CPU 1 |  :    | A=1  |      \     --->| C->&Y |  V
        |       |       +------+       |        +-------+
        |       |   wwwwwwwwwwwwwwww   |        :       :
        |       |       +------+       |        :       :
        |       |  :    | C=&B |---    |        :       :       +-------+
        |       |  :    +------+   \   |        +-------+       |       |
        |       |------>| D=4  |    ----------->| C->&B |------>|       |
        |       |       +------+       |        +-------+       |       |
        +-------+       :      :       |        :       :       |       |
                                       |        :       :       |       |
                                       |        :       :       | CPU 2 |
                                       |        +-------+       |       |
            Apparently incorrect --->  |        | B->7  |------>|       |
            perception of B (!)        |        +-------+       |       |
                                       |        :       :       |       |
                                       |        +-------+       |       |
            The load of X holds --->    \       | X->9  |------>|       |
            up the maintenance           \      +-------+       |       |
            of coherence of B             ----->| B->2  |       +-------+
                                                +-------+
                                                :       :


In the above example, CPU 2 perceives that B is 7, despite the load of *C
(which would be B) coming after the LOAD of C.

If, however, a data dependency barrier were to be placed between the load of C
and the load of *C (ie: B) on CPU 2:

        CPU 1                   CPU 2
        ======================= =======================
                { B = 7; X = 9; Y = 8; C = &Y }
        STORE A = 1
        STORE B = 2
        <write barrier>
        STORE C = &B            LOAD X
        STORE D = 4             LOAD C (gets &B)
                                <data dependency barrier>
                                LOAD *C (reads B)

then the following will occur:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| B=2  |-----       --->| Y->8  |
        |       |  :    +------+     \          +-------+
        | CPU 1 |  :    | A=1  |      \     --->| C->&Y |
        |       |       +------+       |        +-------+
        |       |   wwwwwwwwwwwwwwww   |        :       :
        |       |       +------+       |        :       :
        |       |  :    | C=&B |---    |        :       :       +-------+
        |       |  :    +------+   \   |        +-------+       |       |
        |       |------>| D=4  |    ----------->| C->&B |------>|       |
        |       |       +------+       |        +-------+       |       |
        +-------+       :      :       |        :       :       |       |
                                       |        :       :       |       |
                                       |        :       :       | CPU 2 |
                                       |        +-------+       |       |
                                       |        | X->9  |------>|       |
                                       |        +-------+       |       |
          Makes sure all effects --->   \   ddddddddddddddddd   |       |
          prior to the store of C        \      +-------+       |       |
          are perceptible to              ----->| B->2  |------>|       |
          subsequent loads                      +-------+       |       |
                                                :       :       +-------+


And thirdly, a read barrier acts as a partial order on loads.  Consider the
following sequence of events:

        CPU 1                   CPU 2
        ======================= =======================
                { A = 0, B = 9 }
        STORE A=1
        <write barrier>
        STORE B=2
                                LOAD B
                                LOAD A

Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
some effectively random order, despite the write barrier issued by CPU 1:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       | A->0  |------>|       |
                                        |       +-------+       |       |
                                        |       :       :       +-------+
                                         \      :       :
                                          \     +-------+
                                           ---->| A->1  |
                                                +-------+
                                                :       :


If, however, a read barrier were to be placed between the load of B and the
load of A on CPU 2:

        CPU 1                   CPU 2
        ======================= =======================
                { A = 0, B = 9 }
        STORE A=1
        <write barrier>
        STORE B=2
                                LOAD B
                                <read barrier>
                                LOAD A

then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
2:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       :       :       |       |
                                        |       :       :       |       |
          At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
          barrier causes all effects      \     +-------+       |       |
          prior to the storage of B        ---->| A->1  |------>|       |
          to be perceptible to CPU 2            +-------+       |       |
                                                :       :       +-------+


To illustrate this more completely, consider what could happen if the code
contained a load of A on either side of the read barrier:

        CPU 1                   CPU 2
        ======================= =======================
                { A = 0, B = 9 }
        STORE A=1
        <write barrier>
        STORE B=2
                                LOAD B
                                LOAD A [first load of A]
                                <read barrier>
                                LOAD A [second load of A]

Even though the two loads of A both occur after the load of B, they may
come up with different values:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       :       :       |       |
                                        |       :       :       |       |
                                        |       +-------+       |       |
                                        |       | A->0  |------>| 1st   |
                                        |       +-------+       |       |
          At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
          barrier causes all effects      \     +-------+       |       |
          prior to the storage of B        ---->| A->1  |------>| 2nd   |
          to be perceptible to CPU 2            +-------+       |       |
                                                :       :       +-------+


But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
before the read barrier completes anyway:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       :       :       |       |
                                         \      :       :       |       |
                                          \     +-------+       |       |
                                           ---->| A->1  |------>| 1st   |
                                                +-------+       |       |
                                            rrrrrrrrrrrrrrrrr   |       |
                                                +-------+       |       |
                                                | A->1  |------>| 2nd   |
                                                +-------+       |       |
                                                :       :       +-------+


The guarantee is that the second load will always come up with A == 1 if the
load of B came up with B == 2.  No such guarantee exists for the first load of
A; that may come up with either A == 0 or A == 1.


READ MEMORY BARRIERS VS LOAD SPECULATION
----------------------------------------

Many CPUs speculate with loads: that is, they see that they will need to load
an item from memory, and they find a time when they're not using the bus for
any other loads, and so do the load in advance - even though they haven't
actually got to that point in the instruction execution flow yet.  This permits
the actual load instruction to potentially complete immediately because the CPU
already has the value to hand.

It may turn out that the CPU didn't actually need the value - perhaps because a
branch circumvented the load - in which case it can discard the value or just
cache it for later use.

Consider:

        CPU 1                   CPU 2
        ======================= =======================
                                LOAD B
                                DIVIDE          } Divide instructions generally
                                DIVIDE          } take a long time to perform
                                LOAD A

Which might appear as this:

                                                :       :       +-------+
                                                +-------+       |       |
                                            --->| B->2  |------>|       |
                                                +-------+       | CPU 2 |
                                                :       :DIVIDE |       |
                                                +-------+       |       |
        The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
        division speculates on the              +-------+   ~   |       |
        LOAD of A                               :       :   ~   |       |
                                                :       :DIVIDE |       |
                                                :       :   ~   |       |
        Once the divisions are complete -->     :       :   ~-->|       |
        the CPU can then perform the            :       :       |       |
        LOAD with immediate effect              :       :       +-------+


Placing a read barrier or a data dependency barrier just before the second
load:

        CPU 1                   CPU 2
        ======================= =======================
                                LOAD B
                                DIVIDE
                                DIVIDE
                                <read barrier>
                                LOAD A

will force any value speculatively obtained to be reconsidered to an extent
dependent on the type of barrier used.  If there was no change made to the
speculated memory location, then the speculated value will just be used:

                                                :       :       +-------+
                                                +-------+       |       |
                                            --->| B->2  |------>|       |
                                                +-------+       | CPU 2 |
                                                :       :DIVIDE |       |
                                                +-------+       |       |
        The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
        division speculates on the              +-------+   ~   |       |
        LOAD of A                               :       :   ~   |       |
                                                :       :DIVIDE |       |
                                                :       :   ~   |       |
                                                :       :   ~   |       |
                                            rrrrrrrrrrrrrrrr~   |       |
                                                :       :   ~   |       |
                                                :       :   ~-->|       |
                                                :       :       |       |
                                                :       :       +-------+


but if there was an update or an invalidation from another CPU pending, then
the speculation will be cancelled and the value reloaded:

                                                :       :       +-------+
                                                +-------+       |       |
                                            --->| B->2  |------>|       |
                                                +-------+       | CPU 2 |
                                                :       :DIVIDE |       |
                                                +-------+       |       |
        The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
        division speculates on the              +-------+   ~   |       |
        LOAD of A                               :       :   ~   |       |
                                                :       :DIVIDE |       |
                                                :       :   ~   |       |
                                                :       :   ~   |       |
                                            rrrrrrrrrrrrrrrrr   |       |
                                                +-------+       |       |
        The speculation is discarded --->   --->| A->1  |------>|       |
        and an updated value is                 +-------+       |       |
        retrieved                               :       :       +-------+

1267
1268TRANSITIVITY
1269------------
1270
1271Transitivity is a deeply intuitive notion about ordering that is not
1272always provided by real computer systems.  The following example
1273demonstrates transitivity:
1274
1275        CPU 1                   CPU 2                   CPU 3
1276        ======================= ======================= =======================
1277                { X = 0, Y = 0 }
1278        STORE X=1               LOAD X                  STORE Y=1
1279                                <general barrier>       <general barrier>
1280                                LOAD Y                  LOAD X
1281
1282Suppose that CPU 2's load from X returns 1 and its load from Y returns 0.
1283This indicates that CPU 2's load from X in some sense follows CPU 1's
1284store to X and that CPU 2's load from Y in some sense preceded CPU 3's
1285store to Y.  The question is then "Can CPU 3's load from X return 0?"
1286
1287Because CPU 2's load from X in some sense came after CPU 1's store, it
1288is natural to expect that CPU 3's load from X must therefore return 1.
1289This expectation is an example of transitivity: if a load executing on
1290CPU A follows a load from the same variable executing on CPU B, then
1291CPU A's load must either return the same value that CPU B's load did,
1292or must return some later value.
1293
1294In the Linux kernel, use of general memory barriers guarantees
1295transitivity.  Therefore, in the above example, if CPU 2's load from X
1296returns 1 and its load from Y returns 0, then CPU 3's load from X must
1297also return 1.
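
Expressed with the Linux-kernel primitives described later in this
document, the example above might be sketched as follows (the rN
variables are assumed to be captured for later inspection):

        int x, y;

        void cpu1(void)
        {
                WRITE_ONCE(x, 1);
        }

        void cpu2(void)
        {
                r1 = READ_ONCE(x);
                smp_mb();               /* general barrier */
                r2 = READ_ONCE(y);
        }

        void cpu3(void)
        {
                WRITE_ONCE(y, 1);
                smp_mb();               /* general barrier */
                r3 = READ_ONCE(x);
        }

Transitivity then guarantees that the outcome r1 == 1 && r2 == 0 &&
r3 == 0 is prohibited.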
1298
1299However, transitivity is -not- guaranteed for read or write barriers.
1300For example, suppose that CPU 2's general barrier in the above example
1301is changed to a read barrier as shown below:
1302
1303        CPU 1                   CPU 2                   CPU 3
1304        ======================= ======================= =======================
1305                { X = 0, Y = 0 }
1306        STORE X=1               LOAD X                  STORE Y=1
1307                                <read barrier>          <general barrier>
1308                                LOAD Y                  LOAD X
1309
1310This substitution destroys transitivity: in this example, it is perfectly
1311legal for CPU 2's load from X to return 1, its load from Y to return 0,
1312and CPU 3's load from X to return 0.
1313
1314The key point is that although CPU 2's read barrier orders its pair
1315of loads, it does not guarantee to order CPU 1's store.  Therefore, if
1316this example runs on a system where CPUs 1 and 2 share a store buffer
1317or a level of cache, CPU 2 might have early access to CPU 1's writes.
1318General barriers are therefore required to ensure that all CPUs agree
1319on the combined order of CPU 1's and CPU 2's accesses.
1320
1321General barriers provide "global transitivity", so that all CPUs will
1322agree on the order of operations.  In contrast, a chain of release-acquire
1323pairs provides only "local transitivity", so that only those CPUs on
1324the chain are guaranteed to agree on the combined order of the accesses.
1325For example, switching to C code in deference to Herman Hollerith:
1326
1327        int u, v, x, y, z;
1328
1329        void cpu0(void)
1330        {
1331                r0 = smp_load_acquire(&x);
1332                WRITE_ONCE(u, 1);
1333                smp_store_release(&y, 1);
1334        }
1335
1336        void cpu1(void)
1337        {
1338                r1 = smp_load_acquire(&y);
1339                r4 = READ_ONCE(v);
1340                r5 = READ_ONCE(u);
1341                smp_store_release(&z, 1);
1342        }
1343
1344        void cpu2(void)
1345        {
1346                r2 = smp_load_acquire(&z);
1347                smp_store_release(&x, 1);
1348        }
1349
1350        void cpu3(void)
1351        {
1352                WRITE_ONCE(v, 1);
1353                smp_mb();
1354                r3 = READ_ONCE(u);
1355        }
1356
1357Because cpu0(), cpu1(), and cpu2() participate in a local transitive
1358chain of smp_store_release()/smp_load_acquire() pairs, the following
1359outcome is prohibited:
1360
1361        r0 == 1 && r1 == 1 && r2 == 1
1362
1363Furthermore, because of the release-acquire relationship between cpu0()
1364and cpu1(), cpu1() must see cpu0()'s writes, so that the following
1365outcome is prohibited:
1366
1367        r1 == 1 && r5 == 0
1368
1369However, the transitivity of release-acquire is local to the participating
1370CPUs and does not apply to cpu3().  Therefore, the following outcome
1371is possible:
1372
1373        r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0
1374
1375As an aside, the following outcome is also possible:
1376
1377        r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0 && r5 == 1
1378
1379Although cpu0(), cpu1(), and cpu2() will see their respective reads and
1380writes in order, CPUs not involved in the release-acquire chain might
1381well disagree on the order.  This disagreement stems from the fact that
1382the weak memory-barrier instructions used to implement smp_load_acquire()
1383and smp_store_release() are not required to order prior stores against
1384subsequent loads in all cases.  This means that cpu3() can see cpu0()'s
1385store to u as happening -after- cpu1()'s load from v, even though
1386both cpu0() and cpu1() agree that these two operations occurred in the
1387intended order.
1388
1389However, please keep in mind that smp_load_acquire() is not magic.
1390In particular, it simply reads from its argument with ordering.  It does
1391-not- ensure that any particular value will be read.  Therefore, the
1392following outcome is possible:
1393
1394        r0 == 0 && r1 == 0 && r2 == 0 && r5 == 0
1395
1396Note that this outcome can happen even on a mythical sequentially
1397consistent system where nothing is ever reordered.
1398
1399To reiterate, if your code requires global transitivity, use general
1400barriers throughout.
1401
1402
1403========================
1404EXPLICIT KERNEL BARRIERS
1405========================
1406
1407The Linux kernel has a variety of different barriers that act at different
1408levels:
1409
1410  (*) Compiler barrier.
1411
1412  (*) CPU memory barriers.
1413
1414  (*) MMIO write barrier.
1415
1416
1417COMPILER BARRIER
1418----------------
1419
1420The Linux kernel has an explicit compiler barrier function that prevents the
1421compiler from moving the memory accesses either side of it to the other side:
1422
1423        barrier();
1424
1425This is a general barrier -- there are no read-read or write-write
1426variants of barrier().  However, READ_ONCE() and WRITE_ONCE() can be
1427thought of as weak forms of barrier() that affect only the specific
1428accesses flagged by the READ_ONCE() or WRITE_ONCE().
1429
1430The barrier() function has the following effects:
1431
1432 (*) Prevents the compiler from reordering accesses following the
1433     barrier() to precede any accesses preceding the barrier().
1434     One example use for this property is to ease communication between
1435     interrupt-handler code and the code that was interrupted.
1436
 (*) Within a loop, forces the compiler to load the variables used
     in that loop's conditional on each pass through that loop (see
     the example below).
1439
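For example, the second of these properties suggests a busy-wait loop
such as the following (a sketch; 'flag' is a hypothetical shared
variable):

        while (!flag)
                barrier();      /* force 'flag' to be reloaded each pass */

Real code would more likely use READ_ONCE(), as described below.
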
1440The READ_ONCE() and WRITE_ONCE() functions can prevent any number of
1441optimizations that, while perfectly safe in single-threaded code, can
1442be fatal in concurrent code.  Here are some examples of these sorts
1443of optimizations:
1444
1445 (*) The compiler is within its rights to reorder loads and stores
1446     to the same variable, and in some cases, the CPU is within its
1447     rights to reorder loads to the same variable.  This means that
1448     the following code:
1449
1450        a[0] = x;
1451        a[1] = x;
1452
1453     Might result in an older value of x stored in a[1] than in a[0].
1454     Prevent both the compiler and the CPU from doing this as follows:
1455
1456        a[0] = READ_ONCE(x);
1457        a[1] = READ_ONCE(x);
1458
1459     In short, READ_ONCE() and WRITE_ONCE() provide cache coherence for
1460     accesses from multiple CPUs to a single variable.
1461
1462 (*) The compiler is within its rights to merge successive loads from
1463     the same variable.  Such merging can cause the compiler to "optimize"
1464     the following code:
1465
1466        while (tmp = a)
1467                do_something_with(tmp);
1468
1469     into the following code, which, although in some sense legitimate
1470     for single-threaded code, is almost certainly not what the developer
1471     intended:
1472
1473        if (tmp = a)
1474                for (;;)
1475                        do_something_with(tmp);
1476
1477     Use READ_ONCE() to prevent the compiler from doing this to you:
1478
1479        while (tmp = READ_ONCE(a))
1480                do_something_with(tmp);
1481
1482 (*) The compiler is within its rights to reload a variable, for example,
1483     in cases where high register pressure prevents the compiler from
1484     keeping all data of interest in registers.  The compiler might
1485     therefore optimize the variable 'tmp' out of our previous example:
1486
1487        while (tmp = a)
1488                do_something_with(tmp);
1489
1490     This could result in the following code, which is perfectly safe in
1491     single-threaded code, but can be fatal in concurrent code:
1492
1493        while (a)
1494                do_something_with(a);
1495
1496     For example, the optimized version of this code could result in
1497     passing a zero to do_something_with() in the case where the variable
1498     a was modified by some other CPU between the "while" statement and
1499     the call to do_something_with().
1500
1501     Again, use READ_ONCE() to prevent the compiler from doing this:
1502
1503        while (tmp = READ_ONCE(a))
1504                do_something_with(tmp);
1505
1506     Note that if the compiler runs short of registers, it might save
1507     tmp onto the stack.  The overhead of this saving and later restoring
1508     is why compilers reload variables.  Doing so is perfectly safe for
1509     single-threaded code, so you need to tell the compiler about cases
1510     where it is not safe.
1511
1512 (*) The compiler is within its rights to omit a load entirely if it knows
1513     what the value will be.  For example, if the compiler can prove that
1514     the value of variable 'a' is always zero, it can optimize this code:
1515
1516        while (tmp = a)
1517                do_something_with(tmp);
1518
1519     Into this:
1520
1521        do { } while (0);
1522
1523     This transformation is a win for single-threaded code because it
1524     gets rid of a load and a branch.  The problem is that the compiler
1525     will carry out its proof assuming that the current CPU is the only
1526     one updating variable 'a'.  If variable 'a' is shared, then the
1527     compiler's proof will be erroneous.  Use READ_ONCE() to tell the
1528     compiler that it doesn't know as much as it thinks it does:
1529
1530        while (tmp = READ_ONCE(a))
1531                do_something_with(tmp);
1532
1533     But please note that the compiler is also closely watching what you
1534     do with the value after the READ_ONCE().  For example, suppose you
1535     do the following and MAX is a preprocessor macro with the value 1:
1536
1537        while ((tmp = READ_ONCE(a)) % MAX)
1538                do_something_with(tmp);
1539
1540     Then the compiler knows that the result of the "%" operator applied
1541     to MAX will always be zero, again allowing the compiler to optimize
1542     the code into near-nonexistence.  (It will still load from the
1543     variable 'a'.)
1544
1545 (*) Similarly, the compiler is within its rights to omit a store entirely
1546     if it knows that the variable already has the value being stored.
1547     Again, the compiler assumes that the current CPU is the only one
1548     storing into the variable, which can cause the compiler to do the
1549     wrong thing for shared variables.  For example, suppose you have
1550     the following:
1551
1552        a = 0;
1553        ... Code that does not store to variable a ...
1554        a = 0;
1555
1556     The compiler sees that the value of variable 'a' is already zero, so
1557     it might well omit the second store.  This would come as a fatal
1558     surprise if some other CPU might have stored to variable 'a' in the
1559     meantime.
1560
1561     Use WRITE_ONCE() to prevent the compiler from making this sort of
1562     wrong guess:
1563
1564        WRITE_ONCE(a, 0);
1565        ... Code that does not store to variable a ...
1566        WRITE_ONCE(a, 0);
1567
1568 (*) The compiler is within its rights to reorder memory accesses unless
1569     you tell it not to.  For example, consider the following interaction
1570     between process-level code and an interrupt handler:
1571
1572        void process_level(void)
1573        {
1574                msg = get_message();
1575                flag = true;
1576        }
1577
1578        void interrupt_handler(void)
1579        {
1580                if (flag)
1581                        process_message(msg);
1582        }
1583
1584     There is nothing to prevent the compiler from transforming
     process_level() to the following; in fact, this might well be a
1586     win for single-threaded code:
1587
1588        void process_level(void)
1589        {
1590                flag = true;
1591                msg = get_message();
1592        }
1593
     If the interrupt occurs between these two statements, then
1595     interrupt_handler() might be passed a garbled msg.  Use WRITE_ONCE()
1596     to prevent this as follows:
1597
1598        void process_level(void)
1599        {
1600                WRITE_ONCE(msg, get_message());
1601                WRITE_ONCE(flag, true);
1602        }
1603
1604        void interrupt_handler(void)
1605        {
1606                if (READ_ONCE(flag))
1607                        process_message(READ_ONCE(msg));
1608        }
1609
1610     Note that the READ_ONCE() and WRITE_ONCE() wrappers in
1611     interrupt_handler() are needed if this interrupt handler can itself
1612     be interrupted by something that also accesses 'flag' and 'msg',
1613     for example, a nested interrupt or an NMI.  Otherwise, READ_ONCE()
1614     and WRITE_ONCE() are not needed in interrupt_handler() other than
1615     for documentation purposes.  (Note also that nested interrupts
1616     do not typically occur in modern Linux kernels, in fact, if an
1617     interrupt handler returns with interrupts enabled, you will get a
1618     WARN_ONCE() splat.)
1619
1620     You should assume that the compiler can move READ_ONCE() and
1621     WRITE_ONCE() past code not containing READ_ONCE(), WRITE_ONCE(),
1622     barrier(), or similar primitives.
1623
1624     This effect could also be achieved using barrier(), but READ_ONCE()
1625     and WRITE_ONCE() are more selective:  With READ_ONCE() and
1626     WRITE_ONCE(), the compiler need only forget the contents of the
1627     indicated memory locations, while with barrier() the compiler must
     discard the value of all memory locations that it has currently
1629     cached in any machine registers.  Of course, the compiler must also
1630     respect the order in which the READ_ONCE()s and WRITE_ONCE()s occur,
1631     though the CPU of course need not do so.
1632
1633 (*) The compiler is within its rights to invent stores to a variable,
1634     as in the following example:
1635
1636        if (a)
1637                b = a;
1638        else
1639                b = 42;
1640
1641     The compiler might save a branch by optimizing this as follows:
1642
1643        b = 42;
1644        if (a)
1645                b = a;
1646
1647     In single-threaded code, this is not only safe, but also saves
1648     a branch.  Unfortunately, in concurrent code, this optimization
1649     could cause some other CPU to see a spurious value of 42 -- even
1650     if variable 'a' was never zero -- when loading variable 'b'.
1651     Use WRITE_ONCE() to prevent this as follows:
1652
1653        if (a)
1654                WRITE_ONCE(b, a);
1655        else
1656                WRITE_ONCE(b, 42);
1657
1658     The compiler can also invent loads.  These are usually less
1659     damaging, but they can result in cache-line bouncing and thus in
1660     poor performance and scalability.  Use READ_ONCE() to prevent
1661     invented loads.
1662
 (*) For aligned memory locations whose size allows them to be accessed
     with a single memory-reference instruction, READ_ONCE() and
     WRITE_ONCE() prevent "load tearing" and "store tearing," in which
     a single large access is replaced by multiple smaller accesses.
     For example, given an architecture having
1667     16-bit store instructions with 7-bit immediate fields, the compiler
1668     might be tempted to use two 16-bit store-immediate instructions to
1669     implement the following 32-bit store:
1670
1671        p = 0x00010002;
1672
1673     Please note that GCC really does use this sort of optimization,
1674     which is not surprising given that it would likely take more
1675     than two instructions to build the constant and then store it.
1676     This optimization can therefore be a win in single-threaded code.
1677     In fact, a recent bug (since fixed) caused GCC to incorrectly use
1678     this optimization in a volatile store.  In the absence of such bugs,
1679     use of WRITE_ONCE() prevents store tearing in the following example:
1680
1681        WRITE_ONCE(p, 0x00010002);
1682
1683     Use of packed structures can also result in load and store tearing,
1684     as in this example:
1685
1686        struct __attribute__((__packed__)) foo {
1687                short a;
1688                int b;
1689                short c;
1690        };
1691        struct foo foo1, foo2;
1692        ...
1693
1694        foo2.a = foo1.a;
1695        foo2.b = foo1.b;
1696        foo2.c = foo1.c;
1697
1698     Because there are no READ_ONCE() or WRITE_ONCE() wrappers and no
1699     volatile markings, the compiler would be well within its rights to
1700     implement these three assignment statements as a pair of 32-bit
1701     loads followed by a pair of 32-bit stores.  This would result in
1702     load tearing on 'foo1.b' and store tearing on 'foo2.b'.  READ_ONCE()
1703     and WRITE_ONCE() again prevent tearing in this example:
1704
1705        foo2.a = foo1.a;
1706        WRITE_ONCE(foo2.b, READ_ONCE(foo1.b));
1707        foo2.c = foo1.c;
1708
1709All that aside, it is never necessary to use READ_ONCE() and
1710WRITE_ONCE() on a variable that has been marked volatile.  For example,
1711because 'jiffies' is marked volatile, it is never necessary to
1712say READ_ONCE(jiffies).  The reason for this is that READ_ONCE() and
WRITE_ONCE() are implemented as volatile casts, which have no effect
when their argument is already marked volatile.
1715
1716Please note that these compiler barriers have no direct effect on the CPU,
1717which may then reorder things however it wishes.
1718
1719
1720CPU MEMORY BARRIERS
1721-------------------
1722
1723The Linux kernel has eight basic CPU memory barriers:
1724
1725        TYPE            MANDATORY               SMP CONDITIONAL
1726        =============== ======================= ===========================
1727        GENERAL         mb()                    smp_mb()
1728        WRITE           wmb()                   smp_wmb()
1729        READ            rmb()                   smp_rmb()
1730        DATA DEPENDENCY read_barrier_depends()  smp_read_barrier_depends()
1731
1732
1733All memory barriers except the data dependency barriers imply a compiler
1734barrier. Data dependencies do not impose any additional compiler ordering.
1735
Aside: In the case of data dependencies, the compiler would be expected
to issue the loads in the correct order (eg. a[b] would have to load
the value of b before loading a[b]); however, there is no guarantee in
the C specification that the compiler will not speculate the value of b
(eg. guess that it is equal to 1) and load a[1] before b (eg.
tmp = a[1]; if (b != 1) tmp = a[b]; ).  There is also the problem of a
compiler reloading b after having loaded a[b], thus having a newer copy
of b than a[b].  A consensus has not yet been reached about these
problems; however, the READ_ONCE() macro is a good place to start
looking.
1745
1746SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
1747systems because it is assumed that a CPU will appear to be self-consistent,
1748and will order overlapping accesses correctly with respect to itself.
1749However, see the subsection on "Virtual Machine Guests" below.
1750
1751[!] Note that SMP memory barriers _must_ be used to control the ordering of
1752references to shared memory on SMP systems, though the use of locking instead
1753is sufficient.
1754
1755Mandatory barriers should not be used to control SMP effects, since mandatory
1756barriers impose unnecessary overhead on both SMP and UP systems. They may,
1757however, be used to control MMIO effects on accesses through relaxed memory I/O
1758windows.  These barriers are required even on non-SMP systems as they affect
1759the order in which memory operations appear to a device by prohibiting both the
1760compiler and the CPU from reordering them.
1761
1762
1763There are some more advanced barrier functions:
1764
1765 (*) smp_store_mb(var, value)
1766
1767     This assigns the value to the variable and then inserts a full memory
1768     barrier after it.  It isn't guaranteed to insert anything more than a
1769     compiler barrier in a UP compilation.
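
     On SMP, the effect is roughly equivalent to the following sketch,
     although real implementations are architecture-specific:

        WRITE_ONCE(var, value);
        smp_mb();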
1770
1771
1772 (*) smp_mb__before_atomic();
1773 (*) smp_mb__after_atomic();
1774
1775     These are for use with atomic (such as add, subtract, increment and
1776     decrement) functions that don't return a value, especially when used for
1777     reference counting.  These functions do not imply memory barriers.
1778
1779     These are also used for atomic bitop functions that do not return a
1780     value (such as set_bit and clear_bit).
1781
1782     As an example, consider a piece of code that marks an object as being dead
1783     and then decrements the object's reference count:
1784
1785        obj->dead = 1;
1786        smp_mb__before_atomic();
1787        atomic_dec(&obj->ref_count);
1788
1789     This makes sure that the death mark on the object is perceived to be set
1790     *before* the reference counter is decremented.
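
     Conversely, smp_mb__after_atomic() orders an atomic operation
     before subsequent memory accesses.  A hypothetical sketch, assuming
     an object with a busy flag, a waiter count and a wait queue:

        clear_bit(OBJ_BUSY, &obj->flags);
        smp_mb__after_atomic();
        if (READ_ONCE(obj->waiters))
                wake_up(&obj->wait_queue);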
1791
1792     See Documentation/atomic_ops.txt for more information.  See the "Atomic
1793     operations" subsection for information on where to use these.
1794
1795
 (*) lockless_dereference();

     This can be thought of as a pointer-fetch wrapper around the
1798     smp_read_barrier_depends() data-dependency barrier.
1799
1800     This is also similar to rcu_dereference(), but in cases where
1801     object lifetime is handled by some mechanism other than RCU, for
     example, when the objects are removed only when the system goes down.
1803     In addition, lockless_dereference() is used in some data structures
1804     that can be used both with and without RCU.
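
     A sketch of typical usage, assuming a global pointer 'gp' that is
     published elsewhere with smp_store_release() or similar:

        p = lockless_dereference(gp);
        if (p)
                do_something_with(p->data);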
1805
1806
1807 (*) dma_wmb();
1808 (*) dma_rmb();
1809
1810     These are for use with consistent memory to guarantee the ordering
1811     of writes or reads of shared memory accessible to both the CPU and a
1812     DMA capable device.
1813
1814     For example, consider a device driver that shares memory with a device
1815     and uses a descriptor status value to indicate if the descriptor belongs
1816     to the device or the CPU, and a doorbell to notify it when new
1817     descriptors are available:
1818
1819        if (desc->status != DEVICE_OWN) {
1820                /* do not read data until we own descriptor */
1821                dma_rmb();
1822
1823                /* read/modify data */
1824                read_data = desc->data;
1825                desc->data = write_data;
1826
1827                /* flush modifications before status update */
1828                dma_wmb();
1829
1830                /* assign ownership */
1831                desc->status = DEVICE_OWN;
1832
1833                /* force memory to sync before notifying device via MMIO */
1834                wmb();
1835
1836                /* notify device of new descriptors */
1837                writel(DESC_NOTIFY, doorbell);
1838        }
1839
     The dma_rmb() allows us to guarantee the device has released ownership
1841     before we read the data from the descriptor, and the dma_wmb() allows
1842     us to guarantee the data is written to the descriptor before the device
1843     can see it now has ownership.  The wmb() is needed to guarantee that the
1844     cache coherent memory writes have completed before attempting a write to
1845     the cache incoherent MMIO region.
1846
1847     See Documentation/DMA-API.txt for more information on consistent memory.
1848
1849MMIO WRITE BARRIER
1850------------------
1851
1852The Linux kernel also has a special barrier for use with memory-mapped I/O
1853writes:
1854
1855        mmiowb();
1856
1857This is a variation on the mandatory write barrier that causes writes to weakly
1858ordered I/O regions to be partially ordered.  Its effects may go beyond the
1859CPU->Hardware interface and actually affect the hardware at some level.
1860
1861See the subsection "Locks vs I/O accesses" for more information.
1862
1863
1864===============================
1865IMPLICIT KERNEL MEMORY BARRIERS
1866===============================
1867
Some of the other functions in the Linux kernel imply memory barriers, amongst
1869which are locking and scheduling functions.
1870
1871This specification is a _minimum_ guarantee; any particular architecture may
1872provide more substantial guarantees, but these may not be relied upon outside
1873of arch specific code.
1874
1875
1876ACQUIRING FUNCTIONS
1877-------------------
1878
1879The Linux kernel has a number of locking constructs:
1880
1881 (*) spin locks
1882 (*) R/W spin locks
1883 (*) mutexes
1884 (*) semaphores
1885 (*) R/W semaphores
1886
1887In all cases there are variants on "ACQUIRE" operations and "RELEASE" operations
1888for each construct.  These operations all imply certain barriers:
1889
1890 (1) ACQUIRE operation implication:
1891
1892     Memory operations issued after the ACQUIRE will be completed after the
1893     ACQUIRE operation has completed.
1894
1895     Memory operations issued before the ACQUIRE may be completed after
1896     the ACQUIRE operation has completed.  An smp_mb__before_spinlock(),
1897     combined with a following ACQUIRE, orders prior stores against
1898     subsequent loads and stores. Note that this is weaker than smp_mb()!
1899     The smp_mb__before_spinlock() primitive is free on many architectures.
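
     As a sketch of this guarantee, assuming hypothetical shared
     variables 'x' and 'y' and a spinlock 'lock':

        WRITE_ONCE(x, 1);                /* prior store */
        smp_mb__before_spinlock();
        spin_lock(&lock);
        r1 = READ_ONCE(y);               /* ordered after the store to x */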
1900
1901 (2) RELEASE operation implication:
1902
1903     Memory operations issued before the RELEASE will be completed before the
1904     RELEASE operation has completed.
1905
1906     Memory operations issued after the RELEASE may be completed before the
1907     RELEASE operation has completed.
1908
1909 (3) ACQUIRE vs ACQUIRE implication:
1910
1911     All ACQUIRE operations issued before another ACQUIRE operation will be
1912     completed before that ACQUIRE operation.
1913
1914 (4) ACQUIRE vs RELEASE implication:
1915
1916     All ACQUIRE operations issued before a RELEASE operation will be
1917     completed before the RELEASE operation.
1918
1919 (5) Failed conditional ACQUIRE implication:
1920
1921     Certain locking variants of the ACQUIRE operation may fail, either due to
1922     being unable to get the lock immediately, or due to receiving an unblocked
1923     signal whilst asleep waiting for the lock to become available.  Failed
1924     locks do not imply any sort of barrier.
1925
1926[!] Note: one of the consequences of lock ACQUIREs and RELEASEs being only
1927one-way barriers is that the effects of instructions outside of a critical
1928section may seep into the inside of the critical section.
1929
An ACQUIRE followed by a RELEASE may not be assumed to be a full memory
barrier because it is possible for an access preceding the ACQUIRE to
happen after the ACQUIRE, and an access following the RELEASE to happen
before the RELEASE, and the two accesses can themselves then cross:
1934
1935        *A = a;
1936        ACQUIRE M
1937        RELEASE M
1938        *B = b;
1939
1940may occur as:
1941
1942        ACQUIRE M, STORE *B, STORE *A, RELEASE M
1943
1944When the ACQUIRE and RELEASE are a lock acquisition and release,
1945respectively, this same reordering can occur if the lock's ACQUIRE and
1946RELEASE are to the same lock variable, but only from the perspective of
another CPU not holding that lock.  In short, an ACQUIRE followed by a
RELEASE may -not- be assumed to be a full memory barrier.
1949
1950Similarly, the reverse case of a RELEASE followed by an ACQUIRE does
1951not imply a full memory barrier.  Therefore, the CPU's execution of the
1952critical sections corresponding to the RELEASE and the ACQUIRE can cross,
1953so that:
1954
1955        *A = a;
1956        RELEASE M
1957        ACQUIRE N
1958        *B = b;
1959
1960could occur as:
1961
1962        ACQUIRE N, STORE *B, STORE *A, RELEASE M
1963
1964It might appear that this reordering could introduce a deadlock.
1965However, this cannot happen because if such a deadlock threatened,
1966the RELEASE would simply complete, thereby avoiding the deadlock.
1967
1968        Why does this work?
1969
1970        One key point is that we are only talking about the CPU doing
1971        the reordering, not the compiler.  If the compiler (or, for
1972        that matter, the developer) switched the operations, deadlock
1973        -could- occur.
1974
1975        But suppose the CPU reordered the operations.  In this case,
1976        the unlock precedes the lock in the assembly code.  The CPU
1977        simply elected to try executing the later lock operation first.
1978        If there is a deadlock, this lock operation will simply spin (or
1979        try to sleep, but more on that later).  The CPU will eventually
1980        execute the unlock operation (which preceded the lock operation
1981        in the assembly code), which will unravel the potential deadlock,
1982        allowing the lock operation to succeed.
1983
1984        But what if the lock is a sleeplock?  In that case, the code will
1985        try to enter the scheduler, where it will eventually encounter
1986        a memory barrier, which will force the earlier unlock operation
1987        to complete, again unraveling the deadlock.  There might be
1988        a sleep-unlock race, but the locking primitive needs to resolve
1989        such races properly in any case.
1990
1991Locks and semaphores may not provide any guarantee of ordering on UP compiled
1992systems, and so cannot be counted on in such a situation to actually achieve
1993anything at all - especially with respect to I/O accesses - unless combined
1994with interrupt disabling operations.
1995
1996See also the section on "Inter-CPU locking barrier effects".
1997
1998
1999As an example, consider the following:
2000
2001        *A = a;
2002        *B = b;
2003        ACQUIRE
2004        *C = c;
2005        *D = d;
2006        RELEASE
2007        *E = e;
2008        *F = f;
2009
2010The following sequence of events is acceptable:
2011
2012        ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE
2013
2014        [+] Note that {*F,*A} indicates a combined access.
2015
2016But none of the following are:
2017
2018        {*F,*A}, *B,    ACQUIRE, *C, *D,        RELEASE, *E
2019        *A, *B, *C,     ACQUIRE, *D,            RELEASE, *E, *F
2020        *A, *B,         ACQUIRE, *C,            RELEASE, *D, *E, *F
2021        *B,             ACQUIRE, *C, *D,        RELEASE, {*F,*A}, *E
2022
2023
2024
2025INTERRUPT DISABLING FUNCTIONS
2026-----------------------------
2027
2028Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts
2029(RELEASE equivalent) will act as compiler barriers only.  So if memory or I/O
barriers are required in such a situation, they must be provided by some
2031other means.
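
For example, if a device is mapped through a relaxed I/O window,
disabling interrupts around the accesses does not order them; a
mandatory barrier is still required.  A sketch, where ADDR and DATA are
hypothetical device registers:

        local_irq_save(flags);
        writel_relaxed(0, ADDR);
        wmb();          /* interrupt disabling is only a compiler barrier */
        writel_relaxed(1, DATA);
        local_irq_restore(flags);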
2032
2033
2034SLEEP AND WAKE-UP FUNCTIONS
2035---------------------------
2036
2037Sleeping and waking on an event flagged in global data can be viewed as an
2038interaction between two pieces of data: the task state of the task waiting for
2039the event and the global data used to indicate the event.  To make sure that
2040these appear to happen in the right order, the primitives to begin the process
2041of going to sleep, and the primitives to initiate a wake up imply certain
2042barriers.
2043
2044Firstly, the sleeper normally follows something like this sequence of events:
2045
2046        for (;;) {
2047                set_current_state(TASK_UNINTERRUPTIBLE);
2048                if (event_indicated)
2049                        break;
2050                schedule();
2051        }
2052
2053A general memory barrier is interpolated automatically by set_current_state()
2054after it has altered the task state:
2055
2056        CPU 1
2057        ===============================
2058        set_current_state();
2059          smp_store_mb();
2060            STORE current->state
2061            <general barrier>
2062        LOAD event_indicated
2063
2064set_current_state() may be wrapped by:
2065
2066        prepare_to_wait();
2067        prepare_to_wait_exclusive();
2068
2069which therefore also imply a general memory barrier after setting the state.
2070The whole sequence above is available in various canned forms, all of which
2071interpolate the memory barrier in the right place:
2072
2073        wait_event();
2074        wait_event_interruptible();
2075        wait_event_interruptible_exclusive();
2076        wait_event_interruptible_timeout();
2077        wait_event_killable();
2078        wait_event_timeout();
2079        wait_on_bit();
2080        wait_on_bit_lock();
2081
2082
2083Secondly, code that performs a wake up normally follows something like this:
2084
2085        event_indicated = 1;
2086        wake_up(&event_wait_queue);
2087
2088or:
2089
2090        event_indicated = 1;
2091        wake_up_process(event_daemon);
2092
2093A write memory barrier is implied by wake_up() and co. if and only if they wake
2094something up.  The barrier occurs before the task state is cleared, and so sits
2095between the STORE to indicate the event and the STORE to set TASK_RUNNING:
2096
2097        CPU 1                           CPU 2
2098        =============================== ===============================
2099        set_current_state();            STORE event_indicated
2100          smp_store_mb();               wake_up();
2101            STORE current->state          <write barrier>
2102            <general barrier>             STORE current->state
2103        LOAD event_indicated
2104
2105To repeat, this write memory barrier is present if and only if something
2106is actually awakened.  To see this, consider the following sequence of
2107events, where X and Y are both initially zero:
2108
2109        CPU 1                           CPU 2
2110        =============================== ===============================
2111        X = 1;                          STORE event_indicated
2112        smp_mb();                       wake_up();
2113        Y = 1;                          wait_event(wq, Y == 1);
2114        wake_up();                        load from Y sees 1, no memory barrier
2115                                        load from X might see 0
2116
2117In contrast, if a wakeup does occur, CPU 2's load from X would be guaranteed
2118to see 1.
2119
2120The available waker functions include:
2121
2122        complete();
2123        wake_up();
2124        wake_up_all();
2125        wake_up_bit();
2126        wake_up_interruptible();
2127        wake_up_interruptible_all();
2128        wake_up_interruptible_nr();
2129        wake_up_interruptible_poll();
2130        wake_up_interruptible_sync();
2131        wake_up_interruptible_sync_poll();
2132        wake_up_locked();
2133        wake_up_locked_poll();
2134        wake_up_nr();
2135        wake_up_poll();
2136        wake_up_process();
2137
2138
2139[!] Note that the memory barriers implied by the sleeper and the waker do _not_
2140order multiple stores before the wake-up with respect to loads of those stored
2141values after the sleeper has called set_current_state().  For instance, if the
2142sleeper does:
2143
2144        set_current_state(TASK_INTERRUPTIBLE);
2145        if (event_indicated)
2146                break;
2147        __set_current_state(TASK_RUNNING);
2148        do_something(my_data);
2149
2150and the waker does:
2151
2152        my_data = value;
2153        event_indicated = 1;
2154        wake_up(&event_wait_queue);
2155
2156there's no guarantee that the change to event_indicated will be perceived by
2157the sleeper as coming after the change to my_data.  In such a circumstance, the
2158code on both sides must interpolate its own memory barriers between the
2159separate data accesses.  Thus the above sleeper ought to do:
2160
2161        set_current_state(TASK_INTERRUPTIBLE);
2162        if (event_indicated) {
2163                smp_rmb();
2164                do_something(my_data);
2165        }
2166
2167and the waker should do:
2168
2169        my_data = value;
2170        smp_wmb();
2171        event_indicated = 1;
2172        wake_up(&event_wait_queue);
2173
2174
2175MISCELLANEOUS FUNCTIONS
2176-----------------------
2177
2178Other functions that imply barriers:
2179
2180 (*) schedule() and similar imply full memory barriers.
2181
2182
2183===================================
2184INTER-CPU ACQUIRING BARRIER EFFECTS
2185===================================
2186
2187On SMP systems locking primitives give a more substantial form of barrier: one
2188that does affect memory access ordering on other CPUs, within the context of
2189conflict on any particular lock.
2190
2191
2192ACQUIRES VS MEMORY ACCESSES
2193---------------------------
2194
2195Consider the following: the system has a pair of spinlocks (M) and (Q), and
2196three CPUs; then should the following sequence of events occur:
2197
2198        CPU 1                           CPU 2
2199        =============================== ===============================
2200        WRITE_ONCE(*A, a);              WRITE_ONCE(*E, e);
2201        ACQUIRE M                       ACQUIRE Q
2202        WRITE_ONCE(*B, b);              WRITE_ONCE(*F, f);
2203        WRITE_ONCE(*C, c);              WRITE_ONCE(*G, g);
2204        RELEASE M                       RELEASE Q
2205        WRITE_ONCE(*D, d);              WRITE_ONCE(*H, h);
2206
2207Then there is no guarantee as to what order CPU 3 will see the accesses to *A
2208through *H occur in, other than the constraints imposed by the separate locks
2209on the separate CPUs. It might, for example, see:
2210
2211        *E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M
2212
2213But it won't see any of:
2214
2215        *B, *C or *D preceding ACQUIRE M
2216        *A, *B or *C following RELEASE M
2217        *F, *G or *H preceding ACQUIRE Q
2218        *E, *F or *G following RELEASE Q
2219
2220
2221
2222ACQUIRES VS I/O ACCESSES
2223------------------------
2224
2225Under certain circumstances (especially involving NUMA), I/O accesses within
2226two spinlocked sections on two different CPUs may be seen as interleaved by the
2227PCI bridge, because the PCI bridge does not necessarily participate in the
2228cache-coherence protocol, and is therefore incapable of issuing the required
2229read memory barriers.
2230
2231For example:
2232
2233        CPU 1                           CPU 2
2234        =============================== ===============================
        spin_lock(Q);
        writel(0, ADDR);
2237        writel(1, DATA);
2238        spin_unlock(Q);
2239                                        spin_lock(Q);
2240                                        writel(4, ADDR);
2241                                        writel(5, DATA);
2242                                        spin_unlock(Q);
2243
2244may be seen by the PCI bridge as follows:
2245
2246        STORE *ADDR = 0, STORE *ADDR = 4, STORE *DATA = 1, STORE *DATA = 5
2247
2248which would probably cause the hardware to malfunction.
2249
2250
2251What is necessary here is to intervene with an mmiowb() before dropping the
2252spinlock, for example:
2253
2254        CPU 1                           CPU 2
2255        =============================== ===============================
        spin_lock(Q);
        writel(0, ADDR);
2258        writel(1, DATA);
2259        mmiowb();
2260        spin_unlock(Q);
2261                                        spin_lock(Q);
2262                                        writel(4, ADDR);
2263                                        writel(5, DATA);
2264                                        mmiowb();
2265                                        spin_unlock(Q);
2266
This will ensure that the two stores issued on CPU 1 appear at the PCI bridge
2268before either of the stores issued on CPU 2.
2269
2270
2271Furthermore, following a store by a load from the same device obviates the need
2272for the mmiowb(), because the load forces the store to complete before the load
2273is performed:
2274
2275        CPU 1                           CPU 2
2276        =============================== ===============================
        spin_lock(Q);
        writel(0, ADDR);
2279        a = readl(DATA);
2280        spin_unlock(Q);
2281                                        spin_lock(Q);
2282                                        writel(4, ADDR);
2283                                        b = readl(DATA);
2284                                        spin_unlock(Q);
2285
2286
2287See Documentation/DocBook/deviceiobook.tmpl for more information.
2288
2289
2290=================================
2291WHERE ARE MEMORY BARRIERS NEEDED?
2292=================================
2293
2294Under normal operation, memory operation reordering is generally not going to
2295be a problem as a single-threaded linear piece of code will still appear to
2296work correctly, even if it's in an SMP kernel.  There are, however, four
2297circumstances in which reordering definitely _could_ be a problem:
2298
2299 (*) Interprocessor interaction.
2300
2301 (*) Atomic operations.
2302
2303 (*) Accessing devices.
2304
2305 (*) Interrupts.
2306
2307
2308INTERPROCESSOR INTERACTION
2309--------------------------
2310
2311When there's a system with more than one processor, more than one CPU in the
2312system may be working on the same data set at the same time.  This can cause
2313synchronisation problems, and the usual way of dealing with them is to use
2314locks.  Locks, however, are quite expensive, and so it may be preferable to
2315operate without the use of a lock if at all possible.  In such a case
2316operations that affect both CPUs may have to be carefully ordered to prevent
2317a malfunction.
2318
2319Consider, for example, the R/W semaphore slow path.  Here a waiting process is
2320queued on the semaphore, by virtue of it having a piece of its stack linked to
2321the semaphore's list of waiting processes:
2322
2323        struct rw_semaphore {
2324                ...
2325                spinlock_t lock;
2326                struct list_head waiters;
2327        };
2328
2329        struct rwsem_waiter {
2330                struct list_head list;
2331                struct task_struct *task;
2332        };
2333
2334To wake up a particular waiter, the up_read() or up_write() functions have to:
2335
 (1) read the next pointer from this waiter's record to know where the
     next waiter record is;
2338
2339 (2) read the pointer to the waiter's task structure;
2340
2341 (3) clear the task pointer to tell the waiter it has been given the semaphore;
2342
2343 (4) call wake_up_process() on the task; and
2344
2345 (5) release the reference held on the waiter's task struct.
2346
2347In other words, it has to perform this sequence of events:
2348
2349        LOAD waiter->list.next;
2350        LOAD waiter->task;
2351        STORE waiter->task;
2352        CALL wakeup
2353        RELEASE task
2354
2355and if any of these steps occur out of order, then the whole thing may
2356malfunction.
2357
2358Once it has queued itself and dropped the semaphore lock, the waiter does not
2359get the lock again; it instead just waits for its task pointer to be cleared
2360before proceeding.  Since the record is on the waiter's stack, this means that
2361if the task pointer is cleared _before_ the next pointer in the list is read,
2362another CPU might start processing the waiter and might clobber the waiter's
2363stack before the up*() function has a chance to read the next pointer.
2364
2365Consider then what might happen to the above sequence of events:
2366
2367        CPU 1                           CPU 2
2368        =============================== ===============================
2369                                        down_xxx()
2370                                        Queue waiter
2371                                        Sleep
2372        up_yyy()
2373        LOAD waiter->task;
2374        STORE waiter->task;
2375                                        Woken up by other event
2376        <preempt>
2377                                        Resume processing
2378                                        down_xxx() returns
2379                                        call foo()
2380                                        foo() clobbers *waiter
2381        </preempt>
2382        LOAD waiter->list.next;
2383        --- OOPS ---
2384
2385This could be dealt with using the semaphore lock, but then the down_xxx()
2386function has to needlessly get the spinlock again after being woken up.
2387
2388The way to deal with this is to insert a general SMP memory barrier:
2389
2390        LOAD waiter->list.next;
2391        LOAD waiter->task;
2392        smp_mb();
2393        STORE waiter->task;
2394        CALL wakeup
2395        RELEASE task
2396
2397In this case, the barrier makes a guarantee that all memory accesses before the
2398barrier will appear to happen before all the memory accesses after the barrier
2399with respect to the other CPUs on the system.  It does _not_ guarantee that all
2400the memory accesses before the barrier will be complete by the time the barrier
2401instruction itself is complete.
2402
2403On a UP system - where this wouldn't be a problem - the smp_mb() is just a
2404compiler barrier, thus making sure the compiler emits the instructions in the
2405right order without actually intervening in the CPU.  Since there's only one
2406CPU, that CPU's dependency ordering logic will take care of everything else.
2407
2408
2409ATOMIC OPERATIONS
2410-----------------
2411
2412Whilst they are technically interprocessor interaction considerations, atomic
2413operations are noted specially as some of them imply full memory barriers and
2414some don't, but they're very heavily relied on as a group throughout the
2415kernel.
2416
2417Any atomic operation that modifies some state in memory and returns information
2418about the state (old or new) implies an SMP-conditional general memory barrier
2419(smp_mb()) on each side of the actual operation (with the exception of
2420explicit lock operations, described later).  These include:
2421
2422        xchg();
2423        atomic_xchg();                  atomic_long_xchg();
2424        atomic_inc_return();            atomic_long_inc_return();
2425        atomic_dec_return();            atomic_long_dec_return();
2426        atomic_add_return();            atomic_long_add_return();
2427        atomic_sub_return();            atomic_long_sub_return();
2428        atomic_inc_and_test();          atomic_long_inc_and_test();
2429        atomic_dec_and_test();          atomic_long_dec_and_test();
2430        atomic_sub_and_test();          atomic_long_sub_and_test();
2431        atomic_add_negative();          atomic_long_add_negative();
2432        test_and_set_bit();
2433        test_and_clear_bit();
2434        test_and_change_bit();
2435
2436        /* when succeeds */
2437        cmpxchg();
2438        atomic_cmpxchg();               atomic_long_cmpxchg();
2439        atomic_add_unless();            atomic_long_add_unless();
2440
2441These are used for such things as implementing ACQUIRE-class and RELEASE-class
2442operations and adjusting reference counters towards object destruction, and as
2443such the implicit memory barrier effects are necessary.
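
For example, a reference-count release might rely on the implied
barriers (a sketch; free_object() is a hypothetical destructor):

        if (atomic_dec_and_test(&obj->ref_count))
                free_object(obj);       /* prior accesses to obj are
                                           ordered before the free */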
2444
2445
2446The following operations are potential problems as they do _not_ imply memory
2447barriers, but might be used for implementing such things as RELEASE-class
2448operations:
2449
2450        atomic_set();
2451        set_bit();
2452        clear_bit();
2453        change_bit();
2454
2455With these the appropriate explicit memory barrier should be used if necessary
2456(smp_mb__before_atomic() for instance).
2457
2458
2459The following also do _not_ imply memory barriers, and so may require explicit
2460memory barriers under some circumstances (smp_mb__before_atomic() for
2461instance):
2462
2463        atomic_add();
2464        atomic_sub();
2465        atomic_inc();
2466        atomic_dec();
2467
2468If they're used for statistics generation, then they probably don't need memory
2469barriers, unless there's a coupling between statistical data.
2470
2471If they're used for reference counting on an object to control its lifetime,
2472they probably don't need memory barriers because either the reference count
will be adjusted inside a locked section, or the caller will already hold
sufficient references to make the lock, and thus a memory barrier,
unnecessary.
2475
2476If they're used for constructing a lock of some description, then they probably
2477do need memory barriers as a lock primitive generally has to do things in a
2478specific order.
2479
2480Basically, each usage case has to be carefully considered as to whether memory
2481barriers are needed or not.
2482
2483The following operations are special locking primitives:
2484
2485        test_and_set_bit_lock();
2486        clear_bit_unlock();
2487        __clear_bit_unlock();
2488
These implement ACQUIRE-class and RELEASE-class operations.  These should
be used in preference to other operations when implementing locking
primitives, because their implementations can be optimised on many
architectures.
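
A minimal bit-spinlock sketch built from these primitives (the 'bitlock'
word is hypothetical):

        while (test_and_set_bit_lock(0, &bitlock))
                cpu_relax();            /* ACQUIRE on success */
        /* ... critical section ... */
        clear_bit_unlock(0, &bitlock);  /* RELEASE */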
2492
2493[!] Note that special memory barrier primitives are available for these
2494situations because on some CPUs the atomic instructions used imply full memory
2495barriers, and so barrier instructions are superfluous in conjunction with them,
2496and in such cases the special barrier primitives will be no-ops.
2497
2498See Documentation/atomic_ops.txt for more information.
2499
2500
2501ACCESSING DEVICES
2502-----------------
2503
2504Many devices can be memory mapped, and so appear to the CPU as if they're just
2505a set of memory locations.  To control such a device, the driver usually has to
2506make the right memory accesses in exactly the right order.
2507
2508However, having a clever CPU or a clever compiler creates a potential problem
2509in that the carefully sequenced accesses in the driver code won't reach the
2510device in the requisite order if the CPU or the compiler thinks it is more
2511efficient to reorder, combine or merge accesses - something that would cause
2512the device to malfunction.
2513
2514Inside of the Linux kernel, I/O should be done through the appropriate accessor
2515routines - such as inb() or writel() - which know how to make such accesses
2516appropriately sequential.  Whilst this, for the most part, renders the explicit
2517use of memory barriers unnecessary, there are a couple of situations where they
2518might be needed:
2519
2520 (1) On some systems, I/O stores are not strongly ordered across all CPUs, and
2521     so for _all_ general drivers locks should be used and mmiowb() must be
2522     issued prior to unlocking the critical section.
2523
2524 (2) If the accessor functions are used to refer to an I/O memory window with
2525     relaxed memory access properties, then _mandatory_ memory barriers are
2526     required to enforce ordering.
2527
2528See Documentation/DocBook/deviceiobook.tmpl for more information.
2529
2530
2531INTERRUPTS
2532----------
2533
2534A driver may be interrupted by its own interrupt service routine, and thus the
2535two parts of the driver may interfere with each other's attempts to control or
2536access the device.
2537
2538This may be alleviated - at least in part - by disabling local interrupts (a
2539form of locking), such that the critical operations are all contained within
2540the interrupt-disabled section in the driver.  Whilst the driver's interrupt
2541routine is executing, the driver's core may not run on the same CPU, and its
2542interrupt is not permitted to happen again until the current interrupt has been
2543handled, thus the interrupt handler does not need to lock against that.
2544
2545However, consider a driver that was talking to an ethernet card that sports an
2546address register and a data register.  If that driver's core talks to the card
2547under interrupt-disablement and then the driver's interrupt handler is invoked:
2548
2549        LOCAL IRQ DISABLE
        writew(3, ADDR);
        writew(y, DATA);
        LOCAL IRQ ENABLE
        <interrupt>
        writew(4, ADDR);
        q = readw(DATA);
2556        </interrupt>
2557
2558The store to the data register might happen after the second store to the
2559address register if ordering rules are sufficiently relaxed:
2560
2561        STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA
2562
2563
2564If ordering rules are relaxed, it must be assumed that accesses done inside an
2565interrupt disabled section may leak outside of it and may interleave with
2566accesses performed in an interrupt - and vice versa - unless implicit or
2567explicit barriers are used.
2568
2569Normally this won't be a problem because the I/O accesses done inside such
2570sections will include synchronous load operations on strictly ordered I/O
2571registers that form implicit I/O barriers. If this isn't sufficient then an
2572mmiowb() may need to be used explicitly.
2573
2574
2575A similar situation may occur between an interrupt routine and two routines
2576running on separate CPUs that communicate with each other. If such a case is
2577likely, then interrupt-disabling locks should be used to guarantee ordering.
2578
2579
2580==========================
2581KERNEL I/O BARRIER EFFECTS
2582==========================
2583
2584When accessing I/O memory, drivers should use the appropriate accessor
2585functions:
2586
2587 (*) inX(), outX():
2588
2589     These are intended to talk to I/O space rather than memory space, but
2590     that's primarily a CPU-specific concept. The i386 and x86_64 processors do
2591     indeed have special I/O space access cycles and instructions, but many
2592     CPUs don't have such a concept.
2593
2594     The PCI bus, amongst others, defines an I/O space concept which - on such
2595     CPUs as i386 and x86_64 - readily maps to the CPU's concept of I/O
2596     space.  However, it may also be mapped as a virtual I/O space in the CPU's
2597     memory map, particularly on those CPUs that don't support alternate I/O
2598     spaces.
2599
2600     Accesses to this space may be fully synchronous (as on i386), but
2601     intermediary bridges (such as the PCI host bridge) may not fully honour
2602     that.
2603
2604     They are guaranteed to be fully ordered with respect to each other.
2605
2606     They are not guaranteed to be fully ordered with respect to other types of
2607     memory and I/O operation.
2608
2609 (*) readX(), writeX():
2610
2611     Whether these are guaranteed to be fully ordered and uncombined with
2612     respect to each other on the issuing CPU depends on the characteristics
2613     defined for the memory window through which they're accessing. On later
2614     i386 architecture machines, for example, this is controlled by way of the
2615     MTRR registers.
2616
2617     Ordinarily, these will be guaranteed to be fully ordered and uncombined,
2618     provided they're not accessing a prefetchable device.
2619
2620     However, intermediary hardware (such as a PCI bridge) may indulge in
2621     deferral if it so wishes; to flush a store, a load from the same location
2622     is preferred[*], but a load from the same device or from configuration
2623     space should suffice for PCI.
2624
2625     [*] NOTE! attempting to load from the same location as was written to may
2626         cause a malfunction - consider the 16550 Rx/Tx serial registers for
2627         example.
2628
2629     Used with prefetchable I/O memory, an mmiowb() barrier may be required to
2630     force stores to be ordered.
2631
2632     Please refer to the PCI specification for more information on interactions
2633     between PCI transactions.
2634
 (*) readX_relaxed(), writeX_relaxed():

     These are similar to readX() and writeX(), but provide weaker memory
     ordering guarantees.  Specifically, they do not guarantee ordering with
     respect to normal memory accesses (e.g. DMA buffers) nor do they
     guarantee ordering with respect to LOCK or UNLOCK operations.  If the
     latter is required, an mmiowb() barrier can be used.  Note that relaxed
     accesses to the same peripheral are guaranteed to be ordered with
     respect to each other.  (See the sketch after this list for an
     illustration of the DMA-buffer caveat.)
2644
 (*) ioreadX(), iowriteX():
2646
2647     These will perform appropriately for the type of access they're actually
2648     doing, be it inX()/outX() or readX()/writeX().
2649
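As a hedged sketch of the readX_relaxed() caveat above (the device layout,
register offsets and DMA_DONE flag are hypothetical), a driver may poll a
status register with a relaxed read, but should use a fully ordered readX()
before touching a buffer the device has just DMA'd to:

        /* Wait for the device to signal DMA completion: */
        while (!(readl_relaxed(dev->regs + STATUS) & DMA_DONE))
                cpu_relax();                    /* polling: ordering of the
                                                   poll itself is irrelevant */
        len = readl(dev->regs + LEN);           /* ordered read: the DMA'd
                                                   data in buf is now visible */
        process(buf, len);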
2650
2651========================================
2652ASSUMED MINIMUM EXECUTION ORDERING MODEL
2653========================================
2654
2655It has to be assumed that the conceptual CPU is weakly-ordered but that it will
2656maintain the appearance of program causality with respect to itself.  Some CPUs
2657(such as i386 or x86_64) are more constrained than others (such as powerpc or
2658frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside
2659of arch-specific code.
2660
2661This means that it must be considered that the CPU will execute its instruction
2662stream in any order it feels like - or even in parallel - provided that if an
2663instruction in the stream depends on an earlier instruction, then that
2664earlier instruction must be sufficiently complete[*] before the later
2665instruction may proceed; in other words: provided that the appearance of
2666causality is maintained.
2667
2668 [*] Some instructions have more than one effect - such as changing the
2669     condition codes, changing registers or changing memory - and different
2670     instructions may depend on different effects.
2671
2672A CPU may also discard any instruction sequence that winds up having no
2673ultimate effect.  For example, if two adjacent instructions both load an
2674immediate value into the same register, the first may be discarded.
2675
2676
Similarly, it has to be assumed that the compiler might reorder the
instruction stream in any way it sees fit, again provided the appearance of
causality is maintained.
2680
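For instance (a sketch; x and y are ordinary pointers with no dependency
between them), the compiler is free to emit the second store first unless a
compiler barrier intervenes:

        *x = 1;
        *y = 2;         /* may be emitted before the store to *x */

whereas:

        *x = 1;
        barrier();      /* compiler barrier: the stores must be emitted
                           in this order (the CPU may still reorder them) */
        *y = 2;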
2681
2682============================
2683THE EFFECTS OF THE CPU CACHE
2684============================
2685
2686The way cached memory operations are perceived across the system is affected to
2687a certain extent by the caches that lie between CPUs and memory, and by the
2688memory coherence system that maintains the consistency of state in the system.
2689
2690As far as the way a CPU interacts with another part of the system through the
2691caches goes, the memory system has to include the CPU's caches, and memory
2692barriers for the most part act at the interface between the CPU and its cache
2693(memory barriers logically act on the dotted line in the following diagram):
2694
2695            <--- CPU --->         :       <----------- Memory ----------->
2696                                  :
2697        +--------+    +--------+  :   +--------+    +-----------+
2698        |        |    |        |  :   |        |    |           |    +--------+
2699        |  CPU   |    | Memory |  :   | CPU    |    |           |    |        |
2700        |  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
2701        |        |    | Queue  |  :   |        |    |           |--->| Memory |
2702        |        |    |        |  :   |        |    |           |    |        |
2703        +--------+    +--------+  :   +--------+    |           |    |        |
2704                                  :                 | Cache     |    +--------+
2705                                  :                 | Coherency |
2706                                  :                 | Mechanism |    +--------+
2707        +--------+    +--------+  :   +--------+    |           |    |        |
2708        |        |    |        |  :   |        |    |           |    |        |
2709        |  CPU   |    | Memory |  :   | CPU    |    |           |--->| Device |
2710        |  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
2711        |        |    | Queue  |  :   |        |    |           |    |        |
2712        |        |    |        |  :   |        |    |           |    +--------+
2713        +--------+    +--------+  :   +--------+    +-----------+
2714                                  :
2715                                  :
2716
2717Although any particular load or store may not actually appear outside of the
2718CPU that issued it since it may have been satisfied within the CPU's own cache,
2719it will still appear as if the full memory access had taken place as far as the
2720other CPUs are concerned since the cache coherency mechanisms will migrate the
2721cacheline over to the accessing CPU and propagate the effects upon conflict.
2722
2723The CPU core may execute instructions in any order it deems fit, provided the
2724expected program causality appears to be maintained.  Some of the instructions
2725generate load and store operations which then go into the queue of memory
2726accesses to be performed.  The core may place these in the queue in any order
2727it wishes, and continue execution until it is forced to wait for an instruction
2728to complete.
2729
2730What memory barriers are concerned with is controlling the order in which
2731accesses cross from the CPU side of things to the memory side of things, and
2732the order in which the effects are perceived to happen by the other observers
2733in the system.
2734
2735[!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
2736their own loads and stores as if they had happened in program order.
2737
2738[!] MMIO or other device accesses may bypass the cache system.  This depends on
2739the properties of the memory window through which devices are accessed and/or
2740the use of any special device communication instructions the CPU may have.
2741
2742
2743CACHE COHERENCY
2744---------------
2745
2746Life isn't quite as simple as it may appear above, however: for while the
2747caches are expected to be coherent, there's no guarantee that that coherency
2748will be ordered.  This means that whilst changes made on one CPU will
2749eventually become visible on all CPUs, there's no guarantee that they will
2750become apparent in the same order on those other CPUs.
2751
2752
2753Consider dealing with a system that has a pair of CPUs (1 & 2), each of which
2754has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D):
2755
2756                    :
2757                    :                          +--------+
2758                    :      +---------+         |        |
2759        +--------+  : +--->| Cache A |<------->|        |
2760        |        |  : |    +---------+         |        |
2761        |  CPU 1 |<---+                        |        |
2762        |        |  : |    +---------+         |        |
2763        +--------+  : +--->| Cache B |<------->|        |
2764                    :      +---------+         |        |
2765                    :                          | Memory |
2766                    :      +---------+         | System |
2767        +--------+  : +--->| Cache C |<------->|        |
2768        |        |  : |    +---------+         |        |
2769        |  CPU 2 |<---+                        |        |
2770        |        |  : |    +---------+         |        |
2771        +--------+  : +--->| Cache D |<------->|        |
2772                    :      +---------+         |        |
2773                    :                          +--------+
2774                    :
2775
2776Imagine the system has the following properties:
2777
2778 (*) an odd-numbered cache line may be in cache A, cache C or it may still be
2779     resident in memory;
2780
2781 (*) an even-numbered cache line may be in cache B, cache D or it may still be
2782     resident in memory;
2783
2784 (*) whilst the CPU core is interrogating one cache, the other cache may be
2785     making use of the bus to access the rest of the system - perhaps to
2786     displace a dirty cacheline or to do a speculative load;
2787
2788 (*) each cache has a queue of operations that need to be applied to that cache
2789     to maintain coherency with the rest of the system;
2790
2791 (*) the coherency queue is not flushed by normal loads to lines already
2792     present in the cache, even though the contents of the queue may
2793     potentially affect those loads.
2794
2795Imagine, then, that two writes are made on the first CPU, with a write barrier
2796between them to guarantee that they will appear to reach that CPU's caches in
2797the requisite order:
2798
2799        CPU 1           CPU 2           COMMENT
2800        =============== =============== =======================================
2801                                        u == 0, v == 1 and p == &u, q == &u
2802        v = 2;
2803        smp_wmb();                      Make sure change to v is visible before
2804                                         change to p
2805        <A:modify v=2>                  v is now in cache A exclusively
2806        p = &v;
2807        <B:modify p=&v>                 p is now in cache B exclusively
2808
2809The write memory barrier forces the other CPUs in the system to perceive that
2810the local CPU's caches have apparently been updated in the correct order.  But
2811now imagine that the second CPU wants to read those values:
2812
2813        CPU 1           CPU 2           COMMENT
2814        =============== =============== =======================================
2815        ...
2816                        q = p;
2817                        x = *q;
2818
2819The above pair of reads may then fail to happen in the expected order, as the
2820cacheline holding p may get updated in one of the second CPU's caches whilst
2821the update to the cacheline holding v is delayed in the other of the second
2822CPU's caches by some other cache event:
2823
2824        CPU 1           CPU 2           COMMENT
2825        =============== =============== =======================================
2826                                        u == 0, v == 1 and p == &u, q == &u
2827        v = 2;
2828        smp_wmb();
2829        <A:modify v=2>  <C:busy>
2830                        <C:queue v=2>
2831        p = &v;         q = p;
2832                        <D:request p>
2833        <B:modify p=&v> <D:commit p=&v>
2834                        <D:read p>
2835                        x = *q;
2836                        <C:read *q>     Reads from v before v updated in cache
2837                        <C:unbusy>
2838                        <C:commit v=2>
2839
2840Basically, whilst both cachelines will be updated on CPU 2 eventually, there's
2841no guarantee that, without intervention, the order of update will be the same
2842as that committed on CPU 1.
2843
2844
2845To intervene, we need to interpolate a data dependency barrier or a read
2846barrier between the loads.  This will force the cache to commit its coherency
2847queue before processing any further requests:
2848
2849        CPU 1           CPU 2           COMMENT
2850        =============== =============== =======================================
2851                                        u == 0, v == 1 and p == &u, q == &u
2852        v = 2;
2853        smp_wmb();
2854        <A:modify v=2>  <C:busy>
2855                        <C:queue v=2>
2856        p = &v;         q = p;
2857                        <D:request p>
2858        <B:modify p=&v> <D:commit p=&v>
2859                        <D:read p>
2860                        smp_read_barrier_depends()
2861                        <C:unbusy>
2862                        <C:commit v=2>
2863                        x = *q;
2864                        <C:read *q>     Reads from v after v updated in cache
2865
2866
2867This sort of problem can be encountered on DEC Alpha processors as they have a
2868split cache that improves performance by making better use of the data bus.
2869Whilst most CPUs do imply a data dependency barrier on the read when a memory
2870access depends on a read, not all do, so it may not be relied on.
2871
Other CPUs may also have split caches, but they must coordinate between the
various cachelets for normal memory accesses.  The weaker semantics of the
Alpha remove the need for such coordination in the absence of memory
barriers.
2875
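In code, the portable idiom for the pointer-chasing case in the tables above
is (a sketch using the same p and q):

        q = READ_ONCE(p);
        smp_read_barrier_depends();     /* required for DEC Alpha; a no-op
                                           on most other architectures */
        x = *q;                         /* now guaranteed to see the data
                                           stored before p was published */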
2876
2877CACHE COHERENCY VS DMA
2878----------------------
2879
2880Not all systems maintain cache coherency with respect to devices doing DMA.  In
2881such cases, a device attempting DMA may obtain stale data from RAM because
2882dirty cache lines may be resident in the caches of various CPUs, and may not
2883have been written back to RAM yet.  To deal with this, the appropriate part of
2884the kernel must flush the overlapping bits of cache on each CPU (and maybe
2885invalidate them as well).
2886
2887In addition, the data DMA'd to RAM by a device may be overwritten by dirty
2888cache lines being written back to RAM from a CPU's cache after the device has
2889installed its own data, or cache lines present in the CPU's cache may simply
2890obscure the fact that RAM has been updated, until at such time as the cacheline
2891is discarded from the CPU's cache and reloaded.  To deal with this, the
2892appropriate part of the kernel must invalidate the overlapping bits of the
2893cache on each CPU.
2894
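In practice drivers rarely perform these flushes and invalidations directly;
the streaming DMA mapping API does it for them.  A hedged sketch (dev, buf
and len are hypothetical):

        /* Flush the CPU caches so the device sees the buffer contents: */
        addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

        /* ... point the device at 'addr' and let it DMA from the buffer ... */

        dma_unmap_single(dev, addr, len, DMA_TO_DEVICE);
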
2895See Documentation/cachetlb.txt for more information on cache management.
2896
2897
2898CACHE COHERENCY VS MMIO
2899-----------------------
2900
2901Memory mapped I/O usually takes place through memory locations that are part of
2902a window in the CPU's memory space that has different properties assigned than
the usual RAM-directed window.
2904
2905Amongst these properties is usually the fact that such accesses bypass the
2906caching entirely and go directly to the device buses.  This means MMIO accesses
2907may, in effect, overtake accesses to cached memory that were emitted earlier.
2908A memory barrier isn't sufficient in such a case, but rather the cache must be
2909flushed between the cached memory write and the MMIO access if the two are in
2910any way dependent.
2911
2912
2913=========================
2914THE THINGS CPUS GET UP TO
2915=========================
2916
2917A programmer might take it for granted that the CPU will perform memory
2918operations in exactly the order specified, so that if the CPU is, for example,
2919given the following piece of code to execute:
2920
2921        a = READ_ONCE(*A);
2922        WRITE_ONCE(*B, b);
2923        c = READ_ONCE(*C);
2924        d = READ_ONCE(*D);
2925        WRITE_ONCE(*E, e);
2926
2927they would then expect that the CPU will complete the memory operation for each
2928instruction before moving on to the next one, leading to a definite sequence of
2929operations as seen by external observers in the system:
2930
2931        LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.
2932
2933
2934Reality is, of course, much messier.  With many CPUs and compilers, the above
2935assumption doesn't hold because:
2936
2937 (*) loads are more likely to need to be completed immediately to permit
2938     execution progress, whereas stores can often be deferred without a
2939     problem;
2940
2941 (*) loads may be done speculatively, and the result discarded should it prove
2942     to have been unnecessary;
2943
2944 (*) loads may be done speculatively, leading to the result having been fetched
2945     at the wrong time in the expected sequence of events;
2946
2947 (*) the order of the memory accesses may be rearranged to promote better use
2948     of the CPU buses and caches;
2949
2950 (*) loads and stores may be combined to improve performance when talking to
2951     memory or I/O hardware that can do batched accesses of adjacent locations,
2952     thus cutting down on transaction setup costs (memory and PCI devices may
2953     both be able to do this); and
2954
2955 (*) the CPU's data cache may affect the ordering, and whilst cache-coherency
2956     mechanisms may alleviate this - once the store has actually hit the cache
2957     - there's no guarantee that the coherency management will be propagated in
2958     order to other CPUs.
2959
2960So what another CPU, say, might actually observe from the above piece of code
2961is:
2962
2963        LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B
2964
2965        (Where "LOAD {*C,*D}" is a combined load)
2966
2967
2968However, it is guaranteed that a CPU will be self-consistent: it will see its
2969_own_ accesses appear to be correctly ordered, without the need for a memory
2970barrier.  For instance with the following code:
2971
2972        U = READ_ONCE(*A);
2973        WRITE_ONCE(*A, V);
2974        WRITE_ONCE(*A, W);
2975        X = READ_ONCE(*A);
2976        WRITE_ONCE(*A, Y);
2977        Z = READ_ONCE(*A);
2978
2979and assuming no intervention by an external influence, it can be assumed that
2980the final result will appear to be:
2981
2982        U == the original value of *A
2983        X == W
2984        Z == Y
2985        *A == Y
2986
2987The code above may cause the CPU to generate the full sequence of memory
2988accesses:
2989
2990        U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A
2991
2992in that order, but, without intervention, the sequence may have almost any
2993combination of elements combined or discarded, provided the program's view
2994of the world remains consistent.  Note that READ_ONCE() and WRITE_ONCE()
2995are -not- optional in the above example, as there are architectures
2996where a given CPU might reorder successive loads to the same location.
2997On such architectures, READ_ONCE() and WRITE_ONCE() do whatever is
2998necessary to prevent this, for example, on Itanium the volatile casts
2999used by READ_ONCE() and WRITE_ONCE() cause GCC to emit the special ld.acq
3000and st.rel instructions (respectively) that prevent such reordering.
3001
3002The compiler may also combine, discard or defer elements of the sequence before
3003the CPU even sees them.
3004
3005For instance:
3006
3007        *A = V;
3008        *A = W;
3009
3010may be reduced to:
3011
3012        *A = W;
3013
since, without either a write barrier or a WRITE_ONCE(), the compiler may
assume that the effect of storing V to *A is lost.  Similarly:
3016
3017        *A = Y;
3018        Z = *A;
3019
may, without a memory barrier or a READ_ONCE() and WRITE_ONCE(), be
reduced to:
3022
3023        *A = Y;
3024        Z = Y;
3025
and the LOAD operation never appears outside of the CPU.
3027
3028
3029AND THEN THERE'S THE ALPHA
3030--------------------------
3031
The DEC Alpha CPU is one of the most relaxed CPUs there is.  Not only that,
some versions of the Alpha CPU have a split data cache, permitting them to
have two semantically-related cache lines updated at separate times.  This is
where the data dependency barrier really becomes necessary: it synchronises
both caches with the memory coherence system, making it appear that a pointer
update and the new data it points to become visible in the correct order.
3038
3039The Alpha defines the Linux kernel's memory barrier model.
3040
3041See the subsection on "Cache Coherency" above.
3042
3043VIRTUAL MACHINE GUESTS
----------------------
3045
3046Guests running within virtual machines might be affected by SMP effects even if
3047the guest itself is compiled without SMP support.  This is an artifact of
interfacing with an SMP host while running a UP kernel.  Using mandatory
3049barriers for this use-case would be possible but is often suboptimal.
3050
3051To handle this case optimally, low-level virt_mb() etc macros are available.
3052These have the same effect as smp_mb() etc when SMP is enabled, but generate
3053identical code for SMP and non-SMP systems. For example, virtual machine guests
3054should use virt_mb() rather than smp_mb() when synchronizing against a
3055(possibly SMP) host.
3056
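As a hedged sketch of the sort of guest/host signalling involved (the shared
ring structure and its fields are hypothetical):

        shared->idx = new_idx;          /* publish the new ring index */
        virt_mb();                      /* order the update before reading
                                           the host's notification flag */
        if (!(READ_ONCE(shared->flags) & RING_NO_NOTIFY))
                notify_host();
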
These are equivalent to their smp_mb() etc. counterparts in all other
respects; in particular, they do not control MMIO effects: to control MMIO
effects, use mandatory barriers.
3060
3061============
3062EXAMPLE USES
3063============
3064
3065CIRCULAR BUFFERS
3066----------------
3067
3068Memory barriers can be used to implement circular buffering without the need
3069of a lock to serialise the producer with the consumer.  See:
3070
3071        Documentation/circular-buffers.txt
3072
3073for details.
3074
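As a hedged sketch of the pattern described there ('size' is a power of two;
'head' and 'tail' are the only indices shared between the two sides):

        /* Producer: */
        buffer[head] = item;                    /* fill the slot first...   */
        smp_wmb();                              /* ...and commit the item   */
        WRITE_ONCE(head, (head + 1) & (size - 1));  /* then publish head    */

        /* Consumer: */
        if (READ_ONCE(head) != tail) {
                smp_rmb();                      /* read the index before... */
                consume(buffer[tail]);          /* ...reading the item      */
                smp_mb();                       /* finish with the item...  */
                WRITE_ONCE(tail, (tail + 1) & (size - 1));  /* ...then free
                                                               the slot     */
        }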
3075
3076==========
3077REFERENCES
3078==========
3079
3080Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
3081Digital Press)
3082        Chapter 5.2: Physical Address Space Characteristics
3083        Chapter 5.4: Caches and Write Buffers
3084        Chapter 5.5: Data Sharing
3085        Chapter 5.6: Read/Write Ordering
3086
3087AMD64 Architecture Programmer's Manual Volume 2: System Programming
3088        Chapter 7.1: Memory-Access Ordering
3089        Chapter 7.4: Buffering and Combining Memory Writes
3090
3091IA-32 Intel Architecture Software Developer's Manual, Volume 3:
3092System Programming Guide
3093        Chapter 7.1: Locked Atomic Operations
3094        Chapter 7.2: Memory Ordering
3095        Chapter 7.4: Serializing Instructions
3096
3097The SPARC Architecture Manual, Version 9
3098        Chapter 8: Memory Models
3099        Appendix D: Formal Specification of the Memory Models
3100        Appendix J: Programming with the Memory Models
3101
3102UltraSPARC Programmer Reference Manual
3103        Chapter 5: Memory Accesses and Cacheability
3104        Chapter 15: Sparc-V9 Memory Models
3105
3106UltraSPARC III Cu User's Manual
3107        Chapter 9: Memory Models
3108
3109UltraSPARC IIIi Processor User's Manual
3110        Chapter 8: Memory Models
3111
3112UltraSPARC Architecture 2005
3113        Chapter 9: Memory
3114        Appendix D: Formal Specifications of the Memory Models
3115
3116UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
3117        Chapter 8: Memory Models
3118        Appendix F: Caches and Cache Coherency
3119
3120Solaris Internals, Core Kernel Architecture, p63-68:
3121        Chapter 3.3: Hardware Considerations for Locks and
3122                        Synchronization
3123
3124Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
3125for Kernel Programmers:
3126        Chapter 13: Other Memory Models
3127
3128Intel Itanium Architecture Software Developer's Manual: Volume 1:
3129        Section 2.6: Speculation
3130        Section 4.4: Memory Access
3131