                Semantics and Behavior of Atomic and
                         Bitmask Operations

                          David S. Miller

        This document is intended to serve as a guide to Linux port
maintainers on how to implement atomic counter, bitops, and spinlock
interfaces properly.

        The atomic_t type should be defined as a signed integer.
Also, it should be made opaque such that any kind of cast to a normal
C integer type will fail.  Something like the following should
suffice:

        typedef struct { int counter; } atomic_t;

Historically, counter has been declared volatile.  This is now discouraged.
See Documentation/volatile-considered-harmful.txt for the complete rationale.

local_t is very similar to atomic_t. If the counter is per CPU and only
updated by one CPU, local_t is probably more appropriate. Please see
Documentation/local_ops.txt for the semantics of local_t.

The first operations to implement for atomic_t's are the initializers and
plain reads.

        #define ATOMIC_INIT(i)          { (i) }
        #define atomic_set(v, i)        ((v)->counter = (i))

The first macro is used in definitions, such as:

static atomic_t my_counter = ATOMIC_INIT(1);

The initializer is atomic in that the return values of the atomic operations
are guaranteed to be correct reflecting the initialized value if the
initializer is used before runtime.  If the initializer is used at runtime, a
proper implicit or explicit read memory barrier is needed before reading the
value with atomic_read from another thread.

The second interface can be used at runtime, as in:

        struct foo { atomic_t counter; };
        ...

        struct foo *k;

        k = kmalloc(sizeof(*k), GFP_KERNEL);
        if (!k)
                return -ENOMEM;
        atomic_set(&k->counter, 0);

The setting is atomic in that the return values of the atomic operations by
all threads are guaranteed to be correct reflecting either the value that has
been set with this operation or set with another operation.  A proper implicit
or explicit memory barrier is needed before the value set with the operation
is guaranteed to be readable with atomic_read from another thread.

Next, we have:

        #define atomic_read(v)  ((v)->counter)

which simply reads the counter value currently visible to the calling thread.
The read is atomic in that the return value is guaranteed to be one of the
values initialized or modified with the interface operations if a proper
implicit or explicit memory barrier is used after possible runtime
initialization by any other thread and the value is modified only with the
interface operations.  atomic_read does not guarantee that the runtime
initialization by any other thread is visible yet, so the user of the
interface must take care of that with a proper implicit or explicit memory
barrier.

*** WARNING: atomic_read() and atomic_set() DO NOT IMPLY BARRIERS! ***

Some architectures may choose to use the volatile keyword, barriers, or inline
assembly to guarantee some degree of immediacy for atomic_read() and
atomic_set().  This is not uniformly guaranteed, and may change in the future,
so all users of atomic_t should treat atomic_read() and atomic_set() as simple
C statements that may be reordered or optimized away entirely by the compiler
or processor, and explicitly invoke the appropriate compiler and/or memory
barrier for each use case.  Failure to do so will result in code that may
suddenly break when used with different architectures or compiler
optimizations, or even changes in unrelated code which changes how the
compiler optimizes the section accessing atomic_t variables.

*** YOU HAVE BEEN WARNED! ***
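
For instance, a hedged sketch of the publish/consume pattern this implies
(smp_wmb()/smp_rmb() are the generic kernel barriers; obj->data, obj->ready,
and use() are names made up for this example):

        /* CPU 0: publish the payload, then the flag. */
        obj->data = 42;
        smp_wmb();      /* order the data store before the flag store */
        atomic_set(&obj->ready, 1);

        /* CPU 1: consume, pairing a read barrier with the smp_wmb() above. */
        if (atomic_read(&obj->ready)) {
                smp_rmb();
                use(obj->data);
        }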

Properly aligned pointers, longs, ints, and chars (and unsigned
equivalents) may be atomically loaded from and stored to in the same
sense as described for atomic_read() and atomic_set().  The ACCESS_ONCE()
macro should be used to prevent the compiler from using optimizations
that might otherwise optimize accesses out of existence on the one hand,
or that might create unsolicited accesses on the other.

For example, consider the following code:

        while (a > 0)
                do_something();

If the compiler can prove that do_something() does not store to the
variable a, then the compiler is within its rights to transform this into
the following:

        tmp = a;
        if (tmp > 0)
                for (;;)
                        do_something();

If you don't want the compiler to do this (and you probably don't), then
you should use something like the following:

        while (ACCESS_ONCE(a) > 0)
                do_something();

Alternatively, you could place a barrier() call in the loop.
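
For example, a sketch of the barrier() variant of the same loop:

        while (a > 0) {
                do_something();
                barrier();      /* compiler must reload a on each pass */
        }

Note that barrier() is a compiler barrier only; like ACCESS_ONCE(), it
implies no memory ordering to the cpu.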

For another example, consider the following code:

        tmp_a = a;
        do_something_with(tmp_a);
        do_something_else_with(tmp_a);

If the compiler can prove that do_something_with() does not store to the
variable a, then the compiler is within its rights to manufacture an
additional load as follows:

        tmp_a = a;
        do_something_with(tmp_a);
        tmp_a = a;
        do_something_else_with(tmp_a);

This could fatally confuse your code if it expected the same value
to be passed to do_something_with() and do_something_else_with().

The compiler would be likely to manufacture this additional load if
do_something_with() was an inline function that made very heavy use
of registers: reloading from variable a could save a flush to the
stack and later reload.  To prevent the compiler from attacking your
code in this manner, write the following:

        tmp_a = ACCESS_ONCE(a);
        do_something_with(tmp_a);
        do_something_else_with(tmp_a);

For a final example, consider the following code, assuming that the
variable a is set at boot time before the second CPU is brought online
and never changed later, so that memory barriers are not needed:

        if (a)
                b = 9;
        else
                b = 42;

The compiler is within its rights to manufacture an additional store
by transforming the above code into the following:

        b = 42;
        if (a)
                b = 9;

This could come as a fatal surprise to other code running concurrently
that expected b to never have the value 42 if a was zero.  To prevent
the compiler from doing this, write something like:

        if (a)
                ACCESS_ONCE(b) = 9;
        else
                ACCESS_ONCE(b) = 42;

Don't even -think- about doing this without proper use of memory barriers,
locks, or atomic operations if variable a can change at runtime!

*** WARNING: ACCESS_ONCE() DOES NOT IMPLY A BARRIER! ***

Now, we move on to the atomic operation interfaces typically implemented
with the help of assembly code.

        void atomic_add(int i, atomic_t *v);
        void atomic_sub(int i, atomic_t *v);
        void atomic_inc(atomic_t *v);
        void atomic_dec(atomic_t *v);

These four routines add and subtract integral values to/from the given
atomic_t value.  The first two routines pass explicit integers by
which to make the adjustment, whereas the latter two use an implicit
adjustment value of "1".

One very important aspect of these routines is that they DO NOT
require any explicit memory barriers.  They need only perform the
atomic_t counter update in an SMP safe manner.
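
For example, a minimal sketch of a pure event counter, where no ordering
against other memory operations is needed (nr_events is a name made up
for this example):

        static atomic_t nr_events = ATOMIC_INIT(0);

        void record_event(void)
        {
                atomic_inc(&nr_events);         /* SMP safe, no barriers implied */
        }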

Next, we have:

        int atomic_inc_return(atomic_t *v);
        int atomic_dec_return(atomic_t *v);

These routines add 1 and subtract 1, respectively, from the given
atomic_t and return the new counter value after the operation is
performed.

Unlike the above routines, it is required that explicit memory
barriers are performed before and after the operation.  It must be
done such that all memory operations before and after the atomic
operation calls are strongly ordered with respect to the atomic
operation itself.

For example, it should behave as if a smp_mb() call existed both
before and after the atomic operation.
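
For instance, in the reference counting pattern shown later in this
document, the implied barriers make explicit smp_mb() calls unnecessary
around the returning variant (release_obj() is a name made up for this
example):

        obj->dead = 1;
        if (atomic_dec_return(&obj->refcnt) == 0)
                release_obj(obj);
        /* The store to obj->dead is globally visible no later than
         * the counter decrement, with no explicit barriers needed.
         */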

If the atomic instructions used in an implementation provide explicit
memory barrier semantics which satisfy the above requirements, that is
fine as well.

Let's move on:

        int atomic_add_return(int i, atomic_t *v);
        int atomic_sub_return(int i, atomic_t *v);

These behave just like atomic_{inc,dec}_return() except that an
explicit counter adjustment is given instead of the implicit "1".
This means that like atomic_{inc,dec}_return(), the memory barrier
semantics are required.

Next:

        int atomic_inc_and_test(atomic_t *v);
        int atomic_dec_and_test(atomic_t *v);

These two routines increment and decrement by 1, respectively, the
given atomic counter.  They return a boolean indicating whether the
resulting counter value was zero or not.

They require explicit memory barrier semantics around the operation,
as above.
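
For example, the canonical use is dropping the last reference to an
object:

        if (atomic_dec_and_test(&obj->refcnt))
                kfree(obj);     /* we held the last reference */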

        int atomic_sub_and_test(int i, atomic_t *v);

This is identical to atomic_dec_and_test() except that an explicit
decrement is given instead of the implicit "1".  It requires explicit
memory barrier semantics around the operation.

        int atomic_add_negative(int i, atomic_t *v);

The given increment is added to the given atomic counter value.  A
boolean is returned which indicates whether the resulting counter value
is negative.  It requires explicit memory barrier semantics around the
operation.

Then:

        int atomic_xchg(atomic_t *v, int new);

This performs an atomic exchange operation on the atomic variable v, setting
the given new value.  It returns the old value that the atomic variable v had
just before the operation.

atomic_xchg requires explicit memory barriers around the operation.
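
For example, a sketch of a hand-off where a consumer atomically takes
whatever count is pending and leaves zero behind (pending_work is a name
made up for this example):

        static atomic_t pending_work = ATOMIC_INIT(0);

        int take_all_pending(void)
        {
                return atomic_xchg(&pending_work, 0);
        }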

        int atomic_cmpxchg(atomic_t *v, int old, int new);

This performs an atomic compare exchange operation on the atomic value v,
with the given old and new values. Like all atomic_xxx operations,
atomic_cmpxchg will only satisfy its atomicity semantics as long as all
other accesses of *v are performed through atomic_xxx operations.

atomic_cmpxchg requires explicit memory barriers around the operation.

The semantics for atomic_cmpxchg are the same as those defined for 'cas'
below.

Finally:

        int atomic_add_unless(atomic_t *v, int a, int u);

If the atomic value v is not equal to u, this function adds a to v, and
returns non zero. If v is equal to u then it returns zero. This is done as
an atomic operation.

atomic_add_unless requires explicit memory barriers around the operation
unless it fails (returns 0).
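
A hedged sketch of how a generic implementation might build this from
atomic_cmpxchg() (architectures may of course do better):

        static inline int atomic_add_unless(atomic_t *v, int a, int u)
        {
                int c, old;

                c = atomic_read(v);
                for (;;) {
                        if (c == u)
                                break;
                        old = atomic_cmpxchg(v, c, c + a);
                        if (old == c)
                                break;
                        c = old;        /* lost a race; retry with the fresh value */
                }
                return c != u;
        }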

atomic_inc_not_zero(v) is equivalent to atomic_add_unless(v, 1, 0).

If a caller requires memory barrier semantics around an atomic_t
operation which does not return a value, a set of interfaces are
defined which accomplish this:

        void smp_mb__before_atomic_dec(void);
        void smp_mb__after_atomic_dec(void);
        void smp_mb__before_atomic_inc(void);
        void smp_mb__after_atomic_inc(void);

For example, smp_mb__before_atomic_dec() can be used like so:

        obj->dead = 1;
        smp_mb__before_atomic_dec();
        atomic_dec(&obj->ref_count);

It makes sure that all memory operations preceding the atomic_dec()
call are strongly ordered with respect to the atomic counter
operation.  In the above example, it guarantees that the assignment of
"1" to obj->dead will be globally visible to other cpus before the
atomic counter decrement.

Without the explicit smp_mb__before_atomic_dec() call, the
implementation could legally allow the atomic counter update to become
visible to other cpus before the "obj->dead = 1;" assignment.

The other three interfaces listed are used to provide explicit
ordering with respect to memory operations after an atomic_dec() call
(smp_mb__after_atomic_dec()) and around atomic_inc() calls
(smp_mb__{before,after}_atomic_inc()).
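
For example, a sketch using the "after" variant (obj->published is a
field made up for this example):

        atomic_inc(&obj->ref_count);
        smp_mb__after_atomic_inc();
        obj->published = 1;     /* cannot be seen before the increment */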

A missing memory barrier in the cases where they are required by the
atomic_t implementation above can have disastrous results.  Here is
an example, which follows a pattern occurring frequently in the Linux
kernel.  It is the use of atomic counters to implement reference
counting, and it works such that once the counter falls to zero it can
be guaranteed that no other entity can be accessing the object:

static void obj_list_add(struct obj *obj, struct list_head *head)
{
        obj->active = 1;
        list_add(&obj->list, head);
}

static void obj_list_del(struct obj *obj)
{
        list_del(&obj->list);
        obj->active = 0;
}

static void obj_destroy(struct obj *obj)
{
        BUG_ON(obj->active);
        kfree(obj);
}

struct obj *obj_list_peek(struct list_head *head)
{
        if (!list_empty(head)) {
                struct obj *obj;

                obj = list_entry(head->next, struct obj, list);
                atomic_inc(&obj->refcnt);
                return obj;
        }
        return NULL;
}

void obj_poke(void)
{
        struct obj *obj;

        spin_lock(&global_list_lock);
        obj = obj_list_peek(&global_list);
        spin_unlock(&global_list_lock);

        if (obj) {
                obj->ops->poke(obj);
                if (atomic_dec_and_test(&obj->refcnt))
                        obj_destroy(obj);
        }
}

void obj_timeout(struct obj *obj)
{
        spin_lock(&global_list_lock);
        obj_list_del(obj);
        spin_unlock(&global_list_lock);

        if (atomic_dec_and_test(&obj->refcnt))
                obj_destroy(obj);
}

(This is a simplification of the ARP queue management in the
 generic neighbour discovery code of the networking.  Olaf Kirch
 found a bug wrt. memory barriers in kfree_skb() that exposed
 the atomic_t memory barrier requirements quite clearly.)

Given the above scheme, it must be the case that the obj->active
update done by the obj list deletion be visible to other processors
before the atomic counter decrement is performed.

Otherwise, the counter could fall to zero, yet obj->active would still
be set, thus triggering the assertion in obj_destroy().  The error
sequence looks like this:

        cpu 0                           cpu 1
        obj_poke()                      obj_timeout()
        obj = obj_list_peek();
        ... gains ref to obj, refcnt=2
                                        obj_list_del(obj);
                                        obj->active = 0 ...
                                        ... visibility delayed ...
                                        atomic_dec_and_test()
                                        ... refcnt drops to 1 ...
        atomic_dec_and_test()
        ... refcnt drops to 0 ...
        obj_destroy()
        BUG() triggers since obj->active
        still seen as one
                                        obj->active update visibility occurs

With the memory barrier semantics required of the atomic_t operations
which return values, the above sequence of memory visibility can never
happen.  Specifically, in the above case the atomic_dec_and_test()
counter decrement would not become globally visible until the
obj->active update does.

As a historical note, 32-bit Sparc used to only allow usage of
24-bits of its atomic_t type.  This was because it used 8 bits
as a spinlock for SMP safety.  Sparc32 lacked a "compare and swap"
type instruction.  However, 32-bit Sparc has since been moved over
to a "hash table of spinlocks" scheme, that allows the full 32-bit
counter to be realized.  Essentially, an array of spinlocks is
indexed into based upon the address of the atomic_t being operated
on, and that lock protects the atomic operation.  Parisc uses the
same scheme.

Another note is that the atomic_t operations returning values are
extremely slow on an old 386.

We will now cover the atomic bitmask operations.  You will find that
their SMP and memory barrier semantics are similar in shape and scope
to the atomic_t ops above.

Native atomic bit operations are defined to operate on objects aligned
to the size of an "unsigned long" C data type, and are at least of that
size.  The endianness of the bits within each "unsigned long" is the
native endianness of the cpu.

        void set_bit(unsigned long nr, volatile unsigned long *addr);
        void clear_bit(unsigned long nr, volatile unsigned long *addr);
        void change_bit(unsigned long nr, volatile unsigned long *addr);

These routines set, clear, and change, respectively, the bit number
indicated by "nr" on the bit mask pointed to by "addr".

They must execute atomically, yet there are no implicit memory barrier
semantics required of these interfaces.

        int test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
        int test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
        int test_and_change_bit(unsigned long nr, volatile unsigned long *addr);

Like the above, except that these routines return a boolean which
indicates whether the changed bit was set _BEFORE_ the atomic bit
operation.
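
For example, a sketch of once-only initialization driven by the return
value (init_done and do_one_time_setup() are names made up for this
example):

        static unsigned long init_done;

        if (!test_and_set_bit(0, &init_done))
                do_one_time_setup();    /* exactly one caller sees 0 here */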

WARNING! It is incredibly important that the value be a boolean,
i.e. "0" or "1".  Do not try to be fancy and save a few instructions by
declaring the above to return "long" and just returning something like
"old_val & mask" because that will not work.

For one thing, this return value gets truncated to int in many code
paths using these interfaces, so on 64-bit if the bit is set in the
upper 32-bits then testers will never see that.

One great example of where this problem crops up is the thread_info
flag operations.  Routines such as test_and_set_ti_thread_flag() chop
the return value into an int.  There are other places where things
like this occur as well.
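
For example, a sketch of the non-atomic variant (described below) showing
the correct boolean conversion of the return value:

        static inline int __test_and_set_bit(unsigned long nr,
                                             volatile unsigned long *addr)
        {
                unsigned long mask = 1UL << (nr % BITS_PER_LONG);
                unsigned long *p = ((unsigned long *)addr) + (nr / BITS_PER_LONG);
                unsigned long old = *p;

                *p = old | mask;
                return (old & mask) != 0;       /* boolean, never raw "old & mask" */
        }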

These routines, like the atomic_t counter operations returning values,
require explicit memory barrier semantics around their execution.  All
memory operations before the atomic bit operation call must be made
visible globally before the atomic bit operation is made visible.
Likewise, the atomic bit operation must be visible globally before any
subsequent memory operation is made visible.  For example:

        obj->dead = 1;
        if (test_and_set_bit(0, &obj->flags))
                /* ... */;
        obj->killed = 1;

The implementation of test_and_set_bit() must guarantee that
"obj->dead = 1;" is visible to cpus before the atomic memory operation
done by test_and_set_bit() becomes visible.  Likewise, the atomic
memory operation done by test_and_set_bit() must become visible before
"obj->killed = 1;" is visible.

Finally there is the basic operation:

        int test_bit(unsigned long nr, __const__ volatile unsigned long *addr);

Which returns a boolean indicating if bit "nr" is set in the bitmask
pointed to by "addr".

If explicit memory barriers are required around clear_bit() (which
does not return a value, and thus does not need to provide memory
barrier semantics), two interfaces are provided:

        void smp_mb__before_clear_bit(void);
        void smp_mb__after_clear_bit(void);

They are used as follows, and are akin to their atomic_t operation
brothers:

        /* All memory operations before this call will
         * be globally visible before the clear_bit().
         */
        smp_mb__before_clear_bit();
        clear_bit( ... );

        /* The clear_bit() will be visible before all
         * subsequent memory operations.
         */
        smp_mb__after_clear_bit();

There are two special bitops with lock barrier semantics (acquire/release,
same as spinlocks). These operate in the same way as their non-_lock/unlock
postfixed variants, except that they are to provide acquire/release semantics,
respectively. This means they can be used for bit_spin_trylock and
bit_spin_unlock type operations without specifying any more barriers.

        int test_and_set_bit_lock(unsigned long nr, unsigned long *addr);
        void clear_bit_unlock(unsigned long nr, unsigned long *addr);
        void __clear_bit_unlock(unsigned long nr, unsigned long *addr);

The __clear_bit_unlock version is non-atomic, however it still implements
unlock barrier semantics. This can be useful if the lock itself is protecting
the other bits in the word.
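
For example, a sketch of a simple bit spinlock built on these (lock_word,
my_bit_lock(), and my_bit_unlock() are names made up for this example):

        static unsigned long lock_word;

        void my_bit_lock(void)
        {
                while (test_and_set_bit_lock(0, &lock_word))
                        cpu_relax();    /* spin until we take the bit */
        }

        void my_bit_unlock(void)
        {
                clear_bit_unlock(0, &lock_word);
        }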

Finally, there are non-atomic versions of the bitmask operations
provided.  They are used in contexts where some other higher-level SMP
locking scheme is being used to protect the bitmask, and thus less
expensive non-atomic operations may be used in the implementation.
They have names similar to the above bitmask operation interfaces,
except that two underscores are prefixed to the interface name.

        void __set_bit(unsigned long nr, volatile unsigned long *addr);
        void __clear_bit(unsigned long nr, volatile unsigned long *addr);
        void __change_bit(unsigned long nr, volatile unsigned long *addr);
        int __test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
        int __test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
        int __test_and_change_bit(unsigned long nr, volatile unsigned long *addr);

These non-atomic variants also do not require any special memory
barrier semantics.
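
For example, when a spinlock already serializes all updates to a bitmap,
the cheaper non-atomic variant suffices (map_lock and bitmap are names
made up for this example):

        spin_lock(&map_lock);
        __set_bit(nr, bitmap);  /* map_lock serializes all writers */
        spin_unlock(&map_lock);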

The routines xchg() and cmpxchg() need the same exact memory barriers
as the atomic and bit operations returning values.

Spinlocks and rwlocks have memory barrier expectations as well.
The rule to follow is simple:

1) When acquiring a lock, the implementation must make it globally
   visible before any subsequent memory operation.

2) When releasing a lock, the implementation must make it such that
   all previous memory operations are globally visible before the
   lock release.

Which finally brings us to _atomic_dec_and_lock().  There is an
architecture-neutral version implemented in lib/dec_and_lock.c,
but most platforms will wish to optimize this in assembler.

        int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock);

Atomically decrement the given counter, and if it will drop to zero
atomically acquire the given spinlock and perform the decrement
of the counter to zero.  If it does not drop to zero, do nothing
with the spinlock.

It is actually pretty simple to get the memory barrier correct.
Simply satisfy the spinlock grab requirements, which is to make
sure the spinlock operation is globally visible before any
subsequent memory operation.

We can demonstrate this operation more clearly if we define
an abstract atomic operation:

        long cas(long *mem, long old, long new);

"cas" stands for "compare and swap".  It atomically:

1) Compares "old" with the value currently at "mem".
2) If they are equal, "new" is written to "mem".
3) Regardless, the current value at "mem" is returned.

As an example usage, here is what an atomic counter update
might look like:

void example_atomic_inc(long *counter)
{
        long old, new, ret;

        while (1) {
                old = *counter;
                new = old + 1;

                ret = cas(counter, old, new);
                if (ret == old)
                        break;
        }
}

Let's use cas() in order to build a pseudo-C atomic_dec_and_lock():

int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
{
        long old, new, ret;
        int went_to_zero;

        went_to_zero = 0;
        while (1) {
                old = atomic_read(atomic);
                new = old - 1;
                if (new == 0) {
                        went_to_zero = 1;
                        spin_lock(lock);
                }
                ret = cas(atomic, old, new);
                if (ret == old)
                        break;
                if (went_to_zero) {
                        spin_unlock(lock);
                        went_to_zero = 0;
                }
        }

        return went_to_zero;
}

Now, as far as memory barriers go, as long as spin_lock()
strictly orders all subsequent memory operations (including
the cas()) with respect to itself, things will be fine.

Said another way, _atomic_dec_and_lock() must guarantee that
a counter dropping to zero is never made visible before the
spinlock is acquired.

Note that this also means that for the case where the counter
is not dropping to zero, there are no memory ordering
requirements.