linux/Documentation/spinlocks.txt
Lesson 1: Spin locks

The most basic primitive for locking is the spinlock.

static DEFINE_SPINLOCK(xxx_lock);

        unsigned long flags;

        spin_lock_irqsave(&xxx_lock, flags);
        ... critical section here ...
        spin_unlock_irqrestore(&xxx_lock, flags);

The above is always safe. It will disable interrupts _locally_, but the
spinlock itself will guarantee the global lock, so it will guarantee that
there is only one thread-of-control within the region(s) protected by that
lock. This works well even under UP, where the above sequence is
essentially the same as doing

        unsigned long flags;

        save_flags(flags); cli();
        ... critical section ...
        restore_flags(flags);

so the code does _not_ need to worry about UP vs SMP issues: the spinlocks
work correctly under both (and spinlocks are actually more efficient on
architectures that allow doing the "save_flags + cli" in one operation).

   NOTE! Implications of spin_locks for memory are further described in:

     Documentation/memory-barriers.txt
       (5) LOCK operations.
       (6) UNLOCK operations.

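As a rough sketch of what those ordering rules buy you (shared_data,
data_ready, compute() and use() are made-up names for illustration): any
store done while holding the lock is guaranteed to be visible to the next
CPU that takes the same lock.

        unsigned long flags;

        /* CPU 0 */
        spin_lock_irqsave(&xxx_lock, flags);
        shared_data = compute();        /* stores done inside the region ... */
        data_ready = 1;
        spin_unlock_irqrestore(&xxx_lock, flags);       /* ... are published here */

        /* CPU 1 */
        spin_lock_irqsave(&xxx_lock, flags);            /* taking the lock makes them visible */
        if (data_ready)
                use(shared_data);
        spin_unlock_irqrestore(&xxx_lock, flags);
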
Basic spinlock usage is usually pretty simple (you usually need and want
only one spinlock for most things - using more than one spinlock can make
things a lot more complex and even slower and is usually worth it only for
sequences that you _know_ need to be split up: avoid it at all costs if
you aren't sure). HOWEVER, it _does_ mean that if you have some code that
does

        cli();
        .. critical section ..
        sti();

and another sequence that does

        spin_lock_irqsave(&xxx_lock, flags);
        .. critical section ..
        spin_unlock_irqrestore(&xxx_lock, flags);

then they are NOT mutually exclusive, and the critical regions can happen
at the same time on two different CPU's. That's fine per se, but the
critical regions had better be critical for different things (ie they
can't stomp on each other).

The above is a problem mainly if you end up mixing code - for example the
routines in ll_rw_block() tend to use cli/sti to protect the atomicity of
their actions, and if a driver uses spinlocks instead then you should
think about issues like the above.

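Mixing the two styles for the _same_ data goes wrong roughly like this (a
sketch - xxx_count is a made-up counter that both paths touch):

        unsigned long flags;

        /* old-style path: only keeps interrupts off on the local CPU */
        cli();
        xxx_count++;
        sti();

        /* new-style path, possibly on another CPU: takes the spinlock,
         * but the cli/sti path above never does, so nothing stops both
         * of them from updating xxx_count at the same time */
        spin_lock_irqsave(&xxx_lock, flags);
        xxx_count++;
        spin_unlock_irqrestore(&xxx_lock, flags);
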
This is really the only hard part about spinlocks: once you start using
spinlocks they tend to expand to areas you might not have noticed before,
because you have to make sure the spinlocks correctly protect the shared
data structures _everywhere_ they are used. The spinlocks are most easily
added to places that are completely independent of other code (for
example, internal driver data structures that nobody else ever touches).

   NOTE! The spin-lock is safe only when you _also_ use the lock itself
   to do locking across CPU's, which implies that EVERYTHING that
   touches a shared variable has to agree about the spinlock they want
   to use.

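In other words, every path that touches the shared data takes the same
lock (a minimal sketch - xxx_count, xxx_inc() and xxx_read() are made-up
names):

        static unsigned long xxx_count;         /* protected by xxx_lock */

        void xxx_inc(void)
        {
                unsigned long flags;

                spin_lock_irqsave(&xxx_lock, flags);
                xxx_count++;
                spin_unlock_irqrestore(&xxx_lock, flags);
        }

        unsigned long xxx_read(void)
        {
                unsigned long flags, ret;

                spin_lock_irqsave(&xxx_lock, flags);    /* same lock as xxx_inc() */
                ret = xxx_count;
                spin_unlock_irqrestore(&xxx_lock, flags);
                return ret;
        }
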
----

Lesson 2: reader-writer spinlocks.

If your data accesses have a very natural pattern where you mostly read
from the shared variables, the reader-writer lock (rwlock_t) versions of
the spinlocks are sometimes useful. They allow multiple readers to be in
the same critical region at once, but anybody who wants to change the
variables has to get an exclusive write lock.

   NOTE! reader-writer locks require more atomic memory operations than
   simple spinlocks.  Unless the reader critical section is long, you
   are better off just using spinlocks.

The routines look the same as above:

   static DEFINE_RWLOCK(xxx_lock);

        unsigned long flags;

        read_lock_irqsave(&xxx_lock, flags);
        .. critical section that only reads the info ...
        read_unlock_irqrestore(&xxx_lock, flags);

        write_lock_irqsave(&xxx_lock, flags);
        .. read and write exclusive access to the info ...
        write_unlock_irqrestore(&xxx_lock, flags);

The above kind of lock may be useful for complex data structures like
linked lists, especially searching for entries without changing the list
itself.  The read lock allows many concurrent readers.  Anything that
_changes_ the list will have to get the write lock.

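For instance (a sketch - struct xxx_entry, with a "list" list_head and an
"id" field, is a made-up structure), a lookup takes the read lock while an
insertion takes the write lock:

        static LIST_HEAD(xxx_head);             /* protected by xxx_lock */

        bool xxx_present(int id)
        {
                struct xxx_entry *e;
                unsigned long flags;
                bool found = false;

                read_lock_irqsave(&xxx_lock, flags);    /* many readers can search at once */
                list_for_each_entry(e, &xxx_head, list) {
                        if (e->id == id) {
                                found = true;
                                break;
                        }
                }
                read_unlock_irqrestore(&xxx_lock, flags);
                return found;
        }

        void xxx_insert(struct xxx_entry *new)
        {
                unsigned long flags;

                write_lock_irqsave(&xxx_lock, flags);   /* changing the list needs the write lock */
                list_add(&new->list, &xxx_head);
                write_unlock_irqrestore(&xxx_lock, flags);
        }
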
   NOTE! RCU is better for list traversal, but requires careful
   attention to design detail (see Documentation/RCU/listRCU.txt).

Also, you cannot "upgrade" a read-lock to a write-lock, so if you at _any_
time need to make any changes (even if you don't do it every time), you
have to get the write-lock at the very beginning.

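That is, don't take the read lock, notice that you need to modify
something, and then try to take the write lock on top of it (a sketch,
shown with the plain lock calls to keep it short; needs_update() and
update() are made-up helpers):

        /* WRONG: this deadlocks, because write_lock() waits for all
         * readers to go away - including ourselves */
        read_lock(&xxx_lock);
        if (needs_update(e))
                write_lock(&xxx_lock);          /* spins forever */

        /* RIGHT: if an update might be needed, take the write lock
         * from the start - write access includes read access */
        write_lock(&xxx_lock);
        if (needs_update(e))
                update(e);
        write_unlock(&xxx_lock);
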
   NOTE! We are working hard to remove reader-writer spinlocks in most
   cases, so please don't add a new one without consensus.  (Instead, see
   Documentation/RCU/rcu.txt for complete information.)

----

Lesson 3: spinlocks revisited.

The single spin-lock primitives above are by no means the only ones. They
are the safest ones, and the ones that work under all circumstances, but
partly _because_ they are safe they are also fairly slow. They are much
faster than a generic global cli/sti pair, but slower than they'd need to
be, because they do have to disable interrupts (which is just a single
instruction on x86, but it's an expensive one - and on other architectures
it can be worse).

If you have a case where you have to protect a data structure across
several CPU's and you want to use spinlocks you can potentially use
cheaper versions of the spinlocks. IFF you know that the spinlocks are
never used in interrupt handlers, you can use the non-irq versions:

        spin_lock(&lock);
        ...
        spin_unlock(&lock);

(and the equivalent read-write versions too, of course). The spinlock will
guarantee the same kind of exclusive access, and it will be much faster.
This is useful if you know that the data in question is only ever
manipulated from a "process context", ie no interrupts involved.

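For example (a sketch - xxx_stats and the two functions are made up), a
structure that is only ever touched from system calls can be protected
with the plain calls:

        static DEFINE_SPINLOCK(xxx_stats_lock);
        static struct xxx_stats xxx_stats;      /* only touched from syscalls */

        /* called from a write(2) handler: process context */
        void xxx_account_write(size_t bytes)
        {
                spin_lock(&xxx_stats_lock);
                xxx_stats.bytes_written += bytes;
                spin_unlock(&xxx_stats_lock);
        }

        /* called from an ioctl(2) handler: also process context */
        void xxx_reset_stats(void)
        {
                spin_lock(&xxx_stats_lock);
                memset(&xxx_stats, 0, sizeof(xxx_stats));
                spin_unlock(&xxx_stats_lock);
        }
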
The reason you mustn't use these versions if you have interrupts that
play with the spinlock is that you can get deadlocks:

        spin_lock(&lock);
        ...
                <- interrupt comes in:
                        spin_lock(&lock);

where an interrupt tries to lock an already-locked spinlock. This is ok if
the interrupt happens on another CPU, but it is _not_ ok if the interrupt
happens on the same CPU that already holds the lock, because the lock will
obviously never be released (because the interrupt is waiting for the
lock, and the lock-holder is interrupted by the interrupt and will not
continue until the interrupt has been processed).

(This is also the reason why the irq-versions of the spinlocks only need
to disable the _local_ interrupts - it's ok to use spinlocks in interrupts
on other CPU's, because an interrupt on another CPU doesn't interrupt the
CPU that holds the lock, so the lock-holder can continue and eventually
release the lock).

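The usual pattern when an interrupt handler _does_ take the lock is
therefore (a sketch - xxx_queue, struct xxx_req and
xxx_complete_first_request() are made-up names): the process-context side
uses the irq-safe version, while the handler itself can get away with the
plain one on kernels where handlers run with interrupts disabled on the
local CPU (when in doubt, the irq-safe version is always correct):

        static LIST_HEAD(xxx_queue);            /* protected by xxx_lock */

        /* process context: must keep local interrupts off around the lock */
        void xxx_queue_request(struct xxx_req *req)
        {
                unsigned long flags;

                spin_lock_irqsave(&xxx_lock, flags);
                list_add_tail(&req->list, &xxx_queue);
                spin_unlock_irqrestore(&xxx_lock, flags);
        }

        /* interrupt handler: interrupts are already off on this CPU,
         * so the plain version is enough */
        static irqreturn_t xxx_interrupt(int irq, void *dev_id)
        {
                spin_lock(&xxx_lock);
                xxx_complete_first_request(&xxx_queue);
                spin_unlock(&xxx_lock);
                return IRQ_HANDLED;
        }
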
Note that you can be clever with read-write locks and interrupts. For
example, if you know that the interrupt only ever gets a read-lock, then
you can use a non-irq version of read locks everywhere - because they
don't block on each other (and thus there is no deadlock wrt interrupts).
But when you do the write-lock, you have to use the irq-safe version.

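Such a setup looks roughly like this (a sketch, with the critical-section
bodies elided as above), assuming the interrupt handler only ever reads
the shared data:

        unsigned long flags;

        /* readers - in process context AND in the interrupt handler -
         * can use the non-irq version: readers never block each other,
         * so a reader interrupting a reader cannot deadlock */
        read_lock(&xxx_lock);
        ... look at the shared data ...
        read_unlock(&xxx_lock);

        /* the writer must keep interrupts off: if a reading interrupt
         * came in while we hold the write lock, it would spin forever
         * waiting for us to release it */
        write_lock_irqsave(&xxx_lock, flags);
        ... modify the shared data ...
        write_unlock_irqrestore(&xxx_lock, flags);
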
For an example of being clever with rw-locks, see the "waitqueue_lock"
handling in kernel/sched.c - nothing ever _changes_ a wait-queue from
within an interrupt: interrupts only read the queue in order to know whom
to wake up. So read-locks are safe (which is good: they are very common
indeed), while write-locks need to protect themselves against interrupts.

                Linus

----

Reference information:

For dynamic initialization, use spin_lock_init() or rwlock_init() as
appropriate:

   spinlock_t xxx_lock;
   rwlock_t xxx_rw_lock;

   static int __init xxx_init(void)
   {
        spin_lock_init(&xxx_lock);
        rwlock_init(&xxx_rw_lock);
        ...
   }

   module_init(xxx_init);

For static initialization, use DEFINE_SPINLOCK() / DEFINE_RWLOCK() or
__SPIN_LOCK_UNLOCKED() / __RW_LOCK_UNLOCKED() as appropriate.

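For example, the two locks above could instead be defined statically as:

   static DEFINE_SPINLOCK(xxx_lock);
   static DEFINE_RWLOCK(xxx_rw_lock);
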
SPIN_LOCK_UNLOCKED and RW_LOCK_UNLOCKED are deprecated.  These interfere
with lockdep state tracking.

Most of the time, you can simply turn:
        static spinlock_t xxx_lock = SPIN_LOCK_UNLOCKED;
into:
        static DEFINE_SPINLOCK(xxx_lock);

Static structure member variables go from:

        struct foo bar = {
                .lock   =       SPIN_LOCK_UNLOCKED,
        };

to:

        struct foo bar = {
                .lock   =       __SPIN_LOCK_UNLOCKED(bar.lock),
        };

Declarations of static rwlocks undergo a similar transformation.

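For completeness, the analogous rwlock conversion turns:

        static rwlock_t xxx_rw_lock = RW_LOCK_UNLOCKED;
into:
        static DEFINE_RWLOCK(xxx_rw_lock);

and a static structure member (reusing the "bar" example above) goes from:

                .lock   =       RW_LOCK_UNLOCKED,
to:
                .lock   =       __RW_LOCK_UNLOCKED(bar.lock),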