Lockdep-RCU was added to the Linux kernel in early 2010.  This facility
checks for some common misuses of the RCU API, most notably using one of
the rcu_dereference() family to access an RCU-protected pointer without
the proper protection.  When such misuse is detected, a lockdep-RCU
splat is emitted.
The usual cause of a lockdep-RCU splat is someone accessing an
RCU-protected data structure without either (1) being in the right kind of
RCU read-side critical section or (2) holding the right update-side lock.
This problem can therefore be serious: it might result in random memory
overwriting or worse.  There can of course be false positives, this
being the real world and all that.
So let's look at an example RCU lockdep splat from 3.0-rc5, one that
has long since been fixed:
[ INFO: suspicious RCU usage. ]

block/cfq-iosched.c:2776 suspicious rcu_dereference_protected() usage!

other info that might help us debug this:

rcu_scheduler_active = 1, debug_locks = 0
3 locks held by scsi_scan_6/1552:
 #0:  (&shost->scan_mutex){+.+.+.}, at: [<ffffffff8145efca>]
 #1:  (&eq->sysfs_lock){+.+...}, at: [<ffffffff812a5032>]
 #2:  (&(&q->__queue_lock)->rlock){-.-...}, at: [<ffffffff812b6233>]

stack backtrace:
Pid: 1552, comm: scsi_scan_6 Not tainted 3.0.0-rc5 #17
Call Trace:
 [<ffffffff810abb9b>] lockdep_rcu_dereference+0xbb/0xc0
 [<ffffffff812b6139>] __cfq_exit_single_io_context+0xe9/0x120
 [<ffffffff812b626c>] cfq_exit_queue+0x7c/0x190
 [<ffffffff812a5046>] elevator_exit+0x36/0x60
 [<ffffffff812a802a>] blk_cleanup_queue+0x4a/0x60
 [<ffffffff8145cc09>] scsi_free_queue+0x9/0x10
 [<ffffffff81460944>] __scsi_remove_device+0x84/0xd0
 [<ffffffff8145dca3>] scsi_probe_and_add_lun+0x353/0xb10
 [<ffffffff817da069>] ? error_exit+0x29/0xb0
 [<ffffffff817d98ed>] ? _raw_spin_unlock_irqrestore+0x3d/0x80
 [<ffffffff8145e722>] __scsi_scan_target+0x112/0x680
 [<ffffffff812c690d>] ? trace_hardirqs_off_thunk+0x3a/0x3c
 [<ffffffff817da069>] ? error_exit+0x29/0xb0
 [<ffffffff812bcc60>] ? kobject_del+0x40/0x40
 [<ffffffff8145ed16>] scsi_scan_channel+0x86/0xb0
 [<ffffffff8145f0b0>] scsi_scan_host_selected+0x140/0x150
 [<ffffffff8145f149>] do_scsi_scan_host+0x89/0x90
 [<ffffffff8145f170>] do_scan_async+0x20/0x160
 [<ffffffff8145f150>] ? do_scsi_scan_host+0x90/0x90
 [<ffffffff810975b6>] kthread+0xa6/0xb0
 [<ffffffff817db154>] kernel_thread_helper+0x4/0x10
 [<ffffffff81066430>] ? finish_task_switch+0x80/0x110
 [<ffffffff817d9c04>] ? retint_restore_args+0xe/0xe
 [<ffffffff81097510>] ? __init_kthread_worker+0x70/0x70
 [<ffffffff817db150>] ? gs_change+0xb/0xb
Line 2776 of block/cfq-iosched.c in v3.0-rc5 is as follows:

        if (rcu_dereference(ioc->ioc_data) == cic) {
This form says that it must be in a plain vanilla RCU read-side critical
section, but the "other info" list above shows that this is not the
case.  Instead, we hold three locks, one of which might be RCU related.
And maybe that lock really does protect this reference.  If so, the fix
is to inform RCU, perhaps by changing __cfq_exit_single_io_context() to
take the struct request_queue "q" from cfq_exit_queue() as an argument,
which would permit us to invoke rcu_dereference_protected() as follows:
        if (rcu_dereference_protected(ioc->ioc_data,
                                      lockdep_is_held(&q->queue_lock)) == cic) {
With this change, there would be no lockdep-RCU splat emitted if this
code were invoked either from within an RCU read-side critical section
or with the ->queue_lock held.  In particular, this would have suppressed
the above lockdep-RCU splat because ->queue_lock is held (see #2 in the
list above).
On the other hand, perhaps we really do need an RCU read-side critical
section.  In this case, the critical section must span the use of the
return value from rcu_dereference(), or at least extend until some
reference count has been incremented or some such.  One way to handle
this is to add rcu_read_lock() and rcu_read_unlock() as follows:
        rcu_read_lock();
        if (rcu_dereference(ioc->ioc_data) == cic) {
                spin_lock(&ioc->lock);
                rcu_assign_pointer(ioc->ioc_data, NULL);
                spin_unlock(&ioc->lock);
        }
        rcu_read_unlock();
With this change, the rcu_dereference() is always within an RCU
read-side critical section, which again would have suppressed the
above lockdep-RCU splat.
But in this particular case, we don't actually dereference the pointer
returned from rcu_dereference().  Instead, that pointer is just compared
to the cic pointer, which means that the rcu_dereference() can be replaced
by rcu_access_pointer() as follows:

        if (rcu_access_pointer(ioc->ioc_data) == cic) {
Because it is legal to invoke rcu_access_pointer() without protection,
this change would also suppress the above lockdep-RCU splat.