linux/Documentation/filesystems/path-lookup.txt
Path walking and name lookup locking
====================================

Path resolution is the process of finding the dentry corresponding to a path
name string, by performing a path walk. Typically, for every open(), stat()
etc., the path name will be resolved. Paths are resolved by walking the
namespace tree, starting with the first component of the pathname (eg. root or
cwd), which has a known dentry, then finding the child of that dentry whose
name is the next component in the path string. The lookup is then repeated
from the child dentry, finding its child named with the next element, and so
on.

Since path lookup is a frequent operation for workloads like multiuser
environments and web servers, it is important to optimize this code.

Path walking synchronisation history:
Prior to 2.5.10, dcache_lock was acquired in d_lookup (dcache hash lookup) and
thus for every component during path look-up. From 2.5.10 onwards, the
fast-walk algorithm changed this by taking dcache_lock at the beginning and
walking as many cached path component dentries as possible. This significantly
decreased the number of dcache_lock acquisitions. However, it also increased
the lock hold time significantly, hurting performance on large SMP machines.
Since the 2.5.62 kernel, dcache has been using a new locking model that uses
RCU to make dcache look-up lock-free.

All the above algorithms required taking a lock and a reference count on the
dentry that was looked up, so that it may be used as the basis for walking the
next path element. This is inefficient and unscalable. It is inefficient
because the locks and atomic operations required for every dentry element slow
things down. It is not scalable because many parallel applications that are
path-walk intensive tend to do path lookups starting from a common dentry
(usually, the root "/" or current working directory), so contention on these
common path elements causes lock and cacheline queueing.

Since 2.6.38, RCU is used to make a significant part of the entire path walk
(including dcache look-up) completely "store-free" (so, no locks, atomics, or
even stores into cachelines of common dentries). This is known as "rcu-walk"
path walking.

Path walking overview
=====================

A name string specifies a start (root directory, cwd, fd-relative) and a
sequence of elements (directory entry names), which together refer to a path in
the namespace. A path is represented as a (dentry, vfsmount) tuple. The name
elements are sub-strings, separated by '/'.

Name lookups will want to find a particular path that a name string refers to
(usually the final element, or parent of final element). This is done by taking
the path given by the name's starting point (which we know in advance -- eg.
current->fs->cwd or current->fs->root) as the first parent of the lookup. Then
iteratively for each subsequent name element, look up the child of the current
parent with the given name and, if it is not the desired entry, make it the
parent for the next lookup.

A parent, of course, must be a directory, and we must have appropriate
permissions on the parent inode to be able to walk into it.

Turning the child into a parent for the next lookup requires more checks and
procedures. Symlinks essentially substitute the symlink name for the target
name in the name string, and require some recursive path walking.  Mount points
must be followed into (thus changing the vfsmount that subsequent path elements
refer to), switching from the mount point path to the root of the particular
mounted vfsmount. These behaviours are variously modified depending on the
exact path walking flags.

Path walking then must, broadly, do several particular things:
- find the start point of the walk;
- perform permissions and validity checks on inodes;
- perform dcache hash name lookups on (parent, name element) tuples;
- traverse mount points;
- traverse symlinks;
- lookup and create missing parts of the path on demand.
  73
  74Safe store-free look-up of dcache hash table
  75============================================
  76
  77Dcache name lookup
  78------------------
  79In order to lookup a dcache (parent, name) tuple, we take a hash on the tuple
  80and use that to select a bucket in the dcache-hash table. The list of entries
  81in that bucket is then walked, and we do a full comparison of each entry
  82against our (parent, name) tuple.
  83
  84The hash lists are RCU protected, so list walking is not serialised with
  85concurrent updates (insertion, deletion from the hash). This is a standard RCU
  86list application with the exception of renames, which will be covered below.
  87
  88Parent and name members of a dentry, as well as its membership in the dcache
  89hash, and its inode are protected by the per-dentry d_lock spinlock. A
  90reference is taken on the dentry (while the fields are verified under d_lock),
  91and this stabilises its d_inode pointer and actual inode. This gives a stable
  92point to perform the next step of our path walk against.
  93
  94These members are also protected by d_seq seqlock, although this offers
  95read-only protection and no durability of results, so care must be taken when
  96using d_seq for synchronisation (see seqcount based lookups, below).

Renames
-------
Back to the rename case. In usual RCU protected lists, the only operations
that will happen to an object are insertion, and then eventually removal from
the list. The object will not be reused until an RCU grace period is complete.
This ensures the RCU list traversal primitives can run over the object without
problems (see RCU documentation for how this works).

However when a dentry is renamed, its hash value can change, requiring it to be
moved to a new hash list. Allocating and inserting a new alias would be
expensive and also problematic for directory dentries. Latency would be far too
high to wait for a grace period after removing the dentry and before inserting
it in the new hash bucket. So what is done is to insert the dentry into the
new list immediately.

However, when the dentry's list pointers are updated to point to objects in the
new list before waiting for a grace period, this can result in a concurrent RCU
lookup of the old list veering off into the new (incorrect) list and missing
the remaining dentries on the list.

There is no fundamental problem with walking down the wrong list, because the
dentry comparisons will never match. However it is fatal to miss a matching
dentry. So a seqlock is used to detect when a rename has occurred, and the
lookup can be retried.

         1      2      3
        +---+  +---+  +---+
hlist-->| N-+->| N-+->| N-+->
head <--+-P |<-+-P |<-+-P |
        +---+  +---+  +---+

Rename of dentry 2 may require it to be deleted from the above list, and
inserted into a new list. Deleting 2 gives the following list.

         1             3
        +---+         +---+     (don't worry, the longer pointers do not
hlist-->| N-+-------->| N-+->    impose a measurable performance overhead
head <--+-P |<--------+-P |      on modern CPUs)
        +---+         +---+
          ^      2      ^
          |    +---+    |
          |    | N-+----+
          +----+-P |
               +---+

This is a standard RCU-list deletion, which leaves the deleted object's
pointers intact, so a concurrent list walker that is currently looking at
object 2 will correctly continue to object 3 when it is time to traverse the
next object.

However, when inserting object 2 onto a new list, we end up with this:

         1             3
        +---+         +---+
hlist-->| N-+-------->| N-+->
head <--+-P |<--------+-P |
        +---+         +---+
                 2
               +---+
               | N-+---->
          <----+-P |
               +---+

Because we didn't wait for a grace period, there may be a concurrent lookup
still at 2. Now when it follows 2's 'next' pointer, it will walk off into
another list without ever having checked object 3.

A related, but distinctly different, issue is that of rename atomicity versus
lookup operations. If a file is renamed from 'A' to 'B', a lookup must only
find either 'A' or 'B'. So if a lookup of 'A' returns NULL, a subsequent lookup
of 'B' must succeed (note that the reverse is not true).

Between deleting the dentry from the old hash list and inserting it on the new
hash list, a lookup may find neither 'A' nor 'B' matching the dentry. The same
rename seqlock is also used to cover this race, in much the same way, by
retrying a negative lookup result if a rename was in progress.

Seqcount based lookups
----------------------
In refcount based dcache lookups, d_lock is used to serialise access to
the dentry, stabilising it while comparing its name and parent and then
taking a reference count (the reference count then gives a stable place to
start the next part of the path walk from).

As explained above, we would like to do path walking without taking locks or
reference counts on intermediate dentries along the path. To do this, a
per-dentry seqlock (d_seq) is used to take a "coherent snapshot" of what the
dentry looks like (its name, parent, and inode). That snapshot is then used to
start the next part of the path walk. When loading the coherent snapshot under
d_seq, care must be taken to load the members up-front, and use those pointers
rather than reloading from the dentry later on (otherwise we'd have
interesting things like d_inode going NULL underneath us, if the name was
unlinked).

Also important is to avoid performing any destructive operations (pretty much:
no non-atomic stores to shared data), and to recheck the seqcount when we are
"done" with the operation. Retry or abort if the seqcount does not match.
Avoiding destructive or changing operations means we can easily unwind from
failure.

What this means is that a caller, provided they are holding the RCU lock to
protect the dentry object from disappearing, can perform a seqcount based
lookup which does not increment the refcount on the dentry or write to
it in any way. This returned dentry can be used for subsequent operations,
provided that d_seq is rechecked after that operation is complete.

Inodes are also RCU freed, so the seqcount lookup dentry's inode may also be
queried for permissions.

With these two parts of the puzzle, we can do path lookups without taking
locks or refcounts on dentry elements.

RCU-walk path walking design
============================

Path walking code now has two distinct modes, ref-walk and rcu-walk. ref-walk
is the traditional[*] way of performing dcache lookups using d_lock to
serialise concurrent modifications to the dentry and take a reference count on
it. ref-walk is simple and obvious, and may sleep, take locks, etc. while path
walking is operating on each dentry. rcu-walk uses seqcount based dentry
lookups, and can perform lookup of intermediate elements without any stores to
shared data in the dentry or inode. rcu-walk cannot be applied to all cases;
eg. if the filesystem must sleep or perform non-trivial operations, rcu-walk
must be switched to ref-walk mode.

[*] RCU is still used for the dentry hash lookup in ref-walk, but not the full
    path walk.

Where ref-walk uses a stable, refcounted ``parent'' to walk the remaining
path string, rcu-walk uses a d_seq protected snapshot. When looking up a
child of this parent snapshot, we open the d_seq critical section on the child
before closing the d_seq critical section on the parent. This gives an
interlocking ladder of snapshots to walk down.


     proc 101
      /----------------\
     / comm:    "vi"    \
    /  fs.root: dentry0  \
    \  fs.cwd:  dentry2  /
     \                  /
      \----------------/

So when vi wants to open("/home/npiggin/test.c", O_RDWR), it will
start from current->fs->root, which is a pinned dentry. Alternatively,
"./test.c" would start from cwd; both names refer to the same path in
the context of proc 101.

     dentry 0
    +---------------------+   rcu-walk begins here, we note d_seq, check the
    | name:    "/"        |   inode's permission, and then look up the next
    | inode:   10         |   path element which is "home"...
    | children:"home", ...|
    +---------------------+
              |
     dentry 1 V
    +---------------------+   ... which brings us here. We find dentry1 via
    | name:    "home"     |   hash lookup, then note d_seq and compare name
    | inode:   678        |   string and parent pointer. When we have a match,
    | children:"npiggin"  |   we now recheck the d_seq of dentry0. Then we
    +---------------------+   check inode and look up the next element.
              |
     dentry 2 V
    +---------------------+   Note: if dentry0 is now modified, lookup is
    | name:    "npiggin"  |   not necessarily invalid, so we need only keep a
    | inode:   543        |   parent for d_seq verification, and grandparents
    | children:"a.c", ... |   can be forgotten.
    +---------------------+
              |
     dentry 3 V
    +---------------------+   At this point we have our destination dentry.
    | name:    "a.c"      |   We now take its d_lock, verify the d_seq of this
    | inode:   14221      |   dentry. If that checks out, we can increment
    | children:NULL       |   its refcount because we're holding d_lock.
    +---------------------+

Taking a refcount on a dentry from rcu-walk mode, by taking its d_lock,
re-checking its d_seq, and then incrementing its refcount, is called
"dropping rcu" or dropping from rcu-walk into ref-walk mode.

It is, in some sense, a bit of a house of cards. If the seqcount check of the
parent snapshot fails, the house comes down, because we had closed the d_seq
section on the grandparent, so we have nothing left to stand on. In that case,
the path walk must be fully restarted (which we do in ref-walk mode, to avoid
livelocks). It is costly to have a full restart, but fortunately they are
quite rare.

When we reach a point where sleeping is required, or a filesystem callout
requires ref-walk, then instead of restarting the walk, we attempt to drop rcu
at the last known good dentry we have. Avoiding a full restart in ref-walk in
these cases is fundamental for performance and scalability, because blocking
operations such as creates and unlinks are not uncommon.

The detailed design for rcu-walk is like this:
* LOOKUP_RCU is set in nd->flags, which distinguishes rcu-walk from ref-walk.
* Take the RCU lock for the entire path walk, starting with the acquiring
  of the starting path (eg. root/cwd/fd-path). So now dentry refcounts are
  not required for dentry persistence.
* synchronize_rcu is called when unregistering a filesystem, so we can
  access d_ops and i_ops during rcu-walk.
* Similarly take the vfsmount lock for the entire path walk. So now mnt
  refcounts are not required for persistence. Also we are free to perform mount
  lookups, and to assume dentry mount points and mount roots are stable up and
  down the path.
* Have a per-dentry seqlock to protect the dentry name, parent, and inode,
  so we can load this tuple atomically, and also check whether any of its
  members have changed.
* Dentry lookups (based on the (parent, candidate string) tuple) recheck the
  parent sequence after the child is found, in case anything changed in the
  parent during the path walk.
* The inode is also RCU protected, so we can load d_inode and use the inode
  for limited things.
* i_mode, i_uid, i_gid can be tested for exec permissions during path walk.
* i_op can be loaded.
* When the destination dentry is reached, drop rcu there (ie. take d_lock,
  verify d_seq, increment refcount).
* If seqlock verification fails anywhere along the path, do a full restart
  of the path lookup in ref-walk mode. -ECHILD tends to be used (for want of
  a better errno) to signal an rcu-walk failure.

The cases where rcu-walk cannot continue are:
* NULL dentry (ie. any uncached path element)
* Following links

It may be possible eventually to make following links rcu-walk aware.

Uncached path elements will always require dropping to ref-walk mode, at the
very least because i_mutex needs to be grabbed and objects allocated.

Final note:
"store-free" path walking is not strictly store free. We take the vfsmount
lock and refcounts (both of which can be made per-cpu), we store to the stack
(which is essentially CPU-local), and we also have to take locks and a
refcount on the final dentry.

The point is that shared data, where practically possible, is not locked
or stored into. The result is massive improvements in performance and
scalability of path resolution.


Interesting statistics
======================

The following table gives rcu lookup statistics for a few simple workloads
(2s12c24t Westmere, debian non-graphical system). "restart" counts ungraceful
attempts to drop rcu that fail due to d_seq failure and require the entire
path lookup to be restarted. The other columns count successful rcu-drops that
were required before the final element: nodentry for a missing dentry,
revalidate for a filesystem revalidate routine requiring an rcu drop,
permission for a permission check requiring a drop, and link for symlink
traversal requiring a drop.

     rcu-lookups     restart  nodentry          link  revalidate  permission
bootup     47121           0      4624          1010       10283        7852
dbench  25386793           0   6778659(26.7%)     55         549        1156
kbuild   2696672          10     64442(2.3%)  108764(4.0%)     1        1590
git diff   39605           0        28             2           0         106
vfstest 24185492        4945    708725(2.9%) 1076136(4.4%)     0        2651

What this shows is that failed rcu-walk lookups, ie. ones that are restarted
entirely with ref-walk, are quite rare. Even the "vfstest" case, which
specifically runs concurrent renames/mkdir/rmdir/creat/unlink/etc. to exercise
such races, does not show a huge number of restarts.

Dropping from rcu-walk to ref-walk means that we have encountered a dentry
where the reference count needs to be taken for some reason. This is either
because we have reached the target of the path walk, or because we have
encountered a condition that can't be resolved in rcu-walk mode.  Ideally, we
drop rcu-walk only when we have reached the target dentry, so the other
statistics show where this does not happen.

Note that a graceful drop from rcu-walk mode due to something such as the
dentry not existing (which can be common) is not necessarily a failure of the
rcu-walk scheme, because some elements of the path may have been walked in
rcu-walk mode. The further we get from common path elements (such as cwd or
root), the less contended the dentry is likely to be. The closer we are to
common path elements, the more likely they are to exist in the dentry cache.


Papers and other documentation on dcache locking
================================================

1. Scaling dcache with RCU (http://linuxjournal.com/article.php?sid=7124).

2. http://lse.sourceforge.net/locking/dcache/dcache.html