linux/Documentation/filesystems/inotify.txt
                                   inotify
            a powerful yet simple file change notification system



Document started 15 Mar 2005 by Robert Love <rml@novell.com>


(i) User Interface

Inotify is controlled by a set of three system calls and normal file I/O on a
returned file descriptor.

The first step in using inotify is to initialize an inotify instance:

        int fd = inotify_init ();

Each instance is associated with a unique, ordered queue.

Change events are managed by "watches".  A watch is an (object,mask) pair where
the object is a file or directory and the mask is a bit mask of one or more
inotify events that the application wishes to receive.  See <linux/inotify.h>
for valid events.  A watch is referenced by a watch descriptor, or wd.

Watches are added via a path to the file.

Watches on a directory will return events on any files inside the directory.

Adding a watch is simple:

        int wd = inotify_add_watch (fd, path, mask);

Where "fd" is the return value from inotify_init(), path is the path to the
object to watch, and mask is the watch mask (see <linux/inotify.h>).

You can update an existing watch in the same manner, by passing in a new mask.

An existing watch is removed via

        int ret = inotify_rm_watch (fd, wd);

Events are provided in the form of an inotify_event structure that is read(2)
from a given inotify instance.  The filename is of dynamic length and follows
the struct.  It is of size len.  The filename is padded with null bytes to
ensure proper alignment.  This padding is reflected in len.

You can slurp multiple events by passing a large buffer, for example

        ssize_t len = read (fd, buf, BUF_LEN);

Where "buf" is a pointer to an array of "inotify_event" structures at least
BUF_LEN bytes in size.  The above example will return as many events as are
available and fit in BUF_LEN.

Each inotify instance fd is also select()- and poll()-able.

You can find the size of the current event queue via the standard FIONREAD
ioctl on the fd returned by inotify_init().

All watches are destroyed and cleaned up on close.


(ii) Prototypes

        int inotify_init (void);
        int inotify_add_watch (int fd, const char *path, __u32 mask);
        int inotify_rm_watch (int fd, __u32 wd);


(iii) Kernel Interface

Inotify's kernel API consists of a set of functions for managing watches and an
event callback.

To use the kernel API, you must first initialize an inotify instance with a set
of inotify_operations.  You are given an opaque inotify_handle, which you use
for any further calls to inotify.

    struct inotify_handle *ih = inotify_init(my_event_handler);

You must provide a function for processing events and a function for destroying
the inotify watch.

    void handle_event(struct inotify_watch *watch, u32 wd, u32 mask,
                      u32 cookie, const char *name, struct inode *inode)

        watch - the pointer to the inotify_watch that triggered this call
        wd - the watch descriptor
        mask - describes the event that occurred
        cookie - an identifier for synchronizing events
        name - the dentry name for affected files in a directory-based event
        inode - the affected inode in a directory-based event

    void destroy_watch(struct inotify_watch *watch)

You may add watches by providing a pre-allocated and initialized inotify_watch
structure and specifying the inode to watch along with an inotify event mask.
You must pin the inode during the call.  You will likely wish to embed the
inotify_watch structure in a structure of your own which contains other
information about the watch.  Once you add an inotify watch, it is immediately
subject to removal depending on filesystem events.  You must grab a reference if
you depend on the watch hanging around after the call.

    inotify_init_watch(&my_watch->iwatch);
    get_inotify_watch(&my_watch->iwatch);       // optional
    s32 wd = inotify_add_watch(ih, &my_watch->iwatch, inode, mask);
    put_inotify_watch(&my_watch->iwatch);       // optional

You may use the watch descriptor (wd) or the address of the inotify_watch for
other inotify operations.  You must not directly read or manipulate data in the
inotify_watch.  Additionally, you must not call inotify_add_watch() more than
once for a given inotify_watch structure, unless you have first called either
inotify_rm_watch() or inotify_rm_wd().

To determine if you have already registered a watch for a given inode, you may
call inotify_find_watch(), which gives you both the wd and the watch pointer for
the inotify_watch, or an error if the watch does not exist.

    wd = inotify_find_watch(ih, inode, &watchp);

You may use container_of() on the watch pointer to access your own data
associated with a given watch.  When an existing watch is found,
inotify_find_watch() bumps the refcount before releasing its locks.  You must
put that reference with:

    put_inotify_watch(watchp);

Call inotify_find_update_watch() to update the event mask for an existing watch.
inotify_find_update_watch() returns the wd of the updated watch, or an error if
the watch does not exist.

    wd = inotify_find_update_watch(ih, inode, mask);

An existing watch may be removed by calling either inotify_rm_watch() or
inotify_rm_wd().

    int ret = inotify_rm_watch(ih, &my_watch->iwatch);
    int ret = inotify_rm_wd(ih, wd);

A watch may be removed while executing your event handler with the following:

    inotify_remove_watch_locked(ih, iwatch);

Call inotify_destroy() to remove all watches from your inotify instance and
release it.  If there are no outstanding references, inotify_destroy() will call
your destroy_watch op for each watch.

    inotify_destroy(ih);

When inotify removes a watch, it sends an IN_IGNORED event to your callback.
You may use this event as an indication to free the watch memory.  Note that
inotify may remove a watch due to filesystem events, as well as by your request.
If you use IN_ONESHOT, inotify will remove the watch after the first event, at
which point you may call the final put_inotify_watch().

(iv) Kernel Interface Prototypes

        struct inotify_handle *inotify_init(struct inotify_operations *ops);

        void inotify_init_watch(struct inotify_watch *watch);

        s32 inotify_add_watch(struct inotify_handle *ih,
                              struct inotify_watch *watch,
                              struct inode *inode, u32 mask);

        s32 inotify_find_watch(struct inotify_handle *ih, struct inode *inode,
                               struct inotify_watch **watchp);

        s32 inotify_find_update_watch(struct inotify_handle *ih,
                                      struct inode *inode, u32 mask);

        int inotify_rm_wd(struct inotify_handle *ih, u32 wd);

        int inotify_rm_watch(struct inotify_handle *ih,
                             struct inotify_watch *watch);

        void inotify_remove_watch_locked(struct inotify_handle *ih,
                                         struct inotify_watch *watch);

        void inotify_destroy(struct inotify_handle *ih);

        void get_inotify_watch(struct inotify_watch *watch);
        void put_inotify_watch(struct inotify_watch *watch);


(v) Internal Kernel Implementation

Each inotify instance is represented by an inotify_handle structure.
Inotify's userspace consumers also have an inotify_device which is
associated with the inotify_handle, and on which events are queued.

Each watch is associated with an inotify_watch structure.  Watches are chained
off each associated inotify_handle and each associated inode.

See fs/inotify.c and fs/inotify_user.c for the locking and lifetime rules.


(vi) Rationale

Q: What is the design decision behind not tying the watch to the open fd of
   the watched object?

A: Watches are associated with an open inotify device, not an open file.
   This solves the primary problem with dnotify: keeping the file open pins
   the file and thus, worse, pins the mount.  Dnotify is therefore infeasible
   for use on a desktop system with removable media as the media cannot be
   unmounted.  Watching a file should not require that it be open.

Q: What is the design decision behind using an fd-per-instance as opposed to
   an fd-per-watch?

A: An fd-per-watch quickly consumes more file descriptors than are allowed,
   more fd's than are feasible to manage, and more fd's than are optimally
   select()-able.  Yes, root can bump the per-process fd limit and yes, users
   can use epoll, but requiring both is a silly and extraneous requirement.
   A watch consumes less memory than an open file; separating the number
   spaces is thus sensible.  The current design is what user-space developers
   want: users initialize inotify once and add n watches, requiring but one
   fd and no twiddling with fd limits.  Initializing an inotify instance two
   thousand times is silly.  If we can implement user-space's preferences
   cleanly--and we can, the idr layer makes stuff like this trivial--then we
   should.

   There are other good arguments.  With a single fd, there is a single
   item to block on, which is mapped to a single queue of events.  The single
   fd returns all watch events and also any potential out-of-band data.  If
   every fd were a separate watch,

   - There would be no way to get event ordering.  Events on file foo and
     file bar would pop poll() on both fd's, but there would be no way to tell
     which happened first.  A single queue trivially gives you ordering.  Such
     ordering is crucial to existing applications such as Beagle.  Imagine
     "mv a b ; mv b a" events without ordering.

   - We'd have to maintain n fd's and n internal queues with state,
     versus just one.  It is a lot messier in the kernel.  A single, linear
     queue is the data structure that makes sense.

   - User-space developers prefer the current API.  The Beagle guys, for
     example, love it.  Trust me, I asked.  It is not a surprise: Who'd want
     to manage and block on 1000 fd's via select?

   - There would be no way to get out-of-band data.

   - 1024 is still too low.  ;-)

   When you talk about designing a file change notification system that
   scales to 1000s of directories, juggling 1000s of fd's just does not seem
   the right interface.  It is too heavy.


   Additionally, it _is_ possible to have more than one instance and
   juggle more than one queue and thus more than one associated fd.  There
   need not be a one-fd-per-process mapping; it is one-fd-per-queue and a
   process can easily want more than one queue.

Q: Why the system call approach?

A: The poor user-space interface is the second biggest problem with dnotify.
   Signals are a terrible, terrible interface for file notification.  Or for
   anything, for that matter.  The ideal solution, from all perspectives, is a
   file descriptor-based one that allows basic file I/O and poll/select.
   Obtaining the fd and managing the watches could have been done either via a
   device file or a family of new system calls.  We decided to implement a
   family of system calls because that is the preferred approach for new kernel
   interfaces.  The only real difference was whether we wanted to use open(2)
   and ioctl(2) or a couple of new system calls.  System calls beat ioctls.
