Concurrency Managed Workqueue (cmwq)

September, 2010         Tejun Heo <tj@kernel.org>
                        Florian Mickler <florian@mickler.org>

CONTENTS

1. Introduction
2. Why cmwq?
3. The Design
4. Application Programming Interface (API)
5. Example Execution Scenarios
6. Guidelines


1. Introduction

There are many cases where an asynchronous process execution context
is needed, and the workqueue (wq) API is the most commonly used
mechanism for such cases.

When such an asynchronous execution context is needed, a work item
describing which function to execute is put on a queue.  An
independent thread serves as the asynchronous execution context.  The
queue is called a workqueue and the thread is called a worker.

While there are work items on the workqueue, the worker executes the
functions associated with the work items one after the other.  When
there is no work item left on the workqueue, the worker becomes idle.
When a new work item gets queued, the worker begins executing again.


2. Why cmwq?

In the original wq implementation, a multi threaded (MT) wq had one
worker thread per CPU and a single threaded (ST) wq had one worker
thread system-wide.  A single MT wq needed to keep around the same
number of workers as the number of CPUs.  The kernel grew a lot of MT
wq users over the years and, with the number of CPU cores continuously
rising, some systems saturated the default 32k PID space just booting
up.

Although MT wq wasted a lot of resources, the level of concurrency
provided was unsatisfactory.  The limitation was common to both ST and
MT wq, albeit less severe on MT.  Each wq maintained its own separate
worker pool.  An MT wq could provide only one execution context per
CPU while an ST wq provided one for the whole system.  Work items had
to compete for those very limited execution contexts, leading to
various problems including proneness to deadlocks around the single
execution context.

The tension between the provided level of concurrency and resource
usage also forced its users to make unnecessary tradeoffs, like libata
choosing to use an ST wq for polling PIOs and accepting the
unnecessary limitation that no two polling PIOs can progress at the
same time.  As MT wq don't provide much better concurrency, users
which require a higher level of concurrency, like async or fscache,
had to implement their own thread pools.

Concurrency Managed Workqueue (cmwq) is a reimplementation of wq with
focus on the following goals.

* Maintain compatibility with the original workqueue API.

* Use per-CPU unified worker pools shared by all wq to provide a
  flexible level of concurrency on demand without wasting a lot of
  resources.

* Automatically regulate worker pool and level of concurrency so that
  the API users don't need to worry about such details.


3. The Design

In order to ease the asynchronous execution of functions, a new
abstraction, the work item, is introduced.

A work item is a simple struct that holds a pointer to the function
that is to be executed asynchronously.  Whenever a driver or subsystem
wants a function to be executed asynchronously, it has to set up a
work item pointing to that function and queue that work item on a
workqueue.
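
As a rough sketch of what this looks like in code (the names
example_work and example_work_fn are illustrative, not taken from
this document):

        #include <linux/workqueue.h>

        /* the function to be executed asynchronously */
        static void example_work_fn(struct work_struct *work)
        {
                /* runs in the context of a worker thread */
        }

        /* a work item pointing at example_work_fn */
        static DECLARE_WORK(example_work, example_work_fn);

        /* somewhere in the driver: queue the work item on the
           default system workqueue; a worker will execute it */
        schedule_work(&example_work);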

Special purpose threads, called worker threads, execute the functions
off of the queue, one after the other.  If no work is queued, the
worker threads become idle.  These worker threads are managed in
so-called thread-pools.

The cmwq design differentiates between the user-facing workqueues that
subsystems and drivers queue work items on and the backend mechanism
which manages thread-pools and processes the queued work items.

The backend is called gcwq (global cpu workqueue).  There is one gcwq
for each possible CPU and one gcwq to serve work items queued on
unbound workqueues.

Subsystems and drivers can create and queue work items through special
workqueue API functions as they see fit.  They can influence some
aspects of the way the work items are executed by setting flags on the
workqueue they are putting the work item on.  These flags include
things like CPU locality, reentrancy, concurrency limits and more.  To
get a detailed overview refer to the API description of
alloc_workqueue() below.

When a work item is queued to a workqueue, the target gcwq is
determined according to the queue parameters and workqueue attributes
and appended on the shared worklist of the gcwq.  For example, unless
specifically overridden, a work item of a bound workqueue will be
queued on the worklist of exactly that gcwq that is associated with
the CPU the issuer is running on.

For any worker pool implementation, managing the concurrency level
(how many execution contexts are active) is an important issue.  cmwq
tries to keep the concurrency at a minimal but sufficient level.
Minimal to save resources and sufficient in that the system is used at
its full capacity.

Each gcwq bound to an actual CPU implements concurrency management by
hooking into the scheduler.  The gcwq is notified whenever an active
worker wakes up or sleeps and keeps track of the number of currently
runnable workers.  Generally, work items are not expected to hog a CPU
and consume many cycles.  That means maintaining just enough
concurrency to prevent work processing from stalling should be
optimal.  As long as there are one or more runnable workers on the
CPU, the gcwq doesn't start execution of a new work item, but, when
the last running worker goes to sleep, it immediately schedules a new
worker so that the CPU doesn't sit idle while there are pending work
items.  This allows using a minimal number of workers without losing
execution bandwidth.

Keeping idle workers around doesn't cost anything other than the
memory space for kthreads, so cmwq holds onto idle ones for a while
before killing them.

For an unbound wq, the above concurrency management doesn't apply and
the gcwq for the pseudo unbound CPU tries to start executing all work
items as soon as possible.  The responsibility of regulating the
concurrency level is on the users.  There is also a flag to mark a
bound wq to ignore the concurrency management.  Please refer to the
API section for details.

Forward progress guarantee relies on workers being created when more
execution contexts are necessary, which in turn is guaranteed through
the use of rescue workers.  All work items which might be used on code
paths that handle memory reclaim are required to be queued on wq's
that have a rescue-worker reserved for execution under memory
pressure.  Otherwise, it is possible that the thread-pool deadlocks
waiting for execution contexts to free up.


4. Application Programming Interface (API)

alloc_workqueue() allocates a wq.  The original create_*workqueue()
functions are deprecated and scheduled for removal.  alloc_workqueue()
takes three arguments - @name, @flags and @max_active.  @name is the
name of the wq and is also used as the name of the rescuer thread if
there is one.
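
A minimal usage sketch (the wq name and the surrounding error
handling are illustrative; @flags and @max_active are explained
below):

        struct workqueue_struct *wq;

        /* 0 for @flags and @max_active selects the defaults */
        wq = alloc_workqueue("example_wq", 0, 0);
        if (!wq)
                return -ENOMEM;

        /* work items can now be queued on it, e.g. the
           example_work item from the earlier sketch */
        queue_work(wq, &example_work);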

A wq no longer manages execution resources but serves as a domain for
forward progress guarantee, flush and work item attributes.  @flags
and @max_active control how work items are assigned execution
resources, scheduled and executed.

@flags:

  WQ_NON_REENTRANT

        By default, a wq guarantees non-reentrance only on the same
        CPU.  A work item may not be executed concurrently on the same
        CPU by multiple workers but is allowed to be executed
        concurrently on multiple CPUs.  This flag makes sure
        non-reentrance is enforced across all CPUs.  Work items queued
        to a non-reentrant wq are guaranteed to be executed by at most
        one worker system-wide at any given time.

  WQ_UNBOUND

        Work items queued to an unbound wq are served by a special
        gcwq which hosts workers which are not bound to any specific
        CPU.  This makes the wq behave as a simple execution context
        provider without concurrency management.  The unbound gcwq
        tries to start execution of work items as soon as possible.
        Unbound wq sacrifices locality but is useful for the following
        cases.

        * Wide fluctuation in the concurrency level requirement is
          expected and using a bound wq may end up creating a large
          number of mostly unused workers across different CPUs as the
          issuer hops through different CPUs.

        * Long running CPU intensive workloads which can be better
          managed by the system scheduler.
  WQ_FREEZABLE

        A freezable wq participates in the freeze phase of the system
        suspend operations.  Work items on the wq are drained and no
        new work item starts execution until thawed.

  WQ_MEM_RECLAIM

        All wq which might be used in the memory reclaim paths _MUST_
        have this flag set.  The wq is guaranteed to have at least one
        execution context regardless of memory pressure.

  WQ_HIGHPRI

        Work items of a highpri wq are queued at the head of the
        worklist of the target gcwq and start execution regardless of
        the current concurrency level.  In other words, highpri work
        items will always start execution as soon as an execution
        resource is available.

        Ordering among highpri work items is preserved - a highpri
        work item queued after another highpri work item will start
        execution after the earlier highpri work item starts.

        Although highpri work items are not held back by other
        runnable work items, they still contribute to the concurrency
        level.  Highpri work items in runnable state will prevent
        non-highpri work items from starting execution.

        This flag is meaningless for unbound wq.

  WQ_CPU_INTENSIVE

        Work items of a CPU intensive wq do not contribute to the
        concurrency level.  In other words, runnable CPU intensive
        work items will not prevent other work items from starting
        execution.  This is useful for bound work items which are
        expected to hog CPU cycles so that their execution is
        regulated by the system scheduler.

        Although CPU intensive work items don't contribute to the
        concurrency level, the start of their execution is still
        regulated by the concurrency management, and runnable
        non-CPU-intensive work items can delay execution of CPU
        intensive work items.

        This flag is meaningless for unbound wq.

  WQ_HIGHPRI | WQ_CPU_INTENSIVE

        This combination makes the wq avoid interaction with
        concurrency management completely and behave as a simple
        per-CPU execution context provider.  Work items queued on a
        highpri CPU-intensive wq start execution as soon as resources
        are available and don't affect execution of other work items.
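
A hypothetical allocation of such a wq might look like this (the
name is illustrative):

        struct workqueue_struct *wq;

        /* a simple per-CPU execution context provider which
           bypasses the concurrency management entirely */
        wq = alloc_workqueue("example_pcpu_wq",
                             WQ_HIGHPRI | WQ_CPU_INTENSIVE, 0);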

@max_active:

@max_active determines the maximum number of execution contexts per
CPU which can be assigned to the work items of a wq.  For example,
with @max_active of 16, at most 16 work items of the wq can be
executing at the same time per CPU.

Currently, for a bound wq, the maximum limit for @max_active is 512
and the default value used when 0 is specified is 256.  For an unbound
wq, the limit is the higher of 512 and 4 * num_possible_cpus().  These
values are chosen sufficiently high such that they are not the
limiting factor while providing protection in runaway cases.

The number of active work items of a wq is usually regulated by the
users of the wq, more specifically, by how many work items the users
may queue at the same time.  Unless there is a specific need for
throttling the number of active work items, specifying '0' is
recommended.

Some users depend on the strict execution ordering of ST wq.  The
combination of @max_active of 1 and WQ_UNBOUND is used to achieve this
behavior.  Work items on such wq are always queued to the unbound gcwq
and only one work item can be active at any given time, thus achieving
the same ordering property as ST wq.
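
For example, such an ordered wq might be allocated as follows (name
illustrative):

        struct workqueue_struct *ordered_wq;

        /* at most one work item is active at any given time and
           work items execute in queueing order, like an ST wq */
        ordered_wq = alloc_workqueue("example_ordered_wq",
                                     WQ_UNBOUND, 1);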


5. Example Execution Scenarios

The following example execution scenarios try to illustrate how cmwq
behaves under different configurations.

 Work items w0, w1, w2 are queued to a bound wq q0 on the same CPU.
 w0 burns CPU for 5ms then sleeps for 10ms then burns CPU for 5ms
 again before finishing.  w1 and w2 burn CPU for 5ms then sleep for
 10ms.

Ignoring all other tasks, work items and processing overhead, and
assuming simple FIFO scheduling, the following is one highly
simplified version of possible sequences of events with the original
wq.

 TIME IN MSECS  EVENT
 0              w0 starts and burns CPU
 5              w0 sleeps
 15             w0 wakes up and burns CPU
 20             w0 finishes
 20             w1 starts and burns CPU
 25             w1 sleeps
 35             w1 wakes up and finishes
 35             w2 starts and burns CPU
 40             w2 sleeps
 50             w2 wakes up and finishes

And with cmwq with @max_active >= 3,

 TIME IN MSECS  EVENT
 0              w0 starts and burns CPU
 5              w0 sleeps
 5              w1 starts and burns CPU
 10             w1 sleeps
 10             w2 starts and burns CPU
 15             w2 sleeps
 15             w0 wakes up and burns CPU
 20             w0 finishes
 20             w1 wakes up and finishes
 25             w2 wakes up and finishes

If @max_active == 2,

 TIME IN MSECS  EVENT
 0              w0 starts and burns CPU
 5              w0 sleeps
 5              w1 starts and burns CPU
 10             w1 sleeps
 15             w0 wakes up and burns CPU
 20             w0 finishes
 20             w1 wakes up and finishes
 20             w2 starts and burns CPU
 25             w2 sleeps
 35             w2 wakes up and finishes

Now, let's assume w1 and w2 are queued to a different wq q1 which has
WQ_HIGHPRI set,

 TIME IN MSECS  EVENT
 0              w1 and w2 start and burn CPU
 5              w1 sleeps
 10             w2 sleeps
 10             w0 starts and burns CPU
 15             w0 sleeps
 15             w1 wakes up and finishes
 20             w2 wakes up and finishes
 25             w0 wakes up and burns CPU
 30             w0 finishes

If q1 has WQ_CPU_INTENSIVE set,

 TIME IN MSECS  EVENT
 0              w0 starts and burns CPU
 5              w0 sleeps
 5              w1 and w2 start and burn CPU
 10             w1 sleeps
 15             w2 sleeps
 15             w0 wakes up and burns CPU
 20             w0 finishes
 20             w1 wakes up and finishes
 25             w2 wakes up and finishes


6. Guidelines

* Do not forget to use WQ_MEM_RECLAIM if a wq may process work items
  which are used during memory reclaim.  Each wq with WQ_MEM_RECLAIM
  set has an execution context reserved for it.  If there is a
  dependency among multiple work items used during memory reclaim,
  they should be queued to separate wq, each with WQ_MEM_RECLAIM (see
  the sketch after this list).

* Unless strict ordering is required, there is no need to use ST wq.

* Unless there is a specific need, using 0 for @max_active is
  recommended.  In most use cases, concurrency level usually stays
  well under the default limit.

* A wq serves as a domain for forward progress guarantee
  (WQ_MEM_RECLAIM), flush and work item attributes.  Work items which
  are not involved in memory reclaim, don't need to be flushed as part
  of a group of work items, and don't require any special attribute
  can use one of the system wqs.  There is no difference in execution
  characteristics between using a dedicated wq and a system wq.

* Unless work items are expected to consume a huge amount of CPU
  cycles, using a bound wq is usually beneficial due to the increased
  level of locality in wq operations and work item execution.
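
To illustrate the first guideline, a sketch with hypothetical names:
if work item work_a may wait on work item work_b during memory
reclaim, queueing them on separate WQ_MEM_RECLAIM wqs gives each its
own rescuer:

        struct workqueue_struct *wq_a, *wq_b;

        /* work_a and work_b are hypothetical work items set up
           elsewhere.  Giving each its own WQ_MEM_RECLAIM wq
           guarantees each a rescuer, so work_b can still make
           progress while work_a waits for it under memory
           pressure. */
        wq_a = alloc_workqueue("reclaim_a", WQ_MEM_RECLAIM, 0);
        wq_b = alloc_workqueue("reclaim_b", WQ_MEM_RECLAIM, 0);

        queue_work(wq_a, &work_a);      /* may wait on work_b */
        queue_work(wq_b, &work_b);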