Concurrency Managed Workqueue (cmwq)

September, 2010         Tejun Heo <tj@kernel.org>
                        Florian Mickler <florian@mickler.org>

CONTENTS

1. Introduction
2. Why cmwq?
3. The Design
4. Application Programming Interface (API)
5. Example Execution Scenarios
6. Guidelines
7. Debugging


1. Introduction

There are many cases where an asynchronous process execution context
is needed and the workqueue (wq) API is the most commonly used
mechanism for such cases.

When such an asynchronous execution context is needed, a work item
describing which function to execute is put on a queue.  An
independent thread serves as the asynchronous execution context.  The
queue is called a workqueue and the thread is called a worker.

While there are work items on the workqueue, the worker executes the
functions associated with the work items one after the other.  When
there is no work item left on the workqueue, the worker becomes idle.
When a new work item gets queued, the worker begins executing again.


2. Why cmwq?

In the original wq implementation, a multi-threaded (MT) wq had one
worker thread per CPU and a single-threaded (ST) wq had one worker
thread system-wide.  A single MT wq needed to keep around the same
number of workers as the number of CPUs.  The kernel grew a lot of MT
wq users over the years and with the number of CPU cores continuously
rising, some systems saturated the default 32k PID space just booting
up.

Although MT wq wasted a lot of resources, the level of concurrency
provided was unsatisfactory.  The limitation was common to both ST and
MT wq, albeit less severe on MT.  Each wq maintained its own separate
worker pool.  An MT wq could provide only one execution context per
CPU while an ST wq provided only one for the whole system.  Work items
had to compete for those very limited execution contexts, leading to
various problems including proneness to deadlocks around the single
execution context.

The tension between the provided level of concurrency and resource
usage also forced its users to make unnecessary tradeoffs like libata
choosing to use an ST wq for polling PIOs and accepting the
unnecessary limitation that no two polling PIOs can progress at the
same time.  As MT wq don't provide much better concurrency, users
which require a higher level of concurrency, like async or fscache,
had to implement their own thread pool.

Concurrency Managed Workqueue (cmwq) is a reimplementation of wq with
a focus on the following goals.

* Maintain compatibility with the original workqueue API.

* Use per-CPU unified worker pools shared by all wq to provide a
  flexible level of concurrency on demand without wasting a lot of
  resources.

* Automatically regulate the worker pool and level of concurrency so
  that the API users don't need to worry about such details.


3. The Design

In order to ease the asynchronous execution of functions, a new
abstraction, the work item, is introduced.

A work item is a simple struct that holds a pointer to the function
that is to be executed asynchronously.  Whenever a driver or subsystem
wants a function to be executed asynchronously, it has to set up a
work item pointing to that function and queue that work item on a
workqueue.

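As a minimal sketch (the struct, function and variable names below are
made up for illustration; INIT_WORK(), schedule_work() and
container_of() are the regular kernel primitives), setting up and
queueing such a work item looks roughly like this:

        #include <linux/kernel.h>
        #include <linux/workqueue.h>

        struct my_data {
                struct work_struct work;        /* the work item itself */
                int payload;                    /* data for the work function */
        };

        /* executed asynchronously by a worker thread */
        static void my_work_fn(struct work_struct *work)
        {
                struct my_data *d = container_of(work, struct my_data, work);

                pr_info("processing payload %d\n", d->payload);
        }

        static void my_submit(struct my_data *d)
        {
                INIT_WORK(&d->work, my_work_fn);
                schedule_work(&d->work);        /* queue on a system workqueue */
        }
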
Special purpose threads, called worker threads, execute the functions
off of the queue, one after the other.  If no work is queued, the
worker threads become idle.  These worker threads are managed in
so-called thread-pools.

The cmwq design differentiates between the user-facing workqueues that
subsystems and drivers queue work items on and the backend mechanism
which manages thread-pools and processes the queued work items.

The backend is called gcwq.  There is one gcwq for each possible CPU
and one gcwq to serve work items queued on unbound workqueues.

Subsystems and drivers can create and queue work items through special
workqueue API functions as they see fit.  They can influence some
aspects of the way the work items are executed by setting flags on the
workqueue they are putting the work item on.  These flags include
things like CPU locality, reentrancy, concurrency limits and more.  To
get a detailed overview, refer to the API description of
alloc_workqueue() below.

When a work item is queued to a workqueue, the target gcwq is
determined according to the queue parameters and workqueue attributes,
and the work item is appended to the shared worklist of that gcwq.
For example, unless specifically overridden, a work item of a bound
workqueue will be queued on the worklist of the gcwq associated with
the CPU the issuer is running on.
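
For example, assuming a bound wq my_wq and a work item my_work that
have already been set up (both names are hypothetical), the default
and the explicitly targeted queueing variants look like this:

        /* queue on the gcwq of the CPU the caller is running on */
        queue_work(my_wq, &my_work);

        /* explicitly target the gcwq of CPU 2 instead */
        queue_work_on(2, my_wq, &my_work);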

For any worker pool implementation, managing the concurrency level
(how many execution contexts are active) is an important issue.  cmwq
tries to keep the concurrency at a minimal but sufficient level.
Minimal to save resources and sufficient in that the system is used at
its full capacity.

Each gcwq bound to an actual CPU implements concurrency management by
hooking into the scheduler.  The gcwq is notified whenever an active
worker wakes up or sleeps and keeps track of the number of currently
runnable workers.  Generally, work items are not expected to hog a CPU
and consume many cycles.  That means maintaining just enough
concurrency to prevent work processing from stalling should be
optimal.  As long as there are one or more runnable workers on the
CPU, the gcwq doesn't start execution of a new work item, but, when
the last running worker goes to sleep, it immediately schedules a new
worker so that the CPU doesn't sit idle while there are pending work
items.  This allows using a minimal number of workers without losing
execution bandwidth.

Keeping idle workers around doesn't cost anything other than the
memory space used for kthreads, so cmwq holds onto idle ones for a
while before killing them.

For an unbound wq, the above concurrency management doesn't apply and
the gcwq for the pseudo unbound CPU tries to start executing all work
items as soon as possible.  The responsibility of regulating
concurrency level is on the users.  There is also a flag to mark a
bound wq to ignore the concurrency management.  Please refer to the
API section for details.

The forward progress guarantee relies on workers being created
whenever more execution contexts are necessary, which in turn is
guaranteed through the use of rescue workers.  All work items which
might be used on code paths that handle memory reclaim are required to
be queued on wq's that have a rescue-worker reserved for execution
under memory pressure.  Otherwise, it is possible that the thread-pool
deadlocks waiting for execution contexts to free up.


4. Application Programming Interface (API)

alloc_workqueue() allocates a wq.  The original create_*workqueue()
functions are deprecated and scheduled for removal.  alloc_workqueue()
takes three arguments - @name, @flags and @max_active.  @name is the
name of the wq and is also used as the name of the rescuer thread if
there is one.

A wq no longer manages execution resources but serves as a domain for
forward progress guarantee, flush and work item attributes.  @flags
and @max_active control how work items are assigned execution
resources, scheduled and executed.
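
A minimal allocation and queueing sketch (the wq name and the work
item my_work are hypothetical; only the NULL check is shown as error
handling):

        struct workqueue_struct *my_wq;

        my_wq = alloc_workqueue("my_wq", 0, 0); /* default flags and max_active */
        if (!my_wq)
                return -ENOMEM;

        queue_work(my_wq, &my_work);

        /* ... later: drain remaining work items and release the wq */
        destroy_workqueue(my_wq);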

@flags:

  WQ_NON_REENTRANT

        By default, a wq guarantees non-reentrance only on the same
        CPU.  A work item may not be executed concurrently on the same
        CPU by multiple workers but is allowed to be executed
        concurrently on multiple CPUs.  This flag makes sure
        non-reentrance is enforced across all CPUs.  Work items queued
        to a non-reentrant wq are guaranteed to be executed by at most
        one worker system-wide at any given time.

  WQ_UNBOUND

        Work items queued to an unbound wq are served by a special
        gcwq which hosts workers which are not bound to any specific
        CPU.  This makes the wq behave as a simple execution context
        provider without concurrency management.  The unbound gcwq
        tries to start execution of work items as soon as possible.
        Unbound wq sacrifices locality but is useful for the following
        cases.

        * Wide fluctuation in the concurrency level requirement is
          expected and using a bound wq may end up creating a large
          number of mostly unused workers across different CPUs as the
          issuer hops through different CPUs.

        * Long-running CPU-intensive workloads which can be better
          managed by the system scheduler.

  WQ_FREEZABLE

        A freezable wq participates in the freeze phase of the system
        suspend operations.  Work items on the wq are drained and no
        new work item starts execution until thawed.

  WQ_MEM_RECLAIM

        All wq which might be used in the memory reclaim paths _MUST_
        have this flag set.  The wq is guaranteed to have at least one
        execution context regardless of memory pressure.

  WQ_HIGHPRI

        Work items of a highpri wq are queued at the head of the
        worklist of the target gcwq and start execution regardless of
        the current concurrency level.  In other words, highpri work
        items will always start execution as soon as execution
        resource is available.

        Ordering among highpri work items is preserved - a highpri
        work item queued after another highpri work item will start
        execution after the earlier highpri work item starts.

        Although highpri work items are not held back by other
        runnable work items, they still contribute to the concurrency
        level.  Highpri work items in runnable state will prevent
        non-highpri work items from starting execution.

        This flag is meaningless for unbound wq.

  WQ_CPU_INTENSIVE

        Work items of a CPU intensive wq do not contribute to the
        concurrency level.  In other words, runnable CPU intensive
        work items will not prevent other work items from starting
        execution.  This is useful for bound work items which are
        expected to hog CPU cycles so that their execution is
        regulated by the system scheduler.

        Although CPU intensive work items don't contribute to the
        concurrency level, the start of their execution is still
        regulated by the concurrency management and runnable
        non-CPU-intensive work items can delay execution of CPU
        intensive work items.

        This flag is meaningless for unbound wq.

  WQ_HIGHPRI | WQ_CPU_INTENSIVE

        This combination makes the wq avoid interaction with
        concurrency management completely and behave as a simple
        per-CPU execution context provider.  Work items queued on a
        highpri CPU-intensive wq start execution as soon as resources
        are available and don't affect execution of other work items.

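For illustration, wq's with some of the above flags might be allocated
as follows (all names are hypothetical):

        struct workqueue_struct *ctx_wq, *reclaim_wq;

        /* per-CPU execution context provider, bypassing concurrency management */
        ctx_wq = alloc_workqueue("ctx_provider", WQ_HIGHPRI | WQ_CPU_INTENSIVE, 0);

        /* unbound wq which may be used on memory reclaim paths */
        reclaim_wq = alloc_workqueue("my_reclaim_wq", WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
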
@max_active:

@max_active determines the maximum number of execution contexts per
CPU which can be assigned to the work items of a wq.  For example,
with @max_active of 16, at most 16 work items of the wq can be
executing at the same time per CPU.

Currently, for a bound wq, the maximum limit for @max_active is 512
and the default value used when 0 is specified is 256.  For an unbound
wq, the limit is the higher of 512 and 4 * num_possible_cpus().  These
values are chosen sufficiently high such that they are not the
limiting factor while providing protection in runaway cases.

The number of active work items of a wq is usually regulated by the
users of the wq, more specifically, by how many work items the users
may queue at the same time.  Unless there is a specific need for
throttling the number of active work items, specifying '0' is
recommended.

Some users depend on the strict execution ordering of ST wq.  The
combination of @max_active of 1 and WQ_UNBOUND is used to achieve this
behavior.  Work items on such a wq are always queued to the unbound
gcwq and only one work item can be active at any given time, thus
achieving the same ordering property as ST wq.
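
For instance, such a strictly ordered wq could be allocated like this
(the name is hypothetical):

        struct workqueue_struct *ordered_wq;

        /* at most one work item active system-wide, executed in queueing order */
        ordered_wq = alloc_workqueue("my_ordered_wq", WQ_UNBOUND, 1);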


5. Example Execution Scenarios

The following example execution scenarios try to illustrate how cmwq
behaves under different configurations.

 Work items w0, w1, w2 are queued to a bound wq q0 on the same CPU.
 w0 burns CPU for 5ms then sleeps for 10ms then burns CPU for 5ms
 again before finishing.  w1 and w2 burn CPU for 5ms then sleep for
 10ms.

Ignoring all other tasks, work items and processing overhead, and
assuming simple FIFO scheduling, the following is one highly
simplified version of possible sequences of events with the original
wq.

 TIME IN MSECS  EVENT
 0              w0 starts and burns CPU
 5              w0 sleeps
 15             w0 wakes up and burns CPU
 20             w0 finishes
 20             w1 starts and burns CPU
 25             w1 sleeps
 35             w1 wakes up and finishes
 35             w2 starts and burns CPU
 40             w2 sleeps
 50             w2 wakes up and finishes

And with cmwq with @max_active >= 3,

 TIME IN MSECS  EVENT
 0              w0 starts and burns CPU
 5              w0 sleeps
 5              w1 starts and burns CPU
 10             w1 sleeps
 10             w2 starts and burns CPU
 15             w2 sleeps
 15             w0 wakes up and burns CPU
 20             w0 finishes
 20             w1 wakes up and finishes
 25             w2 wakes up and finishes

If @max_active == 2,

 TIME IN MSECS  EVENT
 0              w0 starts and burns CPU
 5              w0 sleeps
 5              w1 starts and burns CPU
 10             w1 sleeps
 15             w0 wakes up and burns CPU
 20             w0 finishes
 20             w1 wakes up and finishes
 20             w2 starts and burns CPU
 25             w2 sleeps
 35             w2 wakes up and finishes

Now, let's assume w1 and w2 are queued to a different wq q1 which has
WQ_HIGHPRI set,

 TIME IN MSECS  EVENT
 0              w1 and w2 start and burn CPU
 5              w1 sleeps
 10             w2 sleeps
 10             w0 starts and burns CPU
 15             w0 sleeps
 15             w1 wakes up and finishes
 20             w2 wakes up and finishes
 25             w0 wakes up and burns CPU
 30             w0 finishes

If q1 has WQ_CPU_INTENSIVE set,

 TIME IN MSECS  EVENT
 0              w0 starts and burns CPU
 5              w0 sleeps
 5              w1 and w2 start and burn CPU
 10             w1 sleeps
 15             w2 sleeps
 15             w0 wakes up and burns CPU
 20             w0 finishes
 20             w1 wakes up and finishes
 25             w2 wakes up and finishes


6. Guidelines

* Do not forget to use WQ_MEM_RECLAIM if a wq may process work items
  which are used during memory reclaim.  Each wq with WQ_MEM_RECLAIM
  set has an execution context reserved for it.  If there is a
  dependency among multiple work items used during memory reclaim,
  they should be queued to separate wq's, each with WQ_MEM_RECLAIM
  (see the sketch after this list).

* Unless strict ordering is required, there is no need to use ST wq.

* Unless there is a specific need, using 0 for @max_active is
  recommended.  In most use cases, concurrency level usually stays
  well under the default limit.

* A wq serves as a domain for forward progress guarantee
  (WQ_MEM_RECLAIM), flush and work item attributes.  Work items which
  are not involved in memory reclaim and don't need to be flushed as a
  part of a group of work items, and don't require any special
  attribute, can use one of the system wq's.  There is no difference
  in execution characteristics between using a dedicated wq and a
  system wq.

* Unless work items are expected to consume a huge amount of CPU
  cycles, using a bound wq is usually beneficial due to the increased
  level of locality in wq operations and work item execution.
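
As a sketch of the first guideline, if a work item on a memory reclaim
path waits for another work item to complete, giving each its own
WQ_MEM_RECLAIM wq (and thus its own rescuer) avoids the deadlock
described earlier; all names below are hypothetical:

        struct workqueue_struct *wq_a, *wq_b;

        /*
         * Each wq gets its own rescuer, so the work item queued on wq_b
         * can still make progress under memory pressure while the work
         * item on wq_a is waiting for it.
         */
        wq_a = alloc_workqueue("reclaim_stage_a", WQ_MEM_RECLAIM, 0);
        wq_b = alloc_workqueue("reclaim_stage_b", WQ_MEM_RECLAIM, 0);

        queue_work(wq_a, &work_a);      /* work_a's function waits for work_b */
        queue_work(wq_b, &work_b);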


7. Debugging

Because the work functions are executed by generic worker threads,
there are a few tricks needed to shed some light on misbehaving
workqueue users.

Worker threads show up in the process list as:

root      5671  0.0  0.0      0     0 ?        S    12:07   0:00 [kworker/0:1]
root      5672  0.0  0.0      0     0 ?        S    12:07   0:00 [kworker/1:2]
root      5673  0.0  0.0      0     0 ?        S    12:12   0:00 [kworker/0:0]
root      5674  0.0  0.0      0     0 ?        S    12:13   0:00 [kworker/1:0]

If kworkers are going crazy (using too much CPU), there are two types
of possible problems:

        1. Something being scheduled in rapid succession
        2. A single work item that consumes lots of CPU cycles

The first one can be tracked using tracing:

        $ echo workqueue:workqueue_queue_work > /sys/kernel/debug/tracing/set_event
        $ cat /sys/kernel/debug/tracing/trace_pipe > out.txt
        (wait a few secs)
        ^C

If something is busy looping on work queueing, it will dominate the
output and the offender can be determined from the work item's
function.

For the second type of problem, it should be possible to just check
the stack trace of the offending worker thread.

        $ cat /proc/THE_OFFENDING_KWORKER/stack

The work item's function should be trivially visible in the stack
trace.
