When such an asynchronous execution context is needed, a work item
describing which function to execute is put on a queue; an independent
worker thread serves as the asynchronous execution context.

While there are work items on the workqueue the worker executes the
functions associated with the work items one after the other. When
there is no work item left on the workqueue the worker becomes idle.
When a new work item gets queued, the worker begins executing again.
In the original wq implementation, a multi threaded (MT) wq had one worker
thread per CPU and a single threaded (ST) wq had one worker thread
system-wide. A single MT wq needed to keep around the same number of
workers as the number of CPUs.

An MT wq could provide only one execution context per CPU while an ST wq
only one for the whole system. Work items had to compete for those very
limited execution contexts, leading to various problems including
proneness to deadlocks around the single execution context.
* Use per-CPU unified worker pools shared by all wq to provide a
  flexible level of concurrency on demand without wasting a lot of
  resources.
In order to ease the asynchronous execution of functions a new
abstraction, the work item, is introduced.

A work item is a simple struct that holds a pointer to the function to be
executed asynchronously. Whenever a driver or subsystem wants a function
to be executed asynchronously it has to set up a work item pointing to
that function and queue that work item on a workqueue.
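As a minimal sketch of this pattern (``my_driver``, ``my_work_fn`` and the
surrounding helpers are hypothetical names used only for this example)::

  #include <linux/kernel.h>
  #include <linux/printk.h>
  #include <linux/workqueue.h>

  /* Hypothetical driver state embedding a work item. */
  struct my_driver {
          struct work_struct work;
          int pending_events;
  };

  /* The function the work item points to; runs in worker context. */
  static void my_work_fn(struct work_struct *work)
  {
          struct my_driver *drv = container_of(work, struct my_driver, work);

          pr_info("processing %d events\n", drv->pending_events);
  }

  static void my_driver_init(struct my_driver *drv)
  {
          /* Set up the work item to point at my_work_fn. */
          INIT_WORK(&drv->work, my_work_fn);
  }

  static void my_driver_event(struct my_driver *drv)
  {
          drv->pending_events++;
          /* Queue the work item for asynchronous execution. */
          schedule_work(&drv->work);
  }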
A work item can be executed in either a thread or the BH (softirq) context.

For threaded workqueues, special purpose threads, called [k]workers, execute
the functions off of the queue, one after the other. If no work is queued,
the worker threads become idle. These worker threads are managed in
worker-pools.
The cmwq design differentiates between the user-facing workqueues that
subsystems and drivers queue work items on and the backend mechanism
which manages worker-pools and processes the queued work items.
There are two worker-pools, one for normal work items and the other
for high priority ones, for each possible CPU, and some extra
worker-pools to serve work items queued on unbound workqueues - the
number of these backing pools is dynamic.
Each per-CPU BH worker pool contains only one pseudo worker, which
represents the BH execution context.
Subsystems and drivers can create and queue work items through special
workqueue API functions as they see fit. They can influence some
aspects of the way the work items are executed by setting flags on the
workqueue they are putting the work item on. These flags include
things like CPU locality, concurrency limits and priority.
When a work item is queued to a workqueue, the target worker-pool is
determined according to the queue parameters and workqueue attributes
and appended on the shared worklist of the worker-pool. For example,
unless specifically overridden, a work item of a bound workqueue will
be queued on the worklist of either the normal or highpri worker-pool that
is associated with the CPU the issuer is running on.
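As a sketch (``mydrv_wq`` and the work items are hypothetical and assumed to
have been set up elsewhere), the default queueing path versus an explicit CPU
override looks like::

  #include <linux/workqueue.h>

  static struct workqueue_struct *mydrv_wq;
  static struct work_struct local_work, remote_work;

  static void queueing_example(void)
  {
          /*
           * No override: the work item is appended to the worklist of the
           * normal (or highpri) worker-pool of the CPU this code runs on.
           */
          queue_work(mydrv_wq, &local_work);

          /*
           * Explicitly target the worker-pool associated with CPU 2
           * (assuming that CPU exists and is online).
           */
          queue_work_on(2, mydrv_wq, &remote_work);
  }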
Each worker-pool bound to an actual CPU implements concurrency
management by hooking into the scheduler. The worker-pool is notified
whenever an active worker wakes up or sleeps and keeps track of the
number of the currently runnable workers. Generally, work items are
not expected to hog a CPU and consume many cycles, which means that
maintaining just enough concurrency to prevent work processing from
stalling should be optimal. As long as there are one or more runnable
workers on the CPU, the worker-pool doesn't start execution of a new
work, but, when the last running worker goes to sleep, it immediately
schedules a new worker so that the CPU doesn't sit idle while there
are pending work items. This allows using a minimal number of workers
without losing execution bandwidth.
Keeping idle workers around costs nothing other than the memory space
for the kthreads, so cmwq holds onto idle workers for a while before
killing them.
Forward progress guarantee relies on workers being creatable when more
execution contexts are necessary, which in turn is guaranteed
through the use of rescue workers. All work items which might be used
on code paths that handle memory reclaim are required to be queued on
wq's that have a rescue-worker reserved for execution under memory
pressure. Otherwise it is possible that the worker-pool deadlocks waiting
for execution contexts to free up.
``alloc_workqueue()`` allocates a wq. The original ``create_*workqueue()``
functions are deprecated and scheduled for removal.
``alloc_workqueue()`` takes three arguments - ``@name``, ``@flags`` and
``@max_active``. ``@name`` is the name of the wq and is also used as the
name of the rescuer thread if there is one.
A wq no longer manages execution resources but serves as a domain for
forward progress guarantee, flush and work item attributes. ``@flags``
and ``@max_active`` control how work items are assigned execution
resources, scheduled and executed.
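A hedged sketch of the call (the name "mydrv" and the module functions are
made up for this example; ``@max_active`` of 0 selects the default limit)::

  #include <linux/module.h>
  #include <linux/workqueue.h>

  static struct workqueue_struct *mydrv_wq;

  static int __init mydrv_init(void)
  {
          /* @name = "mydrv", @flags = 0, @max_active = 0 (default limit) */
          mydrv_wq = alloc_workqueue("mydrv", 0, 0);
          if (!mydrv_wq)
                  return -ENOMEM;
          return 0;
  }

  static void __exit mydrv_exit(void)
  {
          /* Drains remaining work items and releases the workqueue. */
          destroy_workqueue(mydrv_wq);
  }

  module_init(mydrv_init);
  module_exit(mydrv_exit);
  MODULE_LICENSE("GPL");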
``flags``
---------
``WQ_BH``
  BH workqueues can be considered a convenience interface to softirq. BH
  workqueues are always per-CPU and all BH work items are executed in the
  queueing CPU's softirq context in the queueing order.

  BH work items cannot sleep. All other features such as delayed queueing,
  flushing and canceling are supported.

``WQ_UNBOUND``
  Work items queued to an unbound wq are served by the special
  worker-pools which host workers which are not bound to any
  specific CPU. This makes the wq behave as a simple execution
  context provider without concurrency management. The unbound
  worker-pools try to start execution of work items as soon as
  possible.

``WQ_FREEZABLE``
  A freezable wq participates in the freeze phase of the system
  suspend operations. Work items on the wq are drained and no
  new work item starts execution until thawed.

``WQ_HIGHPRI``
  Work items of a highpri wq are queued to the highpri
  worker-pool of the target cpu. Highpri worker-pools are
  served by worker threads with elevated nice level.

  Note that normal and highpri worker-pools don't interact with
  each other. Each maintains its separate pool of workers and
  implements concurrency management among its workers.

``WQ_CPU_INTENSIVE``
  Work items of a CPU intensive wq do not contribute to the
  concurrency level. In other words, runnable CPU intensive
  work items will not prevent other work items in the same
  worker-pool from starting execution. This is useful for bound
  work items which are expected to hog CPU cycles so that their
  execution is regulated by the system scheduler.

  Although CPU intensive work items don't contribute to the
  concurrency level, start of their executions is still
  regulated by the concurrency management and runnable
  non-CPU-intensive work items can delay execution of CPU
  intensive work items.

  A brief allocation sketch using these flags follows this list.
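A rough sketch of how such flags are passed to ``alloc_workqueue()`` (all
names are hypothetical, error unwinding is omitted, and ``WQ_BH`` is only
available on kernels that provide BH workqueues)::

  #include <linux/workqueue.h>

  static struct workqueue_struct *crypt_wq, *io_wq, *bh_wq;

  static int mydrv_create_wqs(void)
  {
          /* Bound, high priority work that is expected to hog the CPU. */
          crypt_wq = alloc_workqueue("mydrv_crypt",
                                     WQ_HIGHPRI | WQ_CPU_INTENSIVE, 0);

          /* Drained during the freeze phase of system suspend. */
          io_wq = alloc_workqueue("mydrv_io", WQ_FREEZABLE, 0);

          /* BH (softirq) execution context; its work items must not sleep. */
          bh_wq = alloc_workqueue("mydrv_bh", WQ_BH, 0);

          if (!crypt_wq || !io_wq || !bh_wq)
                  return -ENOMEM;
          return 0;
  }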
``max_active``
--------------
``@max_active`` determines the maximum number of execution contexts per
CPU which can be assigned to the work items of a wq. For example, with
``@max_active`` of 16, at most 16 work items of the wq can be executing
at the same time per CPU. This is always a per-CPU attribute, even for
unbound workqueues.

The number of active work items of a wq is usually regulated by the
users of the wq, more specifically, by how many work items the users
may queue at the same time. Unless there is a specific need for
throttling the number of active work items, specifying '0' is
recommended.
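For illustration, a hypothetical wq that should never run more than 8 of its
work items concurrently on a CPU could be allocated as::

  static struct workqueue_struct *mydrv_wq;

  static int mydrv_setup(void)
  {
          /*
           * @max_active = 8: at most 8 work items of this wq can be
           * executing at the same time per CPU.  Passing 0 instead would
           * select the default limit.
           */
          mydrv_wq = alloc_workqueue("mydrv", 0, 8);
          return mydrv_wq ? 0 : -ENOMEM;
  }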
Some users depend on strict execution ordering where only one work item
is in flight at any given time and the work items are processed in
queueing order. Use ``alloc_ordered_workqueue()`` for such cases;
combining ``@max_active`` of 1 with ``WQ_UNBOUND`` no longer guarantees
this behavior.
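A minimal sketch of an ordered workqueue (hypothetical name)::

  static struct workqueue_struct *mydrv_ordered_wq;

  static int mydrv_ordered_setup(void)
  {
          /*
           * Executes at most one work item at any given time, in
           * queueing order.
           */
          mydrv_ordered_wq = alloc_ordered_workqueue("mydrv_ordered", 0);
          return mydrv_ordered_wq ? 0 : -ENOMEM;
  }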
As an example execution scenario, consider work items w0, w1 and w2 queued
to a bound wq q0 on the same CPU.
* Do not forget to use ``WQ_MEM_RECLAIM`` if a wq may process work
  items which are used during memory reclaim. Each wq with
  ``WQ_MEM_RECLAIM`` set has an execution context reserved for it. If
  there is dependency among multiple work items used during memory
  reclaim, they should be queued to separate wq each with
  ``WQ_MEM_RECLAIM`` (see the sketch after this list).

* A wq serves as a domain for forward progress guarantee
  (``WQ_MEM_RECLAIM``), flush and work item attributes. Work items which
  are not involved in memory reclaim, don't need to be flushed as a part
  of a group of work items, and don't require any special attribute can
  use one of the system wq. Note that anything which may generate more
  than ``@max_active`` outstanding work items (do stress test your
  producers) may saturate a system wq and potentially lead to deadlock;
  such users should use their own dedicated workqueue instead.

* Unless work items are expected to consume a huge amount of CPU
  cycles, using a bound wq is usually beneficial due to the increased
  level of locality in wq operations and work item execution.
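As a sketch of the ``WQ_MEM_RECLAIM`` guideline above (hypothetical names):
two work items that depend on each other during reclaim each get their own
rescuer-backed workqueue::

  static struct workqueue_struct *xfer_wq;
  static struct workqueue_struct *cleanup_wq;

  static int mydrv_reclaim_setup(void)
  {
          /*
           * Both wq's may run work items on the memory reclaim path and
           * the cleanup work depends on the transfer work.  Giving each
           * its own WQ_MEM_RECLAIM wq guarantees each a reserved
           * execution context (rescuer) so they cannot deadlock on each
           * other under memory pressure.
           */
          xfer_wq = alloc_workqueue("mydrv_xfer", WQ_MEM_RECLAIM, 0);
          cleanup_wq = alloc_workqueue("mydrv_cleanup", WQ_MEM_RECLAIM, 0);

          if (!xfer_wq || !cleanup_wq)
                  return -ENOMEM; /* error unwinding omitted for brevity */
          return 0;
  }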
An unbound workqueue groups CPUs according to its affinity scope. For
example, with the default affinity scope of "cache", CPUs are grouped
according to last level cache boundaries. A work item queued on the
workqueue will be assigned to a worker on one of the CPUs which share the
last level cache with the issuing CPU.
``cpu``
  CPUs are not grouped. A work item issued on one CPU is processed by a
  worker on the same CPU. This makes unbound workqueues behave as per-cpu
  workqueues without concurrency management.

``system``
  All CPUs are put in the same group. Workqueue makes no effort to process a
  work item on a CPU close to the issuing CPU.
``affinity_strict`` is 0 by default, indicating that affinity scopes are
not strict. When a work item starts execution, workqueue makes a
best-effort attempt to ensure that the worker is inside its affinity
scope, which is called repatriation. Once started, the scheduler is free
to move the worker anywhere in the system as it sees fit.
As is often the case with the kernel, there exists a pronounced trade-off
between locality and utilization, necessitating explicit configuration
when workqueues are heavily used.

Higher locality leads to higher efficiency where more work is performed for
the same number of consumed CPU cycles. However, higher locality may also
cause lower overall system utilization if the work items are not spread
enough across the affinity scopes by the issuers. The following performance
testing with dm-crypt clearly illustrates this trade-off.

The tests are run on a CPU with 12-cores/24-threads split across four L3
caches. ``/dev/dm-0`` is a dm-crypt device created on an NVME SSD (Samsung
990 PRO) and opened with ``cryptsetup`` with default settings.
Scenario 1: Enough issuers and work spread across the machine
--------------------------------------------------------------

The command used::

  $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k --ioengine=libaio \
    --iodepth=64 --runtime=60 --numjobs=24 --time_based --group_reporting \
    --name=iops-test-job --verify=sha512

There are 24 issuers, each issuing 64 IOs concurrently. ``--verify=sha512``
makes ``fio`` generate and read back the content each time, which makes
execution locality matter between the issuer and ``kcryptd``.
.. list-table::
   :header-rows: 1

   * - Affinity
     - Bandwidth (MiBps)
     - CPU util (%)

   * - system
     - 1159.40 ±1.34
     - 99.31 ±0.02

   * - cache
     - 1166.40 ±0.89
     - 99.34 ±0.01

   * - cache (strict)
     - 1166.00 ±0.71
     - 99.35 ±0.01
The three configurations all saturate the whole machine, but the
cache-affine ones outperform by 0.6% thanks to improved locality.
Scenario 2: Fewer issuers, enough work for saturation
------------------------------------------------------

The command used::

  $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k \
    --ioengine=libaio --iodepth=64 --runtime=60 --numjobs=8 \
    --time_based --group_reporting --name=iops-test-job --verify=sha512

The only difference from the previous scenario is ``--numjobs=8``. There are
only a third as many issuers, but there is still enough total work to
saturate the system.
.. list-table::
   :header-rows: 1

   * - Affinity
     - Bandwidth (MiBps)
     - CPU util (%)

   * - system
     - 1155.40 ±0.89
     - 97.41 ±0.05

   * - cache
     - 1154.40 ±1.14
     - 96.15 ±0.09

   * - cache (strict)
     - 1112.00 ±4.64
     - 93.26 ±0.35
This is more than enough work to saturate the system. Both "system" and
"cache" nearly saturate the machine, though not fully; "cache" uses less
CPU and its better efficiency puts it at roughly the same bandwidth as
"system".

Eight issuers moving around over four L3 cache scopes still allow "cache
(strict)" to mostly saturate the machine, but the loss of work conservation
is now starting to hurt, with a 3.7% bandwidth loss.
Scenario 3: Even fewer issuers, not enough work to saturate
------------------------------------------------------------

The command used::

  $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k \
    --ioengine=libaio --iodepth=64 --runtime=60 --numjobs=4 \
    --time_based --group_reporting --name=iops-test-job --verify=sha512

Again, the only difference is ``--numjobs=4``. With the number of issuers
reduced to four, there now isn't enough work to saturate the whole system
and the bandwidth becomes dependent on completion latencies.
.. list-table::
   :header-rows: 1

   * - Affinity
     - Bandwidth (MiBps)
     - CPU util (%)

   * - system
     - 993.60 ±1.82
     - 75.49 ±0.06

   * - cache
     - 973.40 ±1.52
     - 74.90 ±0.07

   * - cache (strict)
     - 828.20 ±4.49
     - 66.84 ±0.29
Conclusion and Recommendations
------------------------------

While the loss of work-conservation in certain scenarios hurts, it is a lot
better than "cache (strict)", and maximizing workqueue utilization is
unlikely to be the common case anyway. As such, "cache" is the default
affinity scope for unbound pools.

* An unbound workqueue with strict "cpu" affinity scope behaves the same as
  a ``WQ_CPU_INTENSIVE`` per-cpu workqueue. There is no real advantage to
  the latter, and an unbound workqueue provides a lot more flexibility.
* The loss of work-conservation in non-strict affinity scopes is likely
  something the scheduler can improve upon; there is no theoretical reason
  why the kernel could not provide both locality and work-conservation in
  most cases. As such, it is possible that future improvements will narrow
  or close this gap.
Use ``tools/workqueue/wq_dump.py`` to examine unbound CPU affinity
configuration, worker pools and how workqueues map to the pools::

  pod_node [0]=-1

  pool[01] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 0
  pool[03] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 1
  pool[05] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 2
  pool[07] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 3
  pool[11] ref= 1 nice=-20 idle/workers= 1/ 1 cpus=0000000f
  pool[12] ref= 2 nice=-20 idle/workers= 1/ 1 cpus=00000003
  pool[13] ref= 2 nice=-20 idle/workers= 1/ 1 cpus=0000000c

  Workqueue CPU -> pool
``tools/workqueue/wq_monitor.py`` can be used to monitor workqueue
operations; it periodically prints per-workqueue statistics such as the
following::

  events                    18545     0      6.1       0       5       -       -
  events_highpri                8     0      0.0       0       0       -       -
  events_long                   3     0      0.0       0       0       -       -
  events_unbound            38306     0      0.1       -       7       -       -
  events_freezable              0     0      0.0       0       0       -       -
  events_power_efficient    29598     0      0.2       0       0       -       -
  events_freezable_pwr_ef      10     0      0.0       0       0       -       -
  sock_diag_events              0     0      0.0       0       0       -       -

  events                    18548     0      6.1       0       5       -       -
  events_highpri                8     0      0.0       0       0       -       -
  events_long                   3     0      0.0       0       0       -       -
  events_unbound            38322     0      0.1       -       7       -       -
  events_freezable              0     0      0.0       0       0       -       -
  events_power_efficient    29603     0      0.2       0       0       -       -
  events_freezable_pwr_ef      10     0      0.0       0       0       -       -
  sock_diag_events              0     0      0.0       0       0       -       -
Because the work functions are executed by generic worker threads, a few
tricks are needed to shed light on misbehaving workqueue users.
If kworkers are going crazy (using too much cpu), there are two types of
possible problems:

	1. Something being scheduled in rapid succession
	2. A single work item that consumes lots of cpu cycles
If something is busy looping on work queueing, it would be dominating
the output and the offender can be determined with the work item
function.
The work item's function should be trivially visible in the stack
trace.
Non-reentrance Conditions
=========================

Workqueue guarantees that a work item cannot be re-entrant if the following
conditions hold after a work item gets queued:
1. The work function hasn't been changed.
2. No one queues the work item to another workqueue.
3. The work item hasn't been reinitiated.
In other words, if the above conditions hold, the work item is guaranteed to be
executed by at most one worker system-wide at any given time.
Note that requeuing the work item (to the same queue) in the self function
doesn't break these conditions, so it's safe to do. Otherwise, caution is
required when breaking the conditions inside a work function.
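For example, a self-requeueing poller stays within these conditions, so at
most one instance of its work function runs at any given time (a sketch; the
names and the ``more_to_do`` flag are hypothetical)::

  static struct workqueue_struct *poll_wq;
  static struct work_struct poll_work;
  static bool more_to_do;

  static void poll_fn(struct work_struct *work)
  {
          /* ... do one unit of polling work ... */

          /*
           * Requeueing to the same workqueue from within the work
           * function doesn't break the non-reentrance conditions above.
           */
          if (more_to_do)
                  queue_work(poll_wq, &poll_work);
  }

When one of the conditions does need to be broken, for example when moving a
pending work item to a different workqueue, ``cancel_work_sync()`` can be
used first to make sure no instance is still queued or running.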
.. kernel-doc:: include/linux/workqueue.h

.. kernel-doc:: kernel/workqueue.c