
In the original wq implementation, a multi threaded (MT) wq had one
worker thread per CPU and a single threaded (ST) wq had one worker
thread system-wide.  A single MT wq needed to keep around the same
number of workers as the number of CPUs.  The kernel grew a lot of MT
wq users over the years and, with the number of CPU cores continuously
rising, some systems saturated the default 32k PID space just booting up.

Although MT wq wasted a lot of resource, the level of concurrency
provided was unsatisfactory.  Each wq maintained its own separate
worker pool.  An MT wq could provide only one execution context per CPU
while an ST wq provided one for the whole system.  Work items had to
compete for those very limited execution contexts, leading to various
problems including proneness to deadlocks around the single execution
context.

Among other goals, the concurrency managed workqueue (cmwq)
reimplementation focuses on the following:

* Use per-CPU unified worker pools shared by all wq to provide a
  flexible level of concurrency on demand without wasting a lot of
  resource.

A work item is a simple struct that holds a pointer to the function
that is to be executed asynchronously.  Whenever a driver or subsystem
wants a function to be executed asynchronously, it sets up a work item
pointing to that function and queues that work item on a workqueue.
Special purpose threads, called [k]workers, execute the functions off
of the queue, one after the other, and these workers are managed in
worker-pools.

The cmwq design differentiates between the user-facing workqueues that
subsystems and drivers queue work items on and the backend mechanism
which manages worker-pools and processes the queued work items.

There are two worker-pools, one for normal work items and the other
for high priority ones, for each possible CPU, and some extra
worker-pools to serve work items queued on unbound workqueues - the
number of these backing pools is dynamic.

Each per-CPU BH worker pool contains only one pseudo worker which represents
the BH (softirq) execution context.  A BH workqueue can be considered a
convenience interface to softirq.

Subsystems and drivers can create and queue work items through special
workqueue API functions as they see fit.  They can influence some
aspects of the way the work items are executed by setting flags on the
workqueue they are putting the work item on.  These flags include
things like CPU locality, concurrency limits, priority and more.  To
get a detailed overview refer to the API description of
``alloc_workqueue()`` below.

When a work item is queued to a workqueue, the target worker-pool is
determined according to the queue parameters and workqueue attributes
and appended on the shared worklist of the worker-pool.  For example,
unless specifically overridden, a work item of a bound workqueue will
be queued on the worklist of either the normal or the highpri worker-pool
that is associated with the CPU the issuer is running on.
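
To make the queueing side concrete, below is a minimal sketch of a
driver embedding and queueing a work item.  The ``frob_*`` names are
made up for illustration; only the ``<linux/workqueue.h>`` calls are
real API. ::

  #include <linux/workqueue.h>

  struct frob_dev {
          struct work_struct refresh_work;
          /* ... driver state ... */
  };

  static void frob_refresh_fn(struct work_struct *work)
  {
          struct frob_dev *fd = container_of(work, struct frob_dev,
                                             refresh_work);

          /* runs asynchronously in process context on a worker thread */
          frob_do_refresh(fd);            /* hypothetical driver helper */
  }

  /* during device setup */
  INIT_WORK(&fd->refresh_work, frob_refresh_fn);

  /* issue; ends up on the normal worker-pool of the issuing CPU */
  queue_work(system_wq, &fd->refresh_work);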

Each worker-pool bound to an actual CPU implements concurrency
management by hooking into the scheduler.  The worker-pool is notified
whenever an active worker wakes up or sleeps and keeps track of the
number of currently runnable workers.  Generally, work items are
not expected to hog a CPU and consume many cycles, which means that
maintaining just enough concurrency to prevent work processing from
stalling should be optimal.  As long as there are one or more runnable
workers on the CPU, the worker-pool doesn't start execution of a new
work item, but, when the last running worker goes to sleep, it immediately
schedules a new worker so that the CPU doesn't sit idle while there
are pending work items.  This allows using a minimal number of workers
without losing execution bandwidth.

All work items which might be used on code paths that handle memory
reclaim are required to be queued on wq's that have a rescue-worker
reserved for execution under memory pressure.  Else it is possible that
the worker-pool deadlocks waiting for execution contexts to free up.

``alloc_workqueue()`` allocates a wq.  The original ``create_*workqueue()``
functions are deprecated and scheduled for removal.  ``alloc_workqueue()``
takes three arguments - ``@name``, ``@flags`` and ``@max_active``.
``@name`` is the name of the wq and is also used as the name of the
rescuer thread if there is one.
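
As a sketch of the allocation side (the name ``frob_wq`` is arbitrary),
a wq used on a memory reclaim path could be set up and torn down like
this: ::

  static struct workqueue_struct *frob_wq;

  /*
   * @name, @flags, @max_active.  WQ_MEM_RECLAIM reserves a rescuer
   * thread, also named "frob_wq", for forward progress under memory
   * pressure.  A @max_active of 0 selects the default limit.
   */
  frob_wq = alloc_workqueue("frob_wq", WQ_MEM_RECLAIM, 0);
  if (!frob_wq)
          return -ENOMEM;

  queue_work(frob_wq, &fd->refresh_work);

  /* teardown: drains remaining work items, then frees the wq */
  destroy_workqueue(frob_wq);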

``flags``
---------

``WQ_BH``
  BH workqueues can be considered a convenience interface to softirq.  BH
  workqueues are always per-CPU and all BH work items are executed in the
  queueing CPU's softirq context in the queueing order.  They must be
  allocated with a ``@max_active`` of 0, and BH work items cannot sleep.

``WQ_UNBOUND``
  Work items queued to an unbound wq are served by the special
  worker-pools which host workers which are not bound to any
  specific CPU.  This makes the wq behave as a simple execution
  context provider without concurrency management.  The unbound
  worker-pools try to start execution of work items as soon as
  possible.  Unbound wq sacrifices locality but is useful for
  the following cases.

  * Wide fluctuation in the concurrency level requirement is
    expected and using a bound wq may end up creating a large
    number of mostly unused workers across different CPUs as the
    issuer hops through different CPUs.

  * Long running CPU intensive workloads which can be better
    managed by the system scheduler.

``WQ_HIGHPRI``
  Work items of a highpri wq are queued to the highpri
  worker-pool of the target cpu.  Highpri worker-pools are
  served by worker threads with elevated nice level.

  Note that normal and highpri worker-pools don't interact with
  each other.  Each maintains its separate pool of workers and
  implements concurrency management among its workers.

``WQ_CPU_INTENSIVE``
  Work items of a CPU intensive wq do not contribute to the
  concurrency level.  In other words, runnable CPU intensive
  work items will not prevent other work items in the same
  worker-pool from starting execution.  This is useful for bound
  work items which are expected to hog CPU cycles so that their
  execution is regulated by the system scheduler.

  Although CPU intensive work items don't contribute to the
  concurrency level, start of their execution is still
  regulated by the concurrency management and runnable
  non-CPU-intensive work items can delay execution of CPU
  intensive work items.

  This flag is meaningless for unbound wq.

``max_active``
--------------

``@max_active`` determines the maximum number of execution contexts per
CPU which can be assigned to the work items of a wq.  For example, with
``@max_active`` of 16, at most 16 work items of the wq can be executing
at the same time per CPU.  This is always a per-CPU attribute, even for
unbound workqueues.
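
For illustration, a rough sketch of how ``@flags`` and ``@max_active``
combine; the workqueue names below are made up and the flags are the
ones described above: ::

  /* unbound, high priority, at most 8 executing work items per CPU */
  wq = alloc_workqueue("frob_unbound", WQ_UNBOUND | WQ_HIGHPRI, 8);

  /* bound wq whose work items may legitimately burn CPU for long
   * stretches; their execution is left to the system scheduler */
  wq = alloc_workqueue("frob_crunch", WQ_CPU_INTENSIVE, 0);

  /* BH wq executing in softirq context; @max_active must be 0 */
  wq = alloc_workqueue("frob_bh", WQ_BH, 0);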

The following example execution scenarios try to illustrate how cmwq
behaves under different configurations.

 Work items w0, w1, w2 are queued to a bound wq q0 on the same CPU.
 w0 burns CPU for 5ms then sleeps for 10ms then burns CPU for 5ms
 again before finishing.  w1 and w2 burn CPU for 5ms then sleep for
 10ms each.

Ignoring all other tasks, work items and processing overhead, and assuming
simple FIFO execution, the following is one highly simplified version of
possible sequences of events with the original wq. ::

 TIME IN MSECS  EVENT
 0              w0 starts and burns CPU
 5              w0 sleeps
 15             w0 wakes up and burns CPU
 20             w0 finishes
 20             w1 starts and burns CPU
 25             w1 sleeps
 35             w1 wakes up and finishes
 35             w2 starts and burns CPU
 40             w2 sleeps
 50             w2 wakes up and finishes

And with cmwq with ``@max_active`` >= 3, ::

 TIME IN MSECS  EVENT
 0              w0 starts and burns CPU
 5              w0 sleeps
 5              w1 starts and burns CPU
 10             w1 sleeps
 10             w2 starts and burns CPU
 15             w2 sleeps
 15             w0 wakes up and burns CPU
 20             w0 finishes
 20             w1 wakes up and finishes
 25             w2 wakes up and finishes

If ``@max_active`` == 2, ::

 TIME IN MSECS  EVENT
 0              w0 starts and burns CPU
 5              w0 sleeps
 5              w1 starts and burns CPU
 10             w1 sleeps
 15             w0 wakes up and burns CPU
 20             w0 finishes
 20             w1 wakes up and finishes
 20             w2 starts and burns CPU
 25             w2 sleeps
 35             w2 wakes up and finishes

Now, let's assume w1 and w2 are queued to a different wq q1 which has
``WQ_CPU_INTENSIVE`` set, ::

 TIME IN MSECS  EVENT
 0              w0 starts and burns CPU
 5              w0 sleeps
 5              w1 and w2 start and burn CPU
 10             w1 sleeps
 15             w2 sleeps
 15             w0 wakes up and burns CPU
 20             w0 finishes
 20             w1 wakes up and finishes
 25             w2 wakes up and finishes

* Unless work items are expected to consume a huge amount of CPU
  cycles, using a bound wq is usually beneficial due to the increased
  level of locality in wq operations and work item execution.

An unbound workqueue groups CPUs according to its affinity scope to
improve cache locality.  For example, if a workqueue is using the default
affinity scope of "cache", it will group CPUs according to last level
cache boundaries and expect all the issued work items to be processed by
a worker running on one of the CPUs which share the last level cache with
the issuing CPU.

Workqueue currently supports the following affinity scopes.

``cpu``
  CPUs are not grouped.  A work item issued on one CPU is processed by a
  worker on the same CPU.  This makes unbound workqueues behave as per-cpu
  workqueues without concurrency management.

``smt``
  CPUs are grouped according to SMT boundaries, meaning that the
  logical threads of each physical CPU core are grouped together.

``cache``
  CPUs are grouped according to cache boundaries.  Which specific cache
  boundary is used is determined by the arch code.  L3 is used in a lot
  of cases.  This is the default affinity scope.

``numa``
  CPUs are grouped according to NUMA boundaries.

``system``
  All CPUs are put in the same group.  Workqueue makes no effort to process
  a work item on a CPU close to the issuing CPU.

The default affinity scope can be changed with the module parameter
``workqueue.default_affinity_scope``, and a workqueue created with
``WQ_SYSFS`` exposes ``affinity_scope`` and ``affinity_strict`` files
under ``/sys/devices/virtual/workqueue/WQ_NAME/``.

An affinity scope can be strict or non-strict.  In the default non-strict
mode, when a work item starts execution, workqueue makes a best-effort
attempt to ensure that the worker is inside its affinity scope, which is
called repatriation.  Once started, the worker is free to move outside
the scope if the scheduler sees fit.  In strict mode, all workers of the
scope are guaranteed always to be inside the scope.
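
A small sketch tying this together (the workqueue name is made up): a
workqueue that opts into ``WQ_SYSFS`` gets the knobs described above,
so its scope can be tuned after boot: ::

  /* unbound wq whose affinity_scope / affinity_strict can then be
   * adjusted from userspace under
   * /sys/devices/virtual/workqueue/frob_unbound/ */
  wq = alloc_workqueue("frob_unbound", WQ_UNBOUND | WQ_SYSFS, 0);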

It would be ideal if an unbound workqueue's behavior were optimal for the
vast majority of use cases without further tuning.  Unfortunately, in the
current kernel, there exists a pronounced trade-off between locality and
utilization, necessitating explicit configuration when workqueues are
heavily used.

Higher locality leads to higher efficiency where more work is performed
for the same number of consumed CPU cycles.  However, higher locality may
also cause lower overall system utilization if the work items are not
spread enough across the affinity scopes by the issuers.  The following
performance testing with dm-crypt clearly illustrates this trade-off.

The tests are run on a CPU with 12-cores/24-threads split across four L3
caches (AMD Ryzen 9 3900x).  CPU clock boost is turned off for
consistency.  ``/dev/dm-0`` is a dm-crypt device created on an NVMe SSD
(Samsung 990 PRO) and opened with ``cryptsetup`` with default settings.

Scenario 1: Enough issuers and work spread across the machine
--------------------------------------------------------------

The command used: ::

  $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k --ioengine=libaio \
    --iodepth=64 --runtime=60 --numjobs=24 --time_based --group_reporting \
    --name=iops-test-job --verify=sha512

There are 24 issuers, each issuing 64 IOs concurrently.  ``--verify=sha512``
makes ``fio`` generate and read back the content each time, which makes
execution locality matter between the issuer and ``kcryptd``.  The following
are the read bandwidths and CPU utilizations depending on different affinity
scope settings on ``kcryptd``, measured over five runs.  Bandwidths are in
MiBps, and CPU util in percents.

.. list-table::
   :header-rows: 1

   * - Affinity
     - Bandwidth (MiBps)
     - CPU util (%)
   * - system
     - 1159.40 ±1.34
     - 99.31 ±0.02
   * - cache
     - 1166.40 ±0.89
     - 99.34 ±0.01
   * - cache (strict)
     - 1166.00 ±0.71
     - 99.35 ±0.01

The three affinity scope settings show roughly the same performance across
the whole machine, but the cache-affine ones outperform by 0.6% thanks to
improved locality.

500 Scenario 2: Fewer issuers, enough work for saturation
501 -----------------------------------------------------

The command used: ::

  $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k \
    --ioengine=libaio --iodepth=64 --runtime=60 --numjobs=8 \
    --time_based --group_reporting --name=iops-test-job --verify=sha512

The only difference from the previous scenario is ``--numjobs=8``.  There
are a third of the issuers but still enough total work to saturate the
system.

.. list-table::
   :header-rows: 1

   * - Affinity
     - Bandwidth (MiBps)
     - CPU util (%)
   * - system
     - 1155.40 ±0.89
     - 97.41 ±0.05
   * - cache
     - 1154.40 ±1.14
     - 96.15 ±0.09
   * - cache (strict)
     - 1112.00 ±4.64
     - 93.26 ±0.35

Both "system" and "cache" are nearly saturating the machine.  "cache" uses
less CPU but the better efficiency puts it at the same bandwidth as
"system".  Eight issuers moving around over four L3 cache scopes still
allow "cache (strict)" to mostly saturate the machine, but the loss of
work conservation is now starting to hurt with a 3.7% bandwidth loss.

Scenario 3: Even fewer issuers, not enough work to saturate
------------------------------------------------------------

The command used: ::

  $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k \
    --ioengine=libaio --iodepth=64 --runtime=60 --numjobs=4 \
    --time_based --group_reporting --name=iops-test-job --verify=sha512

Again, the only difference is ``--numjobs=4``.  With the number of issuers
reduced to four, there now isn't enough work to saturate the whole system
and the bandwidth becomes dependent on completion latencies.

.. list-table::
   :header-rows: 1

   * - Affinity
     - Bandwidth (MiBps)
     - CPU util (%)
   * - system
     - 993.60 ±1.82
     - 75.49 ±0.06
   * - cache
     - 973.40 ±1.52
     - 74.90 ±0.07
   * - cache (strict)
     - 828.20 ±4.49
     - 66.84 ±0.29

Here "cache" shows a 2% bandwidth loss compared to "system", and
"cache (strict)" a whopping 20% loss.

Conclusion and Recommendations
------------------------------

While the loss of work-conservation in certain scenarios hurts, it is a lot
better than "cache (strict)", and maximizing workqueue utilization is
unlikely to be the common case anyway.  As such, "cache" is the default
affinity scope for unbound pools.

* As there is no one option which is great for most cases, workqueue usages
  that may consume a significant amount of CPU are recommended to configure
  the workqueues using ``apply_workqueue_attrs()`` and/or enable
  ``WQ_SYSFS``.

* An unbound workqueue with strict "cpu" affinity scope behaves the same as
  a ``WQ_CPU_INTENSIVE`` per-cpu workqueue.  There is no real advantage to
  the latter and an unbound workqueue provides a lot more flexibility.

* The loss of work-conservation in non-strict affinity scopes is likely
  originating from the scheduler.  There is no theoretical reason why the
  kernel wouldn't be able to do the right thing and maintain
  work-conservation in most cases.  As such, it is possible that future
  scheduler improvements may make most of these tunables unnecessary.

Use tools/workqueue/wq_dump.py to examine unbound CPU affinity
configuration, worker pools and how workqueues map to the pools: ::

  $ tools/workqueue/wq_dump.py
  Affinity Scopes
  ===============

  CPU
    nr_pods  4
    pod_cpus [0]=00000001 [1]=00000002 [2]=00000004 [3]=00000008
    pod_node [0]=0 [1]=0 [2]=1 [3]=1
    cpu_pod  [0]=0 [1]=1 [2]=2 [3]=3

  SMT
    nr_pods  4
    pod_cpus [0]=00000001 [1]=00000002 [2]=00000004 [3]=00000008
    pod_node [0]=0 [1]=0 [2]=1 [3]=1
    cpu_pod  [0]=0 [1]=1 [2]=2 [3]=3

  CACHE
    nr_pods  2
    cpu_pod  [0]=0 [1]=0 [2]=1 [3]=1

  NUMA
    nr_pods  2
    cpu_pod  [0]=0 [1]=0 [2]=1 [3]=1

  SYSTEM
    nr_pods  1
    pod_node [0]=-1
    cpu_pod  [0]=0 [1]=0 [2]=0 [3]=0

  Worker Pools
  ============
  pool[00] ref= 1 nice=  0 idle/workers=  4/  4 cpu= 0
  pool[01] ref= 1 nice=-20 idle/workers=  2/  2 cpu= 0
  pool[02] ref= 1 nice=  0 idle/workers=  4/  4 cpu= 1
  pool[03] ref= 1 nice=-20 idle/workers=  2/  2 cpu= 1
  pool[04] ref= 1 nice=  0 idle/workers=  4/  4 cpu= 2
  pool[05] ref= 1 nice=-20 idle/workers=  2/  2 cpu= 2
  pool[06] ref= 1 nice=  0 idle/workers=  3/  3 cpu= 3
  pool[07] ref= 1 nice=-20 idle/workers=  2/  2 cpu= 3
  ...
  pool[11] ref= 1 nice=-20 idle/workers=  1/  1 cpus=0000000f
  pool[12] ref= 2 nice=-20 idle/workers=  1/  1 cpus=00000003
  pool[13] ref= 2 nice=-20 idle/workers=  1/  1 cpus=0000000c

  Workqueue CPU -> pool
  =====================
  [    workqueue \ CPU          0  1  2  3 dfl]
  events                  percpu  0  2  4  6
  events_long             percpu  0  2  4  6
  events_freezable        percpu  0  2  4  6
  events_power_efficient  percpu  0  2  4  6
  events_freezable_pwr_ef percpu  0  2  4  6
  rcu_gp                  percpu  0  2  4  6
  rcu_par_gp              percpu  0  2  4  6
  slub_flushwq            percpu  0  2  4  6
  ...

Use tools/workqueue/wq_monitor.py to monitor workqueue operations: ::

  $ tools/workqueue/wq_monitor.py events
                              total  infl  CPUtime  CPUitsv  CMW/RPR  mayday  rescued
  events                      18545     0      6.1        0        5       -        -
  events_highpri                  8     0      0.0        0        0       -        -
  events_long                     3     0      0.0        0        0       -        -
  events_unbound              38306     0      0.1        -        7       -        -
  events_freezable                0     0      0.0        0        0       -        -
  events_power_efficient      29598     0      0.2        0        0       -        -
  events_freezable_pwr_ef        10     0      0.0        0        0       -        -
  sock_diag_events                0     0      0.0        0        0       -        -

                              total  infl  CPUtime  CPUitsv  CMW/RPR  mayday  rescued
  events                      18548     0      6.1        0        5       -        -
  events_highpri                  8     0      0.0        0        0       -        -
  events_long                     3     0      0.0        0        0       -        -
  events_unbound              38322     0      0.1        -        7       -        -
  events_freezable                0     0      0.0        0        0       -        -
  events_power_efficient      29603     0      0.2        0        0       -        -
  events_freezable_pwr_ef        10     0      0.0        0        0       -        -
  sock_diag_events                0     0      0.0        0        0       -        -

  ...

Because the work functions are executed by generic worker threads, there
are a few tricks needed to shed some light on misbehaving workqueue users.

Worker threads show up in the process list as, for example: ::

  root      5672  0.0  0.0      0     0 ?        S    12:07   0:00 [kworker/1:2]

If kworkers are going crazy (using too much CPU), there are two types
of possible problems:

  1. Something being scheduled in rapid succession
  2. A single work item that consumes lots of CPU cycles

The first can be tracked down with the ``workqueue:workqueue_queue_work``
tracepoint, which shows which work function is being queued repeatedly.
For the second, checking the stack of the offending kworker
(``/proc/<PID>/stack``) usually points at the culprit.

Non-reentrance Conditions
-------------------------

Workqueue guarantees that a work item cannot be re-entrant if the following
conditions hold after a work item gets queued:

  1. The work function hasn't been changed since the last queueing.
  2. No one queues the work item to another workqueue.
  3. The work item hasn't been reinitiated.

In other words, if the above conditions hold, the work item is guaranteed
to be executed by at most one worker system-wide at any given time.
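
As an illustrative sketch (the ``frob_*`` names are made up), these
conditions mean a work item may requeue itself from its own work
function without ever running on two workers at once: ::

  static void frob_scan_fn(struct work_struct *work)
  {
          struct frob_dev *fd = container_of(work, struct frob_dev,
                                             scan_work);

          /*
           * Requeueing here (or from an interrupt firing meanwhile) only
           * marks the item pending again; as long as the conditions above
           * hold, no second worker runs this function concurrently with
           * the current invocation.
           */
          if (frob_more_to_scan(fd))      /* hypothetical driver helper */
                  queue_work(system_wq, &fd->scan_work);
  }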

.. kernel-doc:: include/linux/workqueue.h

.. kernel-doc:: kernel/workqueue.c