When such an asynchronous execution context is needed, a work item
describing which function to execute is put on a queue. An
independent thread serves as the asynchronous execution context. The
queue is called workqueue and the thread is called worker.

While there are work items on the workqueue the worker executes the
functions associated with the work items one after the other. When
there is no work item left on the workqueue the worker becomes idle.
When a new work item gets queued, the worker begins executing again.
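As a minimal sketch of this flow (the function and work item names are
hypothetical, not part of this document), a work item can be declared
statically and queued on the system workqueue::

	#include <linux/workqueue.h>
	#include <linux/printk.h>

	/* the worker invokes this function asynchronously */
	static void hello_fn(struct work_struct *work)
	{
		pr_info("hello from a workqueue worker\n");
	}

	static DECLARE_WORK(hello_work, hello_fn);

	static void kick_hello(void)
	{
		/* put the work item on the system workqueue */
		schedule_work(&hello_work);
	}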
In the original wq implementation, a multi-threaded (MT) wq had one
worker thread per CPU and a single-threaded (ST) wq had one worker
thread system-wide. A single MT wq needed to keep around the same
number of workers as the number of CPUs. The kernel grew a lot of MT
wq users over the years and, with the number of CPU cores continuously
rising, some systems saturated the default 32k PID space just booting
up.

Although MT wq wasted a lot of resources, the level of concurrency
provided was unsatisfactory.
Concurrency Managed Workqueue (cmwq) is a reimplementation of wq with
focus on the following goals: maintain compatibility with the original
workqueue API, use per-CPU unified worker pools shared by all wq to
provide a flexible level of concurrency on demand without wasting a lot
of resources, and automatically regulate the worker pool and level of
concurrency so that the API users don't need to worry about such
details.
In order to ease the asynchronous execution of functions a new
abstraction, the work item, is introduced.
A work item is a simple struct that holds a pointer to the function
that is to be executed asynchronously. Whenever a driver or subsystem
wants a function to be executed asynchronously it has to set up a work
item pointing to that function and queue that work item on a
workqueue.

A work item can be executed in either a thread or the BH (softirq) context.
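For instance (a sketch; ``struct my_device`` and its members are
hypothetical), the work item is typically embedded in a larger per-object
structure and recovered in the work function with ``container_of()``::

	#include <linux/workqueue.h>

	struct my_device {
		struct work_struct event_work;	/* the work item */
		int pending;			/* data the work consumes */
	};

	static void my_event_fn(struct work_struct *work)
	{
		struct my_device *dev = container_of(work, struct my_device,
						     event_work);

		/* process dev->pending asynchronously */
	}

	static void my_device_setup(struct my_device *dev)
	{
		INIT_WORK(&dev->event_work, my_event_fn);
	}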
BH workqueues use the same framework as threaded workqueues, but their
work items are executed in the BH execution context. A BH workqueue can
be considered a convenience interface to softirq.

To get a detailed overview, refer to the API description of
``alloc_workqueue()`` below.
When a work item is queued to a workqueue, the target worker-pool is
determined according to the queue parameters and workqueue attributes
and appended on the shared worklist of the worker-pool. For example,
unless specifically overridden, a work item of a bound workqueue will
be queued on the worklist of either the normal or the highpri
worker-pool that is associated with the CPU the issuer is running on.

For any thread pool implementation, managing the concurrency level
(how many execution contexts are active) is an important issue. cmwq
tries to keep the concurrency at a minimal but sufficient level:
minimal to save resources and sufficient so that the system is used at
its full capacity.
Generally, work items are
not expected to hog a CPU and consume many cycles. That means
maintaining just enough concurrency to prevent work processing from
stalling should be optimal. As long as there are one or more runnable
workers on the CPU, the worker-pool doesn't start execution of a new
work item; however, when the last running worker goes to sleep, it
immediately
schedules a new worker so that the CPU doesn't sit idle while there
are pending work items. This allows using a minimal number of workers
without losing execution bandwidth.

Keeping idle workers around doesn't cost anything other than the memory
space
for kthreads, so cmwq holds onto idle ones for a while before killing
them.
For unbound workqueues, the responsibility of
regulating the concurrency level is on the users. There is also a flag to
mark a bound wq to ignore the concurrency management; please refer to
the API section for details.
All work items which might be used on code paths that handle memory
reclaim are required to be queued on
wq's that have a rescue-worker reserved for execution under memory
pressure. Otherwise, it is possible that the worker-pool deadlocks
waiting for execution contexts to free up.
``alloc_workqueue()`` allocates a wq. The original
``create_*workqueue()`` functions are deprecated and scheduled for
removal. ``alloc_workqueue()`` takes three arguments: ``@name``,
``@flags`` and ``@max_active``.

A wq no longer manages execution resources but serves as a domain for
forward progress guarantee, flush and work item attributes.
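As a sketch of the conversion away from the deprecated interfaces (names
hypothetical), an ordered, single-context wq that used to be created with
``create_singlethread_workqueue()`` can be allocated as::

	static struct workqueue_struct *my_wq;

	static int __init my_init(void)
	{
		/* ordered: one work item executing at a time, with a rescuer */
		my_wq = alloc_ordered_workqueue("my_ordered", WQ_MEM_RECLAIM);
		if (!my_wq)
			return -ENOMEM;
		return 0;
	}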
BH workqueues can be considered a convenience interface to softirq. BH
workqueues are always per-CPU and all BH work items are executed in the
queueing CPU's softirq context in the queueing order.
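A sketch, assuming a kernel recent enough to have BH workqueue support
(the names are hypothetical)::

	#include <linux/workqueue.h>

	static void bh_fn(struct work_struct *work)
	{
		/* runs in softirq context on the CPU that queued it */
	}
	static DECLARE_WORK(bh_work, bh_fn);
	static struct workqueue_struct *bh_wq;

	static int __init bh_example_init(void)
	{
		bh_wq = alloc_workqueue("my_bh", WQ_BH, 0);
		if (!bh_wq)
			return -ENOMEM;
		queue_work(bh_wq, &bh_work);
		return 0;
	}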
``WQ_UNBOUND``
  Work items queued to an unbound wq are served by worker-pools whose
  workers are not bound to any
  specific CPU. This makes the wq behave as a simple execution
  context provider without concurrency management.

``WQ_FREEZABLE``
  A freezable wq participates in the freeze phase of the system
  suspend operations. Work items on the wq are drained and no new
  work item starts execution until thawed.

``WQ_HIGHPRI``
  Work items of a highpri wq are queued to the highpri
  worker-pool of the target cpu. Highpri worker-pools are served by
  worker threads with elevated nice level.

``WQ_CPU_INTENSIVE``
  Work items of a CPU intensive wq do not contribute to the
  concurrency level. In other words, runnable CPU intensive work
  items will not prevent other work items in the same worker-pool
  from starting execution.
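A sketch combining the flags above (workqueue names hypothetical, error
handling omitted); which flags are right depends entirely on the
workload::

	static struct workqueue_struct *fs_wq, *urgent_wq, *crunch_wq;

	/* freezable and usable during memory reclaim */
	fs_wq = alloc_workqueue("my_fs", WQ_FREEZABLE | WQ_MEM_RECLAIM, 0);

	/* served by the highpri (elevated nice) worker-pools */
	urgent_wq = alloc_workqueue("my_urgent", WQ_HIGHPRI, 0);

	/* bound, but exempt from concurrency management */
	crunch_wq = alloc_workqueue("my_crunch", WQ_CPU_INTENSIVE, 0);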
``@max_active`` determines the maximum number of execution contexts per
CPU which can be assigned to the work items of a wq. For example, with
``@max_active`` of 16, at most 16 work items of the wq can be executing
at the same time per CPU. This is always a per-CPU attribute, even for
unbound workqueues.

The number of active work items of a wq is usually regulated by the
users of the wq, more specifically, by how many work items the users
may queue at the same time. Unless there is a specific need for
throttling the number of active work items, specifying '0' is
recommended.
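For example (a sketch; the names and the limit are illustrative)::

	/* at most 16 work items executing at the same time per CPU */
	wq = alloc_workqueue("my_limited", WQ_UNBOUND, 16);

	/* 0 selects the default limit, which is sufficient for most users */
	wq = alloc_workqueue("my_default", WQ_UNBOUND, 0);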
The following example execution scenarios try to illustrate how cmwq
behaves under different configurations.

 Work items w0, w1, w2 are queued to a bound wq q0 on the same CPU.
 w0 burns CPU for 5ms then sleeps for 10ms then burns CPU for 5ms
 again before finishing. w1 and w2 burn CPU for 5ms then sleep for
 10ms.

Ignoring all other tasks, works and processing overhead, and assuming
simple FIFO scheduling, the following is one highly simplified version
of possible sequences of events with the original wq. ::

 TIME IN MSECS	EVENT
 0		w0 starts and burns CPU
 5		w0 sleeps
 15		w0 wakes up and burns CPU
 20		w0 finishes
 20		w1 starts and burns CPU
 25		w1 sleeps
 35		w1 wakes up and finishes
 35		w2 starts and burns CPU
 40		w2 sleeps
 50		w2 wakes up and finishes

And with cmwq with ``@max_active`` >= 3, ::

 TIME IN MSECS	EVENT
 0		w0 starts and burns CPU
 5		w0 sleeps
 5		w1 starts and burns CPU
 10		w1 sleeps
 10		w2 starts and burns CPU
 15		w2 sleeps
 15		w0 wakes up and burns CPU
 20		w0 finishes
 20		w1 wakes up and finishes
 25		w2 wakes up and finishes

If ``@max_active`` == 2, ::

 TIME IN MSECS	EVENT
 0		w0 starts and burns CPU
 5		w0 sleeps
 5		w1 starts and burns CPU
 10		w1 sleeps
 15		w0 wakes up and burns CPU
 20		w0 finishes
 20		w1 wakes up and finishes
 20		w2 starts and burns CPU
 25		w2 sleeps
 35		w2 wakes up and finishes

Now, let's assume w1 and w2 are queued to a different wq q1 which has
``WQ_CPU_INTENSIVE`` set, ::

 TIME IN MSECS	EVENT
 0		w0 starts and burns CPU
 5		w0 sleeps
 5		w1 and w2 start and burn CPU
 10		w1 sleeps
 15		w2 sleeps
 15		w0 wakes up and burns CPU
 20		w0 finishes
 20		w1 wakes up and finishes
 25		w2 wakes up and finishes
* Do not forget to use ``WQ_MEM_RECLAIM`` if a wq may process work
  items which are used during memory reclaim (see the sketch after this
  list). Each wq with ``WQ_MEM_RECLAIM`` set has an execution context
  reserved for it. If there is dependency among multiple work items
  used during memory reclaim, they should be queued to separate wq's,
  each with ``WQ_MEM_RECLAIM``.

* Unless there is a specific need, using 0 for @max_active is
  recommended. In most use cases, the concurrency level usually stays
  well under the default limit.

* A wq serves as a domain for forward progress guarantee
  (``WQ_MEM_RECLAIM``), flush and work item attributes. Work items
  which are not involved in memory reclaim, don't need to be
  flushed as a part of a group of work items, and don't require any
  special attribute can use one of the system wq's. There is no
  difference in execution characteristics between using a dedicated wq
  and a system wq.

  Note: if something may generate more than @max_active outstanding
  work items (do stress test your producers), it may saturate a system
  wq and potentially lead to deadlock. It should utilize its own
  dedicated workqueue instead.

* Unless work items are expected to consume a huge amount of CPU
  cycles, using a bound wq is usually beneficial due to the increased
  level of locality in wq operations and work item execution.
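As referenced in the first guideline above, a sketch of reclaim-safe
allocations (names hypothetical): each ``WQ_MEM_RECLAIM`` wq gets its own
rescuer, so work items that depend on each other during reclaim must not
share one wq::

	/* queued from the I/O completion path during memory reclaim */
	io_wq = alloc_workqueue("my_io", WQ_MEM_RECLAIM, 0);

	/* dependent reclaim-time work goes on a separate rescuer-backed wq */
	flush_wq = alloc_workqueue("my_flush", WQ_MEM_RECLAIM, 0);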
An unbound workqueue groups CPUs according to its affinity scope to improve
cache locality. For example, if a workqueue is using the default affinity
scope of "cache", it will group CPUs according to last-level cache
boundaries. A work item queued on the workqueue will be assigned to a worker
on one of the CPUs which share the last-level cache with the issuing CPU.

Workqueue currently supports the following affinity scopes, among others:

``cpu``
  CPUs are not grouped. A work item issued on one CPU is processed by a
  worker on the same CPU. This makes unbound workqueues behave as per-cpu
  workqueues without concurrency management.

``cache``
  CPUs are grouped according to cache boundaries. Which specific cache
  boundary is used is determined by the arch code. L3 is used in a lot of
  cases. This is the default affinity scope.

``system``
  All CPUs are put in the same group. Workqueue makes no effort to process a
  work item on a CPU close to the issuing CPU.
The default affinity scope can be changed with the module parameter
``workqueue.default_affinity_scope`` and a specific workqueue's affinity
scope can be changed using ``apply_workqueue_attrs()``.
``affinity_strict`` is
0 by default, indicating that affinity scopes are not strict. When a work
item starts execution, workqueue makes a best-effort attempt to ensure
that the worker is inside its affinity scope. Once started, the scheduler
is free to move the worker anywhere in the system. If set to 1, all workers
of the scope are guaranteed always to be inside the scope.
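For example (a sketch, assuming a workqueue allocated with ``WQ_SYSFS``
so that it is visible under ``/sys/devices/virtual/workqueue/``)::

	# boot-time default for all unbound workqueues
	workqueue.default_affinity_scope=numa

	# per-workqueue, at runtime
	$ echo cpu > /sys/devices/virtual/workqueue/WQ_NAME/affinity_scope
	$ echo 1 > /sys/devices/virtual/workqueue/WQ_NAME/affinity_strict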
In the current
kernel, there exists a pronounced trade-off between locality and utilization,
necessitating explicit configuration when workqueues are heavily used.
The tests are run on a CPU with 12-cores/24-threads split across four L3
caches (AMD Ryzen 9 3900x).
``/dev/dm-0`` is a dm-crypt device created on an NVME SSD (Samsung 990 PRO)
and opened with ``cryptsetup`` with default settings.
This scenario has
a third of the issuers but is still enough total work to saturate the
system.
Here, "cache" shows a
2% bandwidth loss compared to "system", and "cache (strict)" a whopping 20%.
While the loss of work-conservation in certain scenarios hurts, it is a lot
better than the across-the-board locality loss of the "system" scope, and
unbound workqueues
that may consume a significant amount of CPU are recommended to configure
an appropriate affinity scope for their workload.
An unbound workqueue with strict "cpu" affinity scope behaves the same as a
``WQ_CPU_INTENSIVE`` per-cpu workqueue. There are no real advantages to the
latter and an unbound workqueue provides a lot more flexibility.
The resulting worker-pool configuration can be examined with
``tools/workqueue/wq_dump.py``. Abridged output, showing only the highpri
(nice=-20) pools::

 pool[01] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 0
 pool[03] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 1
 pool[05] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 2
 pool[07] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 3
 ...
 pool[11] ref= 1 nice=-20 idle/workers= 1/ 1 cpus=0000000f
 pool[12] ref= 2 nice=-20 idle/workers= 1/ 1 cpus=00000003
 pool[13] ref= 2 nice=-20 idle/workers= 1/ 1 cpus=0000000c
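Both the dump above and ongoing operation statistics come from in-tree
tools; the workqueue name passed to the monitor below is just an
example::

	$ tools/workqueue/wq_dump.py              # affinity scopes and worker pools
	$ tools/workqueue/wq_monitor.py events    # periodic stats for the "events" wq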
Because the work functions are executed by generic worker threads,
there are a few tricks needed to shed some light on misbehaving
workqueue users.
If kworkers are going crazy (using too much cpu), there are two types
of possible problems:

1. Something being scheduled in rapid succession
2. A single work item that consumes lots of cpu cycles

The first one can be tracked using tracing::

	$ echo workqueue:workqueue_queue_work > /sys/kernel/tracing/set_event
	$ cat /sys/kernel/tracing/trace_pipe > out.txt
	(wait a few secs)
	^C

If something is busy looping on work queueing, it would be dominating
the output and the offender can be determined with the work item
function.
Workqueue guarantees that a work item cannot be re-entrant if the following
conditions hold after a work item gets queued:

1. The work function hasn't been changed.
2. No one queues the work item to another workqueue.
3. The work item hasn't been reinitiated.

In other words, if the above conditions hold, the work item is
guaranteed to be executed by at most one worker system-wide at any
given time.

Note that requeuing the work item (to the same queue) in the self
function doesn't break these conditions, so it's safe to do. Otherwise,
caution is
required when breaking the conditions inside a work function.
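A sketch of the safe self-requeue pattern mentioned above (``struct
my_device``, ``my_device_done()`` and ``poll_wq`` are hypothetical)::

	static void poll_fn(struct work_struct *work)
	{
		struct my_device *dev = container_of(work, struct my_device,
						     poll_work);

		if (!my_device_done(dev))
			/* same work item, same wq: the guarantee is preserved */
			queue_work(poll_wq, &dev->poll_work);
	}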