Lines matching "workqueue" in kernel/workqueue.c
3 * kernel/workqueue.c - generic async execution with shared worker pool
25 * Please read Documentation/core-api/workqueue.rst for details.
35 #include <linux/workqueue.h>
235 * tools/workqueue/wq_monitor.py.
251 * The per-pool workqueue. While queued, bits below WORK_PWQ_SHIFT
258 struct workqueue_struct *wq; /* I: the owning workqueue */
301 * Structure used to wait for workqueue flush.
312 * Unlike in a per-cpu workqueue where max_active limits its concurrency level
313 * on each CPU, in an unbound workqueue, max_active applies to the whole system.
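Lines 312-313 above state the key max_active rule: a per-CPU limit for per-cpu workqueues, a system-wide limit for unbound ones. A minimal sketch of how that limit is passed at allocation time; the names and the value 4 are illustrative, not taken from workqueue.c:

    #include <linux/init.h>
    #include <linux/workqueue.h>

    static struct workqueue_struct *percpu_wq, *unbound_wq;

    static int __init example_init(void)
    {
        /* Per-cpu: up to 4 work items may run concurrently on EACH CPU. */
        percpu_wq = alloc_workqueue("example_percpu", 0, 4);
        if (!percpu_wq)
            return -ENOMEM;

        /* Unbound: at most 4 work items run concurrently SYSTEM-WIDE. */
        unbound_wq = alloc_workqueue("example_unbound", WQ_UNBOUND, 4);
        if (!unbound_wq) {
            destroy_workqueue(percpu_wq);
            return -ENOMEM;
        }
        return 0;
    }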
332 * The externally visible workqueue. It relays the issued work items to
370 char name[WQ_NAME_LEN]; /* I: workqueue name */
531 #include <trace/events/workqueue.h>
587 * for_each_pwq - iterate through all pool_workqueues of the specified workqueue
589 * @wq: the target workqueue
739 * unbound_effective_cpumask - effective cpumask of an unbound workqueue
740 * @wq: workqueue of interest
1091 * before the original execution finishes, workqueue will identify the
1290 * should be using an unbound workqueue instead.
1339 …printk_deferred(KERN_WARNING "workqueue: %ps hogged CPU for >%luus %llu times, consider switching … in wq_cpu_intensive_report()
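The truncated printk above is the CPU-hog warning from wq_cpu_intensive_report(); together with line 1290 it steers long-running work items away from the concurrency-managed per-cpu pools. A sketch of the two documented options, with hypothetical names:

    #include <linux/workqueue.h>

    static struct workqueue_struct *crunch_wq;

    static int example_setup(void)
    {
        /* Preferred: let the scheduler place long-running work. */
        crunch_wq = alloc_workqueue("example_crunch", WQ_UNBOUND, 0);

        /*
         * Alternative when per-cpu placement is required: WQ_CPU_INTENSIVE
         * exempts the work items from concurrency management.
         */
        /* crunch_wq = alloc_workqueue("example_crunch", WQ_CPU_INTENSIVE, 0); */

        return crunch_wq ? 0 : -ENOMEM;
    }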
1523 * As this function doesn't involve any workqueue-related locking, it
1541 * @wq: workqueue of interest
1566 * @wq: workqueue to update
1720 /* BH or per-cpu workqueue, pwq->nr_active is sufficient */ in pwq_tryinc_nr_active()
1729 * Unbound workqueue uses per-node shared nr_active $nna. If @pwq is in pwq_tryinc_nr_active()
1944 * For a percpu workqueue, it's simple. Just need to kick the first in pwq_dec_nr_active()
1953 * If @pwq is for an unbound workqueue, it's more complicated because in pwq_dec_nr_active()
1979 * decrement nr_in_flight of its pwq and handle workqueue flushing.
2193 * same workqueue.
2220 pr_warn_once("workqueue: round-robin CPU selection forced, expect performance impact\n"); in wq_select_unbound_cpu()
2252 * For a draining wq, only works from the same workqueue are in __queue_work()
2257 WARN_ONCE(!is_chained_work(wq), "workqueue: cannot queue %ps on wq %s\n", in __queue_work()
2279 * For ordered workqueue, work items must be queued on the newest pwq in __queue_work()
2318 WARN_ONCE(true, "workqueue: per-cpu pwq for %s on cpu%d has 0 refcnt", in __queue_work()
2371 * @wq: workqueue to use
2433 * @wq: workqueue to use
2461 * If this is used with a per-cpu workqueue then the logic in in queue_work_node()
2535 * @wq: workqueue to use
2573 * @wq: workqueue to use
2616 * @wq: workqueue to use
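Lines 2371-2616 are the kerneldoc stubs for the queue_work*() entry points. A minimal usage sketch covering the plain, CPU-targeted, and delayed variants; all names are hypothetical:

    #include <linux/jiffies.h>
    #include <linux/smp.h>
    #include <linux/workqueue.h>

    static struct workqueue_struct *example_wq;   /* from alloc_workqueue() */

    static void example_fn(struct work_struct *work)
    {
        pr_info("example work ran on CPU %d\n", raw_smp_processor_id());
    }

    static DECLARE_WORK(example_work, example_fn);
    static DECLARE_DELAYED_WORK(example_dwork, example_fn);

    static void example_queue(void)
    {
        /* Returns %false if @work was already pending, %true otherwise. */
        queue_work(example_wq, &example_work);

        /* Queue on a specific CPU (returns %false while still pending). */
        queue_work_on(0, example_wq, &example_work);

        /* Execute after roughly one second. */
        queue_delayed_work(example_wq, &example_dwork, HZ);
    }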
2766 * create_worker - create a new workqueue worker
2785 pr_err_once("workqueue: Failed to allocate a worker ID: %pe\n", in create_worker()
2792 pr_err_once("workqueue: Failed to allocate a worker\n"); in create_worker()
2806 pr_err("workqueue: Interrupted when creating a worker thread \"%s\"\n", in create_worker()
2809 pr_err_once("workqueue: Failed to create a worker thread: %pe", in create_worker()
3252 pr_err("BUG: workqueue leaked atomic, lock or RCU: %s[%d]\n" in process_one_work()
3339 * work items regardless of their specific target workqueue. The only
3350 /* tell the scheduler that this is a workqueue worker */ in worker_thread()
3423 * Workqueue rescuer thread function. There's one rescuer for each
3424 * workqueue which has WQ_MEM_RECLAIM set.
3457 * By the time the rescuer is requested to stop, the workqueue in rescuer_thread()
3485 * Slurp in all works issued via this workqueue and in rescuer_thread()
3591 * TODO: Convert all tasklet users to workqueue and use softirq directly.
3690 * @target_wq: workqueue being flushed
3691 * @target_work: work item being flushed (NULL for workqueue flushes)
3698 * on a workqueue which doesn't have %WQ_MEM_RECLAIM as that can break forward-
3715 "workqueue: PF_MEMALLOC task %d(%s) is flushing !WQ_MEM_RECLAIM %s:%ps", in check_flush_dependency()
3719 "workqueue: WQ_MEM_RECLAIM %s:%ps is flushing !WQ_MEM_RECLAIM %s:%ps", in check_flush_dependency()
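check_flush_dependency() (lines 3690-3719) enforces the rule behind these two warnings: code reachable from memory reclaim must not flush work on a workqueue lacking WQ_MEM_RECLAIM, because such a workqueue may be unable to create workers under memory pressure. A sketch of the compliant setup, names hypothetical:

    #include <linux/workqueue.h>

    /*
     * WQ_MEM_RECLAIM attaches a rescuer thread (see rescuer_thread() around
     * line 3423) so at least one work item can always make forward progress.
     */
    static struct workqueue_struct *reclaim_wq;

    static int example_setup(void)
    {
        reclaim_wq = alloc_workqueue("example_reclaim", WQ_MEM_RECLAIM, 0);
        return reclaim_wq ? 0 : -ENOMEM;
    }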
3814 * flush_workqueue_prep_pwqs - prepare pwqs for workqueue flushing
3815 * @wq: workqueue being flushed
3819 * Prepare pwqs for workqueue flushing.
3857 * For unbound workqueue, pwqs will map to only a few pools. in flush_workqueue_prep_pwqs()
3932 * @wq: workqueue to flush
4088 * drain_workqueue - drain a workqueue
4089 * @wq: workqueue to drain
4091 * Wait until the workqueue becomes empty. While draining is in progress,
4129 pr_warn("workqueue %s: %s() isn't complete after %u tries\n", in drain_workqueue()
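flush_workqueue() (line 3932) waits for everything queued so far, while drain_workqueue() (lines 4088-4129) also waits out work items that re-queue themselves, warning as above if the queue will not settle. A small sketch of the distinction, names hypothetical:

    #include <linux/workqueue.h>

    static void example_quiesce(struct workqueue_struct *wq)
    {
        /*
         * drain_workqueue() is the stronger form of flush_workqueue():
         * while draining, queueing from outside the workqueue is refused
         * (the is_chained_work() check near line 2252), and it loops
         * until the workqueue is truly empty.
         */
        drain_workqueue(wq);
    }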
4180 * single-threaded or rescuer-equipped workqueue. in start_flush_work()
4215 * was queued on a BH workqueue, we also know that it was running in the in __flush_work()
4318 WARN_ONCE(true, "workqueue: work disable count overflowed\n"); in work_offqd_disable()
4326 WARN_ONCE(true, "workqueue: work disable count underflowed\n"); in work_offqd_enable()
4386 * even if the work re-queues itself or migrates to another workqueue. On return
4394 * workqueue. Can also be called from non-hardirq atomic contexts including BH
4395 * if @work was last queued on a BH workqueue.
4468 * workqueue. Can also be called from non-hardirq atomic contexts including BH
4469 * if @work was last queued on a BH workqueue.
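Per lines 4386-4469, cancellation works even if the work re-queues itself or migrates to another workqueue, and the _sync variants may be called from BH context when the work was last queued on a BH workqueue. Typical teardown usage, names hypothetical:

    #include <linux/workqueue.h>

    static struct work_struct example_work;
    static struct delayed_work example_dwork;

    static void example_teardown(void)
    {
        /* Wait until the work is neither pending nor running anywhere. */
        cancel_work_sync(&example_work);
        cancel_delayed_work_sync(&example_dwork);
    }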
4549 * system workqueue and blocks until all CPUs have completed.
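Line 4549 belongs to schedule_on_each_cpu(), which runs a function on every online CPU via the system workqueue and sleeps until all of them finish. A sketch, function names hypothetical:

    #include <linux/smp.h>
    #include <linux/workqueue.h>

    static void example_percpu_fn(struct work_struct *work)
    {
        pr_info("ran on CPU %d\n", raw_smp_processor_id());
    }

    static int example_run_everywhere(void)
    {
        /* Returns 0 on success, -errno if the per-cpu works can't be allocated. */
        return schedule_on_each_cpu(example_percpu_fn);
    }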
4668 * Some attrs fields are workqueue-only. Clear them for worker_pool's. See the
5082 * For ordered workqueue with a plugged dfl_pwq, restart it now. in pwq_release_workfn()
5189 * @attrs: the wq_attrs of the default pwq of the target workqueue
5192 * Calculate the cpumask a workqueue with @attrs should use on @pod.
5235 struct workqueue_struct *wq; /* target workqueue */
5376 * apply_workqueue_attrs - apply new workqueue_attrs to an unbound workqueue
5377 * @wq: the target workqueue
5380 * Apply @attrs to an unbound workqueue @wq. Unless disabled, this function maps
5404 * @wq: the target workqueue
5414 * Note that when the last allowed CPU of a pod goes offline for a workqueue
5416 * executing the work items for the workqueue will lose their CPU affinity and
5418 * CPU_DOWN. If a workqueue user wants strict affinity, it's the user's
5449 pr_warn("workqueue: allocation failed while updating CPU pod affinity of \"%s\"\n", in unbound_wq_update_pwq()
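Lines 5376-5418 describe apply_workqueue_attrs(), the entry point behind the pod/affinity machinery above. A sketch of restricting an unbound workqueue's CPUs; note this applies only to unbound workqueues, its availability outside core code varies by kernel version, and all names here are hypothetical:

    #include <linux/cpumask.h>
    #include <linux/workqueue.h>

    static int example_pin_unbound(struct workqueue_struct *unbound_wq)
    {
        struct workqueue_attrs *attrs;
        int ret;

        attrs = alloc_workqueue_attrs();
        if (!attrs)
            return -ENOMEM;

        /* Restrict execution to CPUs 0 and 1 (illustrative choice). */
        cpumask_clear(attrs->cpumask);
        cpumask_set_cpu(0, attrs->cpumask);
        cpumask_set_cpu(1, attrs->cpumask);

        ret = apply_workqueue_attrs(unbound_wq, attrs);
        free_workqueue_attrs(attrs);
        return ret;
    }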
5519 "ordering guarantee broken for workqueue %s\n", wq->name); in alloc_and_link_pwqs()
5544 pr_warn("workqueue: max_active %d requested for %s is out of range, clamping between %d and %d\n", in wq_clamp_max_active()
5567 pr_err("workqueue: Failed to allocate a rescuer for wq \"%s\"\n", in init_rescuer()
5578 pr_err("workqueue: Failed to create a rescuer kthread for wq \"%s\": %pe", in init_rescuer()
5596 * @wq: target workqueue
5696 pr_warn_once("workqueue: name exceeds WQ_NAME_LEN. Truncating to: %s\n", in __alloc_workqueue()
5836 * destroy_workqueue - safely terminate a workqueue
5837 * @wq: target workqueue
5839 * Safely destroy a workqueue. All work currently pending will be done first.
5852 /* mark the workqueue destruction is in progress */ in destroy_workqueue()
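destroy_workqueue() (lines 5836-5852) flushes pending work itself, but self-requeueing users must be stopped first or the drain can never finish. The usual exit-path ordering, names hypothetical:

    #include <linux/workqueue.h>

    static struct workqueue_struct *example_wq;
    static struct delayed_work example_dwork;

    static void example_exit(void)
    {
        /* Stop re-queueing sources first so the workqueue can empty... */
        cancel_delayed_work_sync(&example_dwork);

        /* ...then tear it down; remaining pending work runs to completion. */
        destroy_workqueue(example_wq);
    }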
5923 * workqueue_set_max_active - adjust max_active of a workqueue
5924 * @wq: target workqueue
5957 * workqueue_set_min_active - adjust min_active of an unbound workqueue
5958 * @wq: target unbound workqueue
5961 * Set min_active of an unbound workqueue. Unlike other types of workqueues, an
5962 * unbound workqueue is not guaranteed to be able to process max_active
5963 * interdependent work items. Instead, an unbound workqueue is guaranteed to be
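workqueue_set_max_active() (line 5923) adjusts the limit discussed at lines 312-313; out-of-range values are clamped, as the warning at line 5544 notes. A one-line usage sketch:

    #include <linux/workqueue.h>

    static void example_throttle(struct workqueue_struct *wq)
    {
        /* Allow at most one work item in flight on this workqueue. */
        workqueue_set_max_active(wq, 1);
    }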
5986 * Determine if %current task is a workqueue worker and what it's working on.
5989 * Return: work struct if %current task is a workqueue worker, %NULL otherwise.
6000 * current_is_workqueue_rescuer - is %current workqueue rescuer?
6002 * Determine whether %current is a workqueue rescuer. Can be used from
6005 * Return: %true if %current is a workqueue rescuer. %false otherwise.
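current_work() and current_is_workqueue_rescuer() (lines 5986-6005) let running code ask about its own workqueue context. A sketch of a self-recursion check, names hypothetical:

    #include <linux/workqueue.h>

    static struct work_struct example_work;

    static bool example_called_from_own_work(void)
    {
        /* current_work() is NULL unless %current is a workqueue worker. */
        return current_work() == &example_work;
    }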
6015 * workqueue_congested - test whether a workqueue is congested
6017 * @wq: target workqueue
6019 * Test whether @wq's cpu workqueue for @cpu is congested. There is
6026 * pool_workqueues, each with its own congested state. A workqueue being
6027 * congested on one CPU doesn't mean that the workqueue is congested on any
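As the comment stresses, workqueue_congested() (lines 6015-6027) is unsynchronized and advisory only. A sketch of its typical use as a hint, names hypothetical:

    #include <linux/workqueue.h>

    static bool example_try_offload(struct workqueue_struct *wq,
                                    struct work_struct *work)
    {
        /* Result may be stale by the time it returns; treat as a hint. */
        if (workqueue_congested(WORK_CPU_UNBOUND, wq))
            return false;   /* caller falls back to doing the work inline */

        return queue_work(wq, work);
    }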
6117 * name of the workqueue being serviced and worker description set with
6143 * Carefully copy the associated workqueue's workfn, name and desc. in print_worker_info()
6305 * show_one_workqueue - dump state of specified workqueue
6306 * @wq: workqueue whose state will be printed
6320 if (idle) /* Nothing to print for idle workqueue */ in show_one_workqueue()
6323 pr_info("workqueue %s: flags=0x%x\n", wq->name, wq->flags); in show_one_workqueue()
6398 * show_all_workqueues - dump workqueue state
6422 * show_freezable_workqueues - dump freezable workqueue state
6952 * by any subsequent write to workqueue/cpumask sysfs file. in workqueue_unbound_exclude_cpumask()
7021 * /sys/bus/workqueue/devices/WQ_NAME. All visible workqueues have the
7024 * per_cpu RO bool : whether the workqueue is per-cpu or unbound
7257 .name = "workqueue",
7367 * workqueue_sysfs_register - make a workqueue visible in sysfs
7368 * @wq: the workqueue to register
7370 * Expose @wq in sysfs under /sys/bus/workqueue/devices.
7374 * Workqueue user should use this function directly iff it wants to apply
7375 * workqueue_attrs before making the workqueue visible in sysfs; otherwise,
7435 * @wq: the workqueue to unregister
7454 * Workqueue watchdog.
7458 * indefinitely. Workqueue stalls can be very difficult to debug as the
7459 * usual warning mechanisms don't trigger and internal workqueue state is
7462 * Workqueue watchdog monitors all worker pools periodically and dumps
7467 * "workqueue.watchdog_thresh" which can be updated at runtime through the
7597 pr_emerg("BUG: workqueue lockup - pool"); in wq_watchdog_timer_fn()
7700 pr_warn("workqueue: Restricting unbound_cpumask (%*pb) with %s (%*pb) leaves no CPU, ignoring\n", in restrict_unbound_cpumask()
7725 * workqueue_init_early - early init for workqueue subsystem
7727 * This is the first step of three-staged workqueue subsystem initialization and
7754 restrict_unbound_cpumask("workqueue.unbound_cpus", &wq_cmdline_cpumask); in workqueue_init_early()
7764 * If nohz_full is enabled, set power efficient workqueue as unbound. in workqueue_init_early()
7765 * This allows workqueue items to be moved to HK CPUs. in workqueue_init_early()
7880 * workqueue_init - bring workqueue subsystem fully online
7882 * This is the second step of three-staged workqueue subsystem initialization
7911 "workqueue: failed to create early rescuer for %s", in workqueue_init()
8006 * This is the third step of three-staged workqueue subsystem initialization and
8026 * worker pool. Explicitly call unbound_wq_update_pwq() on all workqueue in workqueue_init_topology()
8053 pr_warn("workqueue.unbound_cpus: incorrect CPU range, using default\n"); in workqueue_unbound_cpus_setup()
8058 __setup("workqueue.unbound_cpus=", workqueue_unbound_cpus_setup);