
.. SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)

====
NAPI
====

NAPI is the event handling mechanism used by the Linux networking stack.
The name NAPI no longer stands for anything in particular [#]_.
In basic operation the device notifies the host about new events
via an interrupt. The host then schedules a NAPI instance to process
the events.
NAPI processing usually happens in the software interrupt context,
but there is an option to use separate kernel threads for NAPI processing.
All in all NAPI abstracts away from the drivers the context and configuration
of event (packet Rx and Tx) processing.

Driver API
==========
The two most important elements of NAPI are the struct napi_struct
and the associated poll method. struct napi_struct holds the state
of the NAPI instance while the method is the driver-specific event
handler. The method will typically free Tx packets that have been
transmitted and process newly received packets.

.. _drv_ctrl:

Control API
-----------

netif_napi_add() and netif_napi_del() add/remove a NAPI instance
from the system. The instances are attached to the netdevice passed
as argument (and will be deleted automatically when the netdevice is
unregistered). Instances are added in a disabled state.

The control APIs are not idempotent. Control API calls are safe against
concurrent use of datapath APIs but an incorrect sequence of control API
calls may result in crashes, deadlocks, or race conditions. For example,
calling napi_disable() multiple times in a row will deadlock.
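As a hedged illustration of a safe call sequence (this is kernel-context code,
not a standalone program, and the driver names ``mydrv_open``, ``mydrv_stop``
and ``struct mydrv_priv`` are invented), a driver typically enables the
instance exactly once on open and disables it exactly once on stop:

```c
/* Kernel-context sketch; mydrv_* names are invented for illustration. */
static int mydrv_open(struct net_device *dev)
{
	struct mydrv_priv *priv = netdev_priv(dev);

	/* The instance was registered with netif_napi_add() at probe
	 * time, in the disabled state. Enable it exactly once here. */
	napi_enable(&priv->napi);
	return 0;
}

static int mydrv_stop(struct net_device *dev)
{
	struct mydrv_priv *priv = netdev_priv(dev);

	/* One napi_disable() per napi_enable(); calling it twice
	 * in a row would deadlock. */
	napi_disable(&priv->napi);
	return 0;
}
```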

Datapath API
------------

napi_schedule() is the basic method of scheduling a NAPI poll.
Drivers should call this function in their interrupt handler.
A successful call to napi_schedule() will take ownership of the NAPI
instance.
Later, after NAPI is scheduled, the driver's poll method will be
called to process the events/packets. The method takes a ``budget``
argument - drivers can process completions for any number of Tx
packets but should only process up to ``budget`` number of
Rx packets. Rx processing is usually much more expensive.
In other words for Rx processing the ``budget`` argument limits how many
packets the driver can process in a single poll. Rx specific APIs like page
pool or XDP cannot be used at all when ``budget`` is 0.
The poll method returns the amount of work done. If the driver still
has outstanding work to do (e.g. ``budget`` was exhausted)
the poll method should return exactly ``budget``. In that case,
the NAPI instance will be serviced/polled again (without the
need to be scheduled).
If event processing has been completed (all outstanding packets
processed) the poll method should call napi_complete_done()
before returning. The case of finishing all events and using exactly
``budget`` must be handled carefully: there is no way to report this
(rare) condition to the stack, so the driver must either not call
napi_complete_done() and wait to be called again,
or return ``budget - 1``.

Call sequence
-------------

As mentioned in the :ref:`drv_ctrl` section - napi_disable() and subsequent
calls to the poll method only wait for the ownership of the instance
to be released, not for the poll method to exit. This means that
drivers should avoid accessing any data structures after calling
napi_complete_done().

Scheduling and IRQ masking
--------------------------

Drivers should keep the interrupts masked after scheduling
the NAPI instance - until NAPI polling finishes any further
interrupts are unnecessary.
Drivers which have to mask the interrupts explicitly (as opposed
to IRQ being auto-masked by the device) should use the napi_schedule_prep()
and __napi_schedule() calls:

.. code-block:: c

  if (napi_schedule_prep(&v->napi)) {
      mydrv_mask_rxtx_irq(v->idx);
      /* schedule after masking to avoid races */
      __napi_schedule(&v->napi);
  }


IRQ should only be unmasked after a successful call to napi_complete_done():

.. code-block:: c

  if (budget && napi_complete_done(&v->napi, work_done)) {
      mydrv_unmask_rxtx_irq(v->idx);
      return min(work_done, budget - 1);
  }

napi_schedule_irqoff() is a variant of napi_schedule() which takes advantage
of guarantees given by being invoked in IRQ context (no need to
mask interrupts). napi_schedule_irqoff() will fall back to napi_schedule() if
IRQs are threaded (such as if ``PREEMPT_RT`` is enabled).
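For example (a hedged sketch of kernel-context code, not a standalone program;
``mydrv_irq_handler`` and ``struct mydrv_vector`` are invented names), an
interrupt handler on a device that auto-masks its IRQ can simply be:

```c
/* Kernel-context sketch; mydrv_* names are invented for illustration. */
static irqreturn_t mydrv_irq_handler(int irq, void *data)
{
	struct mydrv_vector *v = data;

	/* The device auto-masked the IRQ, so no explicit masking is
	 * needed; running in hardirq context allows the cheaper
	 * napi_schedule_irqoff() variant. */
	napi_schedule_irqoff(&v->napi);
	return IRQ_HANDLED;
}
```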

Instance to queue mapping
-------------------------

Modern devices have multiple NAPI instances (struct napi_struct) per
interface. There is no strong requirement on how the instances are
mapped to queues and interrupts. NAPI is primarily a polling/processing
abstraction without specific user-facing semantics. That said, most networking
devices end up using NAPI in fairly similar ways.
NAPI instances most often correspond 1:1:1 to interrupts and queue pairs
(queue pair is a set of a single Rx and single Tx queue).

In less common cases a NAPI instance may be used for multiple queues
or Rx and Tx queues can be serviced by separate NAPI instances on a single
core. Regardless of the queue assignment, however, there is usually still
a 1:1 mapping between NAPI instances and interrupts.

User API
========

User interactions with NAPI depend on NAPI instance ID. The instance IDs
are only visible to the user through the ``SO_INCOMING_NAPI_ID`` socket
option.

Software IRQ coalescing
-----------------------

NAPI does not perform any explicit event coalescing by default.
In most scenarios batching happens due to IRQ coalescing which is done
by the device. There are cases where software coalescing is helpful.

NAPI can be configured to arm a repoll timer instead of unmasking
the hardware interrupts as soon as all packets are processed.
The ``gro_flush_timeout`` sysfs configuration of the netdevice
is reused to control the delay of the timer, while
``napi_defer_hard_irqs`` controls the number of consecutive empty polls
before NAPI gives up and goes back to using hardware IRQs.
The above parameters can also be set on a per-NAPI basis using netlink via
netdev-genl. When used with netlink and configured on a per-NAPI basis, the
parameters mentioned above use hyphens instead of underscores:
``gro-flush-timeout`` and ``napi-defer-hard-irqs``.
Per-NAPI configuration can be done programmatically in a user application
or by using a script included in the kernel source tree:
``tools/net/ynl/pyynl/cli.py``.

For example, using the script:

.. code-block:: bash

  $ kernel-source/tools/net/ynl/pyynl/cli.py \
            --spec Documentation/netlink/specs/netdev.yaml \
            --do napi-set \
            --json='{"id": 345,
                     "defer-hard-irqs": 111,
                     "gro-flush-timeout": 11111}'

Similarly, the parameter ``irq-suspend-timeout`` can be set using netlink
via netdev-genl. There is no global sysfs parameter for this value.
``irq-suspend-timeout`` is used to determine how long an application can
completely suspend IRQs. It is used in combination with SO_PREFER_BUSY_POLL,
which can be set on a per-epoll context basis with the ``EPIOCSPARAMS`` ioctl.

Busy polling
------------


epoll-based busy polling
------------------------

It is possible to trigger packet processing directly from calls to
``epoll_wait``. In order to use this feature, a user application must ensure
all file descriptors which are added to an epoll context have the same NAPI
ID.
If the application uses a dedicated acceptor thread, the application can obtain
the NAPI ID of the incoming connection using SO_INCOMING_NAPI_ID and then
distribute that file descriptor to a worker thread. The worker thread adds
the file descriptor to its epoll context. This ensures that each worker
thread has an epoll context with file descriptors that have the same NAPI ID.
Alternatively, if the application uses SO_REUSEPORT, a bpf or ebpf program can
be inserted to distribute incoming connections to threads such that each thread
is only given incoming connections with the same NAPI ID. Care must be taken to
handle cases where a system has multiple NICs.
In order to enable busy polling, there are two choices:

1. ``/proc/sys/net/core/busy_poll`` can be set with a time in microseconds to
   busy loop waiting for events. This is a system-wide setting and will cause
   all epoll-based applications to busy poll when they call epoll_wait. This
   may not be desirable as many applications may not have the need to busy
   poll.

2. Applications using recent kernels can issue an ioctl on the epoll context
   file descriptor to set (``EPIOCSPARAMS``) or get (``EPIOCGPARAMS``)
   ``struct epoll_params``, which user programs can define as follows:

.. code-block:: c

  struct epoll_params {
      uint32_t busy_poll_usecs;
      uint16_t busy_poll_budget;
      uint8_t prefer_busy_poll;

      /* pad the struct to a multiple of 64bits */
      uint8_t __pad;
  };


IRQ mitigation
--------------

While busy polling is meant to be used by low latency applications,
a similar mechanism can be used for IRQ mitigation.
Very high request-per-second applications (especially routing/forwarding
applications and especially applications using AF_XDP sockets) may not
want to be interrupted until they finish processing a request or a batch
of packets.
Such applications can pledge to the kernel that they will perform a busy
polling operation periodically, and the driver should keep the device IRQs
permanently masked. This mode is enabled by using the SO_PREFER_BUSY_POLL
socket option. To avoid system misbehavior the pledge is revoked
if ``gro_flush_timeout`` passes without any busy poll call. For epoll-based
busy polling applications, the ``prefer_busy_poll`` field of ``struct
epoll_params`` can be set to 1 and the ``EPIOCSPARAMS`` ioctl can be issued
to enable this mode.
The NAPI budget for busy polling is lower than the default (which makes
sense given the low latency intention of normal busy polling). This is
not the case with IRQ mitigation, however, so the budget can be adjusted
with the ``SO_BUSY_POLL_BUDGET`` socket option. For epoll-based busy polling
applications, the ``busy_poll_budget`` field can be adjusted to the desired
value in ``struct epoll_params`` and set on a specific epoll context using
the ``EPIOCSPARAMS`` ioctl.
It is important to note that choosing a large value for ``gro_flush_timeout``
will defer IRQs to allow for better batch processing, but will induce latency
when the system is not fully loaded. Choosing a small value for
``gro_flush_timeout`` can cause interference of the user application which is
attempting to busy poll by device IRQs and softirq processing. This value
should be chosen carefully with these tradeoffs in mind. epoll-based busy
polling applications may be able to mitigate how much user processing happens
by tuning ``maxevents``.

IRQ suspension
--------------

IRQ suspension is a mechanism wherein device IRQs are masked while epoll
triggers NAPI packet processing.
While application calls to epoll_wait successfully retrieve events, the kernel
will defer the IRQ suspension timer. If the kernel does not retrieve any
events while busy polling (for example, because network traffic levels
subsided), IRQ suspension is disabled and the IRQ mitigation strategies
described above are engaged.

To use this mechanism:

1. The per-NAPI config parameter ``irq-suspend-timeout`` should be set to the
   maximum time (in nanoseconds) the application can have its IRQs
   suspended. This is done using netlink, as described above. This timeout
   serves as a safety mechanism to restart IRQ driven interrupt processing if
   the application has stalled. This value should be chosen so that it covers
   the amount of time the user application needs to process data from its
   call to epoll_wait, noting that applications can control how much data
   they retrieve by setting ``max_events`` when calling epoll_wait.
2. The sysfs parameter or per-NAPI config parameters ``gro_flush_timeout``
   and ``napi_defer_hard_irqs`` must be set to non-zero values.

3. The ``prefer_busy_poll`` flag must be set to true. This can be done using
   the ``EPIOCSPARAMS`` ioctl as described above.
4. The application uses epoll as described above to trigger NAPI packet
   processing.
While events are being processed and the application is returning data to
userland, the ``irq-suspend-timeout`` is deferred and IRQs are disabled. This
allows the application to process data without interference.
Once a call to epoll_wait results in no events being found, IRQ suspension is
automatically disabled and the ``gro_flush_timeout`` and
``napi_defer_hard_irqs`` mitigation mechanisms take over.
It is expected that ``irq-suspend-timeout`` will be set to a value much larger
than ``gro_flush_timeout`` as ``irq-suspend-timeout`` should suspend IRQs for
the duration of one userland processing cycle.
IRQ suspension causes the system to alternate between polling mode and
irq-driven packet delivery. During busy periods, ``irq-suspend-timeout``
overrides ``gro_flush_timeout`` and keeps the system busy polling, but when
epoll finds no events, the settings of ``gro_flush_timeout`` and
``napi_defer_hard_irqs`` take over.
There are essentially three possible loops for network processing and
delivery:

1) hardirq -> softirq -> napi poll; basic interrupt delivery
2) timer -> softirq -> napi poll; deferred irq processing
3) epoll -> busy-poll -> napi poll; busy looping
During busy periods, ``irq-suspend-timeout`` is used as the timer in Loop 2,
which essentially tilts network processing in favour of Loop 3.
Setting ``gro_flush_timeout`` and ``napi_defer_hard_irqs`` is
the recommended usage, because otherwise setting ``irq-suspend-timeout``
may not have any discernible effect.

Threaded NAPI
-------------

Threaded NAPI is an operating mode that uses dedicated kernel
threads rather than software IRQ context for NAPI processing.
The configuration is per netdevice and will affect all
NAPI instances of that device. Each NAPI instance will spawn a separate
thread (called ``napi/${ifc-name}-${napi-id}``).
It is recommended to pin each kernel thread to a single CPU, the same
CPU as the CPU which services the interrupt. Note that the mapping
between IRQs and NAPI instances may not be trivial (and is driver
dependent). The NAPI instance IDs will be assigned in the opposite
order than the process IDs of the kernel threads.
Threaded NAPI is controlled by writing 0/1 to the ``threaded`` file in
netdev's sysfs directory.

.. rubric:: Footnotes

.. [#] NAPI was originally referred to as New API in 2.4 Linux.