Lines Matching +full:use +full:- +full:cases

12 ----------
18 - more efficient memory utilization by sharing ring buffer across CPUs;
19 - preserving ordering of events that happen sequentially in time, even across
23 Both are a result of the choice to have a per-CPU perf ring buffer. Both can be
25 problem could technically be solved for perf buffer with some in-kernel
30 ------------------
39 with existing perf buffer use in BPF, but would fail if an application needed more
42 Additionally, given the performance of BPF ringbuf, many use cases would just
56 The approach chosen has the advantage of re-using existing BPF map
62 combined with ``ARRAY_OF_MAPS`` and ``HASH_OF_MAPS`` map-in-maps to implement
64 a replacement for perf buffer use cases), to a complicated application
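A ``BPF_MAP_TYPE_RINGBUF`` map is declared through the same map infrastructure as any other BPF map type. A minimal BPF-side sketch in libbpf BTF-style map syntax (the map name ``rb`` is illustrative; the size constraints are real):

```c
/* BPF-side map declaration; `rb` is an illustrative name.
 * max_entries is the data area size in bytes and must be both
 * a power of 2 and a multiple of the kernel page size. */
struct {
	__uint(type, BPF_MAP_TYPE_RINGBUF);
	__uint(max_entries, 256 * 1024);
} rb SEC(".maps");
```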
75 - variable-length records;
76 - if there is no more space left in ring buffer, reservation fails, no
78 - memory-mappable data area for user-space applications for ease of
80 - epoll notifications for new incoming data;
81 - but still the ability to do busy polling for new data to achieve the
86 - ``bpf_ringbuf_output()`` allows *copying* data from one place to a ring
88 - ``bpf_ringbuf_reserve()``/``bpf_ringbuf_submit()``/``bpf_ringbuf_discard()``
91 area is returned, which BPF programs can use similarly to data inside
103 pointer directly to ring buffer memory. In many cases records are larger
104 than BPF stack space allows, so many programs have used an extra per-CPU array as
109 due to an extra memory copy, covers some use cases that are not suitable for
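The reserve/submit two-step flow can be sketched as a minimal BPF program. It assumes the ringbuf map ``rb`` is declared elsewhere; the tracepoint and event layout are illustrative, while the helpers themselves are the real ring buffer API:

```c
struct event {
	int pid;
	char comm[16];
};

SEC("tp/sched/sched_process_exec")
int handle_exec(void *ctx)
{
	struct event *e;

	/* Reserve space directly in the ring buffer: no extra copy and
	 * no BPF stack space needed for the record itself. */
	e = bpf_ringbuf_reserve(&rb, sizeof(*e), 0);
	if (!e)
		return 0;	/* no space left: reservation failed */

	e->pid = bpf_get_current_pid_tgid() >> 32;
	bpf_get_current_comm(e->comm, sizeof(e->comm));

	/* Make the record visible to the consumer; bpf_ringbuf_discard(e, 0)
	 * would instead drop the reservation. */
	bpf_ringbuf_submit(e, 0);
	return 0;
}
```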
114 code. Discard is useful for some advanced use cases, such as ensuring
115 all-or-nothing multi-record submission, or emulating temporary
119 reference-tracking logic, similar to socket ref-tracking. It is thus
125 - ``BPF_RB_AVAIL_DATA`` returns the amount of unconsumed data in the ring buffer;
126 - ``BPF_RB_RING_SIZE`` returns the size of the ring buffer;
127 - ``BPF_RB_CONS_POS``/``BPF_RB_PROD_POS`` return the current logical position
133 into account the highly changeable nature of some of those characteristics.
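One hypothetical fill-level heuristic, sketched with the real ``bpf_ringbuf_query()`` flags and the ``BPF_RB_NO_WAKEUP``/``BPF_RB_FORCE_WAKEUP`` submission flags (the map ``rb`` and reserved record pointer ``e`` are assumed to exist):

```c
/* Sketch: notify the consumer eagerly only when the buffer is
 * filling up; otherwise batch records without a wakeup. */
__u64 avail = bpf_ringbuf_query(&rb, BPF_RB_AVAIL_DATA);
__u64 size  = bpf_ringbuf_query(&rb, BPF_RB_RING_SIZE);

if (avail > size / 2)
	bpf_ringbuf_submit(e, BPF_RB_FORCE_WAKEUP); /* wake consumer now */
else
	bpf_ringbuf_submit(e, BPF_RB_NO_WAKEUP);    /* skip notification */
```

Note that both counters can change between the two queries, which is exactly the "highly changeable nature" caveat above.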
135 One such heuristic might involve more fine-grained control over poll/epoll
139 efficient batched notifications. The default self-balancing strategy, though,
144 -------------------------
156 The ring buffer itself is internally implemented as a power-of-2 sized
157 circular buffer, with two logical, ever-increasing counters (which might
158 wrap around on 32-bit architectures; that's not a problem):
160 - consumer counter shows up to which logical position consumer consumed the
162 - producer counter denotes the amount of data reserved by all producers.
187 area is mapped twice contiguously back-to-back in virtual memory. This
195 a self-pacing notification of new data availability.
203 buffer. For extreme cases, when a BPF program wants more manual control of