Motivation
----------
- more efficient memory utilization by sharing ring buffer across CPUs;
- preserving ordering of events that happen sequentially in time, even across
  multiple CPUs (e.g., fork/exec/exit events for a task).

These two problems are independent, but perf buffer fails to satisfy both.
Both are a result of the choice to have a per-CPU perf ring buffer, and both
can be solved by an MPSC (multi-producer, single-consumer) implementation of
the ring buffer. The ordering problem could technically be solved for perf
buffer with some in-kernel counting, but since the first problem requires an
MPSC buffer anyway, the same solution covers ordering automatically.
Semantics and APIs
------------------
The approach chosen has an advantage of re-using the existing BPF map
infrastructure (introspection APIs in the kernel, libbpf support, tooling such
as bpftool). On the other hand, by being a map, a ring buffer can be
combined with ``ARRAY_OF_MAPS`` and ``HASH_OF_MAPS`` map-in-maps to implement
a wide variety of topologies, from one ring buffer per CPU (e.g., as a
replacement for perf buffer use cases) to application-defined hashing and
sharding across a pool of ring buffers.
There are a bunch of similarities between perf buffer
(``BPF_MAP_TYPE_PERF_EVENT_ARRAY``) and ``BPF_MAP_TYPE_RINGBUF``:

- variable-length records;
- if there is no more space left in the ring buffer, reservation fails and
  there is no blocking;
- memory-mappable data area for user-space applications, for ease of
  consumption and high performance;
- epoll notifications for new incoming data;
- but still the ability to busy-poll for new data to achieve the lowest
  possible latency, if necessary.
BPF ringbuf provides two sets of APIs to BPF programs:

- ``bpf_ringbuf_output()`` allows one to *copy* data from one place to the
  ring buffer, similarly to ``bpf_perf_event_output()``;
- ``bpf_ringbuf_reserve()``/``bpf_ringbuf_commit()``/``bpf_ringbuf_discard()``
  APIs split the process into two steps. First, a fixed amount of space is
  reserved. If that is successful, a pointer to data inside the ring buffer
  data area is returned, which the BPF program can use similarly to data
  inside array/hash maps. Once the record is ready, it is either committed or
  discarded.
In a lot of cases records are larger than the BPF stack space allows, so many
programs have to use an extra per-CPU array as temporary storage to construct
a sample; the reserve API avoids this extra copy by handing out a pointer
directly into the ring buffer data area.
Discard is useful for some advanced use-cases, such as ensuring
all-or-nothing multi-record submission, or emulating temporary
``malloc()``/``free()`` within a single BPF program invocation.

Each reserved record is tracked by the verifier through its existing
reference-tracking logic, similar to socket ref-tracking. It is thus
impossible to reserve a record and forget to commit or discard it.
The ``bpf_ringbuf_query()`` helper allows querying various properties of the
ring buffer. Four flags are currently supported:

- ``BPF_RB_AVAIL_DATA`` returns the amount of unconsumed data in the ring
  buffer;
- ``BPF_RB_RING_SIZE`` returns the size of the ring buffer;
- ``BPF_RB_CONS_POS``/``BPF_RB_PROD_POS`` return the current logical position
  of the consumer and producer, respectively.

The returned values are momentary snapshots of the ring buffer state and
could be stale by the time the helper returns, so they should be used only
for debugging/reporting, or for heuristics that take into account the highly
changeable nature of some of those characteristics.
One such heuristic might involve more fine-grained control over poll/epoll
notifications about new data availability in the ring buffer. Together with
the ``BPF_RB_NO_WAKEUP``/``BPF_RB_FORCE_WAKEUP`` flags for the
output/commit/discard helpers, it gives the BPF program a high degree of
control and, e.g., allows more efficient batched notifications. The default
self-balancing strategy, though, should be adequate for most applications.
Design and Implementation
-------------------------
This reserve/commit schema gives multiple producers, whether on different
CPUs or even on the same CPU in the same BPF program, a natural way to reserve
independent records and work with them without blocking other producers. This
means that if one BPF program is interrupted by another one sharing the same
ring buffer, both will get their own record reserved (provided there is
enough space left) and can work with it and submit it independently.
The ring buffer itself is internally implemented as a power-of-2 sized
circular buffer, with two logical, ever-increasing counters (which might
wrap around on 32-bit architectures; that's not a problem):

- the consumer counter shows up to which logical position the consumer has
  consumed the data;
- the producer counter denotes the amount of data reserved by all producers.
Each time a record is reserved, the producer that "owns" the record
successfully advances the producer counter. At that point the data is still
not ready to be consumed: each record has an 8-byte header, which contains
the length of the reserved record, plus a busy bit marking a record that is
still being worked on and a discard bit telling the consumer to skip the
record. The header also encodes the record's relative offset from the
beginning of the ring buffer data area (in pages); this allows the
commit/discard helpers to accept only the pointer to the record itself.
Commit and discard are completely lockless and independent. All records
become available to the consumer in the order of reservation, but only after
all previous records have been committed.
One interesting implementation detail, which significantly simplifies (and
thus speeds up) both producers and consumers, is that the data area is mapped
twice, contiguously back-to-back, in virtual memory. This makes it
unnecessary to take special measures for samples that have to wrap around
at the end of the circular buffer data area, because the next page after the
last data page is the first data page again, so the sample still appears
completely contiguous in virtual memory.
To keep wakeup overhead low, BPF ringbuf implements self-pacing notifications
of new data availability: a commit sends a notification of the new record
only if the consumer has already caught up to the record being committed. If
the consumer is still behind, it has to catch up anyway and thus will see the
new data without needing an extra poll notification.
The ``BPF_RB_NO_WAKEUP``/``BPF_RB_FORCE_WAKEUP`` flags give producers full
control over notifications and allow achieving the lowest possible latency of
data availability, but require extra caution and diligence in using this API.