
CFS Bandwidth Control
=====================

.. note::
   This document only discusses CPU bandwidth control for SCHED_NORMAL.
   The SCHED_RT case is covered in Documentation/scheduler/sched-rt-group.rst

CFS bandwidth control is a CONFIG_FAIR_GROUP_SCHED extension which allows the
specification of the maximum CPU bandwidth available to a group or hierarchy.

The bandwidth allowed for a group is specified using a quota and period. Within
each given "period" (microseconds), a task group is allocated up to "quota"
microseconds of CPU time. That quota is assigned to per-cpu run queues in
slices as threads in the cgroup become runnable. Once all quota has been
assigned, any additional requests for quota will result in those threads being
throttled. Throttled threads will not be able to run again until the next
period when the quota is replenished.

A group's unassigned quota is globally tracked, being refreshed back to
cfs_quota units at each period boundary. As threads consume this bandwidth it
is transferred to cpu-local "silos" on a demand basis. The amount transferred
within each of these updates is tunable and is described as the "slice".

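As an illustration of the arithmetic, a group's steady-state CPU capacity
is simply quota/period. A minimal sketch, assuming a cgroup v1 hierarchy
with the cpu controller mounted at /sys/fs/cgroup/cpu and a hypothetical
group "mygroup"::

   # Capacity in CPUs is quota/period (both in microseconds).
   # A quota of -1 means the group is unconstrained.
   cd /sys/fs/cgroup/cpu/mygroup
   quota=$(cat cpu.cfs_quota_us)
   period=$(cat cpu.cfs_period_us)
   echo "scale=2; $quota / $period" | bc   # CPUs worth of runtime/period
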
Burst feature
-------------
This feature borrows time now against our future underrun, at the cost of
increased interference against the other system users. All nicely bounded.

Traditional (UP-EDF) bandwidth control is something like:

  (U = \Sum u_i) <= 1

This guarantees both that every deadline is met and that the system is
stable: if U were > 1, then for every second of walltime we would have to
run more than a second of program time, and would obviously miss our
deadline, with the next deadline further out still; there would never be
time to catch up.

The burst feature observes that a workload does not always execute the full
quota; this enables one to describe u_i as a statistical distribution. For
example, let u_i = {x,e}_i, where x is the p(95) and x+e the p(100) (the
traditional WCET). This effectively allows u to be smaller, increasing
efficiency (we can pack more tasks in the system), but at the cost of
missing deadlines when all the odds line up. It does, however, maintain
stability, since every overrun must be paired with an underrun as long as
x is above the average.

That is, suppose we have two tasks, both specifying a p(95) value. Then we
have a p(95)*p(95) = 90.25% chance that both tasks stay within their quota,
and a p(5)*p(5) = 0.25% chance that both tasks exceed their quota at the
same time (a guaranteed deadline failure). Somewhere in between there is a
threshold where one exceeds its quota and the other does not underrun
enough to compensate; this depends on the specific CDFs. At the same time,
the worst-case deadline miss is bounded by \Sum e_i (under the assumption
that x+e is indeed the WCET).

The interference when using burst is valued by the possibilities for
missing the deadline and the average WCET. Test results showed that when
there are many cgroups or the CPU is under-utilized, the interference is
limited. More details are shown in:
https://lore.kernel.org/lkml/5371BD36-55AE-4F71-B9D7-B86DC32E3D2B@linux.alibaba.com/

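In practice the burst is configured through cpu.cfs_burst_us, described
under Management below. A minimal sketch, again assuming cgroup v1 mounted
at /sys/fs/cgroup/cpu and a hypothetical group "mygroup"::

   # Let the group carry up to 20ms of unused quota forward and spend
   # it as a burst in a later period, on top of 50ms every 100ms.
   echo 100000 > /sys/fs/cgroup/cpu/mygroup/cpu.cfs_period_us  # 100ms
   echo 50000  > /sys/fs/cgroup/cpu/mygroup/cpu.cfs_quota_us   # 50ms
   echo 20000  > /sys/fs/cgroup/cpu/mygroup/cpu.cfs_burst_us   # 20ms
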
Management
----------
Quota, period and burst are managed within the cpu subsystem via cgroupfs.

.. note::
   The cgroupfs files described in this section are only applicable
   to cgroup v1. For cgroup v2, see
   :ref:`Documentation/admin-guide/cgroup-v2.rst <cgroup-v2-cpu>`.

- cpu.cfs_quota_us: run-time replenished within a period (in microseconds)
- cpu.cfs_period_us: the length of a period (in microseconds)
- cpu.stat: exports throttling statistics [explained further below]
- cpu.cfs_burst_us: the maximum accumulated run-time (in microseconds)

The default values are::

   cpu.cfs_period_us=100ms
   cpu.cfs_quota_us=-1
   cpu.cfs_burst_us=0

A value of -1 for cpu.cfs_quota_us indicates that the group does not have
any bandwidth restriction in place; such a group is described as an
unconstrained bandwidth group. This represents the traditional
work-conserving behavior for CFS.

Writing any (valid) positive value(s) no smaller than cpu.cfs_burst_us will
enact the specified bandwidth limit. The minimum allowed value for either
quota or period is 1ms. There is also an upper bound on the period length
of 1s.

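Putting these files together, the following sketch creates a group
constrained to half a CPU and attaches the current shell to it (the mount
point and group name are illustrative assumptions)::

   mkdir /sys/fs/cgroup/cpu/half_cpu
   echo 100000 > /sys/fs/cgroup/cpu/half_cpu/cpu.cfs_period_us  # 100ms
   echo 50000  > /sys/fs/cgroup/cpu/half_cpu/cpu.cfs_quota_us   # 50ms
   echo $$ > /sys/fs/cgroup/cpu/half_cpu/tasks   # move this shell in
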
System wide settings
--------------------
For efficiency, run-time is transferred between the global pool and CPU
local "silos" in a batch fashion. This greatly reduces global accounting
pressure on large systems. The amount transferred each time such an update
is required is described as the "slice".

This is tunable via procfs::

   /proc/sys/kernel/sched_cfs_bandwidth_slice_us (default=5ms)

Larger slice values will reduce transfer overheads, while smaller values
allow for more fine-grained consumption.

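For example, a large machine suffering accounting contention might raise
the slice, while a deployment wanting finer-grained quota consumption
might lower it (the values here are illustrative, not recommendations)::

   cat /proc/sys/kernel/sched_cfs_bandwidth_slice_us            # 5000 = 5ms
   echo 10000 > /proc/sys/kernel/sched_cfs_bandwidth_slice_us   # 10ms
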
Statistics
----------
A group's bandwidth statistics are exported via five fields in cpu.stat:

- nr_periods: Number of enforcement intervals that have elapsed.
- nr_throttled: Number of times the group has been throttled/limited.
- throttled_time: The total time duration (in nanoseconds) for which entities
  of the group have been throttled.
- nr_bursts: Number of periods in which a burst occurred.
- burst_time: Cumulative wall-time (in nanoseconds) that any CPU has run
  above quota in the respective periods.

This interface is read-only.

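A quick way to gauge how hard a group is hitting its limit is the ratio of
throttled periods to elapsed periods. A sketch (group path hypothetical;
assumes nr_periods is non-zero)::

   cd /sys/fs/cgroup/cpu/mygroup
   periods=$(awk '/^nr_periods/ {print $2}' cpu.stat)
   throttled=$(awk '/^nr_throttled/ {print $2}' cpu.stat)
   echo "scale=2; 100 * $throttled / $periods" | bc   # % periods throttled
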
Hierarchical considerations
---------------------------
The interface enforces that an individual entity's bandwidth is always
attainable, that is: max(c_i) <= C. However, over-subscription in the
aggregate case is explicitly allowed to enable work-conserving semantics
within a hierarchy:

  e.g. \Sum (c_i) may exceed C

[ Where C is the parent's bandwidth, and c_i its children ]

There are two ways in which a group may become throttled:

a. it fully consumes its own quota within a period
b. a parent's quota is fully consumed within its period

In case b) above, even though the child may have runtime remaining it will
not be allowed to run until the parent's runtime is refreshed.

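A sketch of such an over-subscribed hierarchy (group names are
hypothetical; the default 100ms period is assumed throughout)::

   mkdir -p /sys/fs/cgroup/cpu/parent/a /sys/fs/cgroup/cpu/parent/b
   echo 100000 > /sys/fs/cgroup/cpu/parent/cpu.cfs_quota_us    # C   = 100ms
   echo 80000  > /sys/fs/cgroup/cpu/parent/a/cpu.cfs_quota_us  # c_a =  80ms
   echo 80000  > /sys/fs/cgroup/cpu/parent/b/cpu.cfs_quota_us  # c_b =  80ms
   # c_a + c_b > C is accepted (max(c_i) <= C holds), but once the
   # parent's 100ms is consumed, both children are throttled (case b).
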
CFS Bandwidth Quota Caveats
---------------------------
Once a slice is assigned to a cpu it does not expire. However, all but 1ms
of the slice may be returned to the global pool if all threads on that cpu
become unrunnable. This is configured at compile time by the
min_cfs_rq_runtime variable. This is a performance tweak that helps prevent
added contention on the global lock.

The fact that cpu-local slices do not expire results in some interesting
corner cases that should be understood.

For cpu-bound cgroup applications this is a relatively moot point because
they will naturally consume the entirety of their quota as well as the
entirety of each cpu-local slice in each period. As a result it is expected
that nr_periods roughly equals nr_throttled, and that cpuacct.usage will
increase by roughly cfs_quota_us in each period.

For highly-threaded, non-cpu bound applications this non-expiration nuance
allows applications to briefly burst past their quota limits by the amount
of unused slice on each cpu that the task group is running on (typically at
most 1ms per cpu, or as defined by min_cfs_rq_runtime). This slight burst
only applies if quota had been assigned to a cpu and then not fully used or
returned in previous periods. This burst amount will not be transferred
between cores. As a result, this mechanism still strictly limits the task
group to its quota of average usage, albeit over a longer time window than
a single period, and limits the burst ability to no more than 1ms per cpu.
This provides a better, more predictable user experience for highly
threaded applications with small quota limits on high core count machines.
It also eliminates the propensity to throttle these applications while they
are simultaneously using less than their quota's worth of cpu. Put another
way: by allowing the unused portion of a slice to remain valid across
periods we have decreased the possibility of wastefully expiring quota on
cpu-local silos that don't need a full slice's amount of cpu time.

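As a back-of-the-envelope bound, this one-off burst is at most
min_cfs_rq_runtime (1ms by default) per CPU the group runs on; for
example::

   ncpus=$(nproc)
   echo "$((ncpus * 1000))us"   # on 64 CPUs: 64000us = 64ms, worst case
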
The interaction between cpu-bound and non-cpu-bound, interactive
applications should also be considered, especially when single core usage
hits 100%. If you gave each of these applications half of a cpu-core and
they both got scheduled on the same CPU, it is theoretically possible that
the non-cpu-bound application will use up to 1ms of additional quota in
some periods, thereby preventing the cpu-bound application from fully using
its quota by that same amount. In these instances it will be up to the CFS
algorithm (see sched-design-CFS.rst) to decide which application is chosen
to run, as they will both be runnable and have remaining quota.

Examples
--------
1. Limit a group to 1 CPU worth of runtime::

   If period is 250ms and quota is also 250ms, the group will get
   1 CPU worth of runtime every 250ms.

   # echo 250000 > cpu.cfs_quota_us /* quota = 250ms */
   # echo 250000 > cpu.cfs_period_us /* period = 250ms */

2. Limit a group to 2 CPUs worth of runtime on a multi-CPU machine

   With 500ms period and 1000ms quota, the group can get 2 CPUs worth of
   runtime every 500ms::

      # echo 1000000 > cpu.cfs_quota_us /* quota = 1000ms */
      # echo 500000 > cpu.cfs_period_us /* period = 500ms */

   The larger period here allows for increased burst capacity.

3. Limit a group to 20% of 1 CPU.

   With 50ms period, 10ms quota will be equivalent to 20% of 1 CPU::

      # echo 10000 > cpu.cfs_quota_us /* quota = 10ms */
      # echo 50000 > cpu.cfs_period_us /* period = 50ms */

   By using a small period here we are ensuring a consistent latency
   response at the expense of burst capacity.

4. Limit a group to 40% of 1 CPU, and allow it to accumulate up to an
   additional 20% of 1 CPU in unused quota.

   With 50ms period, 20ms quota will be equivalent to 40% of 1 CPU,
   and 10ms burst will be equivalent to 20% of 1 CPU::

      # echo 20000 > cpu.cfs_quota_us /* quota = 20ms */
      # echo 50000 > cpu.cfs_period_us /* period = 50ms */
      # echo 10000 > cpu.cfs_burst_us /* burst = 10ms */

   A larger burst setting (no larger than the quota) allows for greater
   burst capacity.