Lines matching "throughput"

28 * to distribute the device throughput among processes as desired,
29 * without any distortion due to throughput fluctuations, or to device
34 * guarantees that each queue receives a fraction of the throughput
37 * processes issuing sequential requests (to boost the throughput),
76 * preserving both a low latency and a high throughput on NCQ-capable,
81 * the maximum-possible throughput at all times, then do switch off
190 * writes to steal I/O throughput from reads.
240 * because it is characterized by limited throughput and apparently
320 * a) unjustly steal throughput from applications that may actually need
323 * in loss of device throughput with most flash-based storage, and may
349 * throughput-friendly I/O operations. This is even more true if BFQ
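
The advice around line 81 concerns BFQ's low_latency tunable: with it switched off, BFQ drops its low-latency heuristics and aims only at the maximum possible throughput. A minimal sketch of flipping that knob from userspace follows; the device name sda is an assumption, the program must run as root, and the attribute exists only while the device's active scheduler is bfq.

/*
 * Hypothetical helper: write 0 to BFQ's low_latency attribute for one
 * device. The path and device name are assumptions, not values taken
 * from the matched source lines.
 */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	const char *path = "/sys/block/sda/queue/iosched/low_latency";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return EXIT_FAILURE;
	}
	fputs("0\n", f);	/* 0 = heuristics off, 1 = on (the default) */
	if (fclose(f)) {
		perror(path);
		return EXIT_FAILURE;
	}
	return EXIT_SUCCESS;
}
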
834 * must receive the same share of the throughput (symmetric scenario),
836 * throughput lower than or equal to the share that every other active
839 * throughput even if I/O dispatching is not plugged when bfqq remains
1301 * throughput: the quicker the requests of the activated queues are
1307 * weight-raising these new queues just lowers throughput in most
1331 * idling depending on which choice boosts the throughput more. The
1585 * I/O, which may in turn cause loss of throughput. Finally, there may
1702 * budget. Do not care about throughput consequences, in bfq_update_bfqq_wr_on_rq_arrival()
1945 * guarantees or throughput. As for guarantees, we care in bfq_bfqq_handle_idle_busy_switch()
1975 * As for throughput, we ask bfq_better_to_idle() whether we in bfq_bfqq_handle_idle_busy_switch()
1978 * boost throughput or to preserve service guarantees. Then in bfq_bfqq_handle_idle_busy_switch()
1980 * would certainly lower throughput. We may end up in this in bfq_bfqq_handle_idle_busy_switch()
2032 * throughput, as explained in detail in the comments in in bfq_reset_inject_limit()
2100 * A remarkable throughput boost can be reached by unconditionally
2103 * plugged for bfqq. In addition to boosting throughput, this
2127 * The sooner a waker queue is detected, the sooner throughput can be
2159 * doesn't hurt throughput that much. The condition below makes sure in bfq_check_waker()
2745 * the best possible order for throughput. in bfq_find_close_cooperator()
2814 * are likely to increase the throughput. in bfq_setup_merge()
2968 * throughput, it must have many requests enqueued at the same in bfq_setup_cooperator()
2974 * the throughput reached by the device is likely to be the in bfq_setup_cooperator()
2978 * terms of throughput. Merging tends to make many workloads in bfq_setup_cooperator()
2987 * for BFQ to let the device reach a high throughput. in bfq_setup_cooperator()
3296 * budget. This prevents seeky processes from lowering the throughput.
3400 * its reserved share of the throughput (in particular, it is in bfq_arm_slice_timer()
3423 * this maximises throughput with sequential workloads.
3432 * Update parameters related to throughput and responsiveness, as a
3690 * throughput concerns, but to preserve the throughput share of
3702 * determine also the actual throughput distribution among
3704 * concern about per-process throughput distribution, and
3707 * scheduler is likely to coincide with the desired throughput
3710 * (i-a) each of these processes must get the same throughput as
3714 * throughput than any of the other processes;
3723 * same throughput. This is exactly the desired throughput
3730 * that bfqq receives its assigned fraction of the device throughput
3733 * The problem is that idling may significantly reduce throughput with
3737 * throughput, it is important to check conditions (i-a), (i-b) and
3753 * share of the throughput even after being dispatched. In this
3758 * guaranteed its fair share of the throughput (basically because
3786 * risk of getting less throughput than its fair share.
3790 * throughput. This mechanism and its benefits are explained
3827 * part) without minimally sacrificing throughput. And, if
3829 * this device is probably a high throughput.
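
Lines 3702-3723 describe the throughput distribution BFQ tries to enforce: it follows from the weights of the active queues, and on a bfq-managed device a queue's default weight is derived from the owning process's I/O priority. The sketch below is a hypothetical illustration rather than anything from the matched file: it sets the calling process's best-effort priority level via the ioprio_set syscall, with the uapi constants restated so the sketch is self-contained.

#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Kernel uapi constants, restated here for self-containment. */
#define IOPRIO_CLASS_SHIFT	13
#define IOPRIO_CLASS_BE		2
#define IOPRIO_WHO_PROCESS	1
#define IOPRIO_VALUE(class, data)	(((class) << IOPRIO_CLASS_SHIFT) | (data))

int main(void)
{
	/* who == 0 targets the calling process; level 4 is the typical default. */
	if (syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0,
		    IOPRIO_VALUE(IOPRIO_CLASS_BE, 4)) < 0) {
		perror("ioprio_set");
		return 1;
	}
	return 0;
}

A lower best-effort level (the range is 0-7) yields a higher BFQ weight, and hence a larger share of the throughput under conditions (i-a)/(i-b) above.
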
4003 * for throughput. in __bfq_bfqq_recalc_budget()
4027 * the throughput, as discussed in the in __bfq_bfqq_recalc_budget()
4042 * the chance to boost the throughput if this in __bfq_bfqq_recalc_budget()
4056 * candidate to boost the disk throughput. in __bfq_bfqq_recalc_budget()
4138 * their chances of lowering the throughput. More details in the comments
4249 * throughput with the I/O of the application (e.g., because the I/O
4340 * tends to lower the throughput). In addition, this time-charging
4486 * only to be kicked off for preserving a high throughput.
4519 * boosts the throughput. in idling_boosts_thr_without_issues()
4522 * idling is virtually always beneficial for the throughput if: in idling_boosts_thr_without_issues()
4532 * throughput even with sequential I/O; rather it would lower in idling_boosts_thr_without_issues()
4533 * the throughput in proportion to how fast the device in idling_boosts_thr_without_issues()
4556 * of the device throughput proportional to their high in idling_boosts_thr_without_issues()
4584 * device idling plays a critical role for both throughput boosting
4589 * beneficial for throughput or, even if detrimental for throughput,
4591 * latency, desired throughput distribution, ...). In particular, on
4594 * device boost the throughput without causing any service-guarantee
4635 * either boosts the throughput (without issues), or is in bfq_better_to_idle()
4649 * why performing device idling is the best choice to boost the throughput
4737 * drive reach a very high throughput, even if in bfq_choose_bfqq_for_injection()
4886 * provide a reasonable throughput. in bfq_select_queue()
4902 * throughput and is possible. in bfq_select_queue()
4943 * throughput. The best action to take is therefore to in bfq_select_queue()
4963 * bfqq delivers more throughput when served without in bfq_select_queue()
4966 * count more than overall throughput, and may be in bfq_select_queue()
4987 * reasons. First, throughput may be low because the in bfq_select_queue()
5239 * throughput. in __bfq_dispatch_request()
5721 * Many throughput-sensitive workloads are made of several parallel
5730 * throughput, and not detrimental for service guarantees. The
5737 * throughput of the flows and task-wide I/O latency. In particular,
5758 * with ten random readers on /dev/nullb shows a throughput boost of
5760 * the total per-request processing time, the above throughput boost
5787 * underutilized, and throughput may decrease. in bfq_do_or_sched_stable_merge()
5791 * throughput-beneficial if not merged. Currently this is in bfq_do_or_sched_stable_merge()
5793 * such a drive, not merging bfqq is better for throughput if in bfq_do_or_sched_stable_merge()
5815 * throughput benefits compared with in bfq_do_or_sched_stable_merge()
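
The numbers around lines 5758-5760 come from a test with ten random readers hitting a null_blk device. A rough userspace approximation of that workload is sketched below; the device node /dev/nullb0 (which requires the null_blk module), the 4 KiB direct reads, the 1 GiB span, and the per-thread read count are all assumptions, not parameters recorded in the matched lines.

/* Ten threads issuing 4 KiB O_DIRECT random reads, approximating the
 * "ten random readers on /dev/nullb" scenario. Build with -pthread.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

#define READERS	10
#define BLKSZ	4096			/* read size; O_DIRECT-aligned */
#define SPAN	(1ULL << 30)		/* confine reads to the first 1 GiB */
#define NREADS	100000			/* reads per thread (assumption) */

static void *reader(void *arg)
{
	unsigned int seed = (unsigned int)(unsigned long)arg;
	void *buf = NULL;
	int fd = open("/dev/nullb0", O_RDONLY | O_DIRECT);

	if (fd < 0)
		return NULL;
	if (posix_memalign(&buf, BLKSZ, BLKSZ)) {
		close(fd);
		return NULL;
	}
	for (int i = 0; i < NREADS; i++) {
		off_t off = (off_t)(rand_r(&seed) % (SPAN / BLKSZ)) * BLKSZ;

		if (pread(fd, buf, BLKSZ, off) < 0)
			break;
	}
	free(buf);
	close(fd);
	return NULL;
}

int main(void)
{
	pthread_t t[READERS];

	for (long i = 0; i < READERS; i++)
		pthread_create(&t[i], NULL, reader, (void *)i);
	for (int i = 0; i < READERS; i++)
		pthread_join(t[i], NULL);
	return 0;
}
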
6025 * and in a severe loss of total throughput. in bfq_update_has_short_ttime()
6051 * performed at all times, and throughput gets boosted. in bfq_update_has_short_ttime()
6070 * to boost throughput more effectively, by injecting the I/O in bfq_update_has_short_ttime()
6107 * - we are idling to boost throughput, and in bfq_rq_enqueued()
6424 * control troubles than throughput benefits. Then reset in bfq_completed_request()
6503 * and the throughput is not affected. In contrast, if BFQ is not
6514 * To counter this loss of throughput, BFQ implements a "request
6518 * both boost throughput and not break bfqq's bandwidth and latency
6561 * set to 1, to start boosting throughput, and to prepare the