
Searched +full:always +full:- +full:running (Results 1 – 25 of 1024) sorted by relevance


/linux-6.14.4/drivers/thermal/cpuidle_cooling.c
1 // SPDX-License-Identifier: GPL-2.0
21 * struct cpuidle_cooling_device - data for the idle cooling device
31 * cpuidle_cooling_runtime - Running time computation
35 * The running duration is computed from the idle injection duration
37 * means the running duration is zero. If we have a 50% ratio
39 * running duration.
43 * running = idle x ((100 / ratio) - 1)
47 * running = (idle x 100) / ratio - idle
50 * with 10ms of idle injection and 10ms of running duration.
60 return ((idle_duration_us * 100) / state) - idle_duration_us; in cpuidle_cooling_runtime()
[all …]
/linux-6.14.4/arch/x86/xen/Kconfig
1 # SPDX-License-Identifier: GPL-2.0
29 Support running as a Xen PV guest.
32 bool "Limit Xen pv-domain memory to 512GB"
39 pv-domains with more than 512 GB of RAM. This option controls the
41 It is always possible to change the default via specifying the
65 Support running as a Xen PVHVM guest.
85 Support for running as a Xen PVH guest.
94 Support running as a Xen Dom0 guest.
97 bool "Always use safe MSR accesses in PV guests"
/linux-6.14.4/include/uapi/linux/membarrier.h
31 * enum membarrier_cmd - membarrier system call command
34 * @MEMBARRIER_CMD_GLOBAL: Execute a memory barrier on all running threads.
36 * is ensured that all running threads have passed
38 * user-space addresses match program order between
40 * (non-running threads are de facto in such a
42 * running on the system. This command returns 0.
44 * Execute a memory barrier on all running threads
48 * is ensured that all running threads have passed
50 * user-space addresses match program order between
52 * (non-running threads are de facto in such a
[all …]
/linux-6.14.4/Documentation/userspace-api/check_exec.rst
1 .. SPDX-License-Identifier: GPL-2.0
12 `samples/check-exec/inc.c`_ example.
15 security risk of running malicious scripts with respect to the execution
17 not. For instance, Python scripts running on a server can use arbitrary
20 However, a JavaScript engine running in a web browser should already be
30 ``SECBIT_EXEC_RESTRICT_FILE`` or ``SECBIT_EXEC_DENY_INTERACTIVE`` were always
31 set to 1 (i.e. always enforce restrictions).
41 Programs should always perform this check to apply kernel-level checks against
60 To avoid race conditions leading to time-of-check to time-of-use issues,
76 securebits but without relying on any other user-controlled configuration.
[all …]
/linux-6.14.4/Documentation/arch/arm64/cpu-hotplug.rst
1 .. SPDX-License-Identifier: GPL-2.0
15 CPU Hotplug on physical systems - CPUs not present at boot
16 ----------------------------------------------------------
20 in one of the sockets can be replaced while the system is running.
26 while the system is running, and ACPI is not able to sufficiently describe
37 'always on'.
42 CPU Hotplug on virtual systems - CPUs not enabled at boot
43 ---------------------------------------------------------
46 ever have can be described at boot. There are no power-domain considerations
66 ``enabled``. The 'always on' GICR structure must be used to describe the
[all …]
/linux-6.14.4/Documentation/admin-guide/hw-vuln/core-scheduling.rst
1 .. SPDX-License-Identifier: GPL-2.0
9 workloads may benefit from running on the same core as they don't need the same
15 ----------------
16 A cross-HT attack involves the attacker and victim running on different Hyper
18 full mitigation of cross-HT attacks is to disable Hyper Threading (HT). Core
19 scheduling is a scheduler feature that can mitigate some (not all) cross-HT
21 user-designated trusted group can share a core. This increase in core sharing
23 will always improve, though that is seen to be the case with a number of real
26 not always: as synchronizing scheduling decisions across 2 or more CPUs in a
27 core involves additional overhead - especially when the system is lightly
[all …]
/linux-6.14.4/rust/kernel/task.rs
1 // SPDX-License-Identifier: GPL-2.0
30 /// Returns the currently running task.
34 // SAFETY: Deref + addr-of below create a temporary `TaskRef` that cannot outlive the
40 /// Returns the currently running task's pid namespace.
44 // SAFETY: Deref + addr-of below create a temporary `PidNamespaceRef` that cannot outlive
56 /// Instances of this type are always refcounted, that is, a call to `get_task_struct` ensures
86 /// fn new() -> Self {
122 pub fn current_raw() -> *mut bindings::task_struct { in current_raw()
123 // SAFETY: Getting the current pointer is always safe. in current_raw()
135 pub unsafe fn current() -> impl Deref<Target = Task> { in current()
[all …]
/linux-6.14.4/Documentation/scheduler/sched-nice-design.rst
6 nice-levels implementation in the new Linux scheduler.
8 Nice levels were always pretty weak under Linux and people continuously
34 -*----------------------------------*-----> [nice level]
35 -20 | +19
49 people were running number crunching apps at nice +19.)
52 right minimal granularity - and this translates to 5% CPU utilization.
53 But the fundamental HZ-sensitive property for nice+19 still remained,
56 too _strong_ :-)
58 To sum it up: we always wanted to make nice levels more consistent, but
79 depend on the nice level of the parent shell - if it was at nice -10 the
[all …]
sched-util-clamp.rst
1 .. SPDX-License-Identifier: GPL-2.0
57 foreground, top-app, etc. Util clamp can be used to constrain how much
60 the ones belonging to the currently active app (top-app group). Beside this
65 1. The big cores are free to run top-app tasks immediately. top-app
90 UCLAMP_MIN=1024 will ensure such tasks will always see the highest performance
91 level when they start running.
106 Note that by design RT tasks don't have per-task PELT signal and must always
109 Note that using schedutil always implies a single delay to modify the frequency
111 helps picking what frequency to request instead of schedutil always requesting
114 See :ref:`section 3.4 <uclamp-default-values>` for default values and
[all …]
sched-design-CFS.rst
16 Documentation/scheduler/sched-eevdf.rst.
19 an "ideal, precise multi-tasking CPU" on real hardware.
21 "Ideal multi-tasking CPU" is a (non-existent :-)) CPU that has 100% physical
23 1/nr_running speed. For example: if there are 2 tasks running, then it runs
24 each at 50% physical power --- i.e., actually in parallel.
29 multi-tasking CPU described above. In practice, the virtual runtime of a task
30 is its actual runtime normalized to the total number of running tasks.
37 In CFS the virtual runtime is expressed and tracked via the per-task
38 p->se.vruntime (nanosec-unit) value. This way, it's possible to accurately
42 p->se.vruntime value --- i.e., tasks would execute simultaneously and no task
[all …]
/linux-6.14.4/Documentation/virt/kvm/x86/running-nested-guests.rst
1 .. SPDX-License-Identifier: GPL-2.0
4 Running nested guests with KVM
8 can be KVM-based or a different hypervisor). The straightforward
12 .----------------. .----------------.
17 |----------------'--'----------------|
22 .------------------------------------------------------.
25 |------------------------------------------------------|
27 '------------------------------------------------------'
31 - L0 – level-0; the bare metal host, running KVM
33 - L1 – level-1 guest; a VM running on L0; also called the "guest
[all …]
/linux-6.14.4/Documentation/leds/leds-lp55xx.rst
8 -----------
14 Device attributes for user-space interface
15 Program memory for running LED patterns
50 - Maximum number of channels
51 - Reset command, chip enable command
52 - Chip specific initialization
53 - Brightness control register access
54 - Setting LED output current
55 - Program memory address access for running patterns
56 - Additional device specific attributes
[all …]
/linux-6.14.4/arch/arm64/kvm/hyp/vhe/sysreg-sr.c
1 // SPDX-License-Identifier: GPL-2.0-only
3 * Copyright (C) 2012-2015 - ARM Ltd
7 #include <hyp/sysreg-sr.h>
38 * _EL2 copy in sys_regs[] is always up-to-date and we don't need in __sysreg_save_vel2_state()
46 * are always trapped, ensuring that the in-memory in __sysreg_save_vel2_state()
47 * copy is always up-to-date. A small blessing... in __sysreg_save_vel2_state()
54 if (ctxt_has_tcrx(&vcpu->arch.ctxt)) { in __sysreg_save_vel2_state()
57 if (ctxt_has_s1pie(&vcpu->arch.ctxt)) { in __sysreg_save_vel2_state()
62 if (ctxt_has_s1poe(&vcpu->arch.ctxt)) in __sysreg_save_vel2_state()
69 * bits when reading back the guest-visible value. in __sysreg_save_vel2_state()
[all …]
/linux-6.14.4/Documentation/virt/hyperv/vpci.rst
1 .. SPDX-License-Identifier: GPL-2.0
3 PCI pass-thru devices
5 In a Hyper-V guest VM, PCI pass-thru devices (also called
13 would when running on bare metal, so no changes are required
16 Hyper-V terminology for vPCI devices is "Discrete Device
17 Assignment" (DDA). Public documentation for Hyper-V DDA is
20 …tps://learn.microsoft.com/en-us/windows-server/virtualization/hyper-v/plan/plan-for-deploying-devi…
23 and for GPUs. A similar mechanism for NICs is called SR-IOV
25 driver to interact directly with the hardware. See Hyper-V
26 public documentation here: `SR-IOV`_
[all …]
/linux-6.14.4/sound/usb/card.h
1 /* SPDX-License-Identifier: GPL-2.0 */
9 #define SYNC_URBS 4 /* always four urbs for sync */
16 unsigned int fmt_type; /* USB audio format type (1-3) */
18 unsigned int frame_size; /* samples per frame for non-audio */
68 int opened; /* open refcount; protect with chip->mutex */
69 atomic_t running; /* running status */ member
77 atomic_t state; /* running state */
122 unsigned int fill_max:1; /* fill max packet size always */
131 bool lowlatency_playback; /* low-latency playback mode */
132 bool need_setup; /* (re-)need for hw_params? */
[all …]
/linux-6.14.4/scripts/basic/Makefile
1 # SPDX-License-Identifier: GPL-2.0-only
5 hostprogs-always-y += fixdep
7 # randstruct: the seed is needed before building the gcc-plugin or
8 # before running a Clang kernel build.
9 gen-randstruct-seed := $(srctree)/scripts/gen-randstruct-seed.sh
12 $(CONFIG_SHELL) $(gen-randstruct-seed) \
14 $(obj)/randstruct.seed: $(gen-randstruct-seed) FORCE
16 always-$(CONFIG_RANDSTRUCT) += randstruct.seed
/linux-6.14.4/arch/powerpc/kvm/book3s_hv_hmi.c
1 // SPDX-License-Identifier: GPL-2.0-or-later
23 * been loaded yet and hence no guests are running, or running in wait_for_subcore_guest_exit()
26 * If no KVM is in use, no need to co-ordinate among threads in wait_for_subcore_guest_exit()
27 * as all of them will always be in host and no one is going in wait_for_subcore_guest_exit()
34 if (!local_paca->sibling_subcore_state) in wait_for_subcore_guest_exit()
38 while (local_paca->sibling_subcore_state->in_guest[i]) in wait_for_subcore_guest_exit()
44 if (!local_paca->sibling_subcore_state) in wait_for_tb_resync()
48 &local_paca->sibling_subcore_state->flags)) in wait_for_tb_resync()
/linux-6.14.4/Documentation/networking/xfrm_sync.rst
1 .. SPDX-License-Identifier: GPL-2.0
21 This way a backup stays as closely up-to-date as an active member.
25 For this reason, we also add a nagle-like algorithm to restrict
28 These thresholds are set system-wide via sysctls or can be updated
32 - the lifetime byte counter
36 - the replay sequence for both inbound and outbound
39 ----------------------
41 nlmsghdr:aevent_id:optional-TLVs.
76 message (kernel<->user) as well the cause (config, query or event).
87 -----------------------------------------
[all …]
/linux-6.14.4/kernel/sched/pelt.h
2 #include "sched-pelt.h"
7 int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
8 int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
16 return READ_ONCE(rq->avg_hw.load_avg); in hw_load_avg()
32 int update_irq_load_avg(struct rq *rq, u64 running);
35 update_irq_load_avg(struct rq *rq, u64 running) in update_irq_load_avg() argument
41 #define PELT_MIN_DIVIDER (LOAD_AVG_MAX - 1024)
45 return PELT_MIN_DIVIDER + avg->period_contrib; in get_pelt_divider()
56 enqueued = avg->util_est; in cfs_se_util_change()
62 WRITE_ONCE(avg->util_est, enqueued); in cfs_se_util_change()
[all …]
/linux-6.14.4/Documentation/power/swsusp.rst
47 - If you feel ACPI works pretty well on your system, you might try::
51 - If you would like to write hibernation image to swap and then suspend
56 - If you have SATA disks, you'll need recent kernels with SATA suspend
58 are built into kernel -- not modules. [There's way to make
68 - The resume process checks for the presence of the resume device,
72 - The resume process may be triggered in two ways:
81 read-only) otherwise data may be corrupted.
87 Last revised: 2003-10-20 by Pavel Machek
90 -------------------------
97 are real high when running from batteries. The other gain is that we don't have
[all …]
/linux-6.14.4/Documentation/core-api/cachetlb.rst
25 virtual-->physical address translations obtained from the software
44 the TLB. After running, this interface must make sure that
46 'mm' will be visible to the cpu. That is, after running,
57 address translations from the TLB. After running, this
59 modifications for the address space 'vma->vm_mm' in the range
60 'start' to 'end-1' will be visible to the cpu. That is, after
61 running, there will be no entries in the TLB for 'mm' for
62 virtual addresses in the range 'start' to 'end-1'.
78 address space is available via vma->vm_mm. Also, one may
79 test (vma->vm_flags & VM_EXEC) to see if this region is
[all …]
/linux-6.14.4/sound/usb/line6/pcm.h
1 /* SPDX-License-Identifier: GPL-2.0-only */
5 * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at)
21 The Line 6 Windows driver always transmits two frames per packet, but
38 (line6pcm->pcm->streams[stream].substream)
53 We define two bit flags, "opened" and "running", for each playback
60 the running flag indicates whether the stream is running.
130 /* Bit flags for running stream types */
131 unsigned long running; member
/linux-6.14.4/Documentation/dev-tools/kunit/faq.rst
1 .. SPDX-License-Identifier: GPL-2.0
25 Does KUnit support running on architectures other than UML?
35 (see :ref:`kunit-on-qemu`).
40 For more information, see :ref:`kunit-on-non-uml`.
42 .. _kinds-of-tests:
47 test, or an end-to-end test.
49 - A unit test is supposed to test a single unit of code in isolation. A unit
54 - An integration test tests the interaction between a minimal set of components,
61 - An end-to-end test usually tests the entire system from the perspective of the
62 code under test. For example, someone might write an end-to-end test for the
[all …]
/linux-6.14.4/arch/parisc/include/asm/alternative.h
1 /* SPDX-License-Identifier: GPL-2.0 */
5 #define ALT_COND_ALWAYS 0x80 /* always replace instruction */
6 #define ALT_COND_NO_SMP 0x01 /* when running UP instead of SMP */
7 #define ALT_COND_NO_DCACHE 0x02 /* if system has no d-cache */
8 #define ALT_COND_NO_ICACHE 0x04 /* if system has no i-cache */
11 #define ALT_COND_RUN_ON_QEMU 0x20 /* if running on QEMU */
39 ".word (0b-4-.) !" \
50 .word (from - .) ! \
51 .hword (to - from)/4, cond ! \
59 .word (from - .) ! \
[all …]
/linux-6.14.4/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
1 /* SPDX-License-Identifier: MIT */
14 * +---+-------+--------------------------------------------------------------+
17 * | 0 | 31:16 | **KEY** - KLV key identifier |
18 * | | | - `GuC Self Config KLVs`_ |
19 * | | | - `GuC VGT Policy KLVs`_ |
20 * | | | - `GuC VF Configuration KLVs`_ |
22 * | +-------+--------------------------------------------------------------+
23 * | | 15:0 | **LEN** - length of VALUE (in 32bit dwords) |
24 * +---+-------+--------------------------------------------------------------+
25 * | 1 | 31:0 | **VALUE** - actual value of the KLV (format depends on KEY) |
[all …]
