Lines Matching full:asid

10  *  -Major rewrite of the core ASID allocation routine get_new_mmu_context()
23 /* ARC ASID Management
25 * MMU tags TLB entries with an 8-bit ASID, avoiding the need to flush the TLB on
28 * ASIDs are managed per CPU, so the threads of a task running across CPUs can have
29 * different ASIDs. Global ASID management is needed only if the hardware supports TLB shootdown
32 * Each task is assigned a unique ASID, with a simple round-robin allocator
36 * A new allocation cycle, post rollover, could potentially reassign an ASID
37 * to a different task. Thus the rule is to refresh the ASID in a new cycle.
38 * The 32-bit @asid_cpu (and mm->asid) holds 8 bits of MMU PID and the remaining 24 bits
49 #define asid_mm(mm, cpu)	((mm)->context.asid[cpu])
56 * Get a new ASID if the task doesn't have a valid one (unallocated, or from a prev cycle).
57 * Also set the MMU PID register to the existing/updated ASID
67 * Move to a new ASID if it was not from the current alloc-cycle/generation.
68 * This is done by ensuring that the generation bits in both mm->ASID
69 * and the cpu's ASID counter are exactly the same.
71 * Note: Callers needing a new ASID unconditionally, independent of
79	/* move to new ASID and handle rollover */
85	 * The above check is for rollover of the 8-bit ASID in the 32-bit container.
93	/* Assign new ASID to tsk */
129 /* Prepare the MMU for the task: set up the PID register with the allocated ASID.
130    If the task doesn't have an ASID (never allocated, or stolen), get a new ASID.
157 * time of execve() to get a new ASID. Note the subtlety here:
159 * it always returns a new ASID, because mm has an unallocated "initial"
160 * value, while in the latter, it moves to a new ASID only if it was
168 * there is a good chance that the task gets sched-out/in, making its ASID valid