Lines Matching +full:can +full:- +full:secondary

1 /* SPDX-License-Identifier: GPL-2.0 */
18 * enum mmu_notifier_event - reason for the mmu notifier callback
68 * freed. This can run concurrently with other mmu notifier
70 * should tear down all secondary mmu mappings and freeze the
71 * secondary mmu. If this method isn't implemented you have to
73 * through the secondary mmu by the time the last thread with
74 * tsk->mm == mm exits.
76 * As a side note: the pages freed after ->release returns could
78 * address with a different cache model, so if ->release isn't
80 * through the secondary mmu are terminated by the time the
82 * speculative _hardware_ operations can't allocate dirty
93 * test-and-clearing the young/accessed bitflag in the
95 * accesses to the page through the secondary MMUs and not
97 * Start-end is necessary in case the secondary MMU is mapping the page
107 * latter, it is supposed to test-and-clear the young/accessed bitflag
108 * in the secondary pte, but it may omit flushing the secondary tlb.
117 * the secondary pte. This is used to know if the page is
119 * down the secondary mapping on the page.
129 * can't guarantee that no additional references are taken to
159 * invalidate_range_start() then the VM can free pages as page
163 * any secondary tlb before doing the final free on the
169 * sleep and has to return with -EAGAIN if sleeping would be required.
170 * 0 should be returned otherwise. Please note that notifiers that can
181 * arch_invalidate_secondary_tlbs() is used to manage a non-CPU TLB
182 * which shares page-tables with the CPU. The
188 * holding the ptl spin-lock and therefore this callback is not allowed
192 * entry. It is assumed that any secondary TLB has the same rules for
194 * code will need to call this explicitly when required for secondary
222 * Therefore notifier chains can only be traversed when either
225 * 2. One of the reverse map locks is held (i_mmap_rwsem or anon_vma->rwsem).
226 * 3. No other concurrent thread can access the list (release)
239 * range. This function can sleep. Return false only if sleeping
273 return unlikely(mm->notifier_subscriptions); in mm_has_notifiers()
311 * mmu_interval_set_seq - Save the invalidation sequence
312 * @interval_sub: The subscription passed to invalidate
313 * @cur_seq: The cur_seq passed to the invalidate() callback
327 WRITE_ONCE(interval_sub->invalidate_seq, cur_seq); in mmu_interval_set_seq()
331 * mmu_interval_read_retry - End a read side critical section against a VA range
336 * unconditionally by op->invalidate() when it calls mmu_interval_set_seq().
348 return interval_sub->invalidate_seq != seq; in mmu_interval_read_retry()
352 * mmu_interval_check_retry - Test if a collision has occurred
356 * This can be used in the critical section between mmu_interval_read_begin()
362 * occurred. It can be called many times and does not have to hold the user
365 * This call can be used as part of loops and other expensive operations to
373 return READ_ONCE(interval_sub->invalidate_seq) != seq; in mmu_interval_check_retry()
396 return (range->flags & MMU_NOTIFIER_RANGE_BLOCKABLE); in mmu_notifier_range_blockable()
437 if (mm_has_notifiers(range->mm)) { in mmu_notifier_invalidate_range_start()
438 range->flags |= MMU_NOTIFIER_RANGE_BLOCKABLE; in mmu_notifier_invalidate_range_start()
446 * can return an error if a notifier can't proceed without blocking, in which
457 if (mm_has_notifiers(range->mm)) { in mmu_notifier_invalidate_range_start_nonblock()
458 range->flags &= ~MMU_NOTIFIER_RANGE_BLOCKABLE; in mmu_notifier_invalidate_range_start_nonblock()
471 if (mm_has_notifiers(range->mm)) in mmu_notifier_invalidate_range_end()
484 mm->notifier_subscriptions = NULL; in mmu_notifier_subscriptions_init()
501 range->event = event; in mmu_notifier_range_init()
502 range->mm = mm; in mmu_notifier_range_init()
503 range->start = start; in mmu_notifier_range_init()
504 range->end = end; in mmu_notifier_range_init()
505 range->flags = flags; in mmu_notifier_range_init()
515 range->owner = owner; in mmu_notifier_range_init_owner()
524 __young |= mmu_notifier_clear_flush_young(___vma->vm_mm, \
537 __young |= mmu_notifier_clear_flush_young(___vma->vm_mm, \
550 __young |= mmu_notifier_clear_young(___vma->vm_mm, ___address, \
561 __young |= mmu_notifier_clear_young(___vma->vm_mm, ___address, \
577 range->start = start; in _mmu_notifier_range_init()
578 range->end = end; in _mmu_notifier_range_init()