Searched full:invalidations (Results 1 – 25 of 93) sorted by relevance
110 /* Serialize global tlb invalidations */
114 * Batch TLB invalidations
119 * so we track how many TLB invalidations have been
106 * invalidations so it is good to avoid paying the forcewake cost and in mmio_invalidate_full()
41 You may be doing too many individual invalidations if you see the
43 profiles. If you believe that individual invalidations being
135 * in order to force TLB invalidations to be global as to in mm_context_add_copro()
159 * for the time being. Invalidations will remain global if in mm_context_remove_copro()
161 * it could make some invalidations local with no flush in mm_context_remove_copro()
76 /* Enable use of broadcast TLB invalidations. We don't always set it
78 * use of such invalidations
39 * Broadcast I-cache block invalidations by default. in shx3_cache_init()
29 * invalidations need to be broadcasted to all other cpu in the system in
105 are hit during checks for userptr invalidations.
12 …h return data even if the snoops cause an invalidation. L2 cache line invalidations which do not w…
12 …nce operations. The following cache operations are not counted:\n\n1. Invalidations which do not r…
268 u64 invalidations = 0; in mlx5_ib_invalidate_range() local
295 * overwrite the same MTTs. Concurent invalidations might race us, in mlx5_ib_invalidate_range()
321 /* Count page invalidations */ in mlx5_ib_invalidate_range()
322 invalidations += idx - blk_start_idx + 1; in mlx5_ib_invalidate_range()
331 /* Count page invalidations */ in mlx5_ib_invalidate_range()
332 invalidations += idx - blk_start_idx + 1; in mlx5_ib_invalidate_range()
335 mlx5_update_odp_stats_with_handled(mr, invalidations, invalidations); in mlx5_ib_invalidate_range()
104 atomic64_read(&mr->odp_stats.invalidations))) in fill_stat_mr_entry()
80 /* read TID cache invalidations */
99 * flushed/invalidated. As we always have to emit invalidations in i915_gem_clflush_object()
316 * - Flush the caches per Table 28 "Guidance to Software for Invalidations"
622 * From VT-d spec table 25 "Guidance to Software for Invalidations": in intel_pasid_setup_dirty_tracking()
1119 * Cache invalidations after change in a context table entry that was present
1120 * according to the Spec 6.5.3.3 (Guidance to Software for Invalidations). If
39 … which return data, regardless of whether they cause an invalidation. Invalidations from the L2 wh…
148 struct mutex mutex; /* serialises mmu invalidations */
15 and manage coherency, TLB invalidations and memory barriers.
142 * will observe it without requiring cache invalidations. in arch_setup_additional_pages()
123 * However, we'll turn the invalidations off, so that in cxllib_switch_phb_mode()
212 * callbacks to avoid device MMU invalidations for device private