Lines Matching full:shadow

3  * This file contains KASAN runtime code that manages shadow memory for
132 * Perform shadow offset calculation based on untagged address, as in kasan_poison()
157 u8 *shadow = (u8 *)kasan_mem_to_shadow(addr + size); in kasan_poison_last_granule() local
158 *shadow = size & KASAN_GRANULE_MASK; in kasan_poison_last_granule()
168 * Perform shadow offset calculation based on untagged address, as in kasan_unpoison()
235 * If shadow is mapped already then it must have been mapped in kasan_mem_notifier()
260 * In the latter case we can use vfree() to free shadow. in kasan_mem_notifier()
264 * Currently it's not possible to free shadow mapped in kasan_mem_notifier()
337 * User Mode Linux maps enough shadow memory for all of virtual memory in kasan_populate_vmalloc()
373 * STORE shadow(a), unpoison_val                               in kasan_populate_vmalloc()
375 * STORE shadow(a+99), unpoison_val     x = LOAD p             in kasan_populate_vmalloc()
377 * STORE p, a                           LOAD shadow(x+99)      in kasan_populate_vmalloc()
379 * If there is no barrier between the end of unpoisoning the shadow in kasan_populate_vmalloc()
382 * poison in the shadow. in kasan_populate_vmalloc()
388 * get_vm_area() and friends, the caller gets shadow allocated but in kasan_populate_vmalloc()
427 * That might not map onto the shadow in a way that is page-aligned:
437 * |??AAAAAA|AAAAAAAA|AA??????| < shadow
441 * shadow of the region aligns with shadow page boundaries. In the
442 * example, this gives us the shadow page (2). This is the shadow entirely
446 * partially covered shadow pages - (1) and (3) in the example. For this,
459 * |FFAAAAAA|AAAAAAAA|AAF?????| < shadow
463 * the free region down so that the shadow is page aligned. So we can free
486 * means that so long as we are careful with alignment and only free shadow
570 * Poison the shadow for a vmalloc region. Called as part of the