==============================
Unevictable LRU Infrastructure
==============================

Introduction
============

This document describes the Linux memory manager's "Unevictable LRU"
infrastructure and the use of this to manage several types of "unevictable"
folios.

The document attempts to provide the overall rationale behind this mechanism
and the rationale for some of the design decisions that drove the
implementation.  Admittedly, one can obtain the implementation
details - the "what does it do?" - by reading the code.  One hopes that the
descriptions below add value by providing the answer to "why does it do that?".

The Unevictable LRU
===================

The Unevictable LRU facility adds an additional LRU list to track unevictable
folios and to hide these folios from vmscan.  This mechanism is based on a
patch by Larry Woodman of Red Hat to address several scalability problems with
folio reclaim in Linux.  The problems have been observed at customer sites on
large memory x86_64 systems.

To illustrate this with an example, a non-NUMA x86_64 platform with 128GB of
main memory will have over 32 million 4k pages in a single node
(128 * 2^30 / 4096 = 33,554,432 pages).  When a large fraction of these pages
are not evictable for any reason [see below], vmscan will spend a lot of time
scanning the LRU lists looking for the small fraction of pages that are
evictable.  This can result in a situation where all CPUs are spending 100% of
their time in vmscan for hours or days on end, with the system completely
unresponsive.

The unevictable list addresses the following classes of unevictable pages:

 * Those owned by ramfs.

 * Those mapped into SHM_LOCK'd shared memory regions.

 * Those mapped into VM_LOCKED [mlock()ed] VMAs.

The infrastructure may also be able to handle other conditions that make pages
unevictable, either by definition or by circumstance, in the future.
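
As a brief userspace illustration of that last class (a minimal sketch, not
part of the original text): mlock() marks the VMA VM_LOCKED and its resident
pages become unevictable, which shows up as VmLck in /proc/self/status and in
the Unevictable/Mlocked counts of /proc/meminfo::

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
      long psize = sysconf(_SC_PAGESIZE);
      void *buf;
      char line[256];
      FILE *f;

      /* Page-aligned allocation so exactly one page is locked. */
      if (posix_memalign(&buf, psize, psize))
          return 1;
      memset(buf, 0, psize);            /* fault the page in */

      if (mlock(buf, psize))            /* the VMA becomes VM_LOCKED */
          perror("mlock");

      /* VmLck reflects the amount of locked memory in this process. */
      f = fopen("/proc/self/status", "r");
      while (f && fgets(line, sizeof(line), f))
          if (!strncmp(line, "VmLck:", 6))
              fputs(line, stdout);
      if (f)
          fclose(f);

      munlock(buf, psize);
      free(buf);
      return 0;
  }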

The Unevictable LRU Folio List
------------------------------

The Unevictable LRU folio list is a lie.  It was never an LRU-ordered
list, but a companion to the LRU-ordered anonymous and file, active and
inactive folio lists; and now it is not even a folio list.  But following
familiar convention, here in this document and in the source, we often
imagine it as a fifth LRU folio list.

The Unevictable LRU infrastructure consists of an additional, per-node, LRU list
called the "unevictable" list and an associated folio flag, PG_unevictable, to
indicate that the folio is being managed on the unevictable list.

The PG_unevictable flag is analogous to, and mutually exclusive with, the
PG_active flag in that it indicates on which LRU list a folio resides when
PG_lru is set.

The Unevictable LRU infrastructure maintains unevictable folios as if they were
on an additional LRU list for a few reasons:

 (1) We get to "treat unevictable folios just like we treat other folios in the
     system - which means we get to use the same code to manipulate them, the
     same code to isolate them (for migrate, etc.), the same code to keep track
     of the statistics, etc..." [Rik van Riel]

 (2) We want to be able to migrate unevictable folios between nodes for memory
     defragmentation, workload management and memory hotplug.  The Linux kernel
     can only migrate folios that it can successfully isolate from the LRU
     lists (or "Movable" pages: outside of consideration here).  If we were to
     maintain folios elsewhere than on an LRU-like list, where they can be
     detected by folio_isolate_lru(), we would prevent their migration.

The unevictable list does not differentiate between file-backed and
anonymous, swap-backed folios.  This differentiation is only important
while the folios are, in fact, evictable.

The unevictable list benefits from the "arrayification" of the per-node LRU
lists via the array of LRU lists in the lruvec structure.

Memory Control Group Interaction
--------------------------------

The unevictable LRU facility interacts with the memory control group [aka
memory controller; see Documentation/admin-guide/cgroup-v1/memory.rst] by
extending the lru_list enum.

The memory controller data structure automatically gets a per-node unevictable
list as a result of the "arrayification" of the per-node LRU lists (one per
lru_list enum element).  The memory controller tracks the movement of pages to
and from the LRU lists.

When a memory control group comes under memory pressure, the controller will
not attempt to reclaim pages on the unevictable list.  This has a couple of
effects:

 (1) Because the pages are "hidden" from reclaim on the unevictable list, the
     reclaim process can be more efficient, dealing only with pages that have a
     chance of being reclaimed.

 (2) On the other hand, if too many of the pages charged to the control group
     are unevictable, the evictable portion of the working set of the tasks in
     the control group may not fit into the available memory.  This can cause
     the control group to thrash or to OOM-kill tasks.

Marking Address Spaces Unevictable
----------------------------------

For facilities such as ramfs none of the pages attached to the address space
may be evicted.  To prevent eviction of any such pages, the AS_UNEVICTABLE
address space flag is provided, and this can be manipulated by a filesystem
using a number of wrapper functions.  It is currently used in three places in
the kernel:

 (1) By ramfs to mark the address spaces of its inodes when they are created,
     and this mark remains for the life of the inode.

 (2) By SYSV SHM to mark SHM_LOCK'd address spaces until SHM_UNLOCK is called.
     Note that SHM_LOCK is not required to page in the locked pages if they're
     swapped out; the application must touch the pages manually if it wants to
     ensure they're in memory.

 (3) By the i915 driver to mark pinned address space until it's unpinned.  The
     amount of unevictable memory marked by i915 driver is roughly the bounded
     object size in debugfs/dri/0/i915_gem_objects.

Detecting Unevictable Pages
---------------------------

The function folio_evictable() in mm/internal.h determines whether a folio is
evictable or not, by checking the AS_UNEVICTABLE flag of the folio's address
space and the folio's PG_mlocked flag.

Vmscan's Handling of Unevictable Folios
---------------------------------------

If unevictable folios are culled in the fault path, or moved to the unevictable
list at mlock() or mmap() time, vmscan will not encounter the folios until they
have become evictable again (via munlock() for example) and have been "rescued"
from the unevictable list.  However, there may be situations where we decide,
for the sake of expediency, to leave an unevictable folio on one of the regular
active/inactive LRU lists for vmscan to deal with.  vmscan checks for such
folios in all of the shrink_{active|inactive|page}_list() functions and will
"cull" any that it encounters: that is, it diverts those folios to the
unevictable list for the memory cgroup and node being scanned.

There may be situations where a folio is mapped into a VM_LOCKED VMA,
but the folio does not have the mlocked flag set.  Such folios will make
it all the way to shrink_active_list() or shrink_page_list(), where they
will be detected when vmscan walks the reverse map in folio_referenced()
or try_to_unmap(), and culled to the unevictable list.

To "cull" an unevictable folio, vmscan simply puts the folio back on
the LRU list using folio_putback_lru() - the inverse operation to
folio_isolate_lru() - after dropping the folio lock.  Because the
condition which makes the folio unevictable may change once it is unlocked,
the batched LRU-add code rechecks the unevictable state of a folio before
placing it on the unevictable list.

MLOCKED Pages
=============

The unevictable folio list is also useful for mlock(), in addition to ramfs
and SHM_LOCK'd shared memory regions.  Note that mlock() is only available in
CONFIG_MMU=y situations; in NOMMU situations, all mappings are effectively
mlocked.


History
-------

The "Unevictable mlocked Pages" infrastructure is based on work originally
posted by Nick Piggin in an RFC patch entitled "mm: mlocked pages off LRU".
In Nick's patch, one of the struct page LRU list link fields was used as a
count of VM_LOCKED VMAs that map the page.  When his patch was integrated with
the Unevictable LRU work, that count was replaced by walking the reverse map
when munlocking, to determine whether any
other VM_LOCKED VMAs still mapped the page.

However, walking the reverse map for each page when munlocking was ugly and
inefficient, so in 5.18 the mlock_count was revived, now kept in the space of
the folio's unevictable list linkage.  The unevictable "list" therefore no
longer links mlocked folios together, and there is
no use for that linked list anyway - though its size is maintained for meminfo.

Basic Management
----------------

mlocked pages - pages mapped into a VM_LOCKED VMA - are a class of unevictable
pages.  When such a page has been "noticed" by the memory management subsystem,
the folio is marked with the PG_mlocked flag.  A PG_mlocked folio will be
placed on the unevictable list when it is added to
the LRU.  Such pages can be "noticed" by memory management in several places:

 (1) in the mlock()/mlock2()/mlockall() system call handlers;

 (2) in the mmap() system call handler when mmapping a region with the
     MAP_LOCKED flag;

 (3) mmapping a region in a task that has called mlockall() with the MCL_FUTURE
     flag;

 (4) in the fault path and when a VM_LOCKED stack segment is expanded; or

 (5) as mentioned above, in vmscan:shrink_page_list() when attempting to
     reclaim a page in a VM_LOCKED VMA by folio_referenced() or try_to_unmap().

mlocked folios become unlocked and rescued from the unevictable list when:

 (1) mapped in a range unlocked via the munlock()/munlockall() system calls;

 (2) munmap()'d out of the last VM_LOCKED VMA that maps the folio, including
     unmapping at task exit;

 (3) when the folio is truncated from the last VM_LOCKED VMA of an mmapped
     file; or

 (4) before a folio is COW'd in a VM_LOCKED VMA.

mlock()/mlock2()/mlockall() System Call Handling
------------------------------------------------

mlock(), mlock2() and mlockall() system call handlers proceed to mlock_fixup()
for each VMA in the range specified by the call.  In the case of mlockall(),
this is the entire active address space of the task.  Note that mlock_fixup()
is used for both mlocking and munlocking a range of memory.  A call to mlock()
an already VM_LOCKED VMA, or to munlock() a VMA that is not VM_LOCKED, is
treated as a no-op and mlock_fixup() simply returns.
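
A minimal userspace sketch (not from the original text) of this no-op
behavior: locking an already-locked range and unlocking a range that was never
locked both simply succeed::

  #include <assert.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
      long psize = sysconf(_SC_PAGESIZE);
      void *buf = mmap(NULL, psize, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

      assert(buf != MAP_FAILED);
      assert(mlock(buf, psize) == 0);    /* VMA becomes VM_LOCKED */
      assert(mlock(buf, psize) == 0);    /* already locked: no-op */
      assert(munlock(buf, psize) == 0);  /* clears VM_LOCKED */
      assert(munlock(buf, psize) == 0);  /* not locked: no-op */
      munmap(buf, psize);
      return 0;
  }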

Note that the VMA being mlocked might be mapped with PROT_NONE.  In this case,
get_user_pages() will be unable to fault in the pages.  That's okay.  If pages
do end up getting faulted into this VM_LOCKED VMA, they will be handled in the
fault path - which is also how mlock2()'s MLOCK_ONFAULT areas are handled.
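
For illustration, a sketch of MLOCK_ONFAULT from userspace (assumes glibc's
mlock2() wrapper, available since glibc 2.27): nothing is populated by the
call itself; each page becomes mlocked only when it is first touched, via the
fault path described above::

  #define _GNU_SOURCE
  #include <stdio.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
      long psize = sysconf(_SC_PAGESIZE);
      size_t len = 16 * psize;
      char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

      if (buf == MAP_FAILED)
          return 1;

      /* Lock on fault: no pages are faulted in by this call. */
      if (mlock2(buf, len, MLOCK_ONFAULT)) {
          perror("mlock2");
          return 1;
      }

      /* Touching a page faults it in; the fault path mlocks it. */
      buf[0] = 1;              /* first page now mlocked */
      buf[5 * psize] = 1;      /* sixth page now mlocked */

      munlock(buf, len);
      munmap(buf, len);
      return 0;
  }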

Filtering Special VMAs
----------------------

mlock_fixup() filters several classes of "special" VMAs:

1) VMAs with VM_IO or VM_PFNMAP set are skipped entirely.  The pages behind
   these mappings are inherently pinned, so we don't need to mark them as
   mlocked.  In any case, most of the pages have no struct page in which to
   mark the page.  Because of this, get_user_pages() will fail for these VMAs,
   so there is no sense in attempting to visit them.

2) VMAs mapping hugetlbfs page are already effectively pinned into memory.  We
   neither need nor want to mlock() these pages.  But __mm_populate() includes
   hugetlbfs ranges, allocating the huge pages and populating the PTEs.

Note that for these special VMAs, mlock_fixup() does not set the VM_LOCKED
flag, so we won't have to deal with them later during munlock(), munmap() or
task exit, and they are not counted against the task's "locked_vm".

munlock()/munlockall() System Call Handling
-------------------------------------------

The munlock() and munlockall() system calls are handled by the same
mlock_fixup() function as the mlock() family.  If the VMA is VM_LOCKED,
mlock_fixup() again attempts to merge or split off the specified range, and
all folios in the VMA are then munlocked by munlock_folio() via
mlock_pte_range() via walk_page_range() via mlock_vma_pages_range() - the same
chain used when mlocking a VMA range, with new flags for the VMA indicating
that it is munlock() being performed.

Migrating MLOCKED Pages
-----------------------

A page that is being migrated has been isolated from the LRU lists and is held
locked across unmapping of the page, updating the page's address space entry
and copying the contents and state, until the page table entry has been
replaced with an entry that refers to the new page.  PG_mlocked is cleared from
the old page when it is unmapped from the last VM_LOCKED VMA, and set when the
new page is mapped in place of migration entry in a VM_LOCKED VMA.  If the page
was unevictable for other reasons, PG_unevictable is copied explicitly
afterwards.  The "unneeded" page - old page on success, new page on failure -
will be freed when the reference count held by the migration process is
released.
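
As a hedged userspace illustration (assumes a machine with at least two NUMA
nodes and the move_pages() wrapper from libnuma's <numaif.h>; link with
-lnuma): an mlocked page can still be migrated, precisely because it remains
isolatable from the LRU::

  #define _GNU_SOURCE
  #include <numaif.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
      long psize = sysconf(_SC_PAGESIZE);
      char *buf = mmap(NULL, psize, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      void *pages[1] = { buf };
      int nodes[1] = { 1 };     /* target node: adjust for your topology */
      int status[1];

      if (buf == MAP_FAILED)
          return 1;
      buf[0] = 1;               /* fault the page in */
      mlock(buf, psize);        /* page is now mlocked and unevictable */

      /* Migration succeeds despite PG_mlocked; status[0] holds the
       * node the page now resides on, or a negative errno. */
      if (move_pages(0, 1, pages, nodes, status, MPOL_MF_MOVE))
          perror("move_pages");
      else
          printf("page now on node %d\n", status[0]);

      munlock(buf, psize);
      munmap(buf, psize);
      return 0;
  }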

Compacting MLOCKED Pages
------------------------

The memory map can be scanned for compactable regions and the default behavior
is to let unevictable pages be moved.  /proc/sys/vm/compact_unevictable_allowed
controls this behavior (see Documentation/admin-guide/sysctl/vm.rst).  The work
of compaction is mostly handled by the page migration code and the same work
flow as described in "Migrating MLOCKED Pages" will apply.
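
A minimal sketch (not from this document) of inspecting that knob from C;
``cat /proc/sys/vm/compact_unevictable_allowed`` is equivalent::

  #include <stdio.h>

  int main(void)
  {
      FILE *f = fopen("/proc/sys/vm/compact_unevictable_allowed", "r");
      int allowed;

      if (!f || fscanf(f, "%d", &allowed) != 1) {
          perror("compact_unevictable_allowed");
          return 1;
      }
      /* 1 (default): compaction may move unevictable pages;
       * 0: they are skipped, avoiding minor faults in mlocked
       *    regions at the cost of less effective compaction. */
      printf("compact_unevictable_allowed = %d\n", allowed);
      fclose(f);
      return 0;
  }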

MLOCKING Transparent Huge Pages
-------------------------------

A transparent huge page is represented by a single entry on an LRU list.
Therefore, we can only make unevictable an entire compound page, not
individual subpages.  If a user tries to mlock() part of a huge page, and no
user mlock()s the whole of the huge page, we want the rest of the page to be
reclaimable.  We cannot just split the page on partial mlock(), as
split_huge_page() can fail and a new intermittent failure mode for the syscall
is undesirable.

We handle this by keeping PTE-mlocked huge pages on evictable LRU lists:
the PMD on the border of a VM_LOCKED VMA will be split into a PTE table.

This way the huge page is accessible for vmscan.  Under memory pressure the
page will be split, subpages which belong to VM_LOCKED VMAs will be moved
to the unevictable LRU, and the rest can be reclaimed.

/proc/meminfo's Unevictable and Mlocked amounts do not include those parts
of a transparent huge page which are mapped only by PTEs in VM_LOCKED VMAs.
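
The two counters can be read as in this short sketch (values are reported in
kB)::

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      FILE *f = fopen("/proc/meminfo", "r");
      char line[128];

      if (!f)
          return 1;
      /* Unevictable: size of the unevictable "list"; Mlocked: the
       * subset marked PG_mlocked.  Neither includes PTE-mapped
       * portions of transparent huge pages in VM_LOCKED VMAs. */
      while (fgets(line, sizeof(line), f))
          if (!strncmp(line, "Unevictable:", 12) ||
              !strncmp(line, "Mlocked:", 8))
              fputs(line, stdout);
      fclose(f);
      return 0;
  }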

mmap(MAP_LOCKED) System Call Handling
-------------------------------------

In addition to the mlock(), mlock2() and mlockall() system calls, an application
can request that a region of memory be mlocked by supplying the MAP_LOCKED flag
to the mmap() call.  There is one important and subtle difference here, though:
mmap() + mlock() will fail if the range cannot be faulted in (e.g. because
mm_populate fails) and returns with ENOMEM, while mmap(MAP_LOCKED) will not fail.
The mmapped area will still have properties of the locked area - pages will not
get swapped out - but major page faults to fault memory in might still happen.
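
A sketch of the difference (not from the original text; both variants may also
fail against RLIMIT_MEMLOCK): the first variant reports a locking failure
explicitly, while the second may succeed yet still take major faults later::

  #define _GNU_SOURCE
  #include <stdio.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
      size_t len = 4 * (size_t)sysconf(_SC_PAGESIZE);

      /* Variant 1: mmap() + mlock().  mlock() reports ENOMEM if the
       * range cannot be faulted in and locked. */
      void *a = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (a != MAP_FAILED && mlock(a, len))
          perror("mlock");

      /* Variant 2: mmap(MAP_LOCKED).  The VMA is created VM_LOCKED,
       * but a population failure is not reported, so major faults
       * may still occur on first access. */
      void *b = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED, -1, 0);
      if (b == MAP_FAILED)
          perror("mmap(MAP_LOCKED)");
      return 0;
  }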

Furthermore, any mmap() call or brk() call that expands the heap by a task
that has previously called mlockall() with the MCL_FUTURE flag will result
in the newly mapped memory being mlocked.  Before the unevictable/mlock
changes, the kernel simply called make_pages_present() to allocate pages
and populate the page table.

To mlock a range of memory under the unevictable/mlock infrastructure,
the mmap() handler and task address space expansion functions call
populate_vma_page_range(), specifying the VMA and the address range to mlock.
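
A sketch (not from this document) of the MCL_FUTURE behavior: mappings created
after the call come into existence already mlocked::

  #include <stdio.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
      size_t len = 4 * (size_t)sysconf(_SC_PAGESIZE);
      char *p;

      /* Ask that all future mappings be mlocked as they appear. */
      if (mlockall(MCL_FUTURE)) {
          perror("mlockall");
          return 1;
      }

      /* This mapping is created VM_LOCKED with no mlock() call; it
       * may fail if it would exceed RLIMIT_MEMLOCK. */
      p = mmap(NULL, len, PROT_READ | PROT_WRITE,
               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (p == MAP_FAILED) {
          perror("mmap");
          return 1;
      }
      p[0] = 1;

      munlockall();
      munmap(p, len);
      return 0;
  }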

munmap()/exit()/exec() System Call Handling
-------------------------------------------

When unmapping an mlocked region of memory, whether by an explicit call to
munmap() or via an internal unmap from exit() or exec() processing, we must
munlock the pages if we're removing the last VM_LOCKED VMA that maps the pages.

Truncating MLOCKED Pages
------------------------

File truncation or hole punching forcibly unmaps the deleted pages from
userspace; truncation even unmaps and deletes any private anonymous pages
which had been Copied-On-Write from the file pages now being truncated.
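
A userspace sketch of that effect (the file path is hypothetical; accessing
the truncated range afterwards would raise SIGBUS even though the mapping had
been mlocked)::

  #include <fcntl.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
      long psize = sysconf(_SC_PAGESIZE);
      int fd = open("/tmp/mlock-trunc-demo", O_RDWR | O_CREAT, 0600);
      char *p;

      if (fd < 0 || ftruncate(fd, psize))
          return 1;
      p = mmap(NULL, psize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      if (p == MAP_FAILED)
          return 1;
      p[0] = 1;
      mlock(p, psize);          /* page is mlocked and unevictable */

      /* Truncation unmaps and munlocks the page regardless of the
       * mlock; touching p[0] now would fault and deliver SIGBUS. */
      ftruncate(fd, 0);

      munmap(p, psize);
      close(fd);
      unlink("/tmp/mlock-trunc-demo");
      return 0;
  }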

Page Reclaim in shrink_*_list()
-------------------------------

vmscan's shrink_active_list() culls any obviously unevictable pages -
i.e. !page_evictable(page) pages - diverting those to the unevictable list.
However, shrink_active_list() only sees unevictable pages that made it onto the
active/inactive LRU lists.  Note that these pages do not have PG_unevictable
set - otherwise they would be on the unevictable list and shrink_active_list()
would never see them.

Some examples of these unevictable pages on the LRU lists are:

 (1) ramfs pages that have been placed on the LRU lists when first allocated.

 (2) SHM_LOCK'd shared memory pages.  shmctl(SHM_LOCK) does not attempt to
     allocate or fault in the pages in the shared memory region.  This happens
     when an application accesses the page the first time after SHM_LOCK'ing
     the segment.

 (3) pages still mapped into VM_LOCKED VMAs, which should be marked mlocked,
     but events left mlock_count too low, so they were munlocked too early.

vmscan's shrink_inactive_list() and shrink_page_list() also divert obviously
unevictable pages found on the inactive lists to the appropriate memory cgroup
and node unevictable list.

rmap's folio_referenced_one(), called via vmscan's shrink_active_list() or
shrink_page_list(), and rmap's try_to_unmap_one(), called via
shrink_page_list(),
check for (3) pages still mapped into VM_LOCKED VMAs, and call mlock_vma_folio()
to correct them.  Such pages are culled to the unevictable list when they are
released by the shrinker.
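
To close, a sketch of case (2) above (assumes SysV shared memory and enough
RLIMIT_MEMLOCK headroom, or CAP_IPC_LOCK): the segment is SHM_LOCK'd first,
and each page becomes unevictable only when it is later touched::

  #include <stdio.h>
  #include <string.h>
  #include <sys/ipc.h>
  #include <sys/shm.h>

  int main(void)
  {
      size_t len = 1 << 16;
      int id = shmget(IPC_PRIVATE, len, IPC_CREAT | 0600);
      char *p;

      if (id < 0)
          return 1;
      /* Mark the segment locked: this does NOT fault pages in. */
      if (shmctl(id, SHM_LOCK, NULL))
          perror("shmctl(SHM_LOCK)");

      p = shmat(id, NULL, 0);
      if (p != (void *)-1) {
          /* Pages become unevictable as they are first touched. */
          memset(p, 0, len);
          shmdt(p);
      }
      shmctl(id, IPC_RMID, NULL);
      return 0;
  }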