Lines Matching +full:page +full:- +full:based
1 # SPDX-License-Identifier: GPL-2.0-only
33 compress them into a dynamically allocated RAM-based memory pool.
72 available at the following LWN page:
179 page. While this design limits storage density, it has simple and
189 linux-mm@kvack.org and the zswap maintainers.
193 page. It is a ZBUD derivative, so the simplicity and determinism are
207 zsmalloc is a slab-based memory allocator designed to store
222 int "Maximum number of physical pages per-zspage"
228 that a zsmalloc page (zspage) can consist of. The optimal zspage
295 specifically-sized allocations with user-controlled contents
299 user-controlled allocations. This may very slightly increase
301 of extra pages since the bulk of user-controlled allocations
302 are relatively long-lived.
317 Try running: slabinfo -DA
336 normal kmalloc allocation and makes kmalloc randomly pick one based
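The line above is from the help text for randomized kmalloc caches, which keeps several copies of each slab cache and makes kmalloc pick one based on the caller's code address. A minimal conceptual sketch of that selection (not kernel code; the copy count and the "boot seed" multiplier are illustrative assumptions):

```python
# Conceptual sketch (not kernel code): keep several copies of each
# kmalloc size class and derive a stable, pseudo-random cache index
# from the allocation call site, so attacker-controlled allocations
# are less likely to share a cache with a victim object.

NUM_COPIES = 16  # hypothetical number of cache copies per size class

def pick_cache(call_site: int, size_class: int) -> int:
    # Mix the caller's (return) address with a fixed "boot-time seed";
    # the same call site always maps to the same cache copy.
    seed = 0x9E3779B97F4A7C15  # illustrative fixed seed
    return ((call_site * seed) >> 48) % NUM_COPIES

print(pick_cache(0xFFFFFFFF81123456, 64))
```

The per-call-site determinism matters: repeated allocations from one site stay in one cache, while different sites are spread across copies.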
350 bool "Page allocator randomization"
353 Randomization of the page allocator improves the average
354 utilization of a direct-mapped memory-side-cache. See section
357 the presence of a memory-side-cache. There are also incidental
358 security benefits as it reduces the predictability of page
361 order of pages is selected based on cache utilization benefits
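The fragments above describe page allocator free-list shuffling for direct-mapped memory-side caches. A toy illustration of the idea (not kernel code; the kernel shuffles free pages at a specific order during boot and hot-add, whereas this just permutes an abstract list):

```python
import random

# Conceptual sketch (not kernel code): randomize the order in which
# free pages are handed out, so a direct-mapped memory-side cache sees
# a less predictable page-to-cache-slot mapping.

def shuffle_free_list(free_pages, rng=random.Random(0)):
    """Fisher-Yates shuffle over a snapshot of the free list."""
    pages = list(free_pages)
    for i in range(len(pages) - 1, 0, -1):
        j = rng.randrange(i + 1)
        pages[i], pages[j] = pages[j], pages[i]
    return pages

print(shuffle_free_list(range(8)))
```

The shuffle is a permutation, so no pages are gained or lost, only reordered.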
377 also breaks ancient binaries (including anything libc5 based).
382 On non-ancient distros (post-2000 ones) N is usually a safe choice.
397 ELF-FDPIC binfmt's brk and stack allocator.
401 userspace. Since that isn't generally a problem on no-MMU systems,
404 See Documentation/admin-guide/mm/nommu-mmap.rst for more information.
425 This option is best suited for non-NUMA systems with
441 memory hot-plug systems. This is normal.
445 hot-plug and hot-remove.
515 # Keep arch NUMA mapping infrastructure post-init.
571 Example kernel usage would be page structs and page tables.
573 See Documentation/admin-guide/mm/memory-hotplug.rst for more information.
606 sufficient kernel-capable memory (ZONE_NORMAL) must be
607 available to allocate page structs to describe ZONE_MOVABLE.
627 # Heavily threaded applications may benefit from splitting the mm-wide
631 # ARM's adjust_pte (unused if VIPT) depends on mm-wide page_table_lock.
632 # PA-RISC 7xxx's spinlock_t would enlarge struct page from 32 to 44 bytes.
633 # SPARC32 allocates multiple pte tables within a single page, and therefore
634 # a per-page lock leads to problems when multiple tables need to be locked
636 # DEBUG_SPINLOCK and DEBUG_LOCK_ALLOC spinlock_t also enlarge struct page.
684 reliably. The page allocator relies on compaction heavily and
689 linux-mm@kvack.org.
698 # support for free page reporting
700 bool "Free page reporting"
702 Free page reporting allows for the incremental acquisition of
708 # support for page migration
711 bool "Page migration"
719 pages as migration can relocate pages to satisfy a huge page
735 HUGETLB_PAGE_ORDER when there are multiple HugeTLB page sizes available
745 int "Maximum scale factor of PCP (Per-CPU pageset) batch allocate/free"
749 In page allocator, PCP (Per-CPU pageset) is refilled and drained in
750 batches. The batch number is scaled automatically to improve page
772 bool "Enable KSM for page merging"
779 the many instances by a single page with that content, so
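The KSM fragment above describes replacing many identical pages with a single shared copy. A toy content-hash deduplicator showing the shape of that idea (not kernel code: real KSM uses stable/unstable rbtrees of page contents and write-protects the shared page for copy-on-write; the page size here is just the usual 4 KiB):

```python
import hashlib

# Conceptual sketch (not kernel code): dedup fixed-size "pages" by
# content, keeping one shared copy per distinct content and mapping
# every user to it.

PAGE_SIZE = 4096

def merge_pages(pages):
    """Return (shared_copies, per_page_index_into_shared_copies)."""
    shared, index, seen = [], [], {}
    for page in pages:
        digest = hashlib.sha256(page).digest()
        if digest not in seen:
            seen[digest] = len(shared)   # first sighting: keep this copy
            shared.append(page)
        index.append(seen[digest])       # everyone else points at it
    return shared, index

zero = bytes(PAGE_SIZE)
ones = b"\x01" * PAGE_SIZE
shared, idx = merge_pages([zero, ones, zero, zero])
print(len(shared), idx)  # 2 unique pages; indices [0, 1, 0, 0]
```

Four pages with two distinct contents collapse to two shared copies; a write to a shared page would then trigger copy-on-write in the real mechanism.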
832 allocator for chunks in 2^N*PAGE_SIZE amounts - which is frequently
841 long-term mappings means that the space is wasted.
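The nommu fragment above notes that the allocator hands out chunks in 2^N*PAGE_SIZE amounts, so non-power-of-two mappings waste the round-up. A worked example of that arithmetic (illustrative only; real nommu behavior also depends on the mmap flags described in nommu-mmap.rst):

```python
# Conceptual sketch: round an allocation request up to the next
# 2^N * PAGE_SIZE chunk and report how much of the chunk is wasted.

PAGE_SIZE = 4096

def chunk_and_waste(request: int):
    pages = -(-request // PAGE_SIZE)   # pages needed, rounded up
    order = (pages - 1).bit_length()   # smallest N with 2**N >= pages
    chunk = (1 << order) * PAGE_SIZE
    return chunk, chunk - request

print(chunk_and_waste(5 * 4096))  # 5 pages need an 8-page chunk: 3 pages wasted
print(chunk_and_waste(4096))      # 1 page fits exactly: no waste
```

A 5-page request burns an 8-page chunk, which is exactly the long-term-mapping waste the help text warns about.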
851 See Documentation/admin-guide/mm/nommu-mmap.rst for more information.
868 applications by speeding up page faults during memory
911 XXX: For now, swap cluster backing transparent huge page
917 bool "Read-only THP for filesystems (EXPERIMENTAL)"
921 Allow khugepaged to put read-only file-backed pages in THP.
949 # UP and nommu archs use km-based percpu allocator
975 subsystems to allocate big physically-contiguous blocks of memory.
1014 soft-dirty bit on pte-s. This bit is set when someone writes
1015 into a page just like the regular dirty bit, but unlike the latter
1018 See Documentation/admin-guide/mm/soft-dirty.rst for more details.
1024 int "Default maximum user stack size for 32-bit processes (MB)"
1029 This is the maximum stack size in Megabytes in the VM layout of 32-bit
1055 This adds PG_idle and PG_young flags to 'struct page'. PTE Accessed
1060 bool "Enable idle page tracking"
1069 See Documentation/admin-guide/mm/idle_page_tracking.rst for
1085 checking, an architecture-agnostic way to find the stack pointer
1117 "device-physical" addresses which is needed for using a DAX
1123 # Helpers to mirror range of the CPU page tables of a process into device page
1162 on EXPERT systems. /proc/vmstat will only show page counts
1173 bool "Enable infrastructure for get_user_pages()-related unit tests"
1177 to make ioctl calls that can launch kernel-based unit tests for
1182 the non-_fast variants.
1184 There is also a sub-test that allows running dump_page() on any
1186 range of user-space addresses. These pages are either pinned via
1219 # struct io_mapping based helper. Selected by drivers that need them
1233 not mapped to other processes and other kernel page tables.
1264 handle page faults in userland.
1275 file-backed memory types like shmem and hugetlbfs.
1278 # multi-gen LRU {
1280 bool "Multi-Gen LRU"
1282 # make sure folio->flags has enough spare bits
1286 Documentation/admin-guide/mm/multigen_lru.rst for details.
1292 This option enables the multi-gen LRU by default.
1301 This option has a per-memcg and per-node memory overhead.
1315 Allow per-vma locking during page fault handling.
1318 handling page faults instead of taking mmap_lock.
1345 stacks (eg, x86 CET, arm64 GCS or RISC-V Zicfiss).
1351 bool "reclaim empty user page table pages"
1356 Try to reclaim empty user page table pages in paths other than munmap
1359 Note: currently only empty user PTE page table pages will be reclaimed.