Transparent Hugepage Support
============================
Objective
=========

Performance critical computing applications dealing with large memory
working sets are already running on top of libhugetlbfs and in turn
hugetlbfs. Transparent Hugepage Support (THP) is an alternative means of
using huge pages for the backing of virtual memory, one that supports the
automatic promotion and demotion of page sizes and that does not share
the shortcomings of hugetlbfs.
.. note::
   In the examples below we presume that the basic page size is 4K and
   the huge page size is 2M, although the actual numbers may vary
   depending on the CPU architecture.
Applications run faster with THP because of two factors. The first factor
is almost completely irrelevant and not of significant interest, because
it also has the downside of requiring larger clear-page and copy-page
operations in page faults, which is a potentially negative effect. This
first factor consists in taking a single page fault for each 2M virtual
region touched by userland (so reducing the enter/exit kernel frequency
by a factor of 512). It only matters the first time the memory is
accessed for the lifetime of a memory mapping. The second, long-lasting
and much more important factor affects every subsequent access: a single
TLB entry maps a much larger amount of virtual memory, so TLB misses
occur far less often.
Modern kernels support "multi-size THP" (mTHP), which introduces the
ability to allocate memory in blocks that are bigger than a base page
but smaller than traditional PMD-size (as described above), in
increments of a power-of-2 number of pages. mTHP can back anonymous
memory (for example 16K, 32K, 64K, etc). These THPs continue to be
PTE-mapped, but in many cases can still provide similar benefits to
those outlined above: page faults are significantly reduced (by a
factor of e.g. 4, 8, 16, etc), but latency spikes are much less
prominent because the size of each page isn't as huge as the PMD-sized
variant and there is less memory to clear in each page fault. Some
architectures also employ TLB compression mechanisms to squeeze more
entries in when a set of PTEs are virtually and physically contiguous
and approximately aligned. In this case, TLB misses will occur less
often.
THP can be enabled system wide or restricted to certain tasks or even
memory ranges inside a task's address space. Unless THP is completely
disabled, there is a ``khugepaged`` daemon that scans memory and
collapses sequences of basic pages into PMD-sized huge pages.
Applications can be further optimized to take advantage of this feature,
as they have been optimized before to avoid a flood of mmap system calls
for every malloc(4k). Optimizing userland is by far not mandatory, and
khugepaged can already take care of long-lived page allocations even for
hugepage-unaware applications that deal with large amounts of memory.
In certain cases when hugepages are enabled system wide, an application
may end up allocating more memory resources: it may mmap a large region
but only touch 1 byte of it, in which case a 2M page might be allocated
instead of a 4k page for no good reason. This is why it's possible to
disable hugepages system-wide and to only have them inside MADV_HUGEPAGE
madvise regions.
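For example, to restrict THP to madvise regions and then check whether an
opted-in process actually received huge pages (a rough sketch: ``my_app``
is a placeholder for a process that calls madvise(MADV_HUGEPAGE) on its
critical regions)::

    echo madvise >/sys/kernel/mm/transparent_hugepage/enabled
    # my_app is a placeholder process name; AnonHugePages counts the
    # PMD-mapped THP in its address space
    grep AnonHugePages /proc/$(pidof my_app)/smaps_rollup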
sysfs
=====

Global THP controls
-------------------
Transparent Hugepage Support for anonymous memory can be entirely disabled
(mostly for debugging purposes) or only enabled inside MADV_HUGEPAGE
regions (to avoid the risk of consuming more memory resources) or enabled
system wide. This can be achieved per-supported-THP-size with one of::

    echo always >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
    echo madvise >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
    echo never >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
where <size> is the hugepage size being addressed, the available sizes
for which vary by the system. For example::

    echo always >/sys/kernel/mm/transparent_hugepage/hugepages-2048kB/enabled
Alternatively it is possible to specify that a given hugepage size
will inherit the top-level "enabled" value::

    echo inherit >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
For example::

    echo inherit >/sys/kernel/mm/transparent_hugepage/hugepages-2048kB/enabled
The top-level setting (for use with "inherit") can be set by issuing
one of the following commands::
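
    echo always >/sys/kernel/mm/transparent_hugepage/enabled
    echo madvise >/sys/kernel/mm/transparent_hugepage/enabled
    echo never >/sys/kernel/mm/transparent_hugepage/enabled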
By default, PMD-sized hugepages have enabled="inherit" and all other
hugepage sizes have enabled="never".
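To see which sizes the running kernel supports and how each is currently
configured, the per-size knobs can simply be read back (a small sketch;
the set of ``hugepages-<size>kB`` directories varies by system)::

    for f in /sys/kernel/mm/transparent_hugepage/hugepages-*kB/enabled; do
        echo "$f: $(cat $f)"
    done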
The amount of reclaim and compaction effort spent at page-fault time is
tuned via /sys/kernel/mm/transparent_hugepage/defrag, which accepts
"always", "defer", "defer+madvise", "madvise" and "never"; the first four
trade allocation latency against the chance of getting a huge page, and
"never" should be self-explanatory.
By default the kernel tries to use a huge, PMD-mappable zero page on read
page faults to anonymous mappings. It's possible to disable the huge zero
page by writing 0 or enable it back by writing 1::
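
    echo 0 >/sys/kernel/mm/transparent_hugepage/use_zero_page
    echo 1 >/sys/kernel/mm/transparent_hugepage/use_zero_page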
Some userspace (such as a test program, or an optimized memory allocation
library) may want to know the size (in bytes) of a PMD-mappable
transparent hugepage::
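
    cat /sys/kernel/mm/transparent_hugepage/hpage_pmd_size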
All THPs at fault and collapse time will be added to _deferred_list, and
will therefore be split under memory pressure if they are considered
"underused". A THP is underused if the number of zero-filled pages in
the THP is above max_ptes_none (see below). It is possible to disable
this behaviour by writing 0 to shrink_underused, and enable it by
writing 1::
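
    echo 0 >/sys/kernel/mm/transparent_hugepage/shrink_underused
    echo 1 >/sys/kernel/mm/transparent_hugepage/shrink_underused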
khugepaged will be automatically started when PMD-sized THP is enabled
(either of the per-size anon control or the top-level control are set
to "always" or "madvise"), and it'll be automatically shut down when
PMD-sized THP is disabled (when both the per-size anon control and the
top-level control are "never").
Khugepaged controls
-------------------
.. note::
   khugepaged currently only searches for opportunities to collapse to
   PMD-sized THP and no attempt is made to collapse to other THP
   sizes.
khugepaged usually runs at low frequency, so while one may not want to
invoke defrag algorithms synchronously during the page faults, it
should be worth invoking defrag at least in khugepaged. However it's
also possible to disable defrag in khugepaged by writing 0 or enable
defrag in khugepaged by writing 1::
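
    echo 0 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
    echo 1 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag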
The khugepaged progress can be seen in the number of pages collapsed (note
that this counter may not be an exact count of the number of pages
collapsed, since "collapsed" could mean multiple things: (1) a PTE mapping
being replaced by a PMD mapping, or (2) all 4K physical pages replaced by
one 2M hugepage. Each may happen independently, or together, depending on
the type of memory and the failures that occur. As such, this value should
be interpreted roughly as a sum of events of a similar magnitude)::
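
    /sys/kernel/mm/transparent_hugepage/khugepaged/pages_collapsed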
``max_ptes_none`` specifies how many extra small pages (that are
not already mapped) can be allocated when collapsing a group
of small pages into one large page::
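
    /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none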
``max_ptes_swap`` specifies how many pages can be brought in from
swap when collapsing a group of pages into a transparent huge page::
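
    /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_swap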
``max_ptes_shared`` specifies how many pages can be shared across multiple
processes. khugepaged might treat pages of THPs as shared if any page of
the THP is shared. Exceeding the number would block the collapse::
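
    /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_shared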
Boot parameters
===============

You can change the sysfs boot time default for the top-level "enabled"
control by passing the parameter ``transparent_hugepage=always`` or
``transparent_hugepage=madvise`` or ``transparent_hugepage=never`` to the
kernel command line.
Each supported anonymous THP size can be controlled by
passing ``thp_anon=<size>[KMG],<size>[KMG]:<state>;<size>[KMG]-<size>[KMG]:<state>``,
where ``<size>`` is the THP size and ``<state>`` is one of ``always``,
``madvise``, ``never`` or ``inherit``.
For example, the following will set 16K, 32K and 64K THP to ``always``,
set 128K and 512K to ``inherit``, set 256K to ``madvise`` and set 1M and
2M to ``never``::

    thp_anon=16K-64K:always;128K,512K:inherit;256K:madvise;1M-2M:never
Hugepages in tmpfs/shmem
========================

Traditionally, tmpfs only supported a single huge page size ("PMD"). Today,
it also supports smaller sizes just like anonymous memory, often referred
to as "multi-size THP" (mTHP). Huge pages of any size are commonly
represented in the kernel as "large folios".

While there is fine control over the huge page sizes to use for the internal
shmem mount (see below), ordinary tmpfs mounts will make use of all available
huge page sizes without any control over the exact sizes, behaving more like
"always" mode.
tmpfs mounts
------------
The THP allocation policy for tmpfs mounts can be adjusted using the mount
option ``huge=``. It can have the following values:

always
    Attempt to allocate huge pages every time we need a new page;

never
    Do not allocate huge pages;

within_size
    Only allocate huge page if it will be fully within i_size.
    Also respect fadvise()/madvise() hints;

advise
    Only allocate huge pages if requested with fadvise()/madvise();

The default policy is ``never``.
``mount -o remount,huge= /mountpoint`` works fine after mount: remounting
``huge=never`` will not attempt to break up huge pages at all, just stop
more from being allocated.
In addition to the policies listed above, the sysfs knob
/sys/kernel/mm/transparent_hugepage/shmem_enabled will affect the
allocation policy of tmpfs mounts when set to the following values:

deny
    For use in emergencies, to force the huge option off from
    all mounts;

force
    Force the huge option on for all - very useful for testing;
shmem / internal tmpfs
----------------------

To control the THP allocation policy for the internal shmem mount (used,
for example, for SysV SHM, memfds and shared anonymous mmaps), the sysfs
knob /sys/kernel/mm/transparent_hugepage/shmem_enabled and the per-THP-size
knobs in
'/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/shmem_enabled'
can be used.

The global knob has the same semantics as the ``huge=`` mount options
for tmpfs mounts, except that the different huge page sizes can be controlled
individually, and will only use the setting of the global knob when the
per-size knob is set to 'inherit'.
The following values are accepted for the per-size knobs:

always
    Attempt to allocate <size> huge pages every time we need a new page;

inherit
    Inherit the top-level "shmem_enabled" value. By default, PMD-sized hugepages
    have enabled="inherit" and all other hugepage sizes have enabled="never";

never
    Do not allocate <size> huge pages;

within_size
    Only allocate <size> huge page if it will be fully within i_size.
    Also respect fadvise()/madvise() hints;

advise
    Only allocate <size> huge pages if requested with fadvise()/madvise();
Need of application restart
===========================

The transparent_hugepage/enabled and
transparent_hugepage/hugepages-<size>kB/enabled values and tmpfs mount
option only affect future behavior. So to make them effective you need
to restart any application that could have been using hugepages. This
also applies to the regions registered in khugepaged.
Monitoring usage
================

The number of PMD-sized anonymous transparent huge pages currently used by the
system is available by reading the AnonHugePages field in ``/proc/meminfo``.
To identify what applications are using PMD-sized anonymous transparent huge
pages, it is necessary to read ``/proc/PID/smaps`` and count the AnonHugePages
fields for each mapping. (Note that AnonHugePages only applies to traditional
PMD-sized THP for historical reasons and should have been called
AnonHugePmdMapped.)
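For example, the system-wide count can be read directly (one line of the
usual ``/proc/meminfo`` output)::

    grep AnonHugePages /proc/meminfo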
There are a number of counters in ``/proc/vmstat`` that may be used to
monitor how successfully the system is providing huge pages for use.

thp_fault_alloc
    is incremented every time a huge page is successfully
    allocated and charged to handle a page fault.

thp_collapse_alloc
    is incremented by khugepaged when it has found
    a range of pages to collapse into one huge page and has
    successfully allocated a new huge page to store the data.

thp_fault_fallback
    is incremented if a page fault fails to allocate or charge
    a huge page and instead falls back to using small pages.

thp_fault_fallback_charge
    is incremented if a page fault fails to charge a huge page and
    instead falls back to using small pages even though the
    allocation was successful.

thp_collapse_alloc_failed
    is incremented if khugepaged found a range
    of pages that should be collapsed into one huge page but failed
    the allocation.

thp_file_alloc
    is incremented every time a shmem huge page is successfully
    allocated.

thp_file_fallback
    is incremented if a shmem huge page is attempted to be allocated
    but fails and instead falls back to using small pages.

thp_file_fallback_charge
    is incremented if a shmem huge page cannot be charged and instead
    falls back to using small pages even though the allocation was
    successful.

thp_file_mapped
    is incremented every time a file or shmem huge page is mapped into
    user address space.

thp_split_page
    is incremented every time a huge page is split into base
    pages. This can happen for a variety of reasons but a common
    reason is that a huge page is old and is being reclaimed.
    This action implies splitting all PMDs the page is mapped with.

thp_split_page_failed
    is incremented if the kernel fails to split a huge
    page. This can happen if the page was pinned by somebody.

thp_deferred_split_page
    is incremented when a huge page is put onto the split
    queue. This happens when a huge page is partially unmapped and
    splitting it would free up some memory. Pages on the split queue
    are going to be split under memory pressure.

thp_underused_split_page
    is incremented when a huge page on the split queue was split
    because it was underused. A THP is underused if the number of
    zero-filled pages in the THP is above max_ptes_none (see above).

thp_split_pmd
    is incremented every time a PMD is split into a table of PTEs.
    This can happen, for instance, when an application calls mprotect()
    or munmap() on part of a huge page. It doesn't split the huge page,
    only the page table entry.

thp_zero_page_alloc
    is incremented every time a huge zero page used for thp is
    successfully allocated. Note, it doesn't count every map of
    the huge zero page, only its allocation.

thp_zero_page_alloc_failed
    is incremented if the kernel fails to allocate a
    huge zero page and falls back to using small pages.

thp_swpout
    is incremented every time a huge page is swapped out in one
    piece without splitting.

thp_swpout_fallback
    is incremented if a huge page has to be split before swapout,
    usually because the kernel failed to allocate some contiguous swap
    space for the huge page.
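Since all of these counters share the ``thp_`` prefix, a convenient way
to dump them together is, for example::

    grep thp_ /proc/vmstat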
In ``/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/stats``, there
are also individual counters for each huge page size, which can be utilized
to monitor the system's effectiveness in providing huge pages for usage.
Each counter has its own corresponding file.
anon_fault_alloc
    is incremented every time a huge page is successfully
    allocated and charged to handle a page fault.

anon_fault_fallback
    is incremented if a page fault fails to allocate or charge
    a huge page and instead falls back to using huge pages with
    lower orders or small pages.

anon_fault_fallback_charge
    is incremented if a page fault fails to charge a huge page and
    instead falls back to using huge pages with lower orders or
    small pages even though the allocation was successful.

zswpout
    is incremented every time a huge page is swapped out to zswap in one
    piece without splitting.

swpin
    is incremented every time a huge page is swapped in from a non-zswap
    swap device in one piece.

swpin_fallback
    is incremented if swapin fails to allocate or charge a huge page
    and instead falls back to using huge pages with lower orders or
    small pages.

swpin_fallback_charge
    is incremented if swapin fails to charge a huge page and instead
    falls back to using huge pages with lower orders or small pages
    even though the allocation was successful.

swpout
    is incremented every time a huge page is swapped out to a non-zswap
    swap device in one piece without splitting.

swpout_fallback
    is incremented if a huge page has to be split before swapout,
    usually because the kernel failed to allocate some contiguous swap
    space for the huge page.

shmem_alloc
    is incremented every time a shmem huge page is successfully
    allocated.

shmem_fallback
    is incremented if a shmem huge page is attempted to be allocated
    but fails and instead falls back to using small pages.

shmem_fallback_charge
    is incremented if a shmem huge page cannot be charged and instead
    falls back to using small pages even though the allocation was
    successful.

split
    is incremented every time a huge page is successfully split into
    smaller orders. This can happen for a variety of reasons but a
    common reason is that a huge page is old and is being reclaimed.

split_failed
    is incremented if the kernel fails to split a huge
    page. This can happen if the page was pinned by somebody.

split_deferred
    is incremented when a huge page is put onto the split queue.
    This happens when a huge page is partially unmapped and splitting
    it would free up some memory. Pages on the split queue are going
    to be split under memory pressure, if splitting is possible.
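For example, assuming the system supports 64K THP (substitute any size
that appears under /sys/kernel/mm/transparent_hugepage/ on your system)::

    cat /sys/kernel/mm/transparent_hugepage/hugepages-64kB/stats/anon_fault_alloc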
As the system ages, allocating huge pages may be expensive as the
system uses memory compaction to copy data around memory to free a
huge page for use. There are some counters in ``/proc/vmstat`` to help
monitor this overhead.

compact_stall
    is incremented every time a process stalls to run
    memory compaction so that a huge page is free for use.

compact_success
    is incremented if the system compacted memory and
    freed a huge page for use.
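These can be read together (the same prefix also matches the kernel's
other compaction counters)::

    grep compact_ /proc/vmstat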