= Transparent Hugepage Support =

== Objective ==
Performance critical computing applications dealing with large memory
working sets are already running on top of libhugetlbfs and in turn
hugetlbfs. Transparent Hugepage Support is an alternative means of
backing virtual memory with huge pages, one that supports the
automatic promotion and demotion of page sizes and doesn't have the
shortcomings of hugetlbfs.

Currently it only works for anonymous memory mappings, but in the
future it can expand over the pagecache layer, starting with tmpfs.
Applications run faster because of two factors. The first factor
consists in taking a single page fault for each 2M virtual region
touched by userland (so reducing the enter/exit kernel frequency by a
512 times factor). This factor is almost completely irrelevant and not
of significant interest: it only matters the first time the memory is
accessed for the lifetime of a memory mapping, and it comes with the
downside of requiring larger clear-page and copy-page operations in
page faults, which is a potentially negative effect. The second, long
lasting and much more important factor affects all subsequent accesses
to the memory for the whole runtime of the application. It consists of
two components: 1) the TLB miss will run faster (especially with
virtualization using nested pagetables but almost always also on bare
metal without virtualization) and 2) a single TLB entry will be
mapping a much larger amount of virtual memory, in turn reducing the
number of TLB misses. With virtualization and nested pagetables, the
TLB can only map the larger page size if both KVM and the Linux guest
are using hugepages, but a significant speedup already happens if only
one of the two is using hugepages, just because the TLB miss is going
to run faster.
== Design ==

- "graceful fallback": mm components which don't have transparent
  hugepage knowledge fall back to breaking a transparent hugepage and
  working on the regular pages and their respective regular pmd/pte
  mappings

- if a hugepage allocation fails because of memory fragmentation,
  regular pages should be gracefully allocated instead and mixed in
  the same vma without any failure or significant delay and without
  userland noticing

- if some task quits and more hugepages become available (either
  immediately in the buddy or through the VM), guest physical memory
  backed by regular pages should be relocated to hugepages
  automatically (with khugepaged)

- it doesn't require memory reservation and in turn it uses hugepages
  whenever possible (the only possible reservation here is kernelcore=
  to avoid unmovable pages fragmenting all the memory, but such a
  tweak is not specific to transparent hugepage support and it's a
  generic feature that applies to all dynamic high order allocations
  in the kernel)

- this initial support only offers the feature in the anonymous memory
  regions but it'd be ideal to move it to tmpfs and the pagecache
  later
Transparent Hugepage Support maximizes the usefulness of free memory
compared to the reservation approach of hugetlbfs by allowing all
unused memory to be used as cache or other movable (or even unmovable)
entities. It doesn't require reservation to prevent hugepage
allocation failures from being noticeable from userland. It allows
paging and all other advanced VM features to be available on the
hugepages. It requires no modifications for applications to take
advantage of it.
Applications however can be further optimized to take advantage of
this feature, like for example they've been optimized before to avoid
a flood of mmap system calls for every malloc(4k). Optimizing userland
is by no means mandatory and khugepaged already can take care of long
lived page allocations even for hugepage unaware applications that
deal with large amounts of memory.
In certain cases when hugepages are enabled system wide, applications
may end up allocating more memory resources. An application may mmap a
large region but only touch 1 byte of it; in that case a 2M page might
be allocated instead of a 4k page for no good reason. This is why it's
possible to disable hugepages system-wide and to only have them inside
MADV_HUGEPAGE madvise regions.

Embedded systems should enable hugepages only inside madvise regions
to eliminate any risk of wasting any precious byte of memory and to
only run faster.

Applications that get a lot of benefit from hugepages and that don't
risk losing memory by using hugepages should use
madvise(MADV_HUGEPAGE) on their critical mmapped regions.
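
As an illustration, a minimal userland sketch of that advice (not part
of this document; it assumes MADV_HUGEPAGE is exposed by the system
headers and that the huge page size is the 2M of x86_64):

#include <stdlib.h>
#include <sys/mman.h>

#define HPAGE_SIZE	(2UL * 1024 * 1024)	/* assumed 2M huge page size */

int main(void)
{
	size_t len = 64 * HPAGE_SIZE;
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;
	/* ask for hugepage backing even if the global policy is "madvise" */
	if (madvise(p, len, MADV_HUGEPAGE))
		return 1;
	/* ... touch the memory; faults may now be served with 2M pages ... */
	munmap(p, len);
	return 0;
}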
== sysfs ==

Transparent Hugepage Support can be entirely disabled (mostly for
debugging purposes), or only enabled inside MADV_HUGEPAGE regions (to
avoid the risk of consuming more memory resources), or enabled system
wide. This can be achieved with one of:

echo always >/sys/kernel/mm/transparent_hugepage/enabled
echo madvise >/sys/kernel/mm/transparent_hugepage/enabled
echo never >/sys/kernel/mm/transparent_hugepage/enabled
It's also possible to limit the VM's defrag efforts to generate
hugepages, in case they're not immediately free, to madvise regions
only, or to never try to defrag memory and simply fall back to regular
pages unless hugepages are immediately available. Clearly if we spend
CPU time to defrag memory, we would expect to gain even more by the
fact we use hugepages later instead of regular pages. This isn't
always guaranteed, but it may be more likely in case the allocation is
for a MADV_HUGEPAGE region.

echo always >/sys/kernel/mm/transparent_hugepage/defrag
echo madvise >/sys/kernel/mm/transparent_hugepage/defrag
echo never >/sys/kernel/mm/transparent_hugepage/defrag
By default the kernel tries to use the huge zero page on read page
faults. It's possible to disable the huge zero page by writing 0 or
enable it back by writing 1:

echo 0 >/sys/kernel/mm/transparent_hugepage/use_zero_page
echo 1 >/sys/kernel/mm/transparent_hugepage/use_zero_page
khugepaged will be automatically started when
transparent_hugepage/enabled is set to "always" or "madvise", and it'll
be automatically shut down if it's set to "never".
khugepaged usually runs at low frequency, so while one may not want to
invoke defrag algorithms synchronously during the page faults, it
should be worth invoking defrag at least in khugepaged. However it's
also possible to disable defrag in khugepaged by writing 0 or enable
defrag in khugepaged by writing 1:

echo 0 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
echo 1 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
You can also control how many pages khugepaged should scan at each
pass:

/sys/kernel/mm/transparent_hugepage/khugepaged/pages_to_scan

and how many milliseconds to wait in khugepaged between each pass (you
can set this to 0 to run khugepaged at 100% utilization of one core):

/sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs

and how many milliseconds to wait in khugepaged if there's a hugepage
allocation failure, to throttle the next allocation attempt:

/sys/kernel/mm/transparent_hugepage/khugepaged/alloc_sleep_millisecs

The khugepaged progress can be seen in the number of pages collapsed:

/sys/kernel/mm/transparent_hugepage/khugepaged/pages_collapsed

for each pass:

/sys/kernel/mm/transparent_hugepage/khugepaged/full_scans
== Boot parameter ==

You can change the sysfs boot time defaults of Transparent Hugepage
Support by passing the parameter "transparent_hugepage=always" or
"transparent_hugepage=madvise" or "transparent_hugepage=never"
(without "") to the kernel command line.
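
For example, a hypothetical bootloader entry (the kernel image path
and other parameters are illustrative; only the last parameter comes
from this document):

linux /boot/vmlinuz root=/dev/sda1 ro transparent_hugepage=madvise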
== Need of application restart ==

The transparent_hugepage/enabled values only affect future
behavior. So to make them effective you need to restart any
application that could have been using hugepages. This also applies to
the regions registered in khugepaged.
== Monitoring usage ==

The number of transparent huge pages currently used by the system is
available by reading the AnonHugePages field in /proc/meminfo. To
identify what applications are using transparent huge pages, it is
necessary to read /proc/PID/smaps and count the AnonHugePages fields
for each mapping. Note that reading the smaps file is expensive and
reading it frequently will incur overhead.
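
For instance, a small userland sketch that sums those fields for one
process (a hypothetical helper, assuming smaps uses the
"AnonHugePages: <n> kB" line format):

#include <stdio.h>

int main(int argc, char **argv)
{
	char path[64], line[256];
	unsigned long kb, total = 0;
	FILE *f;

	/* read our own smaps by default, or a PID given as argv[1] */
	snprintf(path, sizeof(path), "/proc/%s/smaps",
		 argc > 1 ? argv[1] : "self");
	f = fopen(path, "r");
	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "AnonHugePages: %lu kB", &kb) == 1)
			total += kb;
	fclose(f);
	printf("AnonHugePages total: %lu kB\n", total);
	return 0;
}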
There are a number of counters in /proc/vmstat that may be used to
monitor how successfully the system is providing huge pages for use.

thp_fault_alloc is incremented every time a huge page is successfully
        allocated to handle a page fault. This applies both to the
        first time a page is faulted and to COW faults.

thp_collapse_alloc is incremented by khugepaged when it has found
        a range of pages to collapse into one huge page and has
        successfully allocated a new huge page to store the data.

thp_fault_fallback is incremented if a page fault fails to allocate
        a huge page and instead falls back to using small pages.

thp_collapse_alloc_failed is incremented if khugepaged found a range
        of pages that should be collapsed into one huge page but failed
        the allocation.

thp_split is incremented every time a huge page is split into base
        pages. This can happen for a variety of reasons but a common
        reason is that a huge page is old and is being reclaimed.

thp_zero_page_alloc is incremented every time a huge zero page is
        successfully allocated. It includes allocations which were
        dropped due to a race with another allocation. Note, it doesn't
        count every map of the huge zero page, only its allocation.

thp_zero_page_alloc_failed is incremented if the kernel fails to
        allocate a huge zero page and falls back to using small pages.
As the system ages, allocating huge pages may be expensive as the
system uses memory compaction to copy data around memory to free a
huge page for use. There are some counters in /proc/vmstat to help
monitor this overhead.

compact_stall is incremented every time a process stalls to run
        memory compaction so that a huge page is free for use.

compact_success is incremented if the system compacted memory and
        freed a huge page for use.

compact_fail is incremented if the system tries to compact memory
        but fails.

compact_pages_moved is incremented each time a page is moved. If
        this value is increasing rapidly, it implies that the system
        is copying a lot of data to satisfy the huge page allocation.
        It is possible that the cost of copying exceeds any savings
        from reduced TLB misses.

compact_pagemigrate_failed is incremented when the underlying mechanism
        for moving a page failed.

compact_blocks_moved is incremented each time memory compaction examines
        a huge page aligned range of pages.

It is possible to establish how long the stalls were using the function
tracer to record how long was spent in __alloc_pages_nodemask and
using the mm_page_alloc tracepoint to identify which allocations were
for huge pages.
== get_user_pages and follow_page ==

get_user_pages and follow_page, if run on a hugepage, will return the
head or tail pages as usual (exactly as they would do on
hugetlbfs). Most gup users will only care about the actual physical
address of the page and its temporary pinning to release after the I/O
is complete, so they won't ever notice the fact the page is huge. But
if any driver is going to mangle over the page structure of the tail
page (like for checking page->mapping or other bits that are relevant
for the head page and not the tail page), it should be updated to
check the head page instead (while serializing properly against
split_huge_page() to avoid the head and tail pages disappearing from
under it; see the futex code for an example of that, hugetlbfs also
needed special handling in the futex code for similar reasons).

NOTE: these aren't new constraints to the GUP API, and they match the
same constraints that apply to hugetlbfs too, so any driver capable
of handling GUP on hugetlbfs will also work fine on transparent
hugepage backed mappings.
In case you can't handle compound pages if they're returned by
follow_page, the FOLL_SPLIT bit can be specified as a parameter to
follow_page, so that it will split the hugepages before returning
them. Migration for example passes FOLL_SPLIT as a parameter to
follow_page because it's not hugepage aware and in fact it can't work
at all on hugetlbfs (but it instead works fine on transparent
hugepages thanks to FOLL_SPLIT). Migration simply can't deal with
hugepages being returned (as it's not only checking the pfn of the
page and pinning it during the copy but it pretends to migrate the
memory in regular page sizes and with regular pte/pmd mappings).
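
A minimal in-kernel sketch of that usage, modeled on the migration
code (the surrounding context and full error handling are omitted):

	struct page *page;

	/* ask follow_page() to split any transparent hugepage so that
	 * only regular base pages are returned */
	page = follow_page(vma, addr, FOLL_GET | FOLL_SPLIT);
	if (IS_ERR(page) || !page)
		goto out;	/* illustrative error path */
	/* ... operate on the base page, then put_page(page) ... */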
== Optimizing the applications ==

To be guaranteed that the kernel will map a 2M page immediately in any
memory region, the mmap region has to be hugepage naturally
aligned. posix_memalign() can provide that guarantee.
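
For instance, a minimal sketch (the helper name is hypothetical and
the 2M huge page size is an x86_64 assumption):

#include <stdlib.h>
#include <sys/mman.h>

#define HPAGE_SIZE	(2UL * 1024 * 1024)	/* assumed 2M huge page size */

/* hypothetical helper: hugepage-aligned memory is eligible for an
 * immediate 2M mapping on the first fault */
void *alloc_hugepage_aligned(size_t len)
{
	void *p;

	if (posix_memalign(&p, HPAGE_SIZE, len))
		return NULL;
	madvise(p, len, MADV_HUGEPAGE);	/* optional, see above */
	return p;
}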
== Hugetlbfs ==

You can use hugetlbfs on a kernel that has transparent hugepage
support enabled just fine as always. No difference can be noted in
hugetlbfs other than there will be less overall fragmentation. All
usual features belonging to hugetlbfs are preserved and
unaffected. libhugetlbfs will also work fine as usual.
== Graceful fallback ==

Code walking pagetables but unaware about huge pmds can simply call
split_huge_page_pmd(vma, addr, pmd) where the pmd is the one returned by
pmd_offset. It's trivial to make the code transparent hugepage aware
by just grepping for "pmd_offset" and adding split_huge_page_pmd where
missing after pmd_offset returns the pmd. Thanks to the graceful
fallback design, with a one liner change, you can avoid writing
hundreds if not thousands of lines of complex code to make your code
hugepage aware.

If you're not walking pagetables but you run into a physical hugepage
that you can't handle natively in your code, you can split it by
calling split_huge_page(page). This is what the Linux VM does before
it tries to swap out the hugepage, for example.
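
A minimal sketch of that second case (assuming an anonymous
transparent hugepage with a reference held; handle_base_pages() is a
hypothetical consumer):

	/* split a transparent hugepage we can't handle natively;
	 * split_huge_page() returns 0 on success */
	if (PageTransHuge(page) && split_huge_page(page))
		return -EBUSY;	/* split failed: fall back or retry */
	handle_base_pages(page);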
Example to make mremap.c transparent hugepage aware with a one liner
change:

diff --git a/mm/mremap.c b/mm/mremap.c
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -41,6 +41,7 @@ static pmd_t *get_old_pmd(struct mm_stru
 		return NULL;
 
 	pmd = pmd_offset(pud, addr);
+	split_huge_page_pmd(vma, addr, pmd);
 	if (pmd_none_or_clear_bad(pmd))
 		return NULL;
== Locking in hugepage aware code ==

We want as much code as possible hugepage aware, as calling
split_huge_page() or split_huge_page_pmd() has a cost.
To make pagetable walks huge pmd aware, all you need to do is to call
pmd_trans_huge() on the pmd returned by pmd_offset. You must hold the
mmap_sem in read (or write) mode to be sure a huge pmd cannot be
created from under you by khugepaged (khugepaged collapse_huge_page
takes the mmap_sem in write mode in addition to the anon_vma lock). If
pmd_trans_huge returns false, you just fall back to the old code
paths. If instead pmd_trans_huge returns true, you have to take the
mm->page_table_lock and re-run pmd_trans_huge. Taking the
page_table_lock will prevent the huge pmd from being converted into a
regular pmd from under you (split_huge_page can run in parallel to the
pagetable walk). If the second pmd_trans_huge returns false, you
should just drop the page_table_lock and fall back to the old code as
before. Otherwise you should run pmd_trans_splitting on the pmd. In
case pmd_trans_splitting returns true, it means split_huge_page is
already in the middle of splitting the page. So if pmd_trans_splitting
returns true, it's enough to drop the page_table_lock, call
wait_split_huge_page and then fall back to the old code paths. You are
guaranteed that by the time wait_split_huge_page returns, the pmd
isn't huge anymore. If pmd_trans_splitting returns false, you can
proceed to process the huge pmd and the hugepage natively. Once
finished you can drop the page_table_lock.
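
A minimal sketch of that protocol (assuming mmap_sem is held by the
caller; process_huge_pmd() is a hypothetical placeholder for the
native huge pmd handling):

	if (pmd_trans_huge(*pmd)) {
		spin_lock(&mm->page_table_lock);
		if (pmd_trans_huge(*pmd)) {
			if (unlikely(pmd_trans_splitting(*pmd))) {
				/* split_huge_page is splitting it */
				spin_unlock(&mm->page_table_lock);
				wait_split_huge_page(vma->anon_vma, pmd);
				/* pmd isn't huge anymore: old code path */
			} else {
				/* huge and stable: handle it natively */
				process_huge_pmd(mm, addr, pmd);
				spin_unlock(&mm->page_table_lock);
				return;
			}
		} else {
			/* raced with a split: old code path */
			spin_unlock(&mm->page_table_lock);
		}
	}
	/* ... old code path: regular pte walk ... */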
== compound_lock, get_user_pages and put_page ==

split_huge_page internally has to distribute the refcounts in the head
page to the tail pages before clearing all PG_head/tail bits from the
page structures. It can do that easily for refcounts taken by huge pmd
mappings. But the GUP API as created by hugetlbfs (that returns head
and tail pages if running get_user_pages on an address backed by any
hugepage) requires the refcount to be accounted on the tail pages and
not only on the head pages, if we want to be able to run
split_huge_page while there are gup pins established on any tail
page. Failure to be able to run split_huge_page if there's any gup pin
on any tail page would mean having to split all hugepages upfront in
get_user_pages, which is unacceptable as too many gup users are
performance critical and they must work natively on hugepages like
they work natively on hugetlbfs already (hugetlbfs is simpler because
hugetlbfs pages cannot be split, so there is no requirement to account
the pins on the tail pages for hugetlbfs). If we didn't account the
gup refcounts on the tail pages during gup, we wouldn't know anymore
which tail page is pinned by gup and which is not while we run
split_huge_page. But we still have to add the gup pin to the head page
too, to know when we can free the compound page in case it's never
split during its lifetime. That requires changing not just get_page,
but put_page as well so that when put_page runs on a tail page (and
only on a tail page) it will find its respective head page, and then
it will decrease the head page refcount in addition to the tail page
refcount. To obtain a head page reliably and to decrease its refcount
without race conditions, put_page has to serialize against
__split_huge_page_refcount using a special per-page lock called
compound_lock.