hugetlb: update vmemmap_dedup.rst

Update the documentation regarding vmemmap optimization for hugetlb to
reflect the changes in how the kernel maps the tail pages.

Fake heads no longer exist. Remove their description.

[kas@kernel.org: update vmemmap_dedup.rst]
  Link: https://lkml.kernel.org/r/20260302105630.303492-1-kas@kernel.org
Link: https://lkml.kernel.org/r/20260227194302.274384-18-kas@kernel.org
Signed-off-by: Kiryl Shutsemau <kas@kernel.org>
Reviewed-by: Muchun Song <muchun.song@linux.dev>
Reviewed-by: David Hildenbrand (Arm) <david@kernel.org>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Alexandre Ghiti <alex@ghiti.fr>
Cc: Baoquan He <bhe@redhat.com>
Cc: Christoph Lameter <cl@gentwo.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Frank van der Linden <fvdl@google.com>
Cc: Harry Yoo <harry.yoo@oracle.com>
Cc: Huacai Chen <chenhuacai@kernel.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Usama Arif <usamaarif642@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: WANG Xuerui <kernel@xen0n.name>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Author: Kiryl Shutsemau
Date: 2026-02-27 19:42:55 +00:00
Committer: Andrew Morton
Parent: 66b2a3d9ae
Commit: fed8676ca2


@@ -124,33 +124,35 @@ Here is how things look before optimization::
  |           |
  +-----------+
 
-The value of page->compound_info is the same for all tail pages. The first
-page of ``struct page`` (page 0) associated with the HugeTLB page contains the 4
-``struct page`` necessary to describe the HugeTLB. The only use of the remaining
-pages of ``struct page`` (page 1 to page 7) is to point to page->compound_info.
-Therefore, we can remap pages 1 to 7 to page 0. Only 1 page of ``struct page``
-will be used for each HugeTLB page. This will allow us to free the remaining
-7 pages to the buddy allocator.
+The first page of ``struct page`` (page 0) associated with the HugeTLB page
+contains the 4 ``struct page`` necessary to describe the HugeTLB. The remaining
+pages of ``struct page`` (page 1 to page 7) are tail pages.
+
+The optimization is only applied when the size of the struct page is a power
+of 2. In this case, all tail pages of the same order are identical. See
+compound_head(). This allows us to remap the tail pages of the vmemmap to a
+shared, read-only page. The head page is also remapped to a new page. This
+allows the original vmemmap pages to be freed.
 
 Here is how things look after remapping::
 
-    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
- +-----------+ ---virt_to_page---> +-----------+ mapping to +-----------+
- |           |                     |     0     | -------------> |     0     |
- |           |                     +-----------+                +-----------+
- |           |                     |     1     | ---------------^ ^ ^ ^ ^ ^ ^
- |           |                     +-----------+                  | | | | | |
- |           |                     |     2     | -----------------+ | | | | |
- |           |                     +-----------+                    | | | | |
- |           |                     |     3     | -------------------+ | | | |
- |           |                     +-----------+                      | | | |
- |           |                     |     4     | ---------------------+ | | |
- |    PMD    |                     +-----------+                        | | |
- |   level   |                     |     5     | -----------------------+ | |
- |  mapping  |                     +-----------+                          | |
- |           |                     |     6     | -------------------------+ |
- |           |                     +-----------+                            |
- |           |                     |     7     | ---------------------------+
+    HugeTLB                  struct pages(8 pages)          page frame (new)
+ +-----------+ ---virt_to_page---> +-----------+ mapping to +----------------+
+ |           |                     |     0     | ---------> |       0        |
+ |           |                     +-----------+            +----------------+
+ |           |                     |     1     | ------+
+ |           |                     +-----------+       |
+ |           |                     |     2     | ------+    +----------------------------+
+ |           |                     +-----------+       |    |  A single, per-zone page   |
+ |           |                     |     3     | ------+--> |  frame shared among all    |
+ |           |                     +-----------+       |    | hugepages of the same size |
+ |           |                     |     4     | ------+    +----------------------------+
+ |           |                     +-----------+       |
+ |           |                     |     5     | ------+
+ |    PMD    |                     +-----------+       |
+ |   level   |                     |     6     | ------+
+ |  mapping  |                     +-----------+       |
+ |           |                     |     7     | ------+
  |           |                     +-----------+
  |           |
  |           |
@@ -172,16 +174,6 @@ The contiguous bit is used to increase the mapping size at the pmd and pte
 (last) level. So this type of HugeTLB page can be optimized only when its
 size of the ``struct page`` structs is greater than **1** page.
 
-Notice: The head vmemmap page is not freed to the buddy allocator and all
-tail vmemmap pages are mapped to the head vmemmap page frame. So we can see
-more than one ``struct page`` struct with ``PG_head`` (e.g. 8 per 2 MB HugeTLB
-page) associated with each HugeTLB page. The ``compound_head()`` can handle
-this correctly. There is only **one** head ``struct page``, the tail
-``struct page`` with ``PG_head`` are fake head ``struct page``. We need an
-approach to distinguish between those two different types of ``struct page`` so
-that ``compound_head()`` can return the real head ``struct page`` when the
-parameter is the tail ``struct page`` but with ``PG_head``.
-
 Device DAX
 ==========