On Tue, Sep 10, 2019 at 4:29 PM Aneesh Kumar K.V
<aneesh.kumar@linux.ibm.com> wrote:
> With commit 7cc7867fb061 ("mm/devm_memremap_pages: enable sub-section remap"),
> pmem namespaces are remapped in 2M chunks. On architectures like ppc64 we
> can map the memmap area using a 16MB hugepage size, and a single such
> mapping can cover a memory range of 16G.
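For anyone checking the maths (this assumes 64K base pages and the usual
64 byte struct page, so treat the exact numbers as illustrative):

	/*
	 * One 16MB vmemmap hugepage holds:
	 *	16MB / sizeof(struct page) = 16M / 64 = 256K struct pages
	 * and 256K struct pages with a 64K PAGE_SIZE describe:
	 *	256K * 64K = 16G of memory
	 * so a single memmap mapping spans many 2M sub-sections.
	 */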
> Since memory is now added in sub-section chunks, when enabling a new pmem
> namespace the kernel should check whether an existing memmap mapping
> already covers the new namespace before creating one. Currently this is
> validated by checking whether the section covering the range is already
> initialized. Because there can be multiple namespaces in the same
> section, that check can give the wrong answer. Update it to check for
> sub-sections in the range instead, by checking every pfn in the range we
> are mapping.
> We could optimize this by checking just one pfn in each sub-section, but
> since this is not a fast path we keep it simple.
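Not asking for it in this patch, but for reference I'd expect the
optimised loop to just step by sub-section, something like (untested,
and it assumes start is sub-section aligned within the vmemmap):

	/* Check one pfn per sub-section instead of every pfn. */
	for (; start < end;
	     start += PAGES_PER_SUBSECTION * sizeof(struct page))
		if (pfn_valid(page_to_pfn((struct page *)start)))
			return 1;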
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> ---
>  arch/powerpc/mm/init_64.c | 45 ++++++++++++++++++++-------------------
>  1 file changed, 23 insertions(+), 22 deletions(-)
>
> diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
> index 4e08246acd79..7710ccdc19a2 100644
> --- a/arch/powerpc/mm/init_64.c
> +++ b/arch/powerpc/mm/init_64.c
> @@ -70,30 +70,24 @@ EXPORT_SYMBOL_GPL(kernstart_addr);
>  #ifdef CONFIG_SPARSEMEM_VMEMMAP
>  /*
> - * Given an address within the vmemmap, determine the pfn of the page that
> - * represents the start of the section it is within. Note that we have to
> - * do this by hand as the proffered address may not be correctly aligned.
> - * Subtraction of non-aligned pointers produces undefined results.
> - */
> -static unsigned long __meminit vmemmap_section_start(unsigned long page)
> -{
> -	unsigned long offset = page - ((unsigned long)(vmemmap));
> -
> -	/* Return the pfn of the start of the section. */
> -	return (offset / sizeof(struct page)) & PAGE_SECTION_MASK;
> -}
If you want to go with Dan's suggestion of keeping the function and
using PAGE_SUBSECTION_MASK, could you fold the pfn_to_page() below
into vmemmap_section_start()? The current behaviour of returning a pfn
makes no sense to me.
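Untested sketch of what I mean (vmemmap_subsection_start() is just a
placeholder name):

	static struct page * __meminit
	vmemmap_subsection_start(unsigned long page)
	{
		unsigned long offset = page - (unsigned long)vmemmap;

		/* Return the page at the start of the sub-section. */
		return vmemmap + ((offset / sizeof(struct page)) &
				  PAGE_SUBSECTION_MASK);
	}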
> -
> -/*
> - * Check if this vmemmap page is already initialised. If any section
> + * Check if this vmemmap page is already initialised. If any sub section
>   * which overlaps this vmemmap page is initialised then this page is
>   * initialised already.
>   */
> -static int __meminit vmemmap_populated(unsigned long start, int page_size)
> +
> +static int __meminit vmemmap_populated(unsigned long start, int size)
>  {
> -	unsigned long end = start + page_size;
> -	start = (unsigned long)(pfn_to_page(vmemmap_section_start(start)));
> +	unsigned long end = start + size;
>
> -	for (; start < end; start += (PAGES_PER_SECTION * sizeof(struct page)))
> +	/* start is size aligned and it is always > sizeof(struct page) */
> +	VM_BUG_ON(start & sizeof(struct page));
Shouldn't the test be: start & (sizeof(struct page) - 1)?
VM_BUG_ON(start != ALIGN(start, sizeof(struct page))) would be clearer.
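To spell out the problem: sizeof(struct page) is a power of two
(typically 64 on 64-bit), so `start & sizeof(struct page)' tests a
single bit rather than the alignment:

	/* e.g. start = vmemmap base + 32: misaligned, but 0x20 & 0x40
	 * is zero, so the current check never fires. */
	VM_BUG_ON(start & sizeof(struct page));		/* misses it */
	VM_BUG_ON(start & (sizeof(struct page) - 1));	/* catches it */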
> +	for (; start < end; start += sizeof(struct page))
> +		/*
> +		 * pfn valid check here is intended to really check
> +		 * whether we have any subsection already initialized
> +		 * in this range. We keep it simple by checking every
> +		 * pfn in the range.
> +		 */
>  		if (pfn_valid(page_to_pfn((struct page *)start)))
>  			return 1;
Having a few lines of separation between the for () and the loop body
always looks a bit sketchy, even if it's just a comment. Wrapping the
body in braces or moving the comment above the loop is probably a
good idea.
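Untested, but something like this reads better to me:

	/*
	 * The pfn_valid() check is really asking whether any sub-section
	 * in this range has already been initialized. Keep it simple by
	 * checking every pfn in the range.
	 */
	for (; start < end; start += sizeof(struct page)) {
		if (pfn_valid(page_to_pfn((struct page *)start)))
			return 1;
	}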
Looks fine otherwise