On Thu, Mar 28, 2019 at 2:17 PM David Hildenbrand <david(a)redhat.com> wrote:
>> You are using the term "Sub-section memory hotplug support", but is that
>> actually what you mean? To rephrase, aren't we talking here about
>> "Sub-section device memory hotplug support" or similar?
> Specifically it is support for passing @start and @size arguments to
> arch_add_memory() that are not section aligned. It's not limited to
> "device memory" which is otherwise not a concept that
> arch_add_memory() understands, it just groks spans of pfns.
> Okay, so everything that does not have memory block devices as of now.
>> Reason I am asking is because I wonder how that would interact with the
>> memory block device infrastructure and hotplugging of system ram -
>> add_memory()/add_memory_resource(). I *assume* you are not changing the
>> add_memory() interface, so that one still only works with whole sections
>> (or well, memory_block_size_bytes()) - check_hotplug_memory_range().
> Like you found below, the implementation enforces that add_memory_*()
> interfaces maintain section alignment for @start and @size.
>> In general, mixing and matching system RAM and persistent memory per
>> section, I am not a friend of that.
> You have no choice. The platform may decide to map PMEM and System RAM
> in the same section because the Linux section is too large compared to
> typical memory controller mapping granularity capability.
> I might be very wrong here, but do we actually care about something like
> 64MB getting lost in the cracks? I mean if it simplifies core MM, let go
> of the couple of MB of system ram and handle the PMEM part only. Treat
> the system ram parts like memory holes we already have in ordinary
> sections (well, there we simply set the relevant struct pages to
> PG_reserved). Of course, if we have hundreds of unaligned devices and
> stuff will start to add up ... but I assume this is not the case?
That's precisely what we do today and it has become untenable as the
collision scenarios pile up. This thread is worth a read if you
care about some of the gory details of why I'm back to pushing for
sub-section support, but most of it has already been summarized in the
current discussion on this thread.
>> Especially when it comes to memory
>> block devices. But I am getting the feeling that we are rather targeting
>> PMEM vs. PMEM with this patch series.
> The collisions are between System RAM, PMEM regions, and PMEM
> namespaces (sub-divisions of regions that each need their own mapping
> lifetime).
> Understood. I wonder if that PMEM only mapping (including separate
> lifetime) could be handled differently. But I am absolutely no expert.
I refer you to the above thread trying to fix the libnvdimm-local hacks.
>>> Quote patch7:
>>> "The libnvdimm sub-system has suffered a series of hacks and broken
>>> workarounds for the memory-hotplug implementation's awkward
>>> section-aligned (128MB) granularity. For example the following backtrace
>>> is emitted when attempting arch_add_memory() with physical address
>>> ranges that intersect 'System RAM' (RAM) with 'Persistent Memory' (PMEM)
>>> within a given section:
>>> WARNING: CPU: 0 PID: 558 at kernel/memremap.c:300
>>> devm_memremap_pages attempted on mixed region [mem 0x200000000-0x2fbffffff flags 0x200]
>>> Call Trace:
>>> __wrap_devm_memremap_pages+0x58/0x70 [nfit_test_iomap]
>>> pmem_attach_disk+0x19a/0x440 [nd_pmem]
>>> Recently it was discovered that the problem goes beyond RAM vs PMEM
>>> collisions as some platforms produce PMEM vs PMEM collisions within a
>> As side-noted by Michal, I wonder if PMEM vs. PMEM cannot rather be
>> implemented "on top" of what we have right now. Or is this what we
>> already have that you call "hacks in nvdimm" code? (no NVDIMM expert,
>> sorry for the stupid questions)
> It doesn't work, because even if the padding was implemented 100%
> correct, which thus far has failed to be the case, the platform may
> change physical alignments from one boot to the next for a variety of
> reasons.
> Would ignoring the System RAM parts (as mentioned above) help or doesn't
> it make any difference in terms of complexity?
Doesn't help much, that's only one of many collision sources.
>>> given section. The libnvdimm workaround for that case
>>> revealed that the
>>> libnvdimm section-alignment-padding implementation has been broken for a
>>> long while. A fix for that long-standing breakage introduces as many
>>> problems as it solves as it would require a backward-incompatible change
>>> to the namespace metadata interpretation. Instead of that dubious route,
>>> address the root problem in the memory-hotplug implementation."
>>> The approach taken is to observe that each section already maintains
>>> an array of 'unsigned long' values to hold the pageblock_flags. An
>>> additional 'unsigned long' is added to house a 'sub-section active'
>>> bitmask. Each bit tracks the mapped state of one sub-section's worth of
>>> capacity which is SECTION_SIZE / BITS_PER_LONG, or 2MB on x86-64.
>>> The implication of allowing sections to be piecemeal mapped/unmapped is
>>> that the valid_section() helper is no longer authoritative to determine
>>> if a section is fully mapped. Instead pfn_valid() is updated to consult
>>> the section-active bitmask. Given that typical memory hotplug still has
>>> deep "section" dependencies the sub-section capability is limited to
>>> 'want_memblock=false' invocations of arch_add_memory(), effectively
>>> devm_memremap_pages() users for now.
>> Ah, there it is. And my point would be, please don't ever unlock
>> something like that for want_memblock=true. Especially not for memory
>> added after boot via device drivers (add_memory()).
> I don't see a strong reason why not, as long as it does not regress
> existing use cases. It might need to be an opt-in for new tooling that
> is aware of finer granularity hotplug. That said, I have no pressing
> need to go there and just care about the arch_add_memory() capability
> for now.
> Especially onlining/offlining of memory might end up very ugly. And that
> goes hand in hand with memory block devices. They are either online or
> offline, not something in between. (I went that path and Michal
> correctly told me why it is not a good idea)
> I was recently trying to teach memory block devices who their owner is /
> of which type they are. Right now I am looking into the option of using
> drivers. Memory block devices that could belong to different drivers at
> the same time are, well ... totally broken.
Sub-section support is aimed at a similar case where different
portions of a 128MB span need to be handed out to devices / drivers with
independent lifetimes.
> I assume it would still be a special
> case, though, but conceptually speaking about the interface it would be
> the same.
> Memory block devices (and therefore 1..X sections) should have one owner
> only. Anything else just does not fit.
Yes, but I would say the problem there is that the
memory-block-devices interface design is showing its age and is being
pressured by how systems want to deploy and use memory today.