On 04/08/17 16:25, Catalin Marinas wrote:
> Two minor comments below.
> On Tue, Jul 25, 2017 at 11:55:42AM +0100, Robin Murphy wrote:
> > --- a/arch/arm64/Kconfig
> > +++ b/arch/arm64/Kconfig
> > @@ -960,6 +960,17 @@ config ARM64_UAO
> >  	  regular load/store instructions if the cpu does not implement the
> > +config ARM64_PMEM
> > +	bool "Enable support for persistent memory"
> > +	select ARCH_HAS_PMEM_API
> > +	help
> > +	  Say Y to enable support for the persistent memory API based on the
> > +	  ARMv8.2 DCPoP feature.
> > +	  The feature is detected at runtime, and the kernel will use DC CVAC
> > +	  operations if DC CVAP is not supported (following the behaviour of
> > +	  DC CVAP itself if the system does not define a point of persistence).
> Any reason not to have this default y?
Mostly because it's untested, and not actually useful without some way
of describing persistent memory regions to the kernel (I'm currently
trying to make sense of what exactly ARCH_HAS_MMIO_FLUSH is supposed to
mean in order to enable ACPI NFIT support).
There's also the potential issue that we can't disable ARCH_HAS_PMEM_API
at runtime on pre-v8.2 systems, where DC CVAC may not strictly give the
guarantee of persistence that ARCH_HAS_PMEM_API is supposed to imply. However, I
guess that's more of an open problem, since even on a v8.2 CPU reporting
(mandatory) DC CVAP support we've still no way to actually know whether
the interconnect/memory controller/etc. of any old system is up to the
job. At this point I'm mostly hoping that people will only be sticking
NVDIMMs into systems that *are* properly designed to support them, v8.2
CPUs or not.
> > --- a/arch/arm64/mm/cache.S
> > +++ b/arch/arm64/mm/cache.S
> > @@ -172,6 +172,20 @@ ENDPIPROC(__clean_dcache_area_poc)
> > +/*
> > + *	__clean_dcache_area_pop(kaddr, size)
> > + *
> > + *	Ensure that any D-cache lines for the interval [kaddr, kaddr+size)
> > + *	are cleaned to the PoP.
> > + *
> > + *	- kaddr   - kernel address
> > + *	- size    - size in question
> > + */
> > +ENTRY(__clean_dcache_area_pop)
> > +	dcache_by_line_op cvap, sy, x0, x1, x2, x3
> > +	ret
> > +ENDPIPROC(__clean_dcache_area_pop)
> >  /*
> >   *	__dma_flush_area(start, size)
> >   *	clean & invalidate D / U line
> > diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> > index a682a0a2a0fa..a461a00ceb3e 100644
> > --- a/arch/arm64/mm/pageattr.c
> > +++ b/arch/arm64/mm/pageattr.c
> > @@ -183,3 +183,21 @@ bool kernel_page_present(struct page *page)
> >  #endif /* CONFIG_HIBERNATION */
> >  #endif /* CONFIG_DEBUG_PAGEALLOC */
> > +#ifdef CONFIG_ARCH_HAS_PMEM_API
> > +#include <asm/cacheflush.h>
> > +static inline void arch_wb_cache_pmem(void *addr, size_t size)
> > +{
> > +	/* Ensure order against any prior non-cacheable writes */
> > +	dmb(sy);
> > +	__clean_dcache_area_pop(addr, size);
> > +}
> Could we keep the dmb() in the actual __clean_dcache_area_pop() function?
Mark held the opinion that it should follow the same pattern as the
other cache maintenance primitives - e.g. we don't have such a dmb in
__inval_cache_range(), but we do place them at the callsites where we know
one may be necessary (head.S) - and I found it hard to disagree. The callers
in patch #6 should never need a barrier, and arguably we may not even
need this one, since it looks like pmem should currently always be
mapped as MEMREMAP_WB if ARCH_HAS_PMEM_API.
> I can do the changes myself if you don't have any objections.
If you would prefer to play safe and move it back into the assembly
that's fine by me, but note that the associated comments in patch #6
should also be removed if so.