On Thu, Feb 16, 2017 at 7:52 PM, Ross Zwisler
<ross.zwisler@linux.intel.com> wrote:
On Thu, Jan 19, 2017 at 07:50:29PM -0800, Dan Williams wrote:
> The direct-I/O write path for a pmem device must ensure that data is flushed
> to a power-fail safe zone when the operation is complete. However, other
> dax capable block devices, like brd, do not have this requirement.
> Introduce a 'copy_from_iter' dax operation so that pmem can inject
> cache management without imposing this overhead on other dax capable
> block_device drivers.
>
> Cc: <x86@kernel.org>
> Cc: Jan Kara <jack@suse.cz>
> Cc: Jeff Moyer <jmoyer@redhat.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Al Viro <viro@zeniv.linux.org.uk>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Matthew Wilcox <mawilcox@microsoft.com>
> Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> ---
> arch/x86/include/asm/pmem.h |   31 -------------------------------
> drivers/nvdimm/pmem.c       |   10 ++++++++++
> fs/dax.c                    |   11 ++++++++++-
> include/linux/blkdev.h      |    1 +
> include/linux/pmem.h        |   24 ------------------------
> 5 files changed, 21 insertions(+), 56 deletions(-)
>
> diff --git a/arch/x86/include/asm/pmem.h b/arch/x86/include/asm/pmem.h
> index f26ba430d853..0ca5e693f4a2 100644
> --- a/arch/x86/include/asm/pmem.h
> +++ b/arch/x86/include/asm/pmem.h
> @@ -64,37 +64,6 @@ static inline void arch_wb_cache_pmem(void *addr, size_t size)
> clwb(p);
> }
>
> -/*
> - * copy_from_iter_nocache() on x86 only uses non-temporal stores for iovec
> - * iterators, so for other types (bvec & kvec) we must do a cache write-back.
> - */
> -static inline bool __iter_needs_pmem_wb(struct iov_iter *i)
> -{
> - return iter_is_iovec(i) == false;
> -}
> -
> -/**
> - * arch_copy_from_iter_pmem - copy data from an iterator to PMEM
> - * @addr: PMEM destination address
> - * @bytes: number of bytes to copy
> - * @i: iterator with source data
> - *
> - * Copy data from the iterator 'i' to the PMEM buffer starting at 'addr'.
> - */
> -static inline size_t arch_copy_from_iter_pmem(void *addr, size_t bytes,
> - struct iov_iter *i)
> -{
> - size_t len;
> -
> - /* TODO: skip the write-back by always using non-temporal stores */
> - len = copy_from_iter_nocache(addr, bytes, i);
> -
> - if (__iter_needs_pmem_wb(i))
> - arch_wb_cache_pmem(addr, bytes);
This writeback is no longer conditional in the pmem_copy_from_iter() version,
which means that for iovec iterators you do a non-temporal store and then
afterwards take the time to loop through and flush the cachelines? This seems
incorrect, and I wonder if this could be the cause of the performance
regression reported by 0-day?
I'm pretty sure you're right. What I was planning for the next version
of this patch is to handle the unaligned case in the local assembly so
that we never need to do a flush loop after the fact.