On Fri, Dec 15, 2017 at 6:09 AM, Christoph Hellwig <hch(a)lst.de> wrote:
This is a pretty big function, which should be out of line in general,
and a no-op stub if CONFIG_ZONE_DEVICE is not set.
Signed-off-by: Christoph Hellwig <hch(a)lst.de>
Reviewed-by: Logan Gunthorpe <logang(a)deltatee.com>
[..]
+/**
+ * get_dev_pagemap() - take a new live reference on the dev_pagemap for @pfn
+ * @pfn: page frame number to lookup page_map
+ * @pgmap: optional known pgmap that already has a reference
+ *
+ * @pgmap allows the overhead of a lookup to be bypassed when @pfn lands in the
+ * same mapping.
+ */
+struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
+ struct dev_pagemap *pgmap)
+{
+ const struct resource *res = pgmap ? pgmap->res : NULL;
+ resource_size_t phys = PFN_PHYS(pfn);
+
+ /*
+ * In the cached case we're already holding a live reference so
+ * we can simply do a blind increment
+ */
+ if (res && phys >= res->start && phys <= res->end) {
+ percpu_ref_get(pgmap->ref);
+ return pgmap;
+ }
I was going to say keep the cached case in the static inline, but the
optimization to the calling convention in the following patch makes that
moot.
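
To make the intent concrete, here is a minimal caller-side sketch under this
patch's semantics (not part of the patch; the helper name and pfn arguments
are invented for illustration). Every successful call returns holding a
reference, so each one is balanced with a put_dev_pagemap(); passing the
pgmap we already hold lets the second lookup hit the cached fast path when
the pfn stays in the same resource range:

	#include <linux/errno.h>
	#include <linux/memremap.h>

	/* Hypothetical helper, for illustration only */
	static int touch_two_device_pfns(unsigned long pfn_a, unsigned long pfn_b)
	{
		struct dev_pagemap *pgmap_a, *pgmap_b;
		int ret = -ENXIO;

		pgmap_a = get_dev_pagemap(pfn_a, NULL);	/* full lookup */
		if (!pgmap_a)
			return ret;

		/* ... operate on the ZONE_DEVICE page backing pfn_a ... */

		/*
		 * Passing the pgmap we already hold lets the common case,
		 * where pfn_b lands in the same resource range, skip the
		 * lookup and do a blind percpu_ref_get().
		 */
		pgmap_b = get_dev_pagemap(pfn_b, pgmap_a);
		if (pgmap_b) {
			/* ... operate on the ZONE_DEVICE page backing pfn_b ... */
			put_dev_pagemap(pgmap_b);
			ret = 0;
		}

		put_dev_pagemap(pgmap_a);
		return ret;
	}
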
So,
Reviewed-by: Dan Williams <dan.j.williams(a)intel.com>