[PATCH 0/9] ndctl, create-namespace: updates and fixes
by Dan Williams
Close some holes in namespace creation that allowed invalid
configurations to leak through to the kernel rather than being caught
and reported earlier (an illustrative sketch of the new size check
follows the diffstat below).
Add a man page and rpm specfile support for the recently added daxctl
utility.
---
Dan Williams (9):
ndctl, create-namespace: skip idle regions when scanning for capacity
ndctl, create-namespace: skip blk regions when scanning for memory or dax mode
ndctl, create-namespace: improve failure message
ndctl, create-namespace: add an 'align' option
daxctl, list: add man page
daxctl: add daxctl to rpm spec
ndctl: return unit size from parse_size64()
ndctl: fix ndctl_region_get_interleave_ways(), set minimum
ndctl, create-namespace: enforce --size must be even multiple of interleave-width
Documentation/Makefile.am | 3 -
Documentation/daxctl-list.txt | 78 ++++++++++++++++
Documentation/ndctl-create-namespace.txt | 13 +++
ndctl.spec.in | 8 +-
ndctl/builtin-xaction-namespace.c | 143 +++++++++++++++++++++++++-----
ndctl/lib/libndctl.c | 7 +
ndctl/lib/libndctl.sym | 1
ndctl/libndctl.h.in | 1
util/size.c | 17 +++-
util/size.h | 2
10 files changed, 243 insertions(+), 30 deletions(-)
create mode 100644 Documentation/daxctl-list.txt
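For illustration, here is a minimal userspace sketch of the kind of
validation the last two patches describe: parse a size argument with a
unit suffix (the unit that parse_size64() now reports) and reject values
that are not an even multiple of the interleave width. The helper names,
the 2-way interleave, and the 4 KiB granularity are assumptions made for
this sketch, not the ndctl implementation.

/*
 * Illustrative sketch only -- not the ndctl implementation. Assumes a
 * 4 KiB granularity and that "interleave width" means the number of
 * DIMMs interleaved in the region.
 */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* parse "2G", "512M", ... and report the unit implied by the suffix */
static int parse_size(const char *arg, uint64_t *size, uint64_t *unit)
{
	char *end;
	uint64_t val = strtoull(arg, &end, 0);
	uint64_t mult = 1;

	switch (*end) {
	case 'g': case 'G': mult = 1ULL << 30; break;
	case 'm': case 'M': mult = 1ULL << 20; break;
	case 'k': case 'K': mult = 1ULL << 10; break;
	case '\0': break;
	default: return -EINVAL;
	}

	*size = val * mult;
	*unit = mult;
	return 0;
}

int main(int argc, char **argv)
{
	uint64_t size, unit, ways = 2; /* pretend the region interleaves 2 DIMMs */

	if (argc < 2 || parse_size(argv[1], &size, &unit) < 0)
		return EXIT_FAILURE;

	if (size % (ways * 4096)) {
		fprintf(stderr, "size must be a multiple of the interleave width (%llu bytes)\n",
				(unsigned long long)(ways * 4096));
		return EXIT_FAILURE;
	}
	printf("ok: %llu bytes (unit %llu)\n",
			(unsigned long long)size, (unsigned long long)unit);
	return EXIT_SUCCESS;
}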
Great Opportunities: AI/Data-Mining/NLP, Blockchain CTO, etc.
by Nicholas Meyler
Exciting Searches for January
(1) NLP/Machine-Learning Data Scientist (Pleasanton, CA):
My client has remained at the center of scientific discovery for more than 50 years, manufacturing and distributing a broad range of products for the life science research and clinical diagnostic markets. The Company is renowned worldwide among hospitals, universities, and major research institutions, as well as biotechnology and pharmaceutical companies, for its commitment to quality and customer service. Founded in 1952 and headquartered in Hercules, California, the Company serves more than 85,000 research and industry customers worldwide through its global network of operations.
My exciting Biotech Client is seeking a Natural Language Processing (NLP)/Machine Learning (ML) scientist to lead an elite data science team responsible for creating innovative tools, methods and best practices around the scientific literature. The ideal candidate cares deeply about developing and implementing solutions that dramatically improve one's ability to integrate and quantify knowledge extracted from millions of scientific publications.
What you’ll be doing:
· provide day-to-day oversight and management of the data science staff and activities
· collaborate with scientists and other business partners
· handle competing requests from a range of data customers
· lead by example through ideation, prototyping and testing
· develop best practices that deliver results of a quality above what is typically seen in academia and industry
· identify opportunities to blend article, technology, and product metrics with internal and external data assets
· provision interactive text analytics visualizations
· build analytics prototypes and demos for end users
· provide thought leadership and a strategic roadmap for text mining of the scientific literature: how the literature fits into our intelligence landscape, supports new product and business development, complements existing instrument software, and could potentially face external customers directly
· build data pipelines for processing natural language text at scale
· develop front-end tools and interfaces (e.g. notification and alert systems, dashboard reports) for internal clients to view and interact with processed data
· stay up to speed on state-of-the-art methods for efficient text processing
What you need for this role:
· MS or PhD in Computer Science, math, statistics or related field
· 10+ years of text mining and machine learning experience
· prior experience managing and coaching deeply technical staff in an Agile environment
· prior experience with Search, machine learning and other text mining methodologies
· knowledge and experience in one or more of the following areas: Natural Language Processing, Machine Learning, Question Answering, Text Mining, Information Retrieval, Distributional Semantics, and/or Discourse Modeling
· knowledge of the National Library of Medicine, PubMed, PubMed Central, and Medical Subject Headings (MeSH), and related scientific literature tools and resources
What’s in it for you:
· Competitive pay and great benefits including medical, dental, vision, 401k and more
· Opportunities for growth and training
· Stability of a profitable 60+ year old company
My new client, one of the world's leading manufacturers of capacitors, is seeking a Development Engineer:
(2) Senior Development Engineer – Ceramic Dielectric (Spartanburg Area, South Carolina)
This engineer will lead material and process development in the area of ceramic dielectrics for ceramic capacitor products. The candidate should have knowledge and experience with ceramic dielectric formulations, and milling and dispersion of fine ceramic powders. The candidate will be familiar with processing of ceramic powders, dispersants, mixing and milling of micron and sub-micron size powders, formulating multi-component compositions, ceramic thick films and coatings, and electronic properties of ceramics. The candidate will be expected to quickly learn the key aspects of the Company's manufacturing technology and process. This person will play a critical role in developing new products using leading-edge technologies and will execute experimental work with a minimum of supervision.
Requirements:
A strong technical background and experience are required in Ceramic Science and Engineering, processing with ceramic powders and/or coatings, Design of Experiments, and problem solving. Good communication skills both verbal and written are essential. Experience with material characterization techniques, and electrical property measurements would be preferred, especially with respect to MLCC or other electronic components. Experience of successful product development in the field of electronic components is beneficial.
Education/Experience:
B.S. in Engineering (Materials, Ceramic or Chemical) at minimum; M.S. or Ph.D. in Engineering or Science is strongly preferred. Minimum 5 years of applicable experience.
Years of experience:
5 or more
Computer Skills:
Proficiency with a Windows PC and the Microsoft Office suite, in particular Word, Excel, PowerPoint, and Microsoft Project, as well as statistical analysis software such as Minitab
NOTE: Specifically looking for experience with Barium Titanate, Calcium Zirconate, Yttrium, Ytterbium, etc. materials, including Rare Earth dopants; crystal chemistry; thick films
(3) Attention Bitcoin/Blockchain and HF Trading Experts:
My exciting new client in Los Angeles is the first investment bank for digital finance, using bitcoin and blockchain technology and working with the New York Cryptocurrencies Exchange.
For now, they focus on building a crypto-currency trading platform centered on ICO (Initial Coin Offering) tokens with smart contracts. In parallel, they are building an investment bank to create a transparent and compliant process for companies to raise funds via ICO. The Group is led by a successful entrepreneur who recently made a successful exit from his second business with a $100+ million ticket.
CTO Position Summary
The Chief Technology Officer (CTO) is responsible for overseeing all technical aspects of the blockchain and fintech projects. Using an active and practical approach, the CTO will direct all employees in IT and IO departments to attain the company’s strategic goals established in the company’s strategic plan.
Specific responsibilities:
CTO must be able to communicate and collaborate with other departments:
1. CEO, Strategy Board & Product Owner
●Predict and stay ahead of any technical points and issues that might significantly affect the company.
●Advise the CEO and Strategy Board on the long‐term technical, strategic direction of the company and where to, or to not, make large strategic technological bets.
●Provide the CEO and Product Owner with different options on the technical direction of the company and provide sufficient information for deciding which solution is best at any given time.
●Be the ultimate authority for the CEO, Strategy Board, and Product Owner by providing a neutral view which puts the company's long-term interests above all else.
2. Engineering/Product development
●Continually improve production pipelines, being involved in the daily execution and engineering team management once the priorities are set.
●Lead development team, assess team performance and help execute recruiting/retention efforts.
●Regular reporting to CEO and Product Owner.
●Continuously optimize across the whole organization to avoid any duplication of effort.
●Ensure alignment of the greater technical organization and, when necessary, arbitrate techno‐centric turf scraps, architecture conflicts, etc.
●Serve as master architect across product lines.
3. Business Development, Partnerships
●Communicate with authority about the market; listen to customer needs; quickly understand their issues, and give good advice on the company’s products to the customers.
●Provide technical due diligence of partner technologies and acquisition targets to make sure they properly fit with the company’s platforms and offerings.
●Keep track of all the tech startups in the same space, and have them stack-ranked based on what he/she can glean about their prospects. The CTO should have clear thoughts about possible acquisition targets: What expertise is the company missing? Which companies are doing the best work across all of the ancillary areas? Which companies have the best technical teams? What could competitors buy that would hurt the company?
●Predict if a new technology would have a significant impact on the long‐term technological roadmap for the company.
●Predict long‐term competitive trends due to the constant shifts in the market.
4. Marketing
●Serve as the public face of technology for the company.
●Evangelize the company vision and technical direction through conferences, speaking engagements, and press/media/analyst activities.
●Maintain healthy relationships with designated key industry analysts.
●Support the marketing team in building a large active community around the company’s products (meetups, hackathons, industry conferences, etc.).
●Social engagement marketing through Twitter, blog posts, articles/whitepapers, etc.
Education and Experience
●BS in a related field and at least seven years' experience in the Information Technology arena.
●At least two years of management and strategic experience in this field, or an MBA/MS in a related field with five years' experience, two of which must be managerial and strategic.
●Electronic trading systems / FX background.
●7+ years of Java/Python developer/team-lead experience.
●Systems Architect skills or background
●Blockchain, Smart contracts, Cryptocurrencies background
●Financial/Blockchain startups background
Additional skills
●Has undergone or overseen technical due diligence of an electronic trading platform or similar product
●Strong writing and presentation skills
●Agile master
●Ability to manage remote teams
●Russian language (optional)
Additional requirements
●Location: Los Angeles (permanent residence or able to relocate) preferred; West Coast-based working remotely with weekly flights to Los Angeles is optional.
If you are interested in any of these outstanding opportunities, please send me a resume. Random resume submissions are always welcome, too. Referrals and recommendations are greatly appreciated.
Merely receiving this written material does not constitute or imply a "job offer", but is primarily a networking and informational tool for interested recipients.
Best Regards,
Nicholas Meyler
GM/President, Technology
Wingate Dunross, Inc.
ph (818)597-3200 ext. 211
<nickm(a)wdsearch.com>
Article by Doug Peckover, Inventor of "Tokenization" Security: <https://www.linkedin.com/pulse/privacy-vs-security-you-ready-nicholas-mey...>. This IP is for sale!
[ndctl PATCH 0/7] introduce 'daxctl list', and 'ndctl list' updates
by Dan Williams
* The 'ndctl list' command awkwardly prints out all the corresponding
device-dax information when a namespace is in 'dax' mode. Conversely, if
someone is only interested in listing device-dax information, they must
contend with libnvdimm data.
Introduce a separate daxctl utility with its own 'list' command for this
purpose, and make the listing of device-dax data through 'ndctl list'
optional (new --device-dax option).
* Enhance 'ndctl list' with the option to filter by namespace mode (new
--mode option); a rough sketch of this filter follows after this list.
* Allow 'ndctl {enable,disable}-region' to limit itself to regions
matching a given type (blk or pmem).
* Fix 'ndctl list' to trim region mapping data (i.e., the DIMMs in a
region) when a specific DIMM is indicated with --dimm.
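As a rough illustration of the --mode filter described above (this is not
the ndctl code, and the namespace structure below is a hypothetical
stand-in for whatever libndctl exposes), the filtering reduces to a
string-to-enum match applied before a namespace is emitted:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* hypothetical stand-in for a namespace record */
enum ns_mode { NS_MODE_RAW, NS_MODE_MEMORY, NS_MODE_DAX, NS_MODE_SECTOR };

struct ns {
	const char *devname;
	enum ns_mode mode;
};

/* map a --mode argument to the enum; returns false for unknown strings */
static bool parse_mode(const char *arg, enum ns_mode *mode)
{
	if (strcmp(arg, "raw") == 0)         *mode = NS_MODE_RAW;
	else if (strcmp(arg, "memory") == 0) *mode = NS_MODE_MEMORY;
	else if (strcmp(arg, "dax") == 0)    *mode = NS_MODE_DAX;
	else if (strcmp(arg, "sector") == 0) *mode = NS_MODE_SECTOR;
	else return false;
	return true;
}

int main(void)
{
	struct ns all[] = {
		{ "namespace0.0", NS_MODE_MEMORY },
		{ "namespace1.0", NS_MODE_DAX },
		{ "namespace2.0", NS_MODE_RAW },
	};
	enum ns_mode want;
	size_t i;

	if (!parse_mode("dax", &want))
		return 1;

	/* emit only the namespaces whose mode matches the filter */
	for (i = 0; i < sizeof(all) / sizeof(all[0]); i++)
		if (all[i].mode == want)
			printf("%s\n", all[i].devname);

	return 0;
}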
---
Dan Williams (7):
ndctl, daxctl: refactor main boilerplate for a new 'daxctl' utility
ndctl, daxctl: move json helpers to be available across both utilities
ndctl, list: add option to filter namespace by mode
ndctl, list: add '--device-dax' option
daxctl: add list command
ndctl, {enable,disable}-region: filter by type
ndctl, list: limit mappings when --dimm is specified
Makefile.am | 4 +
builtin.h | 31 +++++++
configure.ac | 1
daxctl/Makefile.am | 13 +++
daxctl/daxctl.c | 91 +++++++++++++++++++++
daxctl/lib/Makefile.am | 3 +
daxctl/libdaxctl.h | 1
daxctl/list.c | 112 ++++++++++++++++++++++++++
ndctl.spec.in | 12 +++
ndctl/Makefile.am | 3 -
ndctl/builtin-bat.c | 2
ndctl/builtin-create-nfit.c | 2
ndctl/builtin-dimm.c | 14 ++-
ndctl/builtin-list.c | 45 ++++++++++
ndctl/builtin-test.c | 2
ndctl/builtin-xable-region.c | 35 +++++++-
ndctl/builtin-xaction-namespace.c | 10 +-
ndctl/builtin.h | 33 --------
ndctl/libndctl.h.in | 1
ndctl/ndctl.c | 160 +++++++++----------------------------
test/Makefile.am | 4 -
test/device-dax.c | 4 -
test/multi-pmem.c | 2
util/filter.c | 21 +++++
util/filter.h | 6 +
util/help.c | 44 ++--------
util/json.c | 121 ++++++++++++++++++++++------
util/json.h | 8 ++
util/main.c | 123 ++++++++++++++++++++++++++++
util/main.h | 10 ++
30 files changed, 671 insertions(+), 247 deletions(-)
create mode 100644 builtin.h
create mode 100644 daxctl/Makefile.am
create mode 100644 daxctl/daxctl.c
create mode 100644 daxctl/list.c
delete mode 100644 ndctl/builtin.h
rename ndctl/builtin-help.c => util/help.c
rename ndctl/util/json.c => util/json.c
rename ndctl/util/json.h => util/json.h
create mode 100644 util/main.c
create mode 100644 util/main.h
[PATCH v5] x86: fix kaslr and memmap collision
by Dave Jiang
CONFIG_RANDOMIZE_BASE relocates the kernel to a random base address.
However, it does not take into account the memmap= parameter passed in on
the kernel command line. This can result in the kernel being placed in
the middle of a memmap-reserved region. Teach KASLR not to insert the
kernel into memmap-defined regions. Up to 4 memmap regions are supported;
any additional regions cause KASLR to be disabled. The mem_avoid set has
been augmented with up to 4 unusable memmap regions provided by the user,
excluding those regions from the set of valid address ranges for placing
the uncompressed kernel image. The nn@ss ranges are not added to the
mem_avoid set since they indicate usable memory.
Signed-off-by: Dave Jiang <dave.jiang(a)intel.com>
---
v2:
Addressing comments from Ingo.
- Handle entire list of memmaps
v3:
Fix 32bit build issue
v4:
Addressing comments from Baoquan
- Not exclude nn@ss ranges
v5:
Addressing additional comments from Baoquan
- Update commit header and various coding style changes
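For reference, the suffix handling in _memparse() below relies on
intentional switch fall-through: every larger suffix accumulates one more
<<10. Here is a minimal userspace sketch of the same behavior, assuming
the memmap=nn!ss form used for protected/persistent ranges (illustration
only, not the boot code):

#include <stdio.h>
#include <stdlib.h>

/* parse "16G", "512M", ... -- each suffix is another power-of-1024 shift */
static unsigned long long memparse_sketch(const char *ptr, char **retptr)
{
	char *end;
	unsigned long long ret = strtoull(ptr, &end, 0);

	switch (*end) {
	case 'E': case 'e': ret <<= 10; /* fall through */
	case 'P': case 'p': ret <<= 10; /* fall through */
	case 'T': case 't': ret <<= 10; /* fall through */
	case 'G': case 'g': ret <<= 10; /* fall through */
	case 'M': case 'm': ret <<= 10; /* fall through */
	case 'K': case 'k': ret <<= 10; end++;
	default: break;
	}

	if (retptr)
		*retptr = end;
	return ret;
}

int main(void)
{
	char *rest;
	/* a region such as "4G!16G" parses as size 4G at start 16G */
	unsigned long long size = memparse_sketch("4G!16G", &rest);
	unsigned long long start = (*rest == '!') ? memparse_sketch(rest + 1, &rest) : 0;

	printf("size=%llu start=%llu\n", size, start);
	return 0;
}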
diff --git a/arch/x86/boot/boot.h b/arch/x86/boot/boot.h
index e5612f3..59c2075 100644
--- a/arch/x86/boot/boot.h
+++ b/arch/x86/boot/boot.h
@@ -332,7 +332,10 @@ int strncmp(const char *cs, const char *ct, size_t count);
size_t strnlen(const char *s, size_t maxlen);
unsigned int atou(const char *s);
unsigned long long simple_strtoull(const char *cp, char **endp, unsigned int base);
+unsigned long simple_strtoul(const char *cp, char **endp, unsigned int base);
+long simple_strtol(const char *cp, char **endp, unsigned int base);
size_t strlen(const char *s);
+char *strchr(const char *s, int c);
/* tty.c */
void puts(const char *);
diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index a66854d..036b514 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -11,6 +11,7 @@
*/
#include "misc.h"
#include "error.h"
+#include "../boot.h"
#include <generated/compile.h>
#include <linux/module.h>
@@ -56,11 +57,16 @@ struct mem_vector {
unsigned long size;
};
+/* only supporting at most 4 unusable memmap regions with kaslr */
+#define MAX_MEMMAP_REGIONS 4
+
enum mem_avoid_index {
MEM_AVOID_ZO_RANGE = 0,
MEM_AVOID_INITRD,
MEM_AVOID_CMDLINE,
MEM_AVOID_BOOTPARAMS,
+ MEM_AVOID_MEMMAP_BEGIN,
+ MEM_AVOID_MEMMAP_END = MEM_AVOID_MEMMAP_BEGIN + MAX_MEMMAP_REGIONS - 1,
MEM_AVOID_MAX,
};
@@ -77,6 +83,121 @@ static bool mem_overlaps(struct mem_vector *one, struct mem_vector *two)
return true;
}
+/**
+ * _memparse - parse a string with mem suffixes into a number
+ * @ptr: Where parse begins
+ * @retptr: (output) Optional pointer to next char after parse completes
+ *
+ * Parses a string into a number. The number stored at @ptr is
+ * potentially suffixed with K, M, G, T, P, E.
+ */
+static unsigned long long _memparse(const char *ptr, char **retptr)
+{
+ char *endptr; /* local pointer to end of parsed string */
+
+ unsigned long long ret = simple_strtoull(ptr, &endptr, 0);
+
+ switch (*endptr) {
+ case 'E':
+ case 'e':
+ ret <<= 10;
+ case 'P':
+ case 'p':
+ ret <<= 10;
+ case 'T':
+ case 't':
+ ret <<= 10;
+ case 'G':
+ case 'g':
+ ret <<= 10;
+ case 'M':
+ case 'm':
+ ret <<= 10;
+ case 'K':
+ case 'k':
+ ret <<= 10;
+ endptr++;
+ default:
+ break;
+ }
+
+ if (retptr)
+ *retptr = endptr;
+
+ return ret;
+}
+
+static int
+parse_memmap(char *p, unsigned long long *start, unsigned long long *size)
+{
+ char *oldp;
+
+ if (!p)
+ return -EINVAL;
+
+ /* we don't care about this option here */
+ if (!strncmp(p, "exactmap", 8))
+ return -EINVAL;
+
+ oldp = p;
+ *size = _memparse(p, &p);
+ if (p == oldp)
+ return -EINVAL;
+
+ switch (*p) {
+ case '@':
+ /* skip this region, usable */
+ *start = 0;
+ *size = 0;
+ return 0;
+ case '#':
+ case '$':
+ case '!':
+ *start = _memparse(p + 1, &p);
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static int mem_avoid_memmap(void)
+{
+ char arg[128];
+ int rc = 0;
+
+ /* see if we have any memmap areas */
+ if (cmdline_find_option("memmap", arg, sizeof(arg)) > 0) {
+ int i = 0;
+ char *str = arg;
+
+ while (str && (i < MAX_MEMMAP_REGIONS)) {
+ unsigned long long start, size;
+ char *k = strchr(str, ',');
+
+ if (k)
+ *k++ = 0;
+
+ rc = parse_memmap(str, &start, &size);
+ if (rc < 0)
+ break;
+ str = k;
+ /* a usable region that should not be skipped */
+ if (size == 0)
+ continue;
+
+ mem_avoid[MEM_AVOID_MEMMAP_BEGIN + i].start = start;
+ mem_avoid[MEM_AVOID_MEMMAP_BEGIN + i].size = size;
+ i++;
+ }
+
+ /* more than 4 memmaps, fail kaslr */
+ if ((i >= MAX_MEMMAP_REGIONS) && str)
+ rc = -EINVAL;
+ }
+
+ return rc;
+}
+
/*
* In theory, KASLR can put the kernel anywhere in the range of [16M, 64T).
* The mem_avoid array is used to store the ranges that need to be avoided
@@ -438,6 +559,12 @@ void choose_random_location(unsigned long input,
return;
}
+ /* Mark the memmap regions we need to avoid */
+ if (mem_avoid_memmap()) {
+ warn("KASLR disabled: memmap exceeds limit of 4, giving up.");
+ return;
+ }
+
boot_params->hdr.loadflags |= KASLR_FLAG;
/* Prepare to add new identity pagetables on demand. */
diff --git a/arch/x86/boot/string.c b/arch/x86/boot/string.c
index cc3bd58..0464aaa 100644
--- a/arch/x86/boot/string.c
+++ b/arch/x86/boot/string.c
@@ -122,6 +122,31 @@ unsigned long long simple_strtoull(const char *cp, char **endp, unsigned int bas
}
/**
+ * simple_strtoul - convert a string to an unsigned long
+ * @cp: The start of the string
+ * @endp: A pointer to the end of the parsed string will be placed here
+ * @base: The number base to use
+ */
+unsigned long simple_strtoul(const char *cp, char **endp, unsigned int base)
+{
+ return simple_strtoull(cp, endp, base);
+}
+
+/**
+ * simple_strtol - convert a string to a signed long
+ * @cp: The start of the string
+ * @endp: A pointer to the end of the parsed string will be placed here
+ * @base: The number base to use
+ */
+long simple_strtol(const char *cp, char **endp, unsigned int base)
+{
+ if (*cp == '-')
+ return -simple_strtoul(cp + 1, endp, base);
+
+ return simple_strtoul(cp, endp, base);
+}
+
+/**
* strlen - Find the length of a string
* @s: The string to be sized
*/
@@ -155,3 +180,16 @@ char *strstr(const char *s1, const char *s2)
}
return NULL;
}
+
+/**
+ * strchr - Find the first occurrence of the character c in the string s.
+ * @s: the string to be searched
+ * @c: the character to search for
+ */
+char *strchr(const char *s, int c)
+{
+ while (*s != (char)c)
+ if (*s++ == '\0')
+ return NULL;
+ return (char *)s;
+}
[PATCH v4] libnvdimm: clear poison in mem map metadata
by Dave Jiang
Clear out the poison in the metadata area of the namespace before we
use it. The range covered is from start + 8k to pfn_sb->dataoff.
Signed-off-by: Dave Jiang <dave.jiang(a)intel.com>
---
drivers/nvdimm/pfn_devs.c | 25 +++++++++++++++++++++++++
1 file changed, 25 insertions(+)
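A minimal userspace sketch of the sector and alignment arithmetic used in
the hunk below; SZ_8K, ALIGN and the 512-byte sector size are re-defined
here only to make the example self-contained and mirror the kernel
definitions (the 4 MiB start and 2 MiB data offset are made-up values):

#include <stdio.h>

#define SZ_8K (8 * 1024ULL)
#define ALIGN(x, a) (((x) + (a) - 1) & ~((unsigned long long)(a) - 1))

int main(void)
{
	/* pretend the namespace starts at 4 MiB and data starts 2 MiB in */
	unsigned long long res_start = 4ULL << 20;
	unsigned long long offset = 2ULL << 20;      /* stands in for pfn_sb->dataoff */
	unsigned long long meta_start = res_start + SZ_8K;
	unsigned long long meta_size = offset;

	unsigned long long sector = meta_start >> 9; /* 512-byte sectors */
	unsigned long long sz_align = ALIGN(meta_size + (meta_start & 511), 512);

	printf("badblocks check: sector=%llu len=%llu bytes\n", sector, sz_align);
	return 0;
}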
diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c
index a2ac9e6..fa5ba33 100644
--- a/drivers/nvdimm/pfn_devs.c
+++ b/drivers/nvdimm/pfn_devs.c
@@ -527,11 +527,36 @@ static struct vmem_altmap *__nvdimm_setup_pfn(struct nd_pfn *nd_pfn,
.base_pfn = init_altmap_base(base),
.reserve = init_altmap_reserve(base),
};
+ sector_t sector;
+ resource_size_t meta_start, meta_size;
+ long cleared;
+ unsigned int sz_align;
memcpy(res, &nsio->res, sizeof(*res));
res->start += start_pad;
res->end -= end_trunc;
+ meta_start = res->start + SZ_8K;
+ meta_size = offset;
+
+ sector = meta_start >> 9;
+ sz_align = ALIGN(meta_size + (meta_start & (512 - 1)), 512);
+
+ if (unlikely(is_bad_pmem(&nsio->bb, sector, sz_align))) {
+ if (!IS_ALIGNED(meta_start, 512) ||
+ !IS_ALIGNED(meta_size, 512))
+ return ERR_PTR(-EIO);
+
+ cleared = nvdimm_clear_poison(&nd_pfn->dev,
+ meta_start, meta_size);
+ if (cleared <= 0)
+ return ERR_PTR(-EIO);
+
+ badblocks_clear(&nsio->bb, sector, cleared >> 9);
+ if (cleared != meta_size)
+ return ERR_PTR(-EIO);
+ }
+
if (nd_pfn->mode == PFN_MODE_RAM) {
if (offset < SZ_8K)
return ERR_PTR(-EINVAL);
[PATCH v2 0/4] Write protect DAX PMDs in *sync path
by Ross Zwisler
Currently dax_mapping_entry_mkclean() fails to clean and write protect the
pmd_t of a DAX PMD entry during an *sync operation. This can result in
data loss, as detailed in patch 4.
You can find a working tree here:
https://git.kernel.org/cgit/linux/kernel/git/zwisler/linux.git/log/?h=dax...
This series applies cleanly to mmotm-2016-12-19-16-31.
Changes since v1:
- Included Dan's patch to kill DAX support for UML.
- Instead of wrapping the DAX PMD code in dax_mapping_entry_mkclean() in
an #ifdef, we now create a stub for pmdp_huge_clear_flush() for the case
when CONFIG_TRANSPARENT_HUGEPAGE isn't defined. (Dan & Jan)
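For orientation only, the PMD branch of the fix amounts to clearing and
flushing the huge entry, then re-installing it clean and write-protected.
The fragment below is a hedged sketch of that shape built from standard
kernel primitives (pmdp_huge_clear_flush(), pmd_wrprotect(),
pmd_mkclean(), set_pmd_at()); it is not the actual hunk from patch 4:

/*
 * Hedged sketch (not the actual patch): clean + write-protect a DAX PMD
 * during fsync/msync, so a later write re-dirties the radix tree entry.
 */
if (pmd_dirty(*pmdp) || pmd_write(*pmdp)) {
	pmd_t pmd;

	flush_cache_page(vma, address, pmd_pfn(*pmdp));
	pmd = pmdp_huge_clear_flush(vma, address, pmdp); /* clear + TLB flush */
	pmd = pmd_wrprotect(pmd);                        /* force a fault on the next write */
	pmd = pmd_mkclean(pmd);                          /* drop the dirty bit */
	set_pmd_at(vma->vm_mm, address, pmdp, pmd);
}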
Dan Williams (1):
dax: kill uml support
Ross Zwisler (3):
dax: add stub for pmdp_huge_clear_flush()
mm: add follow_pte_pmd()
dax: wrprotect pmd_t in dax_mapping_entry_mkclean
fs/Kconfig | 2 +-
fs/dax.c | 49 ++++++++++++++++++++++++++++++-------------
include/asm-generic/pgtable.h | 10 +++++++++
include/linux/mm.h | 4 ++--
mm/memory.c | 41 ++++++++++++++++++++++++++++--------
5 files changed, 79 insertions(+), 27 deletions(-)
--
2.7.4
[ndctl PATCH] test: document workaround for unit-test-module load priority
by Dan Williams
On some distributions it appears the recommended kernel build procedure
can result in the in-tree module being preferred over an out-of-tree
module of the same name in the /lib/modules/<kver>/extra directory. This
can be worked around with an explicit depmod.d policy. Document how to
set up this workaround.
Reported-by: Dave Jiang <dave.jiang(a)intel.com>
Signed-off-by: Dan Williams <dan.j.williams(a)intel.com>
---
README.md | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)
diff --git a/README.md b/README.md
index e9dec67a0ea2..38fc050210c3 100644
--- a/README.md
+++ b/README.md
@@ -56,3 +56,29 @@ git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm.git`
5. Now run `make check` in the ndctl source directory, or `ndctl test`,
if ndctl was built with `--enable-test`.
+Troubleshooting
+===============
+
+The unit tests will validate that the environment is set up correctly
+before they try to run. If the platform is misconfigured, i.e. the unit
+test modules are not available, or the test versions of the modules are
+superseded by the "in-tree/production" version of the modules `make
+check` will skip tests and report a message like the following in
+test/test-suite.log:
+`SKIP: libndctl`
+`==============`
+`test/init: nfit_test_init: nfit.ko: appears to be production version: /lib/modules/4.8.8-200.fc24.x86_64/kernel/drivers/acpi/nfit/nfit.ko.xz`
+`__ndctl_test_skip: explicit skip test_libndctl:2684`
+`nfit_test unavailable skipping tests`
+
+If the unit test modules are indeed available in the modules 'extra'
+directory the default depmod policy can be overridden by adding a file
+to /etc/depmod.d with the following contents:
+`override nfit * extra`
+`override dax * extra`
+`override dax_pmem * extra`
+`override libnvdimm * extra`
+`override nd_blk * extra`
+`override nd_btt * extra`
+`override nd_e820 * extra`
+`override nd_pmem * extra`
[PATCH v5 1/2] mm, dax: make pmd_fault() and friends to be the same as fault()
by Dave Jiang
Instead of passing multiple parameters to the pmd_fault() handler,
a vmf can be passed in, just like the fault() handler. This simplifies
the code and removes the need for the actual pmd fault handlers to
allocate a vmf. Related functions are also modified to do the same.
Signed-off-by: Dave Jiang <dave.jiang(a)intel.com>
Reviewed-by: Ross Zwisler <ross.zwisler(a)linux.intel.com>
Reviewed-by: Jan Kara <jack(a)suse.cz>
---
drivers/dax/dax.c | 16 +++++++---------
fs/dax.c | 42 ++++++++++++++++++-----------------------
fs/ext4/file.c | 9 ++++-----
fs/xfs/xfs_file.c | 10 ++++------
include/linux/dax.h | 7 +++----
include/linux/mm.h | 3 +--
include/trace/events/fs_dax.h | 15 +++++++--------
mm/memory.c | 6 ++----
8 files changed, 46 insertions(+), 62 deletions(-)
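In short, the conversion is the vm_operations_struct hook change visible
in the include/linux/mm.h hunk further down:

/* before */
int (*pmd_fault)(struct vm_area_struct *vma, unsigned long address,
		pmd_t *pmd, unsigned int flags);

/* after: address, pmd and flags now travel inside struct vm_fault */
int (*pmd_fault)(struct vm_area_struct *vma, struct vm_fault *vmf);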
diff --git a/drivers/dax/dax.c b/drivers/dax/dax.c
index c753a4c..947e49a 100644
--- a/drivers/dax/dax.c
+++ b/drivers/dax/dax.c
@@ -379,10 +379,9 @@ static int dax_dev_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
}
static int __dax_dev_pmd_fault(struct dax_dev *dax_dev,
- struct vm_area_struct *vma, unsigned long addr, pmd_t *pmd,
- unsigned int flags)
+ struct vm_area_struct *vma, struct vm_fault *vmf)
{
- unsigned long pmd_addr = addr & PMD_MASK;
+ unsigned long pmd_addr = vmf->address & PMD_MASK;
struct device *dev = &dax_dev->dev;
struct dax_region *dax_region;
phys_addr_t phys;
@@ -414,23 +413,22 @@ static int __dax_dev_pmd_fault(struct dax_dev *dax_dev,
pfn = phys_to_pfn_t(phys, dax_region->pfn_flags);
- return vmf_insert_pfn_pmd(vma, addr, pmd, pfn,
- flags & FAULT_FLAG_WRITE);
+ return vmf_insert_pfn_pmd(vma, vmf->address, vmf->pmd, pfn,
+ vmf->flags & FAULT_FLAG_WRITE);
}
-static int dax_dev_pmd_fault(struct vm_area_struct *vma, unsigned long addr,
- pmd_t *pmd, unsigned int flags)
+static int dax_dev_pmd_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
int rc;
struct file *filp = vma->vm_file;
struct dax_dev *dax_dev = filp->private_data;
dev_dbg(&dax_dev->dev, "%s: %s: %s (%#lx - %#lx)\n", __func__,
- current->comm, (flags & FAULT_FLAG_WRITE)
+ current->comm, (vmf->flags & FAULT_FLAG_WRITE)
? "write" : "read", vma->vm_start, vma->vm_end);
rcu_read_lock();
- rc = __dax_dev_pmd_fault(dax_dev, vma, addr, pmd, flags);
+ rc = __dax_dev_pmd_fault(dax_dev, vma, vmf);
rcu_read_unlock();
return rc;
diff --git a/fs/dax.c b/fs/dax.c
index d3fe880..446e861 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1310,18 +1310,17 @@ static int dax_pmd_load_hole(struct vm_area_struct *vma, pmd_t *pmd,
return VM_FAULT_FALLBACK;
}
-int dax_iomap_pmd_fault(struct vm_area_struct *vma, unsigned long address,
- pmd_t *pmd, unsigned int flags, struct iomap_ops *ops)
+int dax_iomap_pmd_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
+ struct iomap_ops *ops)
{
struct address_space *mapping = vma->vm_file->f_mapping;
- unsigned long pmd_addr = address & PMD_MASK;
- bool write = flags & FAULT_FLAG_WRITE;
+ unsigned long pmd_addr = vmf->address & PMD_MASK;
+ bool write = vmf->flags & FAULT_FLAG_WRITE;
unsigned int iomap_flags = (write ? IOMAP_WRITE : 0) | IOMAP_FAULT;
struct inode *inode = mapping->host;
int result = VM_FAULT_FALLBACK;
struct iomap iomap = { 0 };
- pgoff_t max_pgoff, pgoff;
- struct vm_fault vmf;
+ pgoff_t max_pgoff;
void *entry;
loff_t pos;
int error;
@@ -1331,10 +1330,10 @@ int dax_iomap_pmd_fault(struct vm_area_struct *vma, unsigned long address,
* supposed to hold locks serializing us with truncate / punch hole so
* this is a reliable test.
*/
- pgoff = linear_page_index(vma, pmd_addr);
+ vmf->pgoff = linear_page_index(vma, pmd_addr);
max_pgoff = (i_size_read(inode) - 1) >> PAGE_SHIFT;
- trace_dax_pmd_fault(inode, vma, address, flags, pgoff, max_pgoff, 0);
+ trace_dax_pmd_fault(inode, vma, vmf, max_pgoff, 0);
/* Fall back to PTEs if we're going to COW */
if (write && !(vma->vm_flags & VM_SHARED))
@@ -1346,13 +1345,13 @@ int dax_iomap_pmd_fault(struct vm_area_struct *vma, unsigned long address,
if ((pmd_addr + PMD_SIZE) > vma->vm_end)
goto fallback;
- if (pgoff > max_pgoff) {
+ if (vmf->pgoff > max_pgoff) {
result = VM_FAULT_SIGBUS;
goto out;
}
/* If the PMD would extend beyond the file size */
- if ((pgoff | PG_PMD_COLOUR) > max_pgoff)
+ if ((vmf->pgoff | PG_PMD_COLOUR) > max_pgoff)
goto fallback;
/*
@@ -1360,7 +1359,7 @@ int dax_iomap_pmd_fault(struct vm_area_struct *vma, unsigned long address,
* setting up a mapping, so really we're using iomap_begin() as a way
* to look up our filesystem block.
*/
- pos = (loff_t)pgoff << PAGE_SHIFT;
+ pos = (loff_t)vmf->pgoff << PAGE_SHIFT;
error = ops->iomap_begin(inode, pos, PMD_SIZE, iomap_flags, &iomap);
if (error)
goto fallback;
@@ -1370,28 +1369,24 @@ int dax_iomap_pmd_fault(struct vm_area_struct *vma, unsigned long address,
* the tree, for instance), it will return -EEXIST and we just fall
* back to 4k entries.
*/
- entry = grab_mapping_entry(mapping, pgoff, RADIX_DAX_PMD);
+ entry = grab_mapping_entry(mapping, vmf->pgoff, RADIX_DAX_PMD);
if (IS_ERR(entry))
goto finish_iomap;
if (iomap.offset + iomap.length < pos + PMD_SIZE)
goto unlock_entry;
- vmf.pgoff = pgoff;
- vmf.flags = flags;
- vmf.gfp_mask = mapping_gfp_mask(mapping) | __GFP_IO;
-
switch (iomap.type) {
case IOMAP_MAPPED:
- result = dax_pmd_insert_mapping(vma, pmd, &vmf, address,
- &iomap, pos, write, &entry);
+ result = dax_pmd_insert_mapping(vma, vmf->pmd, vmf,
+ vmf->address, &iomap, pos, write, &entry);
break;
case IOMAP_UNWRITTEN:
case IOMAP_HOLE:
if (WARN_ON_ONCE(write))
goto unlock_entry;
- result = dax_pmd_load_hole(vma, pmd, &vmf, address, &iomap,
- &entry);
+ result = dax_pmd_load_hole(vma, vmf->pmd, vmf, vmf->address,
+ &iomap, &entry);
break;
default:
WARN_ON_ONCE(1);
@@ -1399,7 +1394,7 @@ int dax_iomap_pmd_fault(struct vm_area_struct *vma, unsigned long address,
}
unlock_entry:
- put_locked_mapping_entry(mapping, pgoff, entry);
+ put_locked_mapping_entry(mapping, vmf->pgoff, entry);
finish_iomap:
if (ops->iomap_end) {
int copied = PMD_SIZE;
@@ -1417,12 +1412,11 @@ int dax_iomap_pmd_fault(struct vm_area_struct *vma, unsigned long address,
}
fallback:
if (result == VM_FAULT_FALLBACK) {
- split_huge_pmd(vma, pmd, address);
+ split_huge_pmd(vma, vmf->pmd, vmf->address);
count_vm_event(THP_FAULT_FALLBACK);
}
out:
- trace_dax_pmd_fault_done(inode, vma, address, flags, pgoff, max_pgoff,
- result);
+ trace_dax_pmd_fault_done(inode, vma, vmf, max_pgoff, result);
return result;
}
EXPORT_SYMBOL_GPL(dax_iomap_pmd_fault);
diff --git a/fs/ext4/file.c b/fs/ext4/file.c
index d663d3d..10b64ba 100644
--- a/fs/ext4/file.c
+++ b/fs/ext4/file.c
@@ -275,21 +275,20 @@ static int ext4_dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
return result;
}
-static int ext4_dax_pmd_fault(struct vm_area_struct *vma, unsigned long addr,
- pmd_t *pmd, unsigned int flags)
+static int
+ext4_dax_pmd_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
int result;
struct inode *inode = file_inode(vma->vm_file);
struct super_block *sb = inode->i_sb;
- bool write = flags & FAULT_FLAG_WRITE;
+ bool write = vmf->flags & FAULT_FLAG_WRITE;
if (write) {
sb_start_pagefault(sb);
file_update_time(vma->vm_file);
}
down_read(&EXT4_I(inode)->i_mmap_sem);
- result = dax_iomap_pmd_fault(vma, addr, pmd, flags,
- &ext4_iomap_ops);
+ result = dax_iomap_pmd_fault(vma, vmf, &ext4_iomap_ops);
up_read(&EXT4_I(inode)->i_mmap_sem);
if (write)
sb_end_pagefault(sb);
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index d818c16..4f65a9d 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -1526,9 +1526,7 @@ xfs_filemap_fault(
STATIC int
xfs_filemap_pmd_fault(
struct vm_area_struct *vma,
- unsigned long addr,
- pmd_t *pmd,
- unsigned int flags)
+ struct vm_fault *vmf)
{
struct inode *inode = file_inode(vma->vm_file);
struct xfs_inode *ip = XFS_I(inode);
@@ -1539,16 +1537,16 @@ xfs_filemap_pmd_fault(
trace_xfs_filemap_pmd_fault(ip);
- if (flags & FAULT_FLAG_WRITE) {
+ if (vmf->flags & FAULT_FLAG_WRITE) {
sb_start_pagefault(inode->i_sb);
file_update_time(vma->vm_file);
}
xfs_ilock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
- ret = dax_iomap_pmd_fault(vma, addr, pmd, flags, &xfs_iomap_ops);
+ ret = dax_iomap_pmd_fault(vma, vmf, &xfs_iomap_ops);
xfs_iunlock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
- if (flags & FAULT_FLAG_WRITE)
+ if (vmf->flags & FAULT_FLAG_WRITE)
sb_end_pagefault(inode->i_sb);
return ret;
diff --git a/include/linux/dax.h b/include/linux/dax.h
index 6e36b11..9761c90 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -71,16 +71,15 @@ static inline unsigned int dax_radix_order(void *entry)
return PMD_SHIFT - PAGE_SHIFT;
return 0;
}
-int dax_iomap_pmd_fault(struct vm_area_struct *vma, unsigned long address,
- pmd_t *pmd, unsigned int flags, struct iomap_ops *ops);
+int dax_iomap_pmd_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
+ struct iomap_ops *ops);
#else
static inline unsigned int dax_radix_order(void *entry)
{
return 0;
}
static inline int dax_iomap_pmd_fault(struct vm_area_struct *vma,
- unsigned long address, pmd_t *pmd, unsigned int flags,
- struct iomap_ops *ops)
+ struct vm_fault *vmf, struct iomap_ops *ops)
{
return VM_FAULT_FALLBACK;
}
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 30f416a..aef645b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -347,8 +347,7 @@ struct vm_operations_struct {
void (*close)(struct vm_area_struct * area);
int (*mremap)(struct vm_area_struct * area);
int (*fault)(struct vm_area_struct *vma, struct vm_fault *vmf);
- int (*pmd_fault)(struct vm_area_struct *, unsigned long address,
- pmd_t *, unsigned int flags);
+ int (*pmd_fault)(struct vm_area_struct *vma, struct vm_fault *vmf);
void (*map_pages)(struct vm_fault *vmf,
pgoff_t start_pgoff, pgoff_t end_pgoff);
diff --git a/include/trace/events/fs_dax.h b/include/trace/events/fs_dax.h
index c3b0aae..a98665b 100644
--- a/include/trace/events/fs_dax.h
+++ b/include/trace/events/fs_dax.h
@@ -8,9 +8,8 @@
DECLARE_EVENT_CLASS(dax_pmd_fault_class,
TP_PROTO(struct inode *inode, struct vm_area_struct *vma,
- unsigned long address, unsigned int flags, pgoff_t pgoff,
- pgoff_t max_pgoff, int result),
- TP_ARGS(inode, vma, address, flags, pgoff, max_pgoff, result),
+ struct vm_fault *vmf, pgoff_t max_pgoff, int result),
+ TP_ARGS(inode, vma, vmf, max_pgoff, result),
TP_STRUCT__entry(
__field(unsigned long, ino)
__field(unsigned long, vm_start)
@@ -29,9 +28,9 @@ DECLARE_EVENT_CLASS(dax_pmd_fault_class,
__entry->vm_start = vma->vm_start;
__entry->vm_end = vma->vm_end;
__entry->vm_flags = vma->vm_flags;
- __entry->address = address;
- __entry->flags = flags;
- __entry->pgoff = pgoff;
+ __entry->address = vmf->address;
+ __entry->flags = vmf->flags;
+ __entry->pgoff = vmf->pgoff;
__entry->max_pgoff = max_pgoff;
__entry->result = result;
),
@@ -54,9 +53,9 @@ DECLARE_EVENT_CLASS(dax_pmd_fault_class,
#define DEFINE_PMD_FAULT_EVENT(name) \
DEFINE_EVENT(dax_pmd_fault_class, name, \
TP_PROTO(struct inode *inode, struct vm_area_struct *vma, \
- unsigned long address, unsigned int flags, pgoff_t pgoff, \
+ struct vm_fault *vmf, \
pgoff_t max_pgoff, int result), \
- TP_ARGS(inode, vma, address, flags, pgoff, max_pgoff, result))
+ TP_ARGS(inode, vma, vmf, max_pgoff, result))
DEFINE_PMD_FAULT_EVENT(dax_pmd_fault);
DEFINE_PMD_FAULT_EVENT(dax_pmd_fault_done);
diff --git a/mm/memory.c b/mm/memory.c
index e37250f..8ec36cf 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3447,8 +3447,7 @@ static int create_huge_pmd(struct vm_fault *vmf)
if (vma_is_anonymous(vma))
return do_huge_pmd_anonymous_page(vmf);
if (vma->vm_ops->pmd_fault)
- return vma->vm_ops->pmd_fault(vma, vmf->address, vmf->pmd,
- vmf->flags);
+ return vma->vm_ops->pmd_fault(vma, vmf);
return VM_FAULT_FALLBACK;
}
@@ -3457,8 +3456,7 @@ static int wp_huge_pmd(struct vm_fault *vmf, pmd_t orig_pmd)
if (vma_is_anonymous(vmf->vma))
return do_huge_pmd_wp_page(vmf, orig_pmd);
if (vmf->vma->vm_ops->pmd_fault)
- return vmf->vma->vm_ops->pmd_fault(vmf->vma, vmf->address,
- vmf->pmd, vmf->flags);
+ return vmf->vma->vm_ops->pmd_fault(vmf->vma, vmf);
/* COW handled on pte level: split pmd */
VM_BUG_ON_VMA(vmf->vma->vm_flags & VM_SHARED, vmf->vma);