[ndctl PATCH] ndctl, docs: cleanup the man page for create-namespace
by Vishal Verma
Clean up some rendering artifacts in the man page for
ndctl-create-namespace.
Cc: Dan Williams <dan.j.williams(a)intel.com>
Signed-off-by: Vishal Verma <vishal.l.verma(a)intel.com>
---
Documentation/ndctl/ndctl-create-namespace.txt | 16 +++++++---------
1 file changed, 7 insertions(+), 9 deletions(-)
diff --git a/Documentation/ndctl/ndctl-create-namespace.txt b/Documentation/ndctl/ndctl-create-namespace.txt
index c8b1c99..29374d3 100644
--- a/Documentation/ndctl/ndctl-create-namespace.txt
+++ b/Documentation/ndctl/ndctl-create-namespace.txt
@@ -171,14 +171,14 @@ OPTIONS
Section 6.5.10 NVDIMM Label Methods) support "labelled
namespace" operation.
- There are two cases where the kernel will default to
- label-less operation:
+ - There are two cases where the kernel will default to
+ label-less operation:
- * NVDIMM does not support labels
+ * NVDIMM does not support labels
- * The NVDIMM supports labels, but the Label Index Block (see
- UEFI 2.7) is not present and there is no capacity aliasing
- between 'blk' and 'pmem' regions.
+ * The NVDIMM supports labels, but the Label Index Block (see
+ UEFI 2.7) is not present and there is no capacity aliasing
+ between 'blk' and 'pmem' regions.
In the latter case the configuration can be upgraded to
labelled operation by writing an index block on all DIMMs in a
@@ -196,7 +196,6 @@ OPTIONS
following commands, note that all data on all regions is
forfeited by running these commands:
- [verse]
ndctl disable-region all
ndctl init-labels all
ndctl enable-region all
@@ -222,5 +221,4 @@ linkndctl:ndctl-zero-labels[1],
linkndctl:ndctl-init-labels[1],
linkndctl:ndctl-disable-namespace[1],
linkndctl:ndctl-enable-namespace[1],
-http://www.uefi.org/sites/default/files/resources/UEFI_Spec_2_7.pdf[UEFI NVDIMM Label Protocol
-]
+http://www.uefi.org/sites/default/files/resources/UEFI_Spec_2_7.pdf[UEFI NVDIMM Label Protocol]
--
2.14.3
[PATCH 0/5] Teach EDAC about non-volatile DIMMs and add partial support to skx_edac
by Tony Luck
This series was posted as RFC and got some review back in December:
https://marc.info/?l=linux-edac&m=151257213825419&w=2
It couldn't go upstream back then because I was waiting for some updates
to the ACPICA headers for NFIT support. Those patches are complete now,
so here is the series again.
The "partial support" part is that the changes here only detect
non-volatile DIMMs. A later patch/series will teach skx_edac how to
decode system addresses to NVDIMM addresses, but this part stands on
its own and is a useful first step.
Tony Luck (5):
EDAC: Drop duplicated array of strings for memory type names
edac: Add new memory type for non-volatile DIMMs
acpi, nfit: Add function to look up nvdimm device and provide SMBIOS
handle
firmware: dmi: Add function to look up a handle and return DIMM size
EDAC, skx_edac: Detect non-volatile DIMMs
drivers/acpi/nfit/core.c | 27 ++++++++++++++++++
drivers/edac/Kconfig | 5 +++-
drivers/edac/edac_mc.c | 41 +++++++++++++-------------
drivers/edac/edac_mc_sysfs.c | 26 ++---------------
drivers/edac/skx_edac.c | 68 ++++++++++++++++++++++++++++++++++++++++----
drivers/firmware/dmi_scan.c | 31 ++++++++++++++++++++
include/acpi/nfit.h | 26 +++++++++++++++++
include/linux/dmi.h | 2 ++
include/linux/edac.h | 3 ++
9 files changed, 178 insertions(+), 51 deletions(-)
create mode 100644 include/acpi/nfit.h
--
2.14.1
[PATCH] x86, memremap: fix altmap accounting at free
by Dan Williams
Commit 24b6d4164348 "mm: pass the vmem_altmap to vmemmap_free" converted
the vmemmap_free() path to pass the altmap argument all the way through
the call chain rather than looking it up based on the page.
Unfortunately that ends up over freeing altmap allocated pages in some
cases since free_pagetable() is used to free both memmap space and pte
space, where only the memmap stored in huge pages uses altmap
allocations.
Given that altmap allocations for memmap space are special cased in
vmemmap_populate_hugepages() add a symmetric / special case
free_hugepage_table() to handle altmap freeing, and cleanup the unneeded
passing of altmap to leaf functions that do not require it.
Without this change the sanity check accounting in
devm_memremap_pages_release() will throw a warning with the following
signature.
nd_pmem pfn10.1: devm_memremap_pages_release: failed to free all reserved pages
WARNING: CPU: 44 PID: 3539 at kernel/memremap.c:310 devm_memremap_pages_release+0x1c7/0x220
CPU: 44 PID: 3539 Comm: ndctl Tainted: G L 4.16.0-rc1-linux-stable #7
RIP: 0010:devm_memremap_pages_release+0x1c7/0x220
[..]
Call Trace:
release_nodes+0x225/0x270
device_release_driver_internal+0x15d/0x210
bus_remove_device+0xe2/0x160
device_del+0x130/0x310
? klist_release+0x56/0x100
? nd_region_notify+0xc0/0xc0 [libnvdimm]
device_unregister+0x16/0x60
This was missed in testing since not all configurations will trigger
this warning.
Fixes: 24b6d4164348 ("mm: pass the vmem_altmap to vmemmap_free")
Reported-by: Jane Chu <jane.chu(a)oracle.com>
Cc: Ross Zwisler <ross.zwisler(a)linux.intel.com>
Cc: Christoph Hellwig <hch(a)lst.de>
Signed-off-by: Dan Williams <dan.j.williams(a)intel.com>
---
arch/x86/mm/init_64.c | 60 +++++++++++++++++++++++--------------------------
1 file changed, 28 insertions(+), 32 deletions(-)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 8b72923f1d35..af11a2890235 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -800,17 +800,11 @@ int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap,
#define PAGE_INUSE 0xFD
-static void __meminit free_pagetable(struct page *page, int order,
- struct vmem_altmap *altmap)
+static void __meminit free_pagetable(struct page *page, int order)
{
unsigned long magic;
unsigned int nr_pages = 1 << order;
- if (altmap) {
- vmem_altmap_free(altmap, nr_pages);
- return;
- }
-
/* bootmem page has reserved flag */
if (PageReserved(page)) {
__ClearPageReserved(page);
@@ -826,9 +820,17 @@ static void __meminit free_pagetable(struct page *page, int order,
free_pages((unsigned long)page_address(page), order);
}
-static void __meminit free_pte_table(pte_t *pte_start, pmd_t *pmd,
+static void __meminit free_hugepage_table(struct page *page,
struct vmem_altmap *altmap)
{
+ if (altmap)
+ vmem_altmap_free(altmap, PMD_SIZE / PAGE_SIZE);
+ else
+ free_pagetable(page, get_order(PMD_SIZE));
+}
+
+static void __meminit free_pte_table(pte_t *pte_start, pmd_t *pmd)
+{
pte_t *pte;
int i;
@@ -839,14 +841,13 @@ static void __meminit free_pte_table(pte_t *pte_start, pmd_t *pmd,
}
/* free a pte talbe */
- free_pagetable(pmd_page(*pmd), 0, altmap);
+ free_pagetable(pmd_page(*pmd), 0);
spin_lock(&init_mm.page_table_lock);
pmd_clear(pmd);
spin_unlock(&init_mm.page_table_lock);
}
-static void __meminit free_pmd_table(pmd_t *pmd_start, pud_t *pud,
- struct vmem_altmap *altmap)
+static void __meminit free_pmd_table(pmd_t *pmd_start, pud_t *pud)
{
pmd_t *pmd;
int i;
@@ -858,14 +859,13 @@ static void __meminit free_pmd_table(pmd_t *pmd_start, pud_t *pud,
}
/* free a pmd talbe */
- free_pagetable(pud_page(*pud), 0, altmap);
+ free_pagetable(pud_page(*pud), 0);
spin_lock(&init_mm.page_table_lock);
pud_clear(pud);
spin_unlock(&init_mm.page_table_lock);
}
-static void __meminit free_pud_table(pud_t *pud_start, p4d_t *p4d,
- struct vmem_altmap *altmap)
+static void __meminit free_pud_table(pud_t *pud_start, p4d_t *p4d)
{
pud_t *pud;
int i;
@@ -877,7 +877,7 @@ static void __meminit free_pud_table(pud_t *pud_start, p4d_t *p4d,
}
/* free a pud talbe */
- free_pagetable(p4d_page(*p4d), 0, altmap);
+ free_pagetable(p4d_page(*p4d), 0);
spin_lock(&init_mm.page_table_lock);
p4d_clear(p4d);
spin_unlock(&init_mm.page_table_lock);
@@ -885,7 +885,7 @@ static void __meminit free_pud_table(pud_t *pud_start, p4d_t *p4d,
static void __meminit
remove_pte_table(pte_t *pte_start, unsigned long addr, unsigned long end,
- struct vmem_altmap *altmap, bool direct)
+ bool direct)
{
unsigned long next, pages = 0;
pte_t *pte;
@@ -916,7 +916,7 @@ remove_pte_table(pte_t *pte_start, unsigned long addr, unsigned long end,
* freed when offlining, or simplely not in use.
*/
if (!direct)
- free_pagetable(pte_page(*pte), 0, altmap);
+ free_pagetable(pte_page(*pte), 0);
spin_lock(&init_mm.page_table_lock);
pte_clear(&init_mm, addr, pte);
@@ -939,7 +939,7 @@ remove_pte_table(pte_t *pte_start, unsigned long addr, unsigned long end,
page_addr = page_address(pte_page(*pte));
if (!memchr_inv(page_addr, PAGE_INUSE, PAGE_SIZE)) {
- free_pagetable(pte_page(*pte), 0, altmap);
+ free_pagetable(pte_page(*pte), 0);
spin_lock(&init_mm.page_table_lock);
pte_clear(&init_mm, addr, pte);
@@ -974,9 +974,8 @@ remove_pmd_table(pmd_t *pmd_start, unsigned long addr, unsigned long end,
if (IS_ALIGNED(addr, PMD_SIZE) &&
IS_ALIGNED(next, PMD_SIZE)) {
if (!direct)
- free_pagetable(pmd_page(*pmd),
- get_order(PMD_SIZE),
- altmap);
+ free_hugepage_table(pmd_page(*pmd),
+ altmap);
spin_lock(&init_mm.page_table_lock);
pmd_clear(pmd);
@@ -989,9 +988,8 @@ remove_pmd_table(pmd_t *pmd_start, unsigned long addr, unsigned long end,
page_addr = page_address(pmd_page(*pmd));
if (!memchr_inv(page_addr, PAGE_INUSE,
PMD_SIZE)) {
- free_pagetable(pmd_page(*pmd),
- get_order(PMD_SIZE),
- altmap);
+ free_hugepage_table(pmd_page(*pmd),
+ altmap);
spin_lock(&init_mm.page_table_lock);
pmd_clear(pmd);
@@ -1003,8 +1001,8 @@ remove_pmd_table(pmd_t *pmd_start, unsigned long addr, unsigned long end,
}
pte_base = (pte_t *)pmd_page_vaddr(*pmd);
- remove_pte_table(pte_base, addr, next, altmap, direct);
- free_pte_table(pte_base, pmd, altmap);
+ remove_pte_table(pte_base, addr, next, direct);
+ free_pte_table(pte_base, pmd);
}
/* Call free_pmd_table() in remove_pud_table(). */
@@ -1033,8 +1031,7 @@ remove_pud_table(pud_t *pud_start, unsigned long addr, unsigned long end,
IS_ALIGNED(next, PUD_SIZE)) {
if (!direct)
free_pagetable(pud_page(*pud),
- get_order(PUD_SIZE),
- altmap);
+ get_order(PUD_SIZE));
spin_lock(&init_mm.page_table_lock);
pud_clear(pud);
@@ -1048,8 +1045,7 @@ remove_pud_table(pud_t *pud_start, unsigned long addr, unsigned long end,
if (!memchr_inv(page_addr, PAGE_INUSE,
PUD_SIZE)) {
free_pagetable(pud_page(*pud),
- get_order(PUD_SIZE),
- altmap);
+ get_order(PUD_SIZE));
spin_lock(&init_mm.page_table_lock);
pud_clear(pud);
@@ -1062,7 +1058,7 @@ remove_pud_table(pud_t *pud_start, unsigned long addr, unsigned long end,
pmd_base = pmd_offset(pud, 0);
remove_pmd_table(pmd_base, addr, next, direct, altmap);
- free_pmd_table(pmd_base, pud, altmap);
+ free_pmd_table(pmd_base, pud);
}
if (direct)
@@ -1094,7 +1090,7 @@ remove_p4d_table(p4d_t *p4d_start, unsigned long addr, unsigned long end,
* to adapt for boot-time switching between 4 and 5 level page tables.
*/
if (CONFIG_PGTABLE_LEVELS == 5)
- free_pud_table(pud_base, p4d, altmap);
+ free_pud_table(pud_base, p4d);
}
if (direct)
[ndctl PATCH] ndctl, list: fix namespace json object parenting
by Dan Williams
When listing namespaces and regions a generic jplatform object is
created to house the regions array. However, commit f8cc6fee4e4d "ndctl,
list: refactor core topology walking into util_filter_walk()"
inadvertently added namespaces to that jplatform. Namespaces should only
be added to jplatform in the case when regions are not being listed.
Otherwise we get duplicated output and a double-put assertion as seen
below:
# ndctl list -RNu
{
"regions":[
{
"dev":"region1",
"size":"511.00 GiB (548.68 GB)",
"available_size":0,
"type":"pmem",
"numa_node":0,
"namespaces":[
{
"dev":"namespace1.0",
"mode":"raw",
"size":"511.00 GiB (548.68 GB)",
"sector_size":512,
"blockdev":"pmem1",
"numa_node":0
},
{
"dev":"namespace0.0",
"mode":"fsdax",
"size":"4.00 GiB (4.29 GB)",
"sector_size":512,
"blockdev":"pmem0",
"numa_node":0
}
]
},
{
"dev":"region0",
"size":"4.00 GiB (4.29 GB)",
"available_size":0,
"type":"pmem"
}
],
"namespaces":[
{
"dev":"namespace1.0",
"mode":"raw",
"size":"511.00 GiB (548.68 GB)",
"sector_size":512,
"blockdev":"pmem1",
"numa_node":0
},
{
"dev":"namespace0.0",
"mode":"fsdax",
"size":"4.00 GiB (4.29 GB)",
"sector_size":512,
"blockdev":"pmem0",
"numa_node":0
}
]
}
ndctl: json_object.c:188: json_object_put: Assertion `jso->_ref_count > 0' failed.
Aborted (core dumped)
Fixes: f8cc6fee4e4d ("ndctl, list: refactor core topology walking...")
Signed-off-by: Dan Williams <dan.j.williams(a)intel.com>
---
ndctl/list.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/ndctl/list.c b/ndctl/list.c
index 0ca5b6dee5eb..0683bc8d6174 100644
--- a/ndctl/list.c
+++ b/ndctl/list.c
@@ -366,7 +366,7 @@ static int list_display(struct list_filter_arg *lfa)
json_object_object_add(jplatform, "dimms", jdimms);
if (jregions)
json_object_object_add(jplatform, "regions", jregions);
- if (jnamespaces)
+ if (jnamespaces && !jregions)
json_object_object_add(jplatform, "namespaces",
jnamespaces);
printf("%s\n", json_object_to_json_string_ext(jplatform,
xfstests 344 deadlock on NOVA
by Andiry Xu
Hi,
I encountered a problem when running xfstests on NOVA, and would
appreciate your help.
Some background: NOVA adopts a per-inode logging design. Metadata
changes are appended to the log and persisted before returning to the
user space. For example, a write() in NOVA works like this:
Allocate new pmem blocks and fill with user data
Append the metadata that describes this write to the end of the inode log
Update the log tail pointer atomically to commit the write
Update in-DRAM radix tree to point to the metadata (for fast lookup)
The log appending and radix tree update are protected by inode_lock().
For mmap, nova_dax_get_blocks (similar to ext2_get_blocks) needs to
persist the metadata for new allocations. So it has to append the new
allocation metadata to the log, and hence needs to lock the inode.
This causes deadlock in xfstests 344 with concurrent pwrite and mmap
write:
Thread 1:
pwrite
-> nova_dax_file_write()
---> inode_lock()
-----> invalidate_inode_pages2_range()
-------> schedule()
Thread 2:
dax_fault
-> nova_dax_get_blocks()
---> inode_lock() // deadlock
If I remove invalidate_inode_pages2_range() from the write path,
xfstests 344 passes, but 428 fails.
It appears to me that ext2/ext4 do not have this issue because they
don't persist metadata immediately and hence do not take the inode
lock. But nova_dax_get_blocks() has to persist the metadata and needs
to lock the inode to access the log. Is there a way to work around
this?
Thank you very much.
Thanks,
Andiry
Re: [ndctl PATCH] libndctl.sym: move new interfaces since v59 to a nee section
by Dan Williams
On Fri, Mar 9, 2018 at 3:18 PM, Vishal Verma <vishal.l.verma(a)intel.com> wrote:
> Since we're adding new interfaces, the libndctl 'current' will change to
> '15'. Add a corresponding section in libndctl.sym, and move the new
> symbols to it.
>
> Cc: Dan Williams <dan.j.williams(a)intel.com>
> Cc: Dave Jiang <dave.jiang(a)intel.com>
> Cc: Ross Zwisler <ross.zwisler(a)linux.intel.com>
> Signed-off-by: Vishal Verma <vishal.l.verma(a)intel.com>
s/nee/new/ in the Subject: line, but other than that:
Reviewed-by: Dan Williams <dan.j.williams(a)intel.com>
[ndctl PATCH] ndctl: fail NUMA filtering when unsupported
by Ross Zwisler
For systems that don't support NUMA, numactl gives a loud and fatal error:
# numactl -N 0 ls
numactl: This system does not support NUMA policy
Follow this model in ndctl for NUMA based filtering:
# ./ndctl/ndctl list --numa-node=0
Error: This system does not support NUMA
This is done instead of just quietly filtering out all dimms, regions and
namespaces because the NUMA node they were trying to match didn't exist in
the system.
Signed-off-by: Ross Zwisler <ross.zwisler(a)linux.intel.com>
Suggested-by: Dan Williams <dan.j.williams(a)intel.com>
---
util/filter.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/util/filter.c b/util/filter.c
index 291d7ed..fdc46a3 100644
--- a/util/filter.c
+++ b/util/filter.c
@@ -14,7 +14,10 @@
#include <string.h>
#include <stdlib.h>
#include <limits.h>
+#include <unistd.h>
+#include <sys/stat.h>
#include <util/util.h>
+#include <sys/types.h>
#include <ndctl/ndctl.h>
#include <util/filter.h>
#include <ndctl/libndctl.h>
@@ -328,6 +331,13 @@ int util_filter_walk(struct ndctl_ctx *ctx, struct util_filter_ctx *fctx,
}
if (param->numa_node && strcmp(param->numa_node, "all") != 0) {
+ struct stat st;
+
+ if (stat("/sys/devices/system/node", &st)) {
+ error("This system does not support NUMA\n");
+ return -EINVAL;
+ }
+
numa_node = strtol(param->numa_node, &end, 0);
if (end == param->numa_node || end[0]) {
error("invalid numa_node: '%s'\n", param->numa_node);
--
2.14.3
[ndctl PATCH] libndctl.sym: move new interfaces since v59 to a nee section
by Vishal Verma
Since we're adding new interfaces, the libndctl 'current' will change to
'15'. Add a corresponding section in libndctl.sym, and move the new
symbols to it.
Cc: Dan Williams <dan.j.williams(a)intel.com>
Cc: Dave Jiang <dave.jiang(a)intel.com>
Cc: Ross Zwisler <ross.zwisler(a)linux.intel.com>
Signed-off-by: Vishal Verma <vishal.l.verma(a)intel.com>
---
ndctl/lib/libndctl.sym | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/ndctl/lib/libndctl.sym b/ndctl/lib/libndctl.sym
index 6de1e0b..2127661 100644
--- a/ndctl/lib/libndctl.sym
+++ b/ndctl/lib/libndctl.sym
@@ -339,11 +339,15 @@ global:
ndctl_cmd_fw_info_get_query_interval;
ndctl_cmd_fw_info_get_max_query_time;
ndctl_cmd_fw_info_get_run_version;
- ndctl_cmd_fw_info_get_next_version;
ndctl_cmd_fw_start_get_context;
ndctl_cmd_fw_fquery_get_fw_rev;
ndctl_cmd_fw_xlat_firmware_status;
+} LIBNDCTL_13;
+
+LIBNDCTL_15 {
+global:
+ ndctl_cmd_fw_info_get_next_version;
ndctl_dimm_cmd_new_ack_shutdown_count;
ndctl_region_get_numa_node;
ndctl_dimm_fw_update_supported;
-} LIBNDCTL_13;
+} LIBNDCTL_14;
--
2.14.3
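For readers unfamiliar with symbol versioning: a GNU ld version script node inherits from a predecessor by naming it after the closing brace, which is what `} LIBNDCTL_14;` above does. A minimal hypothetical script (not from libndctl) showing the shape:

```
LIBFOO_1 {
global:
	foo_init;
	foo_exit;
local:
	*;
};

LIBFOO_2 {
global:
	foo_frob;	/* symbols added in the newer release */
} LIBFOO_1;
```

Moving a symbol between nodes, as this patch does for ndctl_cmd_fw_info_get_next_version, only makes sense before the old node ships in a release.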
[ndctl PATCH v2] ndctl, list: fix sector_size listing
by Dan Williams
Commit a7320456f1bc "ndctl: add sector_size to 'ndctl list' output"
mishandles the case where a namespace does not specify a sector_size,
but otherwise supports a sector_size selection:
# ndctl list --namespace=namespace1.0
{
"dev":"namespace1.0",
"mode":"memory",
"size":32763805696,
"uuid":"7c985ba5-6d33-48bd-8fde-6c25a520abe0",
"sector_size":-1,
"blockdev":"pmem1",
"numa_node":0
}
Fixes: a7320456f1bc ("ndctl: add sector_size to 'ndctl list' output")
Cc: Ross Zwisler <ross.zwisler(a)linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams(a)intel.com>
---
v2: drop the additional change to filter the sector_size == 512 case.
util/json.c | 32 +++++++++++++++-----------------
1 file changed, 15 insertions(+), 17 deletions(-)
diff --git a/util/json.c b/util/json.c
index 17d8f135b686..a464f21d6d5a 100644
--- a/util/json.c
+++ b/util/json.c
@@ -646,6 +646,7 @@ struct json_object *util_namespace_to_json(struct ndctl_namespace *ndns,
struct json_object *jndns = json_object_new_object();
struct json_object *jobj, *jbbs = NULL;
unsigned long long size = ULLONG_MAX;
+ unsigned int sector_size = UINT_MAX;
enum ndctl_namespace_mode mode;
const char *bdev = NULL, *name;
unsigned int bb_count = 0;
@@ -769,29 +770,26 @@ struct json_object *util_namespace_to_json(struct ndctl_namespace *ndns,
} else
bdev = ndctl_namespace_get_block_device(ndns);
- jobj = NULL;
- if (btt) {
- jobj = json_object_new_int(ndctl_btt_get_sector_size(btt));
- if (!jobj)
- goto err;
- } else if (!dax) {
- unsigned int sector_size = ndctl_namespace_get_sector_size(ndns);
-
- /*
- * The kernel will default to a 512 byte sector size on PMEM
- * namespaces that don't explicitly have a sector size. This
- * happens because they use pre-v1.2 labels or because they
- * don't have a label space (devtype=nd_namespace_io).
- */
- if (!sector_size)
+ if (btt)
+ sector_size = ndctl_btt_get_sector_size(btt);
+ else if (!dax) {
+ sector_size = ndctl_namespace_get_sector_size(ndns);
+ if (!sector_size || sector_size == UINT_MAX)
sector_size = 512;
+ }
+ /*
+ * The kernel will default to a 512 byte sector size on PMEM
+ * namespaces that don't explicitly have a sector size. This
+ * happens because they use pre-v1.2 labels or because they
+ * don't have a label space (devtype=nd_namespace_io).
+ */
+ if (sector_size < UINT_MAX) {
jobj = json_object_new_int(sector_size);
if (!jobj)
goto err;
- }
- if (jobj)
json_object_object_add(jndns, "sector_size", jobj);
+ }
if (bdev && bdev[0]) {
jobj = json_object_new_string(bdev);
[ndctl PATCH] ndctl, list: fix sector_size listing
by Dan Williams
Commit a7320456f1bc "ndctl: add sector_size to 'ndctl list' output"
mishandles the case where a namespace does not specify a sector_size,
but otherwise supports a sector_size selection:
# ndctl list --namespace=namespace1.0
{
"dev":"namespace1.0",
"mode":"memory",
"size":32763805696,
"uuid":"7c985ba5-6d33-48bd-8fde-6c25a520abe0",
"sector_size":-1,
"blockdev":"pmem1",
"numa_node":0
}
Fix this and clean up the output to only provide a sector_size in the
non-default (i.e. != 512 bytes) case.
Fixes: a7320456f1bc ("ndctl: add sector_size to 'ndctl list' output")
Cc: Ross Zwisler <ross.zwisler(a)linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams(a)intel.com>
---
util/json.c | 31 +++++++++++++------------------
1 file changed, 13 insertions(+), 18 deletions(-)
diff --git a/util/json.c b/util/json.c
index 17d8f135b686..e7f789b36385 100644
--- a/util/json.c
+++ b/util/json.c
@@ -646,6 +646,7 @@ struct json_object *util_namespace_to_json(struct ndctl_namespace *ndns,
struct json_object *jndns = json_object_new_object();
struct json_object *jobj, *jbbs = NULL;
unsigned long long size = ULLONG_MAX;
+ unsigned int sector_size = UINT_MAX;
enum ndctl_namespace_mode mode;
const char *bdev = NULL, *name;
unsigned int bb_count = 0;
@@ -769,29 +770,23 @@ struct json_object *util_namespace_to_json(struct ndctl_namespace *ndns,
} else
bdev = ndctl_namespace_get_block_device(ndns);
- jobj = NULL;
- if (btt) {
- jobj = json_object_new_int(ndctl_btt_get_sector_size(btt));
- if (!jobj)
- goto err;
- } else if (!dax) {
- unsigned int sector_size = ndctl_namespace_get_sector_size(ndns);
-
- /*
- * The kernel will default to a 512 byte sector size on PMEM
- * namespaces that don't explicitly have a sector size. This
- * happens because they use pre-v1.2 labels or because they
- * don't have a label space (devtype=nd_namespace_io).
- */
- if (!sector_size)
- sector_size = 512;
+ if (btt)
+ sector_size = ndctl_btt_get_sector_size(btt);
+ else if (!dax)
+ sector_size = ndctl_namespace_get_sector_size(ndns);
+ /*
+ * The kernel will default to a 512 byte sector size on PMEM
+ * namespaces that don't explicitly have a sector size. This
+ * happens because they use pre-v1.2 labels or because they
+ * don't have a label space (devtype=nd_namespace_io).
+ */
+ if (sector_size < UINT_MAX && sector_size > 512) {
jobj = json_object_new_int(sector_size);
if (!jobj)
goto err;
- }
- if (jobj)
json_object_object_add(jndns, "sector_size", jobj);
+ }
if (bdev && bdev[0]) {
jobj = json_object_new_string(bdev);