Does the BlobFS asynchronous API support multi-threaded writing?
by chen.zhenghua@zte.com.cn
Hi everyone,
I did a simple test of the BlobFS asynchronous API, using the SPDK events framework to run multiple tasks, each of which writes one file.
It doesn't work: spdk_file_write_async() reported an error when resizing the file.
The call stack looks like this:
spdk_file_write_async() -> __readwrite() -> spdk_file_truncate_async() -> spdk_blob_resize()
The resize operation must be done on the metadata thread, i.e. the one that invoked spdk_fs_load(), so only the task dispatched to the metadata CPU core works.
That is to say, only one thread can be used to write files. This is hard to use, and performance issues may arise.
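For reference, the only workaround I can think of is to bounce every
write to the metadata core via the events framework. A rough, untested
sketch follows; g_metadata_core, g_fs_channel and write_ctx are my own
placeholder names, not SPDK symbols:

#include <stdlib.h>
#include "spdk/blobfs.h"
#include "spdk/event.h"

/* Placeholders (not SPDK names): the core that called spdk_fs_load()
 * and a blobfs io channel allocated on that core. */
static uint32_t g_metadata_core;
static struct spdk_io_channel *g_fs_channel;

struct write_ctx {
	struct spdk_file *file;
	void *payload;
	uint64_t offset;
	uint64_t length;
};

static void
write_done(void *arg, int fserrno)
{
	free(arg);
}

static void
do_write_on_md_core(void *arg1, void *arg2)
{
	struct write_ctx *ctx = arg1;

	/* Runs on the metadata core, so the implicit resize is allowed. */
	spdk_file_write_async(ctx->file, g_fs_channel, ctx->payload,
			      ctx->offset, ctx->length, write_done, ctx);
}

static void
submit_write(struct write_ctx *ctx)
{
	/* Callable from any core: bounce the write to the metadata core. */
	spdk_event_call(spdk_event_allocate(g_metadata_core,
					    do_write_on_md_core, ctx, NULL));
}

This serializes all writes onto one core, which is exactly the
performance concern mentioned above.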
Does anyone know more about this?
Thanks very much.
2 months, 2 weeks
RFC: NVMf namespace masking
by Jonas Pfefferle
Hi all,
I would be happy to get some feedback on my NVMf target namespace masking
implementation using attach/detach:
https://review.spdk.io/gerrit/c/spdk/spdk/+/7821
The patch introduces namespace masking for NVMe-over-Fabrics
targets by allowing controllers to be (dynamically) attached to
and detached from namespaces, cf. NVMe spec 1.4, section 6.1.4.
Since SPDK only supports the dynamic controller model, a new
controller is allocated on every fabric connect command. This
makes it possible to attach/detach controllers of a specific
host NQN to/from a namespace. A host can only perform operations
on an active namespace. Inactive namespaces can be listed (not
supported by SPDK), but no additional
information can be retrieved:
"Unless otherwise noted, specifying an inactive NSID in a
command that uses the Namespace Identifier (NSID) field shall
cause the controller to abort the command with status
Invalid Field in Command" - NVMe spec 1.4 - section 6.1.5
Note that this patch does not implement the NVMe namespace
attachment command; it only allows attaching/detaching via RPCs.
To preserve the current behavior, all controllers are auto-attached.
To not auto-attach controllers, nvmf_subsystem_add_ns
shall be called with "--no-auto-attach". We introduce two new
RPC calls:
RPC calls:
- nvmf_ns_attach_ctrlr <subsysNQN> <NSID> [--host <hostNQN>]
- nvmf_ns_detach_ctrlr <subsysNQN> <NSID> [--host <hostNQN>]
If no host NQN is specified, all controllers
(new and currently connected) will be attached to/detached from
the specified namespace.
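For illustration, with this patch applied a namespace could be added
without auto-attach and then exposed to a single host roughly like this
(the subsystem NQN, bdev name and host NQN are made-up examples):
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --no-auto-attach
scripts/rpc.py nvmf_ns_attach_ctrlr nqn.2016-06.io.spdk:cnode1 1 --host nqn.2016-06.io.spdk:host1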
The list in spdk_nvmf_ns is used to keep track of the host NQNs
whose controllers should be attached on connect.
The active_ns array in spdk_nvmf_ctrlr is used for fast lookup
to check whether an NSID is active or inactive during command execution.
Thanks,
Jonas
8 months, 2 weeks
[Release] 21.07: Kernel DSA, Init lib, Userspace DTrace
by Zawadzki, Tomasz
On behalf of the SPDK community I'm pleased to announce the release of SPDK 21.07!
This release contains the following new features:
- Kernel DSA: Added support in the IDXD library for the kernel DSA driver.
- Init library: Added an init library that initializes the SPDK subsystems.
- Userspace DTrace: Added support for running bpftrace scripts against SPDK applications. See https://spdk.io/doc/usdt.html and the short example below.
- zipf utility: Added a zipf random number generator with a power-law probability distribution. When used with the bdevperf and nvme perf tools, blocks over the full range of LBAs will be used, but lower-numbered LBAs will be selected more frequently.
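As a quick illustration of the USDT support, a bpftrace script can be
attached to a running target like this (assuming the bpftrace.sh helper
and the example scripts under scripts/bpf/ described in the documentation
above; check your tree for exact paths):
scripts/bpftrace.sh `pidof nvmf_tgt` scripts/bpf/nvmf.bt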
The full changelog for this release is available at:
https://github.com/spdk/spdk/releases/tag/v21.07
This release contains 711 commits from 56 authors with over 35k lines of code changed.
We'd especially like to recognize all of our first time contributors:
Curt Bruns
Jakub Wyka
John Levon
Jonathan Teh
Matt Dumm
Matthew Burbridge
Rajarshi Chowdhury
Scott Peterson
Swapnil Ingle
Tyler Sun
Wu Mengjin
Yuri Kirichok
Thanks to everyone for your contributions, participation, and effort!
Thanks,
Tomek
11 months
NVMe 2.0 support
by oscar.huang@microchip.com
What version of NVMe is supported by the latest SPDK?
Is there a schedule for when NVMe 2.0 will be supported?
Thanks
-Oscar
11 months, 1 week
NVMe-oF TCP SPDK host abort test
by Gyan Prakash
Hello all,
I am using the SPDK NVMe-oF TCP host and running the I/O abort test
(build/examples folder) against my NVMe-oF TCP target. I am running the
abort test with different queue depths: 4, 8, 16, and 32.
The test completes fine for queue depths 4, 8, and 16, but it hangs and
seems to be stuck in some kind of continuous loop for queue depth 32.
In the network trace I see that, for queue depth 32, the host sends a write
command and the target responds with an R2T, but the host never sends the
requested data. From the host console messages, it looks like the host is
stuck in a continuous loop.
I am providing the SPDK abort command console output for queue depths 4, 16,
and 32. The queue depth 32 run produces many errors on the host console, and
the same error is printed over and over. Please see below for more details.
Can you please let me know how we can fix this?
Thanks,
GP
*with q depth = 32*
./abort -q 32 -s 4096 -w rw -M 50 -o 40960 -r 'trtype:tcp adrfam:IPv4
traddr:10.10.10.167 trsvcid:4420 subnqn:nqn.2015-09.com.cdw:nvme.1'
[2021-07-16 09:55:49.649538] Starting SPDK v21.07-pre git sha1 b73d3e689 /
DPDK 21.02.0 initialization...
[2021-07-16 09:55:49.649617] [ DPDK EAL parameters: [2021-07-16
09:55:49.649630] abort [2021-07-16 09:55:49.649641] --no-shconf [2021-07-16
09:55:49.649652] -c 0x1 [2021-07-16 09:55:49.649660] -m 4096 [2021-07-16
09:55:49.649669] --no-pci [2021-07-16 09:55:49.649680]
--log-level=lib.eal:6 [2021-07-16 09:55:49.649691]
--log-level=lib.cryptodev:5 [2021-07-16 09:55:49.649702]
--log-level=user1:6 [2021-07-16 09:55:49.649713] --iova-mode=pa [2021-07-16
09:55:49.649724] --base-virtaddr=0x200000000000 [2021-07-16
09:55:49.649736] --match-allocations [2021-07-16 09:55:49.649746]
--file-prefix=spdk_pid126855 [2021-07-16 09:55:49.649758] ]
EAL: No available 1048576 kB hugepages reported
EAL: No legacy callbacks, legacy socket not created
Initializing NVMe Controllers
Attached to NVMe over Fabrics controller at 10.10.10.167:4420:
nqn.2015-09.com.cdw:nvme.1
controller IO queue size 16 less than required
Consider using lower queue depth or small IO size because IO requests may
be queued at the NVMe driver.
Associating TCP (addr:10.10.10.167 subnqn:nqn.2015-09.com.cdw:nvme.1) NSID
1 with lcore 0
Initialization complete. Launching workers.
[2021-07-16 09:55:52.997426] nvme_qpair.c:
594:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2021-07-16 09:55:52.997463] nvme_qpair.c:
536:nvme_qpair_manual_complete_request: *NOTICE*: Command completed
manually:
[2021-07-16 09:55:52.997474] nvme_qpair.c: 272:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22110 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997483] nvme_qpair.c: 455:spdk_nvme_print_completion:
*NOTICE*: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:1
[2021-07-16 09:55:52.997490] nvme_qpair.c:
594:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2021-07-16 09:55:52.997497] nvme_qpair.c:
536:nvme_qpair_manual_complete_request: *NOTICE*: Command completed
manually:
[2021-07-16 09:55:52.997503] nvme_qpair.c: 272:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22120 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997510] nvme_qpair.c: 455:spdk_nvme_print_completion:
*NOTICE*: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:1
[2021-07-16 09:55:52.997515] nvme_qpair.c:
594:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2021-07-16 09:55:52.997523] nvme_qpair.c:
536:nvme_qpair_manual_complete_request: *NOTICE*: Command completed
manually:
[2021-07-16 09:55:52.997529] nvme_qpair.c: 272:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22130 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997534] nvme_qpair.c: 455:spdk_nvme_print_completion:
*NOTICE*: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:1
[2021-07-16 09:55:52.997542] nvme_qpair.c:
594:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2021-07-16 09:55:52.997549] nvme_qpair.c:
536:nvme_qpair_manual_complete_request: *NOTICE*: Command completed
manually:
[2021-07-16 09:55:52.997557] nvme_qpair.c: 272:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22140 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997565] nvme_qpair.c: 455:spdk_nvme_print_completion:
*NOTICE*: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:1
[2021-07-16 09:55:52.997571] nvme_qpair.c:
594:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2021-07-16 09:55:52.997577] nvme_qpair.c:
536:nvme_qpair_manual_complete_request: *NOTICE*: Command completed
manually:
[2021-07-16 09:55:52.997583] nvme_qpair.c: 272:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22150 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997590] nvme_qpair.c: 455:spdk_nvme_print_completion:
*NOTICE*: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:1
[2021-07-16 09:55:52.997595] nvme_qpair.c:
594:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2021-07-16 09:55:52.997600] nvme_qpair.c:
536:nvme_qpair_manual_complete_request: *NOTICE*: Command completed
manually:
[2021-07-16 09:55:52.997611] nvme_qpair.c: 272:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22160 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997619] nvme_qpair.c: 455:spdk_nvme_print_completion:
*NOTICE*: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:1
[2021-07-16 09:55:52.997627] nvme_qpair.c:
594:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2021-07-16 09:55:52.997634] nvme_qpair.c:
536:nvme_qpair_manual_complete_request: *NOTICE*: Command completed
manually:
[2021-07-16 09:55:52.997642] nvme_qpair.c: 272:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22170 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997650] nvme_qpair.c: 455:spdk_nvme_print_completion:
*NOTICE*: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:1
[2021-07-16 09:55:52.997658] nvme_qpair.c:
594:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2021-07-16 09:55:52.997666] nvme_qpair.c:
536:nvme_qpair_manual_complete_request: *NOTICE*: Command completed
manually:
[2021-07-16 09:55:52.997674] nvme_qpair.c: 272:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22180 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997681] nvme_qpair.c: 455:spdk_nvme_print_completion:
*NOTICE*: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:1
[2021-07-16 09:55:52.997689] nvme_qpair.c:
594:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2021-07-16 09:55:52.997697] nvme_qpair.c:
536:nvme_qpair_manual_complete_request: *NOTICE*: Command completed
manually:
[2021-07-16 09:55:52.997704] nvme_qpair.c: 272:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22190 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997711] nvme_qpair.c: 455:spdk_nvme_print_completion:
*NOTICE*: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:1
[2021-07-16 09:55:52.997719] nvme_qpair.c:
594:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2021-07-16 09:55:52.997726] nvme_qpair.c:
536:nvme_qpair_manual_complete_request: *NOTICE*: Command completed
manually:
[2021-07-16 09:55:52.997733] nvme_qpair.c: 272:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22200 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997740] nvme_qpair.c: 455:spdk_nvme_print_completion:
*NOTICE*: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:1
[2021-07-16 09:55:52.997747] nvme_qpair.c:
594:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2021-07-16 09:55:52.997754] nvme_qpair.c:
536:nvme_qpair_manual_complete_request: *NOTICE*: Command completed
manually:
[2021-07-16 09:55:52.997762] nvme_qpair.c: 272:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22210 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997768] nvme_qpair.c: 455:spdk_nvme_print_completion:
*NOTICE*: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:1
[2021-07-16 09:55:52.997775] nvme_qpair.c:
594:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2021-07-16 09:55:52.997782] nvme_qpair.c:
536:nvme_qpair_manual_complete_request: *NOTICE*: Command completed
manually:
[2021-07-16 09:55:52.997789] nvme_qpair.c: 272:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22220 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997796] nvme_qpair.c: 455:spdk_nvme_print_completion:
*NOTICE*: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:1
[2021-07-16 09:55:52.997803] nvme_qpair.c:
594:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2021-07-16 09:55:52.997810] nvme_qpair.c:
536:nvme_qpair_manual_complete_request: *NOTICE*: Command completed
manually:
[2021-07-16 09:55:52.997817] nvme_qpair.c: 272:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22230 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997826] nvme_qpair.c: 455:spdk_nvme_print_completion:
*NOTICE*: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:1
[2021-07-16 09:55:52.997834] nvme_qpair.c:
594:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2021-07-16 09:55:52.997840] nvme_qpair.c:
536:nvme_qpair_manual_complete_request: *NOTICE*: Command completed
manually:
[2021-07-16 09:55:52.997848] nvme_qpair.c: 272:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22240 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997855] nvme_qpair.c: 455:spdk_nvme_print_completion:
*NOTICE*: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:1
[2021-07-16 09:55:52.997862] nvme_qpair.c:
594:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2021-07-16 09:55:52.997870] nvme_qpair.c:
536:nvme_qpair_manual_complete_request: *NOTICE*: Command completed
manually:
[2021-07-16 09:55:52.997877] nvme_qpair.c: 272:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22250 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997885] nvme_qpair.c: 455:spdk_nvme_print_completion:
*NOTICE*: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:1
[2021-07-16 09:55:52.997892] nvme_qpair.c:
594:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2021-07-16 09:55:52.997898] nvme_qpair.c:
536:nvme_qpair_manual_complete_request: *NOTICE*: Command completed
manually:
[2021-07-16 09:55:52.997905] nvme_qpair.c: 272:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22260 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997912] nvme_qpair.c: 455:spdk_nvme_print_completion:
*NOTICE*: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:1
[2021-07-16 09:55:52.997919] nvme_qpair.c:
594:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2021-07-16 09:55:52.997926] nvme_qpair.c:
536:nvme_qpair_manual_complete_request: *NOTICE*: Command completed
manually:
[2021-07-16 09:55:52.997937] nvme_qpair.c: 272:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22270 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997945] nvme_qpair.c: 455:spdk_nvme_print_completion:
*NOTICE*: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:1
[2021-07-16 09:55:52.997953] nvme_qpair.c:
594:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2021-07-16 09:55:52.997959] nvme_qpair.c:
536:nvme_qpair_manual_complete_request: *NOTICE*: Command completed
manually:
[2021-07-16 09:55:52.997967] nvme_qpair.c: 272:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22280 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997974] nvme_qpair.c: 455:spdk_nvme_print_completion:
*NOTICE*: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:1
[2021-07-16 09:55:52.997981] nvme_qpair.c:
594:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2021-07-16 09:55:52.997989] nvme_qpair.c:
536:nvme_qpair_manual_complete_request: *NOTICE*: Command completed
manually:
[2021-07-16 09:55:52.997996] nvme_qpair.c: 272:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22290 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.998003] nvme_qpair.c: 455:spdk_nvme_print_completion:
*NOTICE*: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:1
[2021-07-16 09:55:52.998010] nvme_qpair.c:
594:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2021-07-16 09:55:52.998017] nvme_qpair.c:
536:nvme_qpair_manual_complete_request: *NOTICE*: Command completed
manually:
[2021-07-16 09:55:52.998024] nvme_qpair.c: 272:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22300 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.998032] nvme_qpair.c: 455:spdk_nvme_print_completion:
*NOTICE*: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:1
[2021-07-16 09:55:52.998040] nvme_qpair.c:
594:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2021-07-16 09:55:52.998047] nvme_qpair.c:
536:nvme_qpair_manual_complete_request: *NOTICE*: Command completed
manually:
[2021-07-16 09:55:52.998055] nvme_qpair.c: 272:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22310 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.998062] nvme_qpair.c: 455:spdk_nvme_print_completion:
*NOTICE*: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:1
[2021-07-16 09:55:52.998070] nvme_qpair.c:
594:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2021-07-16 09:55:52.998076] nvme_qpair.c:
536:nvme_qpair_manual_complete_request: *NOTICE*: Command completed
manually:
[2021-07-16 09:55:52.998083] nvme_qpair.c: 272:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22320 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.998090] nvme_qpair.c: 455:spdk_nvme_print_completion:
*NOTICE*: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:1
[2021-07-16 09:55:52.998097] nvme_qpair.c:
594:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2021-07-16 09:55:52.998103] nvme_qpair.c:
536:nvme_qpair_manual_complete_request: *NOTICE*: Command completed
manually:
[2021-07-16 09:55:52.998110] nvme_qpair.c: 272:nvme_io_qpair_print_command:
*N
*with q depth=4*
./abort -q 4 -s 4096 -w rw -M 50 -o 40960 -r 'trtype:tcp adrfam:IPv4
traddr:10.10.10.167 trsvcid:4420 subnqn:nqn.2015-09.com.cdw:nvme.1'
[2021-07-16 09:55:04.024076] Starting SPDK v21.07-pre git sha1 b73d3e689 /
DPDK 21.02.0 initialization...
[2021-07-16 09:55:04.024156] [ DPDK EAL parameters: [2021-07-16
09:55:04.024169] abort [2021-07-16 09:55:04.024180] --no-shconf [2021-07-16
09:55:04.024190] -c 0x1 [2021-07-16 09:55:04.024198] -m 4096 [2021-07-16
09:55:04.024207] --no-pci [2021-07-16 09:55:04.024217]
--log-level=lib.eal:6 [2021-07-16 09:55:04.024226]
--log-level=lib.cryptodev:5 [2021-07-16 09:55:04.024235]
--log-level=user1:6 [2021-07-16 09:55:04.024247] --iova-mode=pa [2021-07-16
09:55:04.024257] --base-virtaddr=0x200000000000 [2021-07-16
09:55:04.024268] --match-allocations [2021-07-16 09:55:04.024279]
--file-prefix=spdk_pid126833 [2021-07-16 09:55:04.024289] ]
EAL: No available 1048576 kB hugepages reported
EAL: No legacy callbacks, legacy socket not created
Initializing NVMe Controllers
Attached to NVMe over Fabrics controller at 10.10.10.167:4420:
nqn.2015-09.com.cdw:nvme.1
Associating TCP (addr:10.10.10.167 subnqn:nqn.2015-09.com.cdw:nvme.1) NSID
1 with lcore 0
Initialization complete. Launching workers.
NS: TCP (addr:10.10.10.167 subnqn:nqn.2015-09.com.cdw:nvme.1) NSID 1 I/O
completed: 30102, failed: 15
CTRLR: TCP (addr:10.10.10.167 subnqn:nqn.2015-09.com.cdw:nvme.1) abort
submitted 45, failed to submit 30072
success 15, unsuccess 30, failed 0
*with q depth= 16*
./abort -q 16 -s 4096 -w rw -M 50 -o 40960 -r 'trtype:tcp adrfam:IPv4
traddr:10.10.10.167 trsvcid:4420 subnqn:nqn.2015-09.com.c:nvme.1'
[2021-07-16 09:55:32.400777] Starting SPDK v21.07-pre git sha1 b73d3e689 /
DPDK 21.02.0 initialization...
[2021-07-16 09:55:32.400855] [ DPDK EAL parameters: [2021-07-16
09:55:32.400868] abort [2021-07-16 09:55:32.400876] --no-shconf [2021-07-16
09:55:32.400886] -c 0x1 [2021-07-16 09:55:32.400896] -m 4096 [2021-07-16
09:55:32.400905] --no-pci [2021-07-16 09:55:32.400915]
--log-level=lib.eal:6 [2021-07-16 09:55:32.400924]
--log-level=lib.cryptodev:5 [2021-07-16 09:55:32.400934]
--log-level=user1:6 [2021-07-16 09:55:32.400944] --iova-mode=pa [2021-07-16
09:55:32.400953] --base-virtaddr=0x200000000000 [2021-07-16
09:55:32.400963] --match-allocations [2021-07-16 09:55:32.400971]
--file-prefix=spdk_pid126846 [2021-07-16 09:55:32.400980] ]
EAL: No available 1048576 kB hugepages reported
EAL: No legacy callbacks, legacy socket not created
Initializing NVMe Controllers
Attached to NVMe over Fabrics controller at 10.10.10.167:4420:
nqn.2015-09.com.cdw:nvme.1
controller IO queue size 16 less than required
Consider using lower queue depth or small IO size because IO requests may
be queued at the NVMe driver.
Associating TCP (addr:10.10.10.167 subnqn:nqn.2015-09.com.cdw:nvme.1) NSID
1 with lcore 0
Initialization complete. Launching workers.
NS: TCP (addr:10.10.10.167 subnqn:nqn.2015-09.com.cdw:nvme.1) NSID 1 I/O
completed: 49920, failed: 16
CTRLR: TCP (addr:10.10.10.167 subnqn:nqn.2015-09.com.cdw:nvme.1) abort
submitted 43, failed to submit 49893
success 16, unsuccess 27, failed 0
11 months, 2 weeks
Preparation for SPDK 21.07 release
by Zawadzki, Tomasz
Hello all,
The merge window for the SPDK 21.07 release will close by July 23rd.
Please ensure all patches you believe should be included in the release are merged to the master branch by this date.
You can mark them by adding the '21.07' hashtag to those patches in Gerrit.
The current set of patches that are tagged and need to be reviewed can be seen here:
https://review.spdk.io/gerrit/q/hashtag:%2221.07%22+status:open
On July 23rd a new branch, 'v21.07.x', will be created, and a patch on it will be tagged as the release candidate.
Then, by July 30th, the formal release will take place, tagging the last patch on the branch as SPDK 21.07.
Between the release candidate and the formal release, only critical fixes shall be backported to the 'v21.07.x' branch.
Development can continue without disruption on the 'master' branch.
Thanks,
Tomek
11 months, 3 weeks