nvmf_tgt seg fault
by Gruher, Joseph R
Hi everyone-
I'm running a dual-socket Skylake server with P4510 NVMe drives and a 100Gb Mellanox CX4 NIC. The OS is Ubuntu 18.04 with kernel 4.18.16, SPDK version is 18.10, and FIO version is 3.12. I'm running the SPDK NVMeoF target and exercising it from an initiator system (similar config to the target but with a 50Gb NIC) using FIO with the bdev plugin. I find 128K sequential workloads reliably and immediately seg fault nvmf_tgt. I can run 4KB random workloads without hitting the seg fault, so the problem seems tied to the block size and/or IO pattern. I can run the same IO pattern against a local PCIe device using SPDK without a problem; I only see the failure when running the NVMeoF target with FIO driving the IO pattern from an SPDK initiator system.
Steps to reproduce and seg fault output follow below.
Start the target:
sudo ~/install/spdk/app/nvmf_tgt/nvmf_tgt -m 0x0000F0 -r /var/tmp/spdk1.sock
Configure the target:
sudo ./rpc.py -s /var/tmp/spdk1.sock construct_nvme_bdev -b d1 -t pcie -a 0000:1a:00.0
sudo ./rpc.py -s /var/tmp/spdk1.sock construct_nvme_bdev -b d2 -t pcie -a 0000:1b:00.0
sudo ./rpc.py -s /var/tmp/spdk1.sock construct_nvme_bdev -b d3 -t pcie -a 0000:1c:00.0
sudo ./rpc.py -s /var/tmp/spdk1.sock construct_nvme_bdev -b d4 -t pcie -a 0000:1d:00.0
sudo ./rpc.py -s /var/tmp/spdk1.sock construct_nvme_bdev -b d5 -t pcie -a 0000:3d:00.0
sudo ./rpc.py -s /var/tmp/spdk1.sock construct_nvme_bdev -b d6 -t pcie -a 0000:3e:00.0
sudo ./rpc.py -s /var/tmp/spdk1.sock construct_nvme_bdev -b d7 -t pcie -a 0000:3f:00.0
sudo ./rpc.py -s /var/tmp/spdk1.sock construct_nvme_bdev -b d8 -t pcie -a 0000:40:00.0
sudo ./rpc.py -s /var/tmp/spdk1.sock construct_raid_bdev -n raid1 -s 4 -r 0 -b "d1n1 d2n1 d3n1 d4n1 d5n1 d6n1 d7n1 d8n1"
sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_store raid1 store1
sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l1 1200000
sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l2 1200000
sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l3 1200000
sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l4 1200000
sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l5 1200000
sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l6 1200000
sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l7 1200000
sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l8 1200000
sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l9 1200000
sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l10 1200000
sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l11 1200000
sudo ./rpc.py -s /var/tmp/spdk1.sock construct_lvol_bdev -l store1 l12 1200000
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn1 -a
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn2 -a
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn3 -a
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn4 -a
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn5 -a
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn6 -a
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn7 -a
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn8 -a
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn9 -a
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn10 -a
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn11 -a
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_create nqn.2018-11.io.spdk:nqn12 -a
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn1 store1/l1
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn2 store1/l2
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn3 store1/l3
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn4 store1/l4
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn5 store1/l5
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn6 store1/l6
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn7 store1/l7
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn8 store1/l8
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn9 store1/l9
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn10 store1/l10
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn11 store1/l11
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_ns nqn.2018-11.io.spdk:nqn12 store1/l12
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn1 -t rdma -a 10.5.0.202 -s 4420
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn2 -t rdma -a 10.5.0.202 -s 4420
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn3 -t rdma -a 10.5.0.202 -s 4420
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn4 -t rdma -a 10.5.0.202 -s 4420
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn5 -t rdma -a 10.5.0.202 -s 4420
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn6 -t rdma -a 10.5.0.202 -s 4420
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn7 -t rdma -a 10.5.0.202 -s 4420
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn8 -t rdma -a 10.5.0.202 -s 4420
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn9 -t rdma -a 10.5.0.202 -s 4420
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn10 -t rdma -a 10.5.0.202 -s 4420
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn11 -t rdma -a 10.5.0.202 -s 4420
sudo ./rpc.py -s /var/tmp/spdk1.sock nvmf_subsystem_add_listener nqn.2018-11.io.spdk:nqn12 -t rdma -a 10.5.0.202 -s 4420
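At this point the configuration can be dumped back from the target as a sanity check; I believe get_bdevs and get_nvmf_subsystems are the RPC names in 18.10:
sudo ./rpc.py -s /var/tmp/spdk1.sock get_bdevs
sudo ./rpc.py -s /var/tmp/spdk1.sock get_nvmf_subsystems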
FIO file on initiator:
[global]
rw=rw
rwmixread=100
numjobs=1
iodepth=32
bs=128k
direct=1
thread=1
time_based=1
ramp_time=10
runtime=10
ioengine=spdk_bdev
spdk_conf=/home/don/fio/nvmeof.conf
group_reporting=1
unified_rw_reporting=1
exitall=1
randrepeat=0
norandommap=1
cpus_allowed_policy=split
cpus_allowed=1-2
[job1]
filename=b0n1
Config file on initiator:
[Nvme]
TransportID "trtype:RDMA traddr:10.5.0.202 trsvcid:4420 subnqn:nqn.2018-11.io.spdk:nqn1 adrfam:IPv4" b0
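Before running fio, the target can also be reached from the initiator with the kernel initiator's discovery command as a quick sanity check (assuming nvme-cli is installed; this is not part of the failing run):
sudo nvme discover -t rdma -a 10.5.0.202 -s 4420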
Run FIO on the initiator and nvmf_tgt seg faults immediately:
sudo LD_PRELOAD=/home/don/install/spdk/examples/bdev/fio_plugin/fio_plugin fio sr.ini
Seg fault looks like this:
mlx5: donsl202: got completion with error:
00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000
00000001 00000000 00000000 00000000
00000000 9d005304 0800011b 0008d0d2
rdma.c:2698:spdk_nvmf_rdma_poller_poll: *WARNING*: CQ error on CQ 0x7f079c01d170, Request 0x139670660105216 (4): local protection error
rdma.c: 501:spdk_nvmf_rdma_set_ibv_state: *NOTICE*: IBV QP#1 changed to: IBV_QPS_ERR
rdma.c:2698:spdk_nvmf_rdma_poller_poll: *WARNING*: CQ error on CQ 0x7f079c01d170, Request 0x139670660105216 (5): Work Request Flushed Error
rdma.c: 501:spdk_nvmf_rdma_set_ibv_state: *NOTICE*: IBV QP#1 changed to: IBV_QPS_ERR
rdma.c:2698:spdk_nvmf_rdma_poller_poll: *WARNING*: CQ error on CQ 0x7f079c01d170, Request 0x139670660106280 (5): Work Request Flushed Error
rdma.c: 501:spdk_nvmf_rdma_set_ibv_state: *NOTICE*: IBV QP#1 changed to: IBV_QPS_ERR
rdma.c:2698:spdk_nvmf_rdma_poller_poll: *WARNING*: CQ error on CQ 0x7f079c01d170, Request 0x139670660106280 (5): Work Request Flushed Error
Segmentation fault
Adds this to dmesg:
[71561.859644] nvme nvme1: Connect rejected: status 8 (invalid service ID).
[71561.866466] nvme nvme1: rdma connection establishment failed (-104)
[71567.805288] reactor_7[9166]: segfault at 88 ip 00005630621e6580 sp 00007f07af5fc400 error 4 in nvmf_tgt[563062194000+df000]
[71567.805293] Code: 48 8b 30 e8 82 f7 ff ff e9 7d fe ff ff 0f 1f 44 00 00 41 81 f9 80 00 00 00 75 37 49 8b 07 4c 8b 70 40 48 c7 40 50 00 00 00 00 <49> 8b 96 88 00 00 00 48 89 50 58 49 8b 96 88 00 00 00 48 89 02 48
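To turn the faulting instruction pointer from dmesg into a source location, the load address can be subtracted from the ip and the offset fed to addr2line against the nvmf_tgt binary (a rough sketch, assuming the binary was built with debug info):
# ip 0x5630621e6580 - base 0x563062194000 = offset 0x52580
addr2line -f -e ~/install/spdk/app/nvmf_tgt/nvmf_tgt 0x52580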
slide used at dev meetup wrt CI, etc.
by Luse, Paul E
Hi All,
There was a request to send out the slide(s) Seth and I were talking to at the dev meetup. I *think* this is the main one that was being asked for - if anyone remembers other specific pictures or concepts please let me know and I'll try to dig up whatever else we shared...
Thx
Paul
New Conference Tool - First Impressions and Next Steps
by Luse, Paul E
We had 14 on the call this morning and it worked well for everyone, so we'll go ahead and try it again for next week's Asia time zone friendly meeting. It should be the same meeting info as this morning's call, but I will update the community webpage later this week so folks don't have to hunt down the email or look on IRC to get the login info.
One thing, please try using the app instead of the browser. It seems pretty lightweight and the only minor issues people had seemed to be related to using the browser (example: getting voice prompts in German instead of English, LOL).
If it works well next week and there are no objections, we'll call this our new tool. Note that we don't have any 'bot protection' right now; with any luck we won't need any, but posting the info on the website prior to the Asia meeting will be a good test.
Thanks everyone!
Paul
Testing out the compress/reduce flow
by Anand Subramanian
Hi Jim, Paul,
Is there any way to help test out the basic flows for the md or the data for the compress/reduce work so far?
Is there an easy way to set up a testbed for this without pmem (some steps to reproduce the compress lvol setup, or to set up what you folks have, would help)? Maybe that will help w.r.t. some initial testing and/or fixing some of the initial low-hanging issues?
Thanks,
Anand
Re: [SPDK] Testing out the compress/reduce flow
by Harris, James R
Hi Anand,
I think the main thing right now is just reviewing patches as they are posted. I have a series of patches I've been working on locally that I've been trying to get out for review a few patches at a time. But after Paul and I discussed some of the interfaces last week, we're going to change how the pmem file is opened. libreduce is going to do that now rather than the user of the library. I'm reworking those interfaces now and will have 5 hours on planes tomorrow to wrap up those patches and get them out to GerritHub for review.
-Jim
On 11/12/18, 4:07 PM, "SPDK on behalf of Luse, Paul E" <spdk-bounces(a)lists.01.org on behalf of paul.e.luse(a)intel.com> wrote:
Hi Anand,
Thanks for asking! Yes, there are always ways to help :) As you know, Jim is doing the SPDK reduce library so I'll let him comment on what might make sense there. I'm doing the vbdev module and I'm probably just a few days away from being able to use some help with unit tests. Right now I'm in the middle of getting the path app<-->vbdev<-->dpdk compressdev<-->bdev coded, and there will likely be a little throwaway work there because once it interfaces with the reduce lib a lot of things will change. However, since both the compressdev API and the reduce lib API are new, we want to make sure that we can test them individually through the vbdev first. With Jim's library we'll even be able to test the complete path without doing any real compression, etc.
Stay tuned and don't ever hesitate to hit us up on the mailing list. I'll send a note out later this week and let you know the best time to start - initially it will be a patch, based on the vbdev compress patch, that starts by adding the basic framework for UT and can cover the non-compression-centric functions. Won't be long!
Thanks
Paul
-----Original Message-----
From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Anand Subramanian
Sent: Monday, November 12, 2018 2:42 PM
To: spdk(a)lists.01.org
Subject: [SPDK] Testing out the compress/reduce flow
Hi Jim, Paul,
Is there any way to help test out the basic flows for the md or the data for the compress/reduce work so far?
Is there an easy way to set up a testbed for this without pmem (some steps to reproduce the compress lvol setup, or to set up what you folks have, would help)? Maybe that will help w.r.t. some initial testing and/or fixing some of the initial low-hanging issues?
Thanks,
Anand
Trying out a new conference service for Tue Nov 13 Euro Meeting
by Luse, Paul E
All-
Given all the problems we've had in the past with Skype and recently with WebEx we're going to try a new service for just one meeting and see how it goes. No other meetings are changing yet, still use WebEx for all other meetings - we're only going to try this new service for the next scheduled Euro meeting on Tue Nov 13. See https://spdk.io/community/
We won't be updating the community webpage just for this trial. I'll send another email out when we get closer with the info below and also announce it on IRC. Just wanted to give you all a heads-up. Plan on attempting to join about 5 min in advance to get it all set up (it's super easy though).
If anyone has experience with https://www.freeconferencecall.com please feel free to share. If this doesn't work well, we'll go back to the drawing board. If it does work well, we'll obviously look at switching all community meetings over to this service but probably want to try it a few more times first.
Thanks!
Paul
Tue Nov 13 Euro Call:
Access code: 652734#
Online meeting ID: paul_e_luse
Join the online meeting: https://join.freeconferencecall.com/paul_e_luse
Darek Stojaczyk - new SPDK core maintainer
by Harris, James R
The SPDK project has a team of core maintainers who are responsible for providing technical oversight for the SPDK project, including final review and merging of patches into the SPDK code base. As the SPDK project continues to grow, the core maintainer team needs to grow as well. With that in mind, I am pleased to announce that we are adding Darek Stojaczyk to the core maintainer team!
Darek has made vast contributions to both the SPDK community and its code base. He has assumed a lead role in development of the SPDK vhost target and virtio polled mode drivers, as well as keeping SPDK in sync with DPDK (including the recent changes to enable dynamic memory management). Darek is an active member of the community - providing valuable patch reviews, GitHub issue resolutions, and copious activity on IRC #spdk as “darsto”. He is from Intel, based in Gdansk, Poland, and has been contributing to SPDK since early 2017.
Thanks,
Jim Harris (representing the core maintainer team)
P.S. Please visit http://www.spdk.io/development/#core for further details on the core maintainer team and their responsibilities.
FIO NVMe Performance Results
by Gruher, Joseph R
Hi folks-
I'm testing SPDK 18.10 on Ubuntu 18.04 with kernel 4.18.16. I used FIO with the kernel NVMe driver to measure the performance of a local (PCIe attached) Intel P4500 NVMe device on a 4KB random read workload and obtained 477K IOPS, roughly in line with the drive spec. Then I tested the same drive with the SPDK FIO plugin and only achieved 13K IOPS. The FIO test files and the results are pasted below. Any ideas where I'm going wrong here?
Thanks!
don@donst201:~/fio/single/rr$ ls /sys/devices/pci0000:17/0000:17:02.0/0000:1c:00.0/nvme/nvme1/
address cntlid dev device firmware_rev model nvme2n1 power rescan_controller reset_controller serial state subsysnqn subsystem transport uevent
don@donst201:~/fio/single/rr$ cat nvme2n1.ini |grep -v '#'
[global]
rw=randrw
rwmixread=100
numjobs=4
iodepth=32
bs=4k
direct=1
thread=1
time_based=1
ramp_time=0
runtime=10
ioengine=libaio
group_reporting=1
unified_rw_reporting=1
exitall=1
randrepeat=0
norandommap=1
[nvme2n1]
filename=/dev/nvme2n1
don@donst201:~/fio/single/rr$ sudo fio nvme2n1.ini
nvme2n1: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
...
fio-3.11
Starting 4 threads
Jobs: 4 (f=4): [r(4)][100.0%][r=1874MiB/s][r=480k IOPS][eta 00m:00s]
nvme2n1: (groupid=0, jobs=4): err= 0: pid=2575: Mon Nov 5 02:26:25 2018
mixed: IOPS=477k, BW=1862MiB/s (1952MB/s)(18.2GiB/10001msec)
slat (nsec): min=1295, max=333088, avg=2216.35, stdev=927.89
clat (nsec): min=506, max=5818.8k, avg=265924.16, stdev=223761.59
lat (usec): min=6, max=5821, avg=268.19, stdev=223.76
clat percentiles (usec):
| 1.00th=[ 13], 5.00th=[ 74], 10.00th=[ 87], 20.00th=[ 115],
| 30.00th=[ 139], 40.00th=[ 167], 50.00th=[ 204], 60.00th=[ 247],
| 70.00th=[ 306], 80.00th=[ 388], 90.00th=[ 523], 95.00th=[ 660],
| 99.00th=[ 988], 99.50th=[ 1156], 99.90th=[ 2343], 99.95th=[ 3195],
| 99.99th=[ 4424]
bw ( KiB/s): min=448192, max=483104, per=25.00%, avg=476597.10, stdev=6748.78, samples=80
iops : min=112048, max=120776, avg=119149.27, stdev=1687.20, samples=80
lat (nsec) : 750=0.01%
lat (usec) : 10=0.52%, 20=1.25%, 50=0.25%, 100=13.38%, 250=45.13%
lat (usec) : 500=28.23%, 750=8.08%, 1000=2.22%
lat (msec) : 2=0.82%, 4=0.10%, 10=0.02%
cpu : usr=14.08%, sys=36.02%, ctx=1930207, majf=0, minf=697
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued rwts: total=4766095,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
MIXED: bw=1862MiB/s (1952MB/s), 1862MiB/s-1862MiB/s (1952MB/s-1952MB/s), io=18.2GiB (19.5GB), run=10001-10001msec
Disk stats (read/write):
nvme2n1: ios=4709190/0, merge=0/0, ticks=4096/0, in_queue=1388752, util=100.00%
don@donst201:~/fio/single/rr$ sudo /home/don/install/spdk/spdk/scripts/setup.sh
Active mountpoints on /dev/nvme0n1, so not binding PCI dev 0000:03:00.0
0000:1c:00.0 (8086 0a54): nvme -> uio_pci_generic
0000:1d:00.0 (8086 0a54): nvme -> uio_pci_generic
0000:5e:00.0 (8086 0a54): nvme -> uio_pci_generic
0000:5f:00.0 (8086 0a54): nvme -> uio_pci_generic
0000:62:00.0 (8086 0a54): nvme -> uio_pci_generic
0000:63:00.0 (8086 0a54): nvme -> uio_pci_generic
0000:64:00.0 (8086 0a54): nvme -> uio_pci_generic
0000:65:00.0 (8086 0a54): nvme -> uio_pci_generic
0000:da:00.0 (8086 0a54): nvme -> uio_pci_generic
0000:db:00.0 (8086 0a54): nvme -> uio_pci_generic
0000:dc:00.0 (8086 0a54): nvme -> uio_pci_generic
0000:dd:00.0 (8086 0a54): nvme -> uio_pci_generic
0000:e0:00.0 (8086 0a54): nvme -> uio_pci_generic
0000:e1:00.0 (8086 0a54): nvme -> uio_pci_generic
0000:e2:00.0 (8086 0a54): nvme -> uio_pci_generic
0000:e3:00.0 (8086 0a54): nvme -> uio_pci_generic
0000:00:04.0 (8086 2021): ioatdma -> uio_pci_generic
0000:00:04.1 (8086 2021): ioatdma -> uio_pci_generic
0000:00:04.2 (8086 2021): ioatdma -> uio_pci_generic
0000:00:04.3 (8086 2021): ioatdma -> uio_pci_generic
0000:00:04.4 (8086 2021): ioatdma -> uio_pci_generic
0000:00:04.5 (8086 2021): ioatdma -> uio_pci_generic
0000:00:04.6 (8086 2021): ioatdma -> uio_pci_generic
0000:00:04.7 (8086 2021): ioatdma -> uio_pci_generic
0000:80:04.0 (8086 2021): ioatdma -> uio_pci_generic
0000:80:04.1 (8086 2021): ioatdma -> uio_pci_generic
0000:80:04.2 (8086 2021): ioatdma -> uio_pci_generic
0000:80:04.3 (8086 2021): ioatdma -> uio_pci_generic
0000:80:04.4 (8086 2021): ioatdma -> uio_pci_generic
0000:80:04.5 (8086 2021): ioatdma -> uio_pci_generic
0000:80:04.6 (8086 2021): ioatdma -> uio_pci_generic
0000:80:04.7 (8086 2021): ioatdma -> uio_pci_generic
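Hugepage reservation after setup.sh can be double-checked with the standard procfs counters (nothing SPDK-specific here):
grep -i huge /proc/meminfo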
don@donst201:~/fio/single/rr$ cat 1c.ini |grep -v '#'
[global]
rw=randrw
rwmixread=100
numjobs=4
iodepth=32
bs=4k
direct=1
thread=1
time_based=1
ramp_time=0
runtime=10
ioengine=/home/don/install/spdk/spdk/examples/nvme/fio_plugin/fio_plugin
group_reporting=1
unified_rw_reporting=1
exitall=1
randrepeat=0
norandommap=1
[0000.1c.00.0]
filename=trtype=PCIe traddr=0000.1c.00.0 ns=1
don@donst201:~/fio/single/rr$ sudo fio 1c.ini
0000.1c.00.0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=32
...
fio-3.11
Starting 4 threads
Starting SPDK v18.10 / DPDK 18.08.0 initialization...
[ DPDK EAL parameters: fio --no-shconf -c 0x1 -m 512 --file-prefix=spdk_pid3668 ]
EAL: Detected 36 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:1c:00.0 on NUMA socket 0
EAL: probe driver: 8086:a54 spdk_nvme
Jobs: 1 (f=0): [f(1),_(3)][100.0%][eta 00m:00s]
0000.1c.00.0: (groupid=0, jobs=4): err= 0: pid=3709: Mon Nov 5 02:28:29 2018
mixed: IOPS=13.2k, BW=51.6MiB/s (54.1MB/s)(517MiB/10011msec)
slat (nsec): min=109, max=15344, avg=127.91, stdev=73.44
clat (usec): min=187, max=18715, avg=9683.80, stdev=650.54
lat (usec): min=193, max=18715, avg=9683.93, stdev=650.53
clat percentiles (usec):
| 1.00th=[ 8717], 5.00th=[ 8848], 10.00th=[ 8979], 20.00th=[ 9110],
| 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9765],
| 70.00th=[10028], 80.00th=[10159], 90.00th=[10552], 95.00th=[10945],
| 99.00th=[11338], 99.50th=[11338], 99.90th=[11731], 99.95th=[13960],
| 99.99th=[17695]
bw ( KiB/s): min=13080, max=13304, per=25.00%, avg=13214.70, stdev=55.30, samples=80
iops : min= 3270, max= 3326, avg=3303.65, stdev=13.84, samples=80
lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
lat (msec) : 2=0.01%, 4=0.02%, 10=70.85%, 20=29.11%
cpu : usr=100.01%, sys=0.00%, ctx=23, majf=0, minf=0
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=99.9%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued rwts: total=132300,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
MIXED: bw=51.6MiB/s (54.1MB/s), 51.6MiB/s-51.6MiB/s (54.1MB/s-54.1MB/s), io=517MiB (542MB), run=10011-10011msec
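As a cross-check independent of fio, SPDK's bundled perf example can drive the same drive directly; a sketch, using the option names as I understand them for 18.10:
sudo /home/don/install/spdk/spdk/examples/nvme/perf/perf -q 32 -o 4096 -w randread -t 10 -r 'trtype:PCIe traddr:0000:1c:00.0'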
Re: [SPDK] spdk_blob_io_unmap() usage
by Niu, Yawei
The problem has been solved (as I mentioned in an earlier mail), thanks for your help!
On 05/11/2018, 4:20 PM, "SPDK on behalf of Pelplinski, Piotr" <spdk-bounces(a)lists.01.org on behalf of piotr.pelplinski(a)intel.com> wrote:
Blobstore is a virtual layer that passes all unmap operations to the underlying device.
It does not rely on zeroing data using unmap.
Does spdk_bdev_unmap work for you if you use it on your NVMe drive without blobstore?
--
Best Regards,
Piotr Pelpliński
> -----Original Message-----
> From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Niu, Yawei
> Sent: Thursday, October 25, 2018 5:08 PM
> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Subject: [SPDK] spdk_blob_io_unmap() usage
>
> Hi,
>
> I tried to test spdk_blob_io_unmap() and didn't get the completion callback
> (not sure if it's because I didn't wait long enough). I checked the SPDK source and
> didn't see any test case for spdk_blob_io_unmap(), so I was wondering: is the
> unmap supposed to execute as fast as blob read/write, or is it not well
> supported for certain SSD models? BTW, spdk_blob_io_read/write() works well
> for me.
>
> My SPDK commit:
> 051297114cb393d3eb1169520d474e81b4215bf0
>
> My SSD model:
> NVMe Controller at 0000:81:00.0 [8086:2701]
> =====================================================
> Controller Capabilities/Features
> ================================
> Vendor ID: 8086
> Subsystem Vendor ID: 8086
> Serial Number: PHKS7335003H375AGN
> Model Number: INTEL SSDPED1K375GA
> Firmware Version: E2010324
> ...
> Intel Marketing Information
> ==================
> Marketing Product Information: Intel (R) Optane (TM) SSD
> P4800X Series
>
>
> Namespace ID:1
> Deallocate: Supported
> Deallocated/Unwritten Error: Not Supported
> Deallocated Read Value: Unknown
> Deallocate in Write Zeroes: Not Supported
> Deallocated Guard Field: 0xFFFF
> Flush: Not Supported
> Reservation: Not Supported
> Size (in LBAs): 732585168 (698M)
> Capacity (in LBAs): 732585168 (698M)
> Utilization (in LBAs): 732585168 (698M)
> EUI64: E4D25C73F0210100
> Thin Provisioning: Not Supported
> Per-NS Atomic Units: No
> NGUID/EUI64 Never Reused: No
> Number of LBA Formats: 7
>
> Thanks
> -Niu
>
>
>