SPDK NVMe-oF Performance Report on spdk.io
by Verma, Vishal4
Hi All,
We recently performed SPDK NVMe-oF Target and Initiator performance benchmarking on our hardware setup in the Chandler lab. After capturing and analyzing the data, we created this SPDK NVMe-oF performance report, which covers all the necessary details regarding the environment setup, configuration, and performance results.
This report is published on spdk.io here: https://ci.spdk.io/download/performance-reports/SPDK_nvmeof_perf_report_1...
Performance results were measured using the FIO benchmark and are based on the SPDK 18.04 release.
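For anyone not familiar with the tooling, a minimal fio invocation for this kind of measurement looks roughly like the following (illustrative only; the exact job files, ioengine, queue depths and block sizes we used are documented in the report):
# 4K random read; the device name is just an example of an NVMe-oF namespace visible on the initiator
fio --name=randread --filename=/dev/nvme1n1 --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k --iodepth=32 --numjobs=4 --time_based --runtime=60 \
    --group_reporting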
This report highlights:
- SPDK NVMe-oF Target and Initiator I/O performance scaling capabilities.
- SPDK vs. Linux Kernel NVMe-oF performance (throughput) and efficiency information.
- Average I/O latency comparison between the SPDK and Linux kernel NVMe-oF implementations.
Please let us know if you have any questions or concerns about the information provided in the report.
Thanks,
Vishal
2 years, 6 months
IP based load balancing of nvmf connections
by Avinash M N
Hi all,
I’ve uploaded an experimental patch that adds an IP-based load balancer.
URL: https://review.gerrithub.io/c/spdk/spdk/+/422190
Should we also take the Host NQN into account for load balancing? My understanding is that a host may use multiple Host NQNs for its connections. Please let me know your thoughts on this.
@Walker, Benjamin: Can you please add me to the Trello board? My ID is avinashmn1.
Thanks and Regards
Avinash
2 years, 6 months
Vagrant Orchestration Status
by Howell, Seth
Hi all,
Recently, we have started focusing on expanding the scope of Vagrant orchestration for configuring test machines. I have created a Trello board to track our progress through this expansion. I have split the tasks into the following sections, which I feel best represent our three main goals with regard to Vagrant.
1. Enabling autorun.sh tests in Vagrant
2. Preparing custom Vagrant images
3. Enabling Vagrant images in CI (specifically, replacing the current machines with Vagrant images)
Most of the tasks right now are focused on the first point, because the next two depend on its successful completion. If you are interested in this topic, please look at the Trello board at https://trello.com/b/JV2oa1VK/vagrant-setup-expansion for more information. We will also be bringing this topic up periodically in community meetings.
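As a rough sketch of what the first goal looks like in practice (the distro name, per-VM directory, in-VM path and conf file below are all illustrative assumptions; create_vbox.sh lives under scripts/vagrant in the SPDK repo, and autorun.sh expects an autorun-spdk.conf describing which tests to enable):
# bring up a test VM using the existing Vagrant tooling in the repo
cd spdk/scripts/vagrant
./create_vbox.sh ubuntu18
# create_vbox.sh generates a per-VM directory; the name used here is assumed
cd ubuntu18
# run the standard test entry point inside the VM (in-VM path and conf file assumed)
vagrant ssh -c 'cd spdk && sudo ./autorun.sh ~/autorun-spdk.conf'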
Thanks,
Seth Howell
2 years, 6 months
Error with Jenkins Build Test
by John Barnard
I'm having a problem getting one of the SPDK Jenkins CI build tests (nvme_phy_autotest) to pass for my patch 416570 (move target opts to transport opts). The error is in an nvme test and my patch is for nvmf, so I don't understand why it isn't passing. The SPDK Automated Test System is passing all tests. Will someone please take a look and let me know what's going on? (https://review.gerrithub.io/c/spdk/spdk/+/416570)
Thanks,
John Barnard
2 years, 6 months
Re: [SPDK] performing burn-in on NVMe using SPDK
by Harris, James R
Hi Tom,
I would suggest getting these types of details from the SSD vendor – different vendors may have different BKMs and vendor-specific log pages to get this kind of state information. Once you know exactly which log pages, etc. you want to get from the SSD, I would suggest taking a look at the SPDK nvme-cli fork which can be used to send arbitrary passthrough commands to an SPDK-managed device.
https://github.com/spdk/nvme-cli
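As a rough illustration, the plain nvme-cli subcommands for pulling health and log data look like the following; the SPDK fork exposes the same subcommands once built against SPDK (device naming for SPDK-managed devices differs, and the vendor-specific log page ID below is only a placeholder to be replaced per the vendor's guidance):
nvme smart-log /dev/nvme0                             # SMART / health information (log page 0x02)
nvme error-log /dev/nvme0                             # error information log (log page 0x01)
nvme get-log /dev/nvme0 --log-id=0xc0 --log-len=512   # vendor-specific page; ID and length per vendor docs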
Regards,
-Jim
On 8/14/18, 5:16 AM, "SPDK on behalf of Nabarro, Tom" <spdk-bounces(a)lists.01.org on behalf of tom.nabarro(a)intel.com> wrote:
Hello,
I'm looking for advice on how to verify the state of NVMe devices with SPDK. Should this be done using burn-in, and if so, are there any BKMs for how it should be done? If not, how about something like a SMART long test? Advice appreciated. Thanks
Tom
2 years, 6 months
performing burn-in on NVMe using SPDK
by Nabarro, Tom
Hello,
I'm looking for advice on how to verify the state of NVMe devices with SPDK. Should this be done using burn-in, and if so, are there any BKMs for how it should be done? If not, how about something like a SMART long test? Advice appreciated. Thanks
Tom
---------------------------------------------------------------------
Intel Corporation (UK) Limited
Registered No. 1134945 (England)
Registered Office: Pipers Way, Swindon SN3 1RJ
VAT No: 860 2173 47
This e-mail and any attachments may contain confidential material for
the sole use of the intended recipient(s). Any review or distribution
by others is strictly prohibited. If you are not the intended
recipient, please contact the sender and delete all copies.
2 years, 6 months
Community meetings
by Harris, James R
Hi everyone,
Anyone who has called into the SPDK community meetings over the last few months has likely noticed disruptions caused by bots and spam. We are making some changes that we hope will eliminate these disruptions.
Moving forward, the URLs on the http://spdk.io/community page will take you to the Intel WebEx site, where you can type the meeting number and password. We have removed the URLs with the embedded meeting numbers, since these URLs also bypassed the password.
Also, please note that all of the meeting numbers have changed. The new meeting numbers can also be found on http://spdk.io/community.
Thanks,
-Jim
2 years, 6 months
Re: [SPDK] Proposal to drop test from CH test pool
by Harris, James R
I’m fine with it. Looks like fedora-05 is already running vhost so we could just have fedora-08 run lvol.
-Jim
From: SPDK <spdk-bounces(a)lists.01.org> on behalf of Paul E Luse <paul.e.luse(a)intel.com>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org>
Date: Tuesday, August 7, 2018 at 8:21 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] Proposal to drop test from CH test pool
All-
This came up in the community meeting this morning and was discussed. I wanted to throw it out for the maintainers to decide. There’s already coverage for this in Jenkins, and with the CH pool EOL’ing soon (in a month or so), it seems to make more sense to just drop the test rather than add more resources.
Thx
Paul
1) The Chandler test pool runs the lvol tests with vhost - test time per patch is currently over 10 minutes - should we break the lvol tests out onto a separate system in the Chandler test pool?
a) Note: Jenkins runs the lvol tests separately from vhost
2 years, 6 months
Error when issue IO in QEMU to vhost scsi NVMe
by Adam Chang
Hi all:
I just created an NVMe bdev and a vhost-scsi controller that can be accessed by QEMU, but errors occur when I/O is issued from the VM.
Here are my steps for the SPDK configuration:
Host OS: Ubuntu 18.04, Kernel 4.15.0-30
Guest OS: Ubuntu 18.04
QEMU: 2.12.0
SPDK: v18.07
1) sudo HUGEMEM=4096 scripts/setup.sh
0000:05:00.0 (8086 2522): nvme -> vfio-pci
Current user memlock limit: 4116 MB
This is the maximum amount of memory you will be
able to use with DPDK and VFIO if run as current user.
To change this, please adjust limits.conf memlock limit for current user.
2) sudo ./app/vhost/vhost -S /var/tmp -m 0x3 &
[ DPDK EAL parameters: vhost -c 0x3 -m 1024 --legacy-mem --file-prefix=spdk_pid1921 ]
EAL: Detected 12 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/spdk_pid1921/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
app.c: 530:spdk_app_start: *NOTICE*: Total cores available: 2
reactor.c: 718:spdk_reactors_init: *NOTICE*: Occupied cpu socket mask is 0x1
reactor.c: 492:_spdk_reactor_run: *NOTICE*: Reactor started on core 1 on socket 0
reactor.c: 492:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on socket 0
3) sudo ./scripts/rpc.py construct_vhost_scsi_controller --cpumask 0x1 vhost.0
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL: probe driver: 8086:2522 spdk_nvme
EAL: using IOMMU type 1 (Type 1)
Nvme0n1
4) sudo ./scripts/rpc.py add_vhost_scsi_lun vhost.0 0 Nvme0n1
5) start qemu:
taskset qemu-system-x86_64 -enable-kvm -m 1G \
-name bread,debug-threads=on \
-daemonize \
-pidfile /var/log/bread.pid \
-cpu host \
-smp 4,sockets=1,cores=4,threads=1 \
-object memory-backend-file,id=mem0,size=1G,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem0 \
-drive file=../ubuntu.img,media=disk,cache=unsafe,aio=threads,format=qcow2 \
-chardev socket,id=char0,path=/var/tmp/vhost.0 \
-device vhost-user-scsi-pci,id=scsi0,chardev=char0 \
-machine usb=on \
-device usb-tablet \
-device usb-mouse \
-device usb-kbd \
-vnc :2 \
-net nic,model=virtio \
-net user,hostfwd=tcp::2222-:22
Then, when I used fio to test the vhost NVMe disk in the guest VM, I got the following error messages on the host console:
===========================================================================
nvme_pcie.c:1706:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x7f8fed64d000) failed
nvme_qpair.c: 137:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:0 len:32
nvme_qpair.c: 306:nvme_qpair_print_completion: *NOTICE*: INVALID FIELD (00/02) sqid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
bdev_nvme.c:1521:bdev_nvme_queue_cmd: *ERROR*: readv failed: rc = -22
[the above four-line group repeats six times with len:32, then six more times identical except with len:8]
===========================================================================
I used lsblk to check the block device information in the guest, and could see the NVMe disk as sdb.
>lsblk --output "NAME,KNAME,MODEL,HCTL,SIZE,VENDOR,SUBSYSTEMS"
===========================================================================
NAME KNAME MODEL HCTL SIZE VENDOR SUBSYSTEMS
fd0 fd0 4K block:platform
loop0 loop0 12.2M block
loop1 loop1 86.6M block
loop2 loop2 1.6M block
loop3 loop3 3.3M block
loop4 loop4 21M block
loop5 loop5 2.3M block
loop6 loop6 13M block
loop7 loop7 3.7M block
loop8 loop8 2.3M block
loop9 loop9 86.9M block
loop10 loop10 34.7M block
loop11 loop11 87M block
loop12 loop12 140.9M block
loop13 loop13 13M block
loop14 loop14 140M block
loop15 loop15 139.5M block
loop16 loop16 3.7M block
loop17 loop17 14.5M block
sda sda QEMU HARDDISK 0:0:0:0 32G ATA block:scsi:pci
sda1 sda1 32G block:scsi:pci
sdb sdb NVMe disk 2:0:0:0 27.3G INTEL block:scsi:virtio:pci
sr0 sr0 QEMU DVD-ROM 1:0:0:0 1024M QEMU block:scsi:pci
===========================================================================
Can anyone give me some help on how to solve this problem? Thanks.
Adam Chang
2 years, 6 months