Support for multiple RocksDB databases
by Josh Perschon
Hello,
I've been reviewing the SPDK RocksDB integration and was wondering whether
multiple databases on the same bdev are supported. BlobFS exposes only a
single flat namespace with no directories, which leads me to think that only
one database per bdev is supported, although the abstractions BlobFS uses for
files differ from those of most file systems, so I'm not sure.
Thanks,
Josh
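To make the question concrete, here is a minimal sketch of what two databases
on one bdev would look like from the application side, using the plain
RocksDB C API. The path names are hypothetical, and whether a BlobFS-backed
env keeps the two prefixes distinct (given the flat namespace) is exactly the
open question above:

#include <stdio.h>
#include <stdlib.h>
#include <rocksdb/c.h>

int main(void)
{
    rocksdb_options_t *opts = rocksdb_options_create();
    rocksdb_options_set_create_if_missing(opts, 1);
    char *err = NULL;

    /* Two instances rooted at different path prefixes (hypothetical names).
     * On an ordinary file system these are separate directories; BlobFS has
     * no directories, so names like "/db1/CURRENT" and "/db2/CURRENT" would
     * be flat strings sharing one namespace. */
    rocksdb_t *db1 = rocksdb_open(opts, "/mnt/spdk/db1", &err);
    if (err) { fprintf(stderr, "db1: %s\n", err); free(err); err = NULL; }
    rocksdb_t *db2 = rocksdb_open(opts, "/mnt/spdk/db2", &err);
    if (err) { fprintf(stderr, "db2: %s\n", err); free(err); err = NULL; }

    if (db2) rocksdb_close(db2);
    if (db1) rocksdb_close(db1);
    rocksdb_options_destroy(opts);
    return 0;
}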
NVMf target configuration issue
by Kirubakaran Kaliannan
Hi All,
I am trying to get an NVMf configuration going with SPDK. I have the
following configuration:
# OFED 2.4 on both target and initiator
# SPDK + 3.18 kernel on target
# 4.9 kernel on initiator
# Mellanox ConnectX-3 Pro
/etc/spdk/nvmf.conf:
[Global]
[Rpc]
Enable No
Listen 127.0.0.1
[AIO]
AIO /dev/sdd AIO0
[Nvmf]
MaxQueuesPerSession 4
AcceptorPollRate 10000
[Subsystem1]
NQN nqn.2016-06.io.spdk:cnode1
Core 0
Listen RDMA 10.3.7.2:4420
Host nqn.2016-06.io.spdk:init
SN SPDK00000000000001
Namespace AIO0
From the initiator I tried discover and connect:
# nvme discover -t rdma -a 10.3.7.2 -s 4420
Discovery Log Number of Records 1, Generation counter 4
=====Discovery Log Entry 0======
trtype: rdma
adrfam: ipv4
subtype: nvme subsystem
treq: not specified
portid: 0
trsvcid: 4420
subnqn: nqn.2016-06.io.spdk:cnode1
traddr: 10.3.7.2
rdma_prtype: not specified
rdma_qptype: connected
rdma_cms: rdma-cm
rdma_pkey: 0x0000
# nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 10.3.7.2 -s 4420
Failed to write to /dev/nvme-fabrics: Input/output error
[dmesg]
[59909.541392] nvme nvme0: new ctrl: NQN
"nqn.2014-08.org.nvmexpress.discovery", addr 10.3.7.2:4420
[59940.389952] nvme nvme0: Connect command failed, error wo/DNR bit: 388
On the target I get the following error:
# app/nvmf_tgt/nvmf_tgt -c /etc/spdk/nvmf.conf > /tmp/1
EAL: Detected 20 lcore(s)
reactor.c: 314:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on
socket 0
copy_engine_ioat.c: 306:copy_engine_ioat_init: *NOTICE*: Ioat Copy Engine
Offload Enabled
nvmf_tgt.c: 215:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem
nqn.2014-08.org.nvmexpress.discovery on lcore 0 on socket 0
nvmf_tgt.c: 215:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem
nqn.2016-06.io.spdk:cnode1 on lcore 0 on socket 0
rdma.c: 955:spdk_nvmf_rdma_create: *NOTICE*: *** RDMA Transport Init ***
rdma.c:1120:spdk_nvmf_rdma_listen: *NOTICE*: *** NVMf Target Listening on
10.3.7.2 port 4420 ***
conf.c: 500:spdk_nvmf_construct_subsystem: *NOTICE*: Attaching block device
AIO0 to subsystem nqn.2016-06.io.spdk:cnode1
nvmf_tgt.c: 290:spdk_nvmf_startup: *NOTICE*: Acceptor running on core 0 on
socket 0
request.c: 171:nvmf_process_connect: *ERROR*: Subsystem
'nqn.2016-06.io.spdk:cnode1' does not allow host
'nqn.2014-08.org.nvmexpress:NVMf:uuid:5e5e0064-2ad6-4a43-9c26-063e3ef6cf14'
Can you please help me understand why I am not able to connect with this
configuration?
Regards,
-kiru
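The target log points at the cause: the initiator presents the Linux host's
auto-generated host NQN
(nqn.2014-08.org.nvmexpress:NVMf:uuid:5e5e0064-2ad6-4a43-9c26-063e3ef6cf14),
which does not match the Host whitelist entry nqn.2016-06.io.spdk:init in
[Subsystem1]. One way to align the two, assuming an nvme-cli build that
supports the --hostnqn option:

# nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 10.3.7.2 -s 4420 --hostnqn=nqn.2016-06.io.spdk:init

Alternatively, removing the Host line from [Subsystem1] should allow any host
to connect.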
SCST Usermode iSCSI Storage Server now handles Intel SPDK backing storage
by David Butterfield
The SCST Usermode iSCSI Storage Server can now utilize backing storage through
the Intel Storage Performance Development Kit (SPDK) API.
The SCST Usermode Server is a port of about 80 KLOC of the SCST Linux kernel
software to run entirely in usermode on an unmodified kernel, with virtually
no change to the existing SCST source code.
The diagram on the left side of this PDF page compares the usual kernel-based
SCST configuration [blue box] with the configuration adapted for usermode
[purple box]:
https://github.com/DavidButterfield/SCST-Usermode-Adaptation/blob/usermod...
The diagram on the right side of that page illustrates the datapath from
Initiator to backing storage API -- showing paths through LIO (in-kernel), and
through Usermode SCST [purple box]. The Usermode SCST server can access
backing storage through any of these interfaces: preadv(2) and pwritev(2),
aio(7), or the tcmu-runner backstorage API [red arrow].
The tcmu-runner backstorage API is a usermode interface point between the
kernel-based LIO facility and usermode backstore-specific handlers. The
tcmu-runner project implements backstore handlers for Ceph/rbd, Gluster/glfs,
and QEMU/qcow [green box]. I have re-used that same API for Usermode SCST so
that it can make use of those same backstore handlers [red arrow].
I have also implemented two additional backstore handlers: a "ram" driver that
uses mmap(2) either anonymously or with a persistent backing file; and most
recently, an interface module to the Intel Storage Performance Development Kit
(SPDK) [red circle -- note that the new SPDK module is a prototype, presently
functional with Usermode SCST, but not yet through the LIO datapath].
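As a minimal sketch of the core idea behind the "ram" handler (an
illustration under stated assumptions, not the project's code): mmap(2)
yields a byte-addressable region that is either anonymous (volatile) or
backed by a persistent file, and backstore reads and writes then become
copies against that region:

#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map 'size' bytes: anonymously if path is NULL (contents vanish on exit),
 * or backed by a file at 'path' (contents persist). Returns NULL on error. */
static void *map_backstore(const char *path, size_t size)
{
    void *base;

    if (path == NULL) {
        base = mmap(NULL, size, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    } else {
        int fd = open(path, O_RDWR | O_CREAT, 0600);
        if (fd < 0)
            return NULL;
        if (ftruncate(fd, (off_t)size) < 0) {
            close(fd);
            return NULL;
        }
        base = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd);              /* the mapping remains valid after close */
    }
    return base == MAP_FAILED ? NULL : base;
}

/* A backstore READ at byte offset 'off' is then just a copy out: */
static void backstore_read(void *base, void *buf, size_t len, off_t off)
{
    memcpy(buf, (char *)base + off, len);
}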
The project is at https://github.com/DavidButterfield/SCST-Usermode-Adaptation;
the README there has a few diagrams and a link to a technical paper. The new
SPDK backstore handler is in usermode/spdk.c.
The paper starts by describing the port of SCST from the Linux kernel to
usermode, including diagrams showing how this was done without changing the
SCST source code. Next I specify the configuration used for performance
measurements, followed by plots and analysis interpreting the results. I
introduce an experimental "Adaptive Nagle" algorithm to improve performance of
small Read operations. An appendix develops a performance model that attempts
to maintain some intuition in a fairly complicated analysis.
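The "Adaptive Nagle" details are in the paper; purely as an illustration of
the kind of mechanism the name suggests (an assumption, not necessarily what
the paper implements), one could toggle TCP_NODELAY per connection based on
recently observed reply sizes, so small Reads are pushed out immediately
rather than held for coalescing:

#include <stddef.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Hypothetical policy with made-up thresholds: disable Nagle (TCP_NODELAY=1)
 * while recent replies have been small; re-enable coalescing when they grow. */
static void adaptive_nagle(int sock, size_t recent_avg_reply_bytes)
{
    int nodelay = recent_avg_reply_bytes < 1024;
    setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &nodelay, sizeof(nodelay));
}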
David Butterfield