That's great. Keep an eye out for the items Ben mentions below - at least the first
one should be quick to implement, so you can compare both the profile data and the measured performance.
Don't forget about the community meetings either - they're a great place to chat about these
kinds of things. https://spdk.io/community/
The next one is tomorrow morning US time.
From: SPDK [mailto:email@example.com] On Behalf Of Mittal, Rishabh via SPDK
Sent: Thursday, August 15, 2019 6:50 PM
To: Harris, James R <james.r.harris(a)intel.com>; Walker, Benjamin
Cc: Mittal, Rishabh <rimittal(a)ebay.com>; Chen, Xiaoxi <xiaoxchen(a)ebay.com>;
Szmyd, Brian <bszmyd(a)ebay.com>; Kadayam, Hari <hkadayam(a)ebay.com>
Subject: Re: [SPDK] NBD with SPDK
Thanks. I will get the profiling by next week.
On 8/15/19, 6:26 PM, "Harris, James R" <james.r.harris(a)intel.com> wrote:
On 8/15/19, 4:34 PM, "Mittal, Rishabh" <rimittal(a)ebay.com> wrote:
What tool do you use for profiling?
Mostly I just use "perf top".
On 8/14/19, 9:54 AM, "Harris, James R" <james.r.harris(a)intel.com> wrote:
On 8/14/19, 9:18 AM, "Walker, Benjamin" wrote:
When an I/O is performed in the process initiating the I/O to a file, the data
goes into the OS page cache buffers at a layer far above the bio stack
(somewhere up in VFS). If SPDK were to reserve some memory and hand it off to
your kernel driver, your kernel driver would still need to copy it to that
location out of the page cache buffers. We can't safely share the page cache
buffers with a user space process.
I think Rishabh was suggesting that SPDK reserve the virtual address space only.
Then the kernel could map the page cache buffers into that virtual address space.
That would not require a data copy, but would require the mapping operations.
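As a rough illustration of the "reserve virtual address space only" idea (this is just a sketch, not SPDK code - the size is arbitrary and the actual kernel-side remapping of page cache pages is not shown): a user-space process can mmap an anonymous PROT_NONE region, which consumes address space but commits no physical memory, leaving the range free for something else to populate later.

    /* Hypothetical sketch: reserve a virtual address range with no backing
     * pages. PROT_NONE + MAP_NORESERVE consumes address space only; a kernel
     * driver would later have to map real pages into this range. */
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 1ULL << 30;  /* reserve 1 GiB of address space */
        void *base = mmap(NULL, len, PROT_NONE,
                          MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);

        if (base == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        printf("reserved %zu bytes at %p (no physical memory committed)\n",
               len, base);
        munmap(base, len);
        return 0;
    }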
I think the profiling data would be really helpful - to quantify how much of the overhead
is due to copying the 4KB of data. That can help drive next steps on how to optimize
the SPDK NBD module.
As Paul said, I'm skeptical that the memcpy is significant in the overall
performance you're measuring. I encourage you to go look at some profiling data
and confirm that the memcpy is really showing up. I suspect the overhead is
instead primarily in these spots:
1) Dynamic buffer allocation in the SPDK NBD backend.
As Paul indicated, the NBD target is dynamically allocating memory for each I/O.
The NBD backend wasn't designed to be fast - it was designed to be simple.
Pooling would be a lot faster and is something fairly easy to implement.
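Not from the original mails, but here is a rough sketch of what that pooling could look like using SPDK's existing spdk_mempool API. The pool name, buffer size, and depth below are made-up numbers, not what the nbd module actually uses.

    /* Sketch only: pre-allocate a pool of I/O buffers at startup and reuse
     * them per request, instead of allocating/freeing on every NBD command. */
    #include "spdk/env.h"

    #define NBD_BUF_SIZE   (64 * 1024)
    #define NBD_POOL_DEPTH 256

    static struct spdk_mempool *g_nbd_buf_pool;

    static int nbd_buf_pool_init(void)
    {
        g_nbd_buf_pool = spdk_mempool_create("nbd_io_bufs", NBD_POOL_DEPTH,
                                             NBD_BUF_SIZE,
                                             SPDK_MEMPOOL_DEFAULT_CACHE_SIZE,
                                             SPDK_ENV_SOCKET_ID_ANY);
        return g_nbd_buf_pool ? 0 : -1;
    }

    static void *nbd_buf_get(void)
    {
        /* O(1) get from the pre-allocated pool; no heap allocation per I/O */
        return spdk_mempool_get(g_nbd_buf_pool);
    }

    static void nbd_buf_put(void *buf)
    {
        spdk_mempool_put(g_nbd_buf_pool, buf);
    }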
2) The way SPDK does the syscalls when it implements the NBD backend.
Again, the code was designed to be simple, not high performance. It simply calls
read() and write() on the socket for each command. There are much higher
performance ways of doing this, they're just more complex to implement.
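To give one concrete example of what "higher performance" could mean here (this is not how the current nbd module is written, and the function name is made up): coalescing the NBD reply header and the read payload into a single writev() halves the syscall count on the response path.

    /* Sketch only: send the nbd_reply header and the read data in one
     * writev() call instead of two separate write() calls. */
    #include <linux/nbd.h>
    #include <sys/uio.h>

    static ssize_t nbd_send_read_reply(int sock, struct nbd_reply *reply,
                                       void *payload, size_t payload_len)
    {
        struct iovec iov[2];

        iov[0].iov_base = reply;
        iov[0].iov_len = sizeof(*reply);
        iov[1].iov_base = payload;
        iov[1].iov_len = payload_len;

        /* One syscall for header + data; a real implementation still has to
         * handle short writes and non-blocking sockets. */
        return writev(sock, iov, 2);
    }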
3) The lack of multi-queue support in NBD
Every I/O is funneled through a single sockpair up to user space. That means
there is locking going on. I believe this is just a limitation of NBD today - it
doesn't plug into the block-mq stuff in the kernel and expose multiple
sockpairs. But someone more knowledgeable on the kernel stack would need to confirm.
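For anyone unfamiliar with the plumbing being described, roughly this is the single-connection setup: the user-space server creates one socketpair and hands one end to the kernel nbd driver, so every command for the device flows over that one socket. This is only a sketch (error handling trimmed, device path is an example), not the exact spdk/nbd code.

    /* Sketch of the single-queue NBD attach being described: one socketpair,
     * one end given to the kernel via NBD_SET_SOCK, so all I/O for the device
     * is funneled through that single socket. */
    #include <fcntl.h>
    #include <linux/nbd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>

    int nbd_attach_single_queue(const char *dev_path, int *user_fd_out)
    {
        int sv[2];
        int dev_fd;

        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0)
            return -1;

        dev_fd = open(dev_path, O_RDWR);   /* e.g. "/dev/nbd0" */
        if (dev_fd < 0)
            return -1;

        /* The kernel reads requests from / writes replies to sv[1]. */
        ioctl(dev_fd, NBD_SET_SOCK, sv[1]);

        /* User space (the SPDK app) services commands on sv[0]. */
        *user_fd_out = sv[0];
        return dev_fd;
    }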
A couple of things that I am not really sure about in this flow are:
1. How memory registration is going to work with the RDMA driver.
2. What changes are required in SPDK memory management.