Performance impact with "enable_zerocopy_send"
by Charlie Li
Hello,
In SPDK 20.10, "enable_zerocopy_send" is set to "false" by default.
I tried setting "enable_zerocopy_send" to true, but noticed that performance
dropped.
Is this expected? My understanding is that enabling
"enable_zerocopy_send" should improve performance.
Thanks,
Charlie
3 days, 6 hours
can't get new HTTP password for review.spdk.io
by Young Tack Jin
Hi there,
I can't get a new HTTP password for review.spdk.io; I get this error:
---
An error occurred
Error 500 (Server Error): Internal server error
Endpoint: /gerrit/accounts/self/password.http
---
Who can help with this issue?
Thanks,
YT
3 weeks, 4 days
Submitting NVME I/O write req with SGL
by Filip Janiszewski
Hi,
I have plenty of 64B buffers to write to disk, but I'm having trouble
getting spdk_nvme_ns_cmd_writev_with_md to do that. I was using the 'perf'
test application and the following patch as sample code:
https://gerrithub.io/c/spdk/spdk/+/437905 , but I keep
getting errors like:
[2021-01-29 11:17:46.573755] nvme_ns_cmd.c: 268:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 64 not even multiple of lba_size 512
So I have a couple of questions:
1) Is it possible to submit NVMe I/O write requests whose buffers are
smaller than an LBA, using SGL or something similar?
2) Is there any documentation for that?
3) Will this work only with FIO? (The FIO plugin seems to do it, but I'm not
sure.)
Obviously I can't just merge my buffers into LBA-sized blocks, as that
would mean copying memory; I can't afford to copy memory, and I need to dump
those buffers to disk. (Wasting an entire 512B LBA for each single
64B buffer would work, but is not acceptable due to the waste of disk space.)
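Regarding question 1, below is a minimal sketch (an editor's illustration, not from the original mail) of the SGL callbacks that spdk_nvme_ns_cmd_writev_with_md() expects, gathering eight 64B buffers into one 512B LBA so that the total payload is a multiple of the LBA size; my_sgl_ctx, my_reset_sgl, my_next_sge and submit_gathered_write are hypothetical names, and whether sub-block SGEs are accepted still depends on the controller's SGL support:
```c
#include "spdk/nvme.h"

#define BUF_COUNT 8	/* 8 x 64B buffers == 512B == one LBA on this namespace */
#define BUF_SIZE  64

struct my_sgl_ctx {
	void	*bufs[BUF_COUNT];	/* the small 64B buffers to gather */
	uint32_t idx;			/* next SGE handed back to the driver */
};

/* Called by the driver before (re)walking the SGL; offset is in bytes. */
static void
my_reset_sgl(void *cb_arg, uint32_t offset)
{
	struct my_sgl_ctx *ctx = cb_arg;

	ctx->idx = offset / BUF_SIZE;
}

/* Called repeatedly to fetch the next SGE: return its address and length. */
static int
my_next_sge(void *cb_arg, void **address, uint32_t *length)
{
	struct my_sgl_ctx *ctx = cb_arg;

	*address = ctx->bufs[ctx->idx++];
	*length = BUF_SIZE;
	return 0;
}

/* Submit one LBA worth of gathered 64B buffers as a single write. */
static int
submit_gathered_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
		      struct my_sgl_ctx *ctx, uint64_t lba, spdk_nvme_cmd_cb cb_fn)
{
	return spdk_nvme_ns_cmd_writev_with_md(ns, qpair, lba, 1 /* lba_count */,
					       cb_fn, ctx, 0 /* io_flags */,
					       my_reset_sgl, my_next_sge,
					       NULL /* metadata */, 0, 0);
}
```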
Thanks
--
BR, Filip
+48 666 369 823
1 month
Why does the flush operation in bdev_nvme do nothing?
by fandahao17@mail.ustc.edu.cn
Hi there! I ran into the following code in module/bdev/nvme/bdev_nvme.c when I wanted to force a flush to my SSD to make sure all my previous writes are durable.
static int
bdev_nvme_flush(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
		struct nvme_bdev_io *bio, uint64_t offset, uint64_t nbytes)
{
	spdk_bdev_io_complete(spdk_bdev_io_from_ctx(bio), SPDK_BDEV_IO_STATUS_SUCCESS);
	return 0;
}
It seems that it just completes without issuing any NVMe FLUSH command. This interests me because I thought it would call something like spdk_nvme_ns_cmd_flush(). I am curious: how can I make sure the data is written to the non-volatile media of the SSD? Can I still guarantee write-read ordering without a FLUSH operation in between? Thanks!
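For reference, a minimal sketch (an editor's illustration, not part of the original mail) of issuing an explicit flush at the NVMe driver level with spdk_nvme_ns_cmd_flush(); issue_flush and flush_done are hypothetical names, and the completion is reported later through the qpair's completion polling:
```c
#include "spdk/nvme.h"

/* Hypothetical completion callback for the flush. */
static void
flush_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	(void)arg;
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "NVMe FLUSH failed\n");
	}
}

/* Ask the controller to commit previously written data to non-volatile media.
 * Completion is reported later via spdk_nvme_qpair_process_completions(). */
static int
issue_flush(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair)
{
	return spdk_nvme_ns_cmd_flush(ns, qpair, flush_done, NULL);
}
```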
1 month, 1 week
Bad tail latency of 'spdk_nvme_qpair_process_completions'
by Jing Liu
Hi,
I was using the `perf` tool to test the performance of a 4K sequential read
workload with the command:
# ./perf -q 1 -s 4096 -w read -L -t 15 -c 1
My device is an Intel Optane SSD and I am using SPDK 18.04; the worst-case
latency is about 850us.
Initially, I thought it was due to the device itself. However, detailed
timing shows that 'spdk_nvme_qpair_process_completions' itself takes that
long.
Specifically, what I do is like this:
```
poll_pre_tick = spdk_get_ticks();
int num = spdk_nvme_qpair_process_completions(ns_ctx->u.nvme.qpair,
					      g_max_completions);
poll_end_tick = spdk_get_ticks();
uint64_t diff_tick = poll_end_tick - poll_pre_tick;
if (num > 0 && diff_tick > CHECK_TICK_THR) {
	fprintf(stdout, "======= poll uses tick:%lu us:%f\n",
		diff_tick, (double)diff_tick * 1000 * 1000 / g_tsc_rate);
}
```
And the output is:
```
======= poll uses tick:2585384 us:1231.135238
======= poll uses tick:11154 us:5.311429
======= poll uses tick:12942 us:6.162857
======= poll uses tick:11422 us:5.439048
======= poll uses tick:19602 us:9.334286
======= poll uses tick:10874 us:5.178095
======= poll uses tick:11076 us:5.274286
======= poll uses tick:13210 us:6.290476
======= poll uses tick:10148 us:4.832381
======= poll uses tick:49488 us:23.565714
======= poll uses tick:382400 us:182.095238
.... (some lines eliminated)
======= poll uses tick:11268 us:5.365714
======= poll uses tick:68202 us:32.477143
======= poll uses tick:31284 us:14.897143
======= poll uses tick:63888 us:30.422857
======= poll uses tick:13520 us:6.438095
======= poll uses tick:18970 us:9.033333
======= poll uses tick:2808446 us:1337.355238
======= poll uses tick:43882 us:20.896190
======= poll uses tick:55554 us:26.454286
======= poll uses tick:39190 us:18.661905
======= poll uses tick:25210 us:12.004762
======= poll uses tick:11156 us:5.312381
======= poll uses tick:24674 us:11.749524
======= poll uses tick:11962 us:5.696190
======= poll uses tick:40048 us:19.070476
======= poll uses tick:64096 us:30.521905
========================================================
                                                            Latency(us)
Device Information                                        :       IOPS      MB/s    Average        min        max
INTEL SSDPED1D960GAY (PHMB8361000P960EGN ) from core 0    :  153269.20    598.71       6.51       6.15    1351.92
========================================================
Total                                                     :  153269.20    598.71       6.51       6.15    1351.92
```
This is quite unexpected, since I thought that in polling mode the
device tail latency is not supposed to be part of the on-CPU processing time.
I'm wondering whether this symptom is reasonable and what the possible
reasons for it could be. Why does 'polling a bit + perf's callback' take
that long?
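One way to narrow this down (an editor's sketch, not from the original mail; io_complete and g_cb_ticks are illustrative names): since spdk_nvme_qpair_process_completions() invokes the application's completion callbacks inline, the callback body can be timed separately and subtracted from the measured window to separate the driver's own polling cost from perf's per-I/O callback work:
```c
#include "spdk/env.h"
#include "spdk/nvme.h"

static uint64_t g_cb_ticks;	/* ticks accumulated inside completion callbacks */

/* Hypothetical completion callback: time the per-I/O work done here
 * (bookkeeping, resubmission), since it runs inside the poll call. */
static void
io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	uint64_t start = spdk_get_ticks();

	(void)ctx;
	(void)cpl;
	/* ... per-I/O bookkeeping and resubmission ... */

	g_cb_ticks += spdk_get_ticks() - start;
}

/* In the poll loop, (measured window) - g_cb_ticks is then the driver's own
 * polling cost; the remainder is time spent in the callbacks. */
```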
Thanks,
Jing
1 month, 2 weeks
"nvmf_create_transport" fails with 16 cores
by Charlie Li
Hello,
I am using SPDK 20.10 - "Starting SPDK v20.10 git sha1 e5d26ecc2 / DPDK
20.08.0 initialization..."
If I start the SPDK NVMe-oF target with 15 cores - "sudo ./build/bin/nvmf_tgt
-m 0x7FFF" -
"sudo ./scripts/rpc.py nvmf_create_transport -t TCP" works fine.
If I start the SPDK NVMe-oF target with 16 cores - "sudo ./build/bin/nvmf_tgt
-m 0xFFFF" -
"sudo ./scripts/rpc.py nvmf_create_transport -t TCP" fails with the
following message:
request:
{
  "trtype": "TCP",
  "c2h_success": true,
  "no_wr_batching": false,
  "method": "nvmf_create_transport",
  "req_id": 1
}
Got JSON-RPC error response
response:
{
  "code": -32603,
  "message": "Transport type 'TCP' create failed"
}
Is there any way to use 16 or more cores?
Thanks,
Charlie
1 month, 2 weeks