Chun-Kai
I will update the docs to use the latest fio instead of v3.3. I will also open an issue in our repo
with your findings so that if someone runs into this issue in the future, they can search our
issues repository and find the solution.
-----Original Message-----
From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Chang, Chun-kai
Sent: Tuesday, July 30, 2019 2:38 PM
To: Storage Performance Development Kit <spdk@lists.01.org>
Subject: Re: [SPDK] replay trace with bdev fio plugin
Hi John,
1. I use this job file: spdk/examples/bdev/fio_plugin/example_config.fio (runtime=2,
iodepth=128, rw=randrw, bs=4k)
2. I use the default 2MB huge pages
Thanks,
Chun-Kai
-----Original Message-----
From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Kariuki, John K
Sent: Tuesday, July 30, 2019 2:29 PM
To: Storage Performance Development Kit <spdk@lists.01.org>
Subject: Re: [SPDK] replay trace with bdev fio plugin
Chun-Kai
Thanks for providing an update on the issue and what you learned. I have two questions to help
me reproduce the issue on fio 3.3:
1. What IO size and queue depth are you using?
2. Are you using 1 GB huge page size or the default size of 2 MB?
-----Original Message-----
From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Chang, Chun-kai
Sent: Monday, July 29, 2019 5:18 PM
To: Storage Performance Development Kit <spdk@lists.01.org>
Subject: Re: [SPDK] replay trace with bdev fio plugin
Hi John and Paul,
Thanks for your help. I am now able to replay a 30MB trace with the latest fio (v3.15).
Previously, with fio v3.3, I ran into the same error even when I increased HUGEMEM to 8192.
After recompiling SPDK with fio v3.15, I ran into a different runtime error (see below).
However, I was able to circumvent it by increasing HUGEMEM to 8192.
Perhaps the readme should recommend using the latest fio instead of v3.3?
'''
nvme_pcie.c:1390:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
nvme_qpair.c: 137:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:141 nsid:1 lba:275516464 len:8
nvme_qpair.c: 306:spdk_nvme_qpair_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) sqid:1 cid:141 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
Segmentation fault
'''
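For reference, the workaround as a rough sketch (HUGEMEM is in megabytes; the page-count arithmetic assumes the default 2 MB huge page size, and the setup.sh invocation assumes it is run from the SPDK repo root):

```shell
# SPDK's scripts/setup.sh reads HUGEMEM (in MB) to size the hugepage pool.
# With the default 2 MB page size, 8192 MB corresponds to 4096 huge pages.
HUGEMEM_MB=8192
PAGE_SIZE_MB=2
NR_PAGES=$((HUGEMEM_MB / PAGE_SIZE_MB))
echo "requesting ${NR_PAGES} huge pages"
# From the SPDK repo root (requires root):
# sudo HUGEMEM=${HUGEMEM_MB} scripts/setup.sh
```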
Thank you,
Chun-Kai
-----Original Message-----
From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Luse, Paul E
Sent: Monday, July 29, 2019 2:47 PM
To: Storage Performance Development Kit <spdk@lists.01.org>
Subject: Re: [SPDK] replay trace with bdev fio plugin
Sweet thanks John!! Off my TODO list :)
-----Original Message-----
From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Kariuki, John K
Sent: Monday, July 29, 2019 2:28 PM
To: Storage Performance Development Kit <spdk@lists.01.org>
Subject: Re: [SPDK] replay trace with bdev fio plugin
Chun-Kai
I tried this out and it seems to work fine on my system. How much HUGEMEM are you
allocating? Can you check if you're running out of huge pages (cat /proc/meminfo |
grep Huge)? I am using the default 2 MB huge pages on my system.
How large are the input files? I used 2 files (27M and 230M)
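A quick way to confirm the hugepage pool is the bottleneck is to snapshot the counters while fio is running; HugePages_Free dropping to 0 lines up with the iomem allocation failure (sketch, assuming a Linux host):

```shell
# Snapshot the hugepage counters; run this while fio is active.
# HugePages_Free reaching 0 means the pool is exhausted and the
# plugin's iomem allocation will fail with "Cannot allocate memory".
grep Huge /proc/meminfo
```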
1 minute test:
LD_PRELOAD=examples/bdev/fio_plugin/fio_plugin IODEPTH=1 fio --read_iolog io_workload
spdk_bdev.fio.conf
filename: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B,
ioengine=spdk_bdev, iodepth=1
fio-3.14-6-g97134
Starting 1 thread
Starting SPDK v19.07-pre / DPDK 19.02.0 initialization...
[ DPDK EAL parameters: fio --no-shconf -c 0x1 --log-level=lib.eal:6
--log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000
--match-allocations --file-prefix=spdk_pid251622 ]
EAL: VFIO support initialized
EAL: using IOMMU type 1 (Type 1)
Jobs: 1 (f=1): [R(1)][11.5%][r=48.6MiB/s][r=12.4k IOPS][eta 09m:00s]
filename: (groupid=0, jobs=1): err= 0: pid=251734: Mon Jul 29 14:14:59 2019
read: IOPS=12.4k, BW=48.6MiB/s (50.0MB/s)(2915MiB/59963msec)
slat (nsec): min=123, max=16859, avg=149.10, stdev=59.45
clat (usec): min=18, max=3819, avg=79.85, stdev=43.61
lat (usec): min=18, max=3819, avg=80.00, stdev=43.61
clat percentiles (usec):
| 50.000th=[ 71], 99.000th=[ 178], 99.900th=[ 229], 99.990th=[ 2024],
| 99.999th=[ 3523]
bw ( KiB/s): min=47888, max=50752, per=99.97%, avg=49764.87, stdev=625.26,
samples=119
iops : min=11972, max=12688, avg=12441.18, stdev=156.32, samples=119
lat (usec) : 20=0.01%, 50=0.85%, 100=64.94%, 250=34.12%, 500=0.05%
lat (usec) : 750=0.01%, 1000=0.01%
lat (msec) : 2=0.01%, 4=0.01%
cpu : usr=99.98%, sys=0.00%, ctx=5008, majf=0, minf=1863
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=746262,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: bw=48.6MiB/s (50.0MB/s), 48.6MiB/s-48.6MiB/s (50.0MB/s-50.0MB/s), io=2915MiB
(3057MB), run=59963-59963msec
10 minute test:
LD_PRELOAD=examples/bdev/fio_plugin/fio_plugin IODEPTH=1 fio --read_iolog io_workload1
spdk_bdev.fio.conf
filename: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B,
ioengine=spdk_bdev, iodepth=1
fio-3.14-6-g97134
Starting 1 thread
Starting SPDK v19.07-pre / DPDK 19.02.0 initialization...
[ DPDK EAL parameters: fio --no-shconf -c 0x1 --log-level=lib.eal:6
--log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000
--match-allocations --file-prefix=spdk_pid251805 ]
EAL: VFIO support initialized
EAL: using IOMMU type 1 (Type 1)
Jobs: 1 (f=1): [R(1)][99.8%][r=49.3MiB/s][r=12.6k IOPS][eta 00m:01s]
filename: (groupid=0, jobs=1): err= 0: pid=251914: Mon Jul 29 14:25:51 2019
read: IOPS=12.5k, BW=49.0MiB/s (51.4MB/s)(28.7GiB/599016msec)
slat (nsec): min=122, max=88538, avg=148.68, stdev=61.98
clat (usec): min=18, max=4020, avg=79.21, stdev=42.90
lat (usec): min=18, max=4020, avg=79.35, stdev=42.90
clat percentiles (usec):
| 50.000th=[ 71], 99.000th=[ 178], 99.900th=[ 225], 99.990th=[ 2040],
| 99.999th=[ 3458]
bw ( KiB/s): min=47840, max=51256, per=99.97%, avg=50162.78, stdev=582.27,
samples=1198
iops : min=11960, max=12814, avg=12540.63, stdev=145.55, samples=1198
lat (usec) : 20=0.01%, 50=0.86%, 100=65.33%, 250=33.72%, 500=0.05%
lat (usec) : 750=0.01%, 1000=0.01%
lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%
cpu : usr=99.96%, sys=0.02%, ctx=50020, majf=0, minf=167123
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=7514083,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: bw=49.0MiB/s (51.4MB/s), 49.0MiB/s-49.0MiB/s (51.4MB/s-51.4MB/s), io=28.7GiB
(30.8GB), run=599016-599016msec
-----Original Message-----
From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Luse, Paul E
Sent: Monday, July 29, 2019 11:35 AM
To: Storage Performance Development Kit <spdk@lists.01.org>
Subject: Re: [SPDK] replay trace with bdev fio plugin
I've been using the plug-in a lot lately, so I would be glad to help dig in (I
don't know off the top of my head) right after we get the 19.07 release out. If anyone
else knows, feel free to reply; otherwise I'll follow up later this week with my
experience/advice.
Thx
Paul
-----Original Message-----
From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Chang, Chun-kai
Sent: Monday, July 29, 2019 11:19 AM
To: spdk@lists.01.org
Subject: [SPDK] replay trace with bdev fio plugin
Hi all,
Does the bdev fio plugin support replaying fio trace with the --read_iolog flag?
I encountered the following runtime error when using this feature:
fio-3.3
Starting 1 thread
fio: pid=9537, err=12/file:memory.c:333, func=iomem allocation, error=Cannot allocate memory
Segmentation fault
The trace file I tried to replay was generated by running the fio plugin with the
--write_iolog flag and with <spdk dir>/examples/bdev/fio_plugin/example_config.fio
The target bdev is an NVMe drive, which is specified in bdev.conf.in as follows:
[Nvme]
TransportID "trtype:PCIe traddr:0000:0b:00.0" Nvme0
Trace replay works if I directly use fio v3.3 without the plugin.
I wonder if this is a limitation of the plugin. If so, how can I modify it to enable this
feature?
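For concreteness, the replay job I am attempting looks roughly like this (a sketch, not my exact file: read_iolog can be given in the job file instead of on the command line, and Nvme0 matches the bdev name in my bdev.conf.in):

```ini
[global]
ioengine=spdk_bdev
spdk_conf=./bdev.conf.in
thread=1
read_iolog=io_workload

[replay]
filename=Nvme0
```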
Thank you,
Chun-Kai
_______________________________________________
SPDK mailing list
SPDK@lists.01.org
https://lists.01.org/mailman/listinfo/spdk