Hi Jim,
Thank you. Adding the thread parameter did the trick; I am able to see the job complete
without errors.
So going forward we need the thread parameter in our job files, and it should be set to 1.
- Kiran
On 04-Apr-2017, at 9:19 PM, Harris, James R
<james.r.harris(a)intel.com> wrote:
>
> On Apr 4, 2017, at 4:25 AM, Kiran Dikshit <kdikshit(a)cloudsimple.com> wrote:
>
> Hi Jim,
>
> Thank you for the pointers relating to the workloads. Are these tests (throughput test, latency test) done on a single drive, or are they run across a bunch of drives?
Hi Kiran,
This will depend on what you are trying to measure. Some throughput tests measure how many I/Os can be performed using a single Intel Xeon CPU core, in which case we test with multiple SSDs to max out the CPU core. Other tests, such as QD=1 latency tests, are usually done using a single drive.
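As a rough illustration only (not an official recipe), the fio parameters we typically vary between the two kinds of runs look something like this:

    # throughput-style run: 4KB random reads at a deep queue depth,
    # with several SSDs listed so a single core is saturated
    bs=4k
    rw=randread
    iodepth=128

    # latency-style run: same 4KB random reads, but a single drive at queue depth 1
    bs=4k
    rw=randread
    iodepth=1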
> Attached is the FIO job file which I have used and am getting the error with.
Please add thread=1 to your job file. A patch will be pushed shortly that documents this
requirement for the SPDK plugin and enforces it at runtime with a more suitable error
message.
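For reference, a minimal single-job file along these lines should work with the plugin (the PCIe address in filename= is just a placeholder; substitute your drive's address, and run fio with the plugin loaded as described in the fio_plugin README):

    [global]
    ioengine=spdk
    thread=1
    direct=1
    bs=4k
    rw=randread
    iodepth=128
    time_based=1
    runtime=60

    [job0]
    filename=trtype=PCIe traddr=0000.04.00.0 ns=1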
-Jim
>
>
> Thank you
> Kiran
>
>> On 31-Mar-2017, at 12:09 AM, Harris, James R <james.r.harris(a)intel.com>
wrote:
>>
>>>
>>> On Mar 30, 2017, at 5:43 AM, Kiran Dikshit <kdikshit(a)cloudsimple.com>
wrote:
>>>
>>> Hi All,
>>>
>>>
>>> I have a 2-part question relating to performance benchmarking of SPDK and the fio_plugin.
>>>
>>> 1. Is there any reference document specifying the workload types, queue depths, and block sizes used for benchmarking the SPDK performance numbers? Is there any OS-level performance tuning to be done? It would be great if we could get some insight into the performance testbed used.
>>>
>>> Note:
>>> I did find the DPDK performance optimisation guidelines at https://lists.01.org/pipermail/spdk/2016-June/000035.html, which are useful.
>>>
>>
>> Hi Kiran,
>>
>> Most of the tests we run for driver performance comparisons are with 4KB random
reads. This puts the most stress on the driver, since on NAND SSDs, random read
performance is typically much higher than random write performance. For throughput tests,
queue depth is typically tested at 128. For latency tests, queue depth is typically
tested at 1.
>>
>>>
>>> 2. I am trying the fio_plugin for benchmarking the performance; the jobs are completing with the following error:
>>>
>>> “nvme_pcie.c: 996:nvme_pcie_qpair_complete_pending_admin_request: ***ERROR***
the active process (pid 6700) is not found for this controller.”
>>>
>>> I found that nvme_pcie_qpair_complete_pending_admin_request() is checking whether the process exists; this is where the error message is coming from.
>>> I am not sure how this process is getting killed even before completion. There is no other operation being done on this system apart from running the fio plugin.
>>> Has any similar issue been seen in the past that might help get around this error?
>>
>> Could you post the contents of your fio job file? Note that the SPDK fio plugin is currently limited to a single job. I would expect to see an error like this when specifying multiple jobs without the thread option (meaning fio uses a separate process per job).
>>
>> Thanks,
>>
>> -Jim
>>
>>>
>>> Below are my setup details:
>>>
>>> OS: Fedora 25 (Server Edition)
>>> Kernel version: 4.8.6-300
>>> DPDK version: 6.11
>>>
>>> I have attached a single 745GB Intel NVMe drive, on which I am running FIO.
>>>
>>> The workarounds below were tried and the issue still persists; I am not sure how to get around this.
>>>
>>> Workarounds:
>>>
>>> 1. Tried different workloads in FIO
>>> 2. Detached the NVMe drive and attached a new NVMe drive
>>> 3. Re-installed DPDK, SPDK, and the FIO tool
>>>
>>> Note:
>>> The following links were used to install and set up SPDK and the FIO plugin:
>>>
https://github.com/spdk/spdk —> SPDK
>>>
https://github.com/spdk/spdk/tree/master/examples/nvme/fio_plugin —> FIO_plugin
>>>
>>>
>>> Thank you
>>> Kiran
>>
>
> <seq_read.4KiB_csi.fio>