Question about IOAT in SPDK
by zbhhbz@yeah.net
Hi, I'm working on using SPDK's IOAT feature to accelerate I/O operations.
I only see the regular polling routine in examples/ioat/perf.c,
and I'd like to know whether IOAT can support the following:
when a transaction completes, can the IOAT hardware notify the CPU with an interrupt, instead of the CPU polling for completion?
I see some flags in ioat_spec.h (like int_enabled in spdk_ioat_pq_hw_desc).
Can someone help me out?
How can I use an interrupt to notify the CPU when an IOAT transaction is done?
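For context, the in-tree ioat driver is poll-mode only: the application reaps completions by calling spdk_ioat_process_events() in a loop. A minimal sketch of that pattern (error handling and channel probing elided; `copy_done`, `done`, and `copy_and_poll` are illustrative names, not SPDK API, and interrupt-driven completion would additionally require the descriptor's interrupt-enable bit plus MSI-X handling that the current driver does not wire up):

```c
#include <stdbool.h>
#include "spdk/ioat.h"

/* Illustrative completion flag and callback. */
static volatile bool done;

static void
copy_done(void *arg)
{
	done = true;
}

/* Submit one DMA copy on an already-probed channel and poll it to completion. */
static void
copy_and_poll(struct spdk_ioat_chan *chan, void *dst, void *src, uint64_t len)
{
	done = false;
	spdk_ioat_submit_copy(chan, NULL, copy_done, dst, src, len);

	/* Poll-mode completion: the CPU spins until the callback fires. */
	while (!done) {
		spdk_ioat_process_events(chan);
	}
}
```

This sketch requires an IOAT-capable machine and the SPDK headers, so it is illustrative rather than directly runnable.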
thanks,
have a nice day
List moderation
by Harris, James R
Hi,
Apologies for the spam that has recently afflicted the SPDK mailing list. We have enabled moderation for all messages for now, including those from list members.
Regards,
Jim Harris
Does BlobFS Asynchronous API support multi thread writing?
by chen.zhenghua@zte.com.cn
Hi everyone,
I did a simple test of the BlobFS asynchronous API, using the SPDK event framework to run multiple tasks, each writing one file.
But it doesn't work: spdk_file_write_async() reports an error when resizing the file.
The call stack looks like this:
spdk_file_write_async() -> __readwrite() -> spdk_file_truncate_async() -> spdk_blob_resize()
The resize operation must be done on the metadata thread, i.e. the one that invoked spdk_fs_load(), so only the task dispatched to the metadata CPU core succeeds.
That is to say, only one thread can write files. This is hard to use, and it may cause performance issues.
Does anyone know more about this?
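One possible workaround (a sketch under assumptions, not a verified fix) is to bounce writes that may grow a file over to the core that called spdk_fs_load(), using the same event framework as the test; `g_md_core`, `write_ctx`, `submit_write`, and `do_write` below are illustrative names, and the io_channel is assumed to belong to the metadata thread:

```c
#include "spdk/event.h"
#include "spdk/blobfs.h"

/* Illustrative per-write context. */
struct write_ctx {
	struct spdk_file       *file;
	struct spdk_io_channel *channel;  /* assumed to belong to the metadata thread */
	void                   *payload;
	uint64_t                offset;
	uint64_t                length;
};

static uint32_t g_md_core;  /* core that invoked spdk_fs_load() */

static void
write_complete(void *ctx, int fserrno)
{
	/* ... handle completion ... */
}

/* Runs on the metadata core, where the implicit resize is allowed. */
static void
do_write(void *arg1, void *arg2)
{
	struct write_ctx *ctx = arg1;

	spdk_file_write_async(ctx->file, ctx->channel, ctx->payload,
			      ctx->offset, ctx->length, write_complete, ctx);
}

/* Called from any core: forward the write to the metadata core. */
static void
submit_write(struct write_ctx *ctx)
{
	struct spdk_event *ev = spdk_event_allocate(g_md_core, do_write, ctx, NULL);
	spdk_event_call(ev);
}
```

This serializes the writes on one core, so it sidesteps the crash rather than fixing the single-thread limitation the question is about.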
thanks very much
[RFC] Introduction of DAOS bdev
by Denis Barahtanov
Hello,
We are working on a new bdev type that leverages DAOS DFS as a backend
storage (https://github.com/daos-stack/daos).
The patch is on gerrit: https://review.spdk.io/gerrit/c/spdk/spdk/+/12260
Design-wise, this bdev is a file, named after the bdev itself, in a DAOS POSIX
container, using one DAOS event queue per I/O channel; an event queue per
I/O channel showed the best I/O throughput in our testing.
The implementation supports sharing pool and container connections between
devices for better connection usage.
The usage semantics are the same as for any other bdev type. To build SPDK
with DAOS support, the daos-devel package has to be installed:
$ ./configure --with-daos
To run it, the target machine should have daos_agent up and running, as
well as a pool and a POSIX container ready to use.
Then, to export the bdev over TCP:
$ ./nvmf_tgt -m [21-24] &
$ ./scripts/rpc.py nvmf_create_transport -t TCP -u 2097152 -i 2097152
$ ./scripts/rpc.py bdev_daos_create -b daosdev0 -p pool-label -c cont-label 1048576 4096
$ ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk1:cnode1 -a -s SPDK00000000000001 -d SPDK_Virtual_Controller_1
$ ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk1:cnode1 daosdev0
$ ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk1:cnode1 -t tcp -a <IP> -s 4420
On the initiator side, make sure the `nvme-tcp` module is loaded:
$ nvme connect-all -t tcp -a 172.31.91.61 -s 4420
$ nvme list
Node          SN                  Model                      Namespace  Usage              Format       FW Rev
------------  ------------------  -------------------------  ---------  -----------------  -----------  ------
/dev/nvme8n1  SPDK00000000000001  SPDK_Virtual_Controller_1  1          1.10 TB / 1.10 TB  4 KiB + 0 B  22.05
Looking forward to any suggestions.
Best regards,
Denis.