I tested the BlobFS asynchronous API by using the SPDK event framework to run multiple tasks, each of which writes one file.
However, it doesn't work: spdk_file_write_async() reported an error while resizing the file.
The call stack looks like this:
spdk_file_write_async() -> __readwrite() -> spdk_file_truncate_async() -> spdk_blob_resize()
The resize operation must be performed on the metadata thread that invoked spdk_fs_load(), so only the task dispatched to the metadata CPU core succeeds.
That is to say, only one thread can be used to write files. It's hard to use, and performance issues may arise.
Does anyone know more about this?
Thanks very much.
In SPDK 20.10, "enable_zerocopy_send" is set to "false" by default.
I tried to set "enable_zerocopy_send" to true, but noticed that performance
did not improve. Is this expected? My understanding is that setting
"enable_zerocopy_send" to true should improve performance.
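For reference, the option can also be toggled in the JSON startup config rather than through rpc.py. A sketch, assuming the sock_impl_set_options parameter names from SPDK 20.10 (double-check against your tree, since these fields were later split into client/server variants):

```json
{
  "subsystems": [
    {
      "subsystem": "sock",
      "config": [
        {
          "method": "sock_impl_set_options",
          "params": {
            "impl_name": "posix",
            "enable_zerocopy_send": true
          }
        }
      ]
    }
  ]
}
```

Note that zero-copy send (MSG_ZEROCOPY) mainly pays off for large buffers; for small writes the completion-notification overhead can outweigh the copy it saves, which may explain results like the one described.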
Quick announcement about expected SPDK CI downtime. It will mostly affect European timezones.
Wednesday 24th Feb 2021, 8 AM GMT
Scheduled maintenance work on CI host system.
We expect the downtime to last no more than 3 hours.
I've noticed that recently SPDK compilation in the UNH community lab
seems to be failing, and I don't see an obvious reason for the failure.
The logs haven't been too helpful - it appears that there is a symbol
that isn't available when linking.
Job details (for example):
Is it possible to turn on more verbose logging during the compilation of
SPDK? Maybe show the arguments to the compiler for the specific object?
Maybe the SPDK folks can see something obviously wrong?
Before each SPDK release, a hashtag is used to focus on the patches that should make it into that particular release. During the code freeze window, such patches are merged to the latest SPDK and backported to a branch for that release.
Each hashtag was in use for a short period of time, targeting a specific release.
This works quite well for the quarterly releases, since the patches are current and actively worked on by their submitters. Meanwhile, for LTS maintenance releases, such patches had to be identified and backported despite having been submitted weeks or months prior.
To provide better support for SPDK 21.01 LTS, I'd like to propose using the '21.01' hashtag throughout the year. Any patch that resolves a critical issue and should find its way into the LTS can be marked with this hashtag.
Shortly after merging to the latest SPDK, a backport will be submitted to the 'v21.01.x' branch for review.
Benefits would include an always up-to-date 'v21.01.x' maintenance branch, along with quicker identification of required changes and less risk of omissions.
I'd like to encourage submitters and reviewers to use the '21.01' hashtag for any patch fit for the LTS.
Please let me know if you have any questions or suggestions.
I have a question regarding how read-write ordering is ensured in SPDK's Blobstore. As I understand it, Blobstore offers a filesystem-like interface, which I would think guarantees that all writes are seen by subsequent reads, even when those reads are submitted before the write completes.
However, when I went through the code, I found that Blobstore seems to send the requests directly to the bdev layer, which asks the NVMe driver to issue them to the SSD. AFAIK, the NVMe controller doesn't guarantee command ordering (the NVMe spec says that if a Read is submitted for LBA x and there is a Write also submitted for LBA x, there is no guarantee of the order of completion for those commands; the Read may finish first or the Write may finish first).
I wonder how Blobstore ensures that all writes are seen by subsequent reads. If Blobstore doesn't provide such a guarantee, what is the usual way to ensure the ordering?
University of Science and Technology of China
I can't get a new HTTP password for review.spdk.io; it fails with this error:
An error occurred
Error 500 (Server Error): Internal server error
Can anyone help with this issue?
I have an application I built derived from "hello_world" for bdev. It's been working with spdk_top right along. Very nice.
I modified a copy of bdevperf and spdk_top did nothing with it. I finally tried adding the "-r <path>" option that comes out of the framework, and it magically danced to life!
The question is why I don't have to use "-r <path>" with the modified "hello_world" for bdev, but I do have to specify it (specifically -r /var/tmp/spdk.sock) for bdevperf. I don't see any differences in the SPDK application startup. Or a better question: why is the RPC server inhibited in the bdevperf case unless and until the option is given to the application? Can you give me some place to look?
I just went back to "hello_world" for bdev, added a spin-delay before the spdk_app_stop(), and it also responded to spdk_top. (Kinda boring output, but it connected...) No "-r <path>" necessary...