HugePage difference master vs v19.01.x
by Chuck Tuffli
I'm noticing a difference in HugePages_Free between the master branch
and v19.01.x when running the hello_bdev example. Before running
hello_bdev:
$ grep ^Huge /proc/meminfo
HugePages_Total: 256
HugePages_Free: 256
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
In both of the cases below, SPDK is configured via:
$ ./configure --disable-tests
and the application is run:
$ sudo -H ./examples/bdev/hello_world/hello_bdev -c examples/bdev/hello_world/bdev.conf -b Malloc1
$ uname -srvm
Linux 4.15.0-46-generic #49-Ubuntu SMP Wed Feb 6 09:33:07 UTC 2019 x86_64
After running this on master:
$ grep ^Huge /proc/meminfo
HugePages_Total: 256
HugePages_Free: 255
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
but when running this on the v19.01.x branch:
$ grep ^Huge /proc/meminfo
HugePages_Total: 256
HugePages_Free: 79
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Does this indicate a leak of HugePages on v19?
--chuck
1 year, 11 months
vpp 18.10 with spdk
by Jonathan Richardson
Hi,
Just wondering if there is any plan to support a newer version of VPP. I
want to use VPP 18.10, so I will upgrade the SPDK code. It looks like the VPP
version tags are similar to SPDK's. Is there any reason why the VPP support
isn't newer? If not, I'll upstream my changes when done. Do you have any
guidelines on backwards compatibility, should the version used by SPDK be
made configurable?
Thanks,
Jon
1 year, 11 months
Trello Scrub Time
by Luse, Paul E
Hi All,
Trello has been a great collaboration tool for the community, but like anything else written in English, it doesn't always keep up with the pace of development. It's really to everyone's benefit to keep a constant eye on it, but I know that's not easy, so I'm "recruiting" some volunteers below :)
Please take 30-60 minutes to go through the Trello board(s) next to your name and scrub them - mainly delete, or move to Done, anything that is no longer relevant. If you're unsure, ask on this email list or IRC. If you can't take this on over, say, the next two weeks, please just shoot me a note privately. It's no big deal; I just want to make sure we don't have any gaps.
NOTE: if an entire board is no longer relevant (or if a few can be combined into one), please do so, or ask if you're unsure; these boards have exploded over the last year, which is both good and bad :)
Seth: NVMe-oF backlog, Vagrant setup
Paul: bdev backlog, framework backlog
Both Tomeks: OCF, FTL, VPP integration
Karol: continuous integration backlog, JSON configuration backlog
Pawel: orchestration and tooling backlog, SPDK validation
Darek: vhost/virtio backlog, logical volumes
Changpeng: NVMe backlog
Ziye: blobstore backlog
Shuhei: iSCSI backlog
Gang: misc backlog, Opal support
Lance: packaging
Ben: things to do
Jim: VMD driver
Thanks!!!
Paul
1 year, 11 months
Trello SQ Quality Items
by Luse, Paul E
I went ahead and put names and proposed release dates on each of the SQ Quality items that are up on Trello. Please take a minute to review them and let me know if you need more info or would rather not pick up the item I saddled you with. Karol, there are quite a few on there with your name; feel free to transfer some to other folks over there to balance things out a bit.
I'll check on progress in the community meetings and, as each item is confirmed, will move it to the roadmap accordingly. There's one item Jim suggested that I still need to add; I'll get it up there soon, but it will be a dozen or so cards - one per module, with the task being to review test coverage, both unit and system, and identify any gaps/make suggestions for improvement. That's all it will be: investigation. Then we'll take the results and divvy them up over the rest of the year.
Thanks Everyone!!
Paul
1 year, 11 months
nvme format
by Nabarro, Tom
I'm looking for a mechanism to wipe a device through SPDK that is roughly equivalent to nvme-cli's "nvme format". Will spdk_nvme_ctrlr_format() suffice?
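For context, something along the lines of the sketch below is what I have in mind (controller attach and error handling omitted; function name and the choice of namespace 1 are just for the example; ses = 1 would request a user-data erase, similar to nvme format -s 1):

#include "spdk/nvme.h"

/* Sketch: format namespace 1 with a user-data erase (roughly `nvme format -s 1`).
 * Assumes `ctrlr` was attached elsewhere, e.g. via spdk_nvme_probe().
 * This destroys all data in the namespace. */
static int
wipe_namespace(struct spdk_nvme_ctrlr *ctrlr)
{
        struct spdk_nvme_format fmt = {0};

        fmt.lbaf = 0;   /* LBA format index; 0 here purely for illustration */
        fmt.ses = 1;    /* Secure Erase Settings: 1 = user data erase, 2 = crypto erase */

        return spdk_nvme_ctrlr_format(ctrlr, 1 /* nsid */, &fmt);
}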
Tom
1 year, 11 months
Re: [SPDK] io size limitation on spdk_blob_io_write()
by Niu, Yawei
Thanks for the reply, Maciek.
Yes, our cluster size is 1 GB by default, and we have our own finer-grained block allocator inside the blob (we need a 4k block-size allocator, which I'm afraid isn't feasible for the blob allocator), so using a small cluster size isn't an option for us.
Would you consider improving the blob I/O interface to split I/O according to the backend bdev limitations (I think it's similar to the cross-cluster-boundary split)? Otherwise, we have to be aware of the bdev limitations underneath the blobstore, which doesn't seem quite clean to me. What do you think?
Thanks
-Niu
On 21/03/2019, 3:41 PM, "SPDK on behalf of Szwed, Maciej" <spdk-bounces(a)lists.01.org on behalf of maciej.szwed(a)intel.com> wrote:
Hi Niu,
We do split I/O according to backend bdev limitations, but only if you create a bdev and use the spdk_bdev_read/write/... calls. For the blob interface there is unfortunately no such mechanism.
I'm guessing that you are using a cluster size for blobs of at least 128 MB. You can try setting the cluster size to a value lower than the NVMe bdev limit; the blobstore layer will then always split I/O at cluster boundaries.
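As a rough illustration (not tested; the function name and the 1 MB value are just examples), lowering the cluster size at blobstore creation time would look something like:

#include "spdk/blob.h"

/* Sketch: create a blobstore with a 1 MB cluster size so blob I/O gets split
 * at 1 MB boundaries, staying below the NVMe bdev request limit.
 * `bs_dev` is assumed to come from elsewhere, e.g. spdk_bdev_create_bs_dev(). */
static void
init_bs_with_small_clusters(struct spdk_bs_dev *bs_dev,
                            spdk_bs_op_with_handle_complete cb_fn, void *cb_arg)
{
        struct spdk_bs_opts opts;

        spdk_bs_opts_init(&opts);
        opts.cluster_sz = 1 * 1024 * 1024;      /* example value, in bytes */
        spdk_bs_init(bs_dev, &opts, cb_fn, cb_arg);
}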
Regards,
Maciek
1 year, 11 months
io size limitation on spdk_blob_io_write()
by Niu, Yawei
Hi,
We discovered that spdk_blob_io_write() will fail with a large I/O size (128 MB) over an NVMe bdev. I checked the SPDK code a bit, and it seems the failure reason is that the size exceeds the NVMe bdev's I/O request size limit (which depends on the I/O queue depth and max transfer size).
We can work around the problem by splitting the I/O into several spdk_blob_io_write() calls, but I was wondering whether blobstore should hide these bdev details/limitations from the blobstore caller and split the I/O according to the backend bdev limitations (just like what we do for I/O crossing cluster boundaries), so that the blobstore caller doesn't need to care what type of bdev is underneath. Any thoughts?
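To make the workaround concrete, I'm thinking of something roughly like the sketch below, where the caller drives one chunk at a time from the completion callback (the names, the chunk limit, and the io-unit bookkeeping are only placeholders; the real limit would have to come from the underlying bdev's queue depth and max transfer size):

#include <stdlib.h>
#include "spdk/blob.h"
#include "spdk/util.h"

#define MAX_CHUNK_IO_UNITS 4096         /* placeholder per-request limit, in blob io units */

/* Sketch: split one large write into chunks and chain them from the callback.
 * The context is heap-allocated and filled in by the caller. */
struct split_write_ctx {
        struct spdk_blob *blob;
        struct spdk_io_channel *ch;
        uint8_t *payload;
        uint64_t offset;                /* current offset, in io units */
        uint64_t remaining;             /* remaining length, in io units */
        uint64_t io_unit_sz;            /* io unit size in bytes, from the blobstore */
        spdk_blob_op_complete user_cb;
        void *user_cb_arg;
};

static void split_write_chunk(struct split_write_ctx *ctx);

static void
split_write_done(void *cb_arg, int bserrno)
{
        struct split_write_ctx *ctx = cb_arg;
        uint64_t len = spdk_min(ctx->remaining, MAX_CHUNK_IO_UNITS);

        if (bserrno != 0 || ctx->remaining == len) {
                /* Error, or the last chunk just completed: report to the caller. */
                ctx->user_cb(ctx->user_cb_arg, bserrno);
                free(ctx);
                return;
        }
        ctx->offset += len;
        ctx->remaining -= len;
        ctx->payload += len * ctx->io_unit_sz;
        split_write_chunk(ctx);
}

static void
split_write_chunk(struct split_write_ctx *ctx)
{
        uint64_t len = spdk_min(ctx->remaining, MAX_CHUNK_IO_UNITS);

        spdk_blob_io_write(ctx->blob, ctx->ch, ctx->payload, ctx->offset, len,
                           split_write_done, ctx);
}

The caller would fill in the context (with offset/length converted to io units) and kick things off with split_write_chunk().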
Thanks
-Niu
1 year, 11 months
bdev's examine_config/disk with async_init
by Sztyber, Konrad
Hi,
I'm trying to add examine_disk to bdev_ftl, which is initialized asynchronously, in order to use another bdev as a caching device for the FTL. Since that bdev might not yet be initialized when bdev_ftl is starting up, I've added examine_disk to defer its initialization. However, I'm having trouble handling the situation where this other bdev doesn't exist. In that case examine_disk is never called with the expected bdev, and the whole bdev layer's initialization hangs, because bdev_ftl never calls spdk_bdev_module_init_done, since it hasn't finished initializing all of the bdevs mentioned in the config.
The only idea I've come up with is to add a timer inside bdev_ftl, wait until it expires, and then discard all the bdevs that are still uninitialized. Does anybody know of a better solution?
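To be clear about what I mean by the timer, a rough sketch (names and the 10-second value are made up, not actual bdev_ftl code) would be:

#include "spdk/thread.h"
#include "spdk/bdev_module.h"

#define FTL_INIT_TIMEOUT_US (10ULL * 1000 * 1000)       /* made-up 10 s timeout */

static struct spdk_poller *g_init_timeout_poller;
extern struct spdk_bdev_module g_ftl_module;            /* module descriptor, assumed defined elsewhere */

/* Fires once after the timeout: give up on cache bdevs that never appeared
 * and unblock the bdev layer by completing this module's async init. */
static int
ftl_init_timeout_cb(void *ctx)
{
        spdk_poller_unregister(&g_init_timeout_poller);
        /* ... discard/clean up any FTL bdevs still waiting for their cache bdev ... */
        spdk_bdev_module_init_done(&g_ftl_module);
        return 1;
}

static void
ftl_arm_init_timeout(void)
{
        g_init_timeout_poller = spdk_poller_register(ftl_init_timeout_cb, NULL,
                                                     FTL_INIT_TIMEOUT_US);
}

ftl_arm_init_timeout() would be called from the module's init path, and the poller torn down early if all the expected bdevs do show up.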
Thanks,
Konrad
1 year, 11 months
SPDK 18.10.2 Status
by Stojaczyk, Dariusz
Hi,
We are preparing the SPDK 18.10.2 release, which is meant to provide support for DPDK 18.11.1. SPDK 18.10.1 is currently only capable of running with a custom version of DPDK 18.08 (the upstream version plus a few bugfixes on top from our fork). DPDK had plans to release DPDK 18.08.1 containing all those bugfixes, but that is likely to be cancelled [1]. Only DPDK 18.11.1+ will contain the required bugfixes, which is why we're updating SPDK 18.10.
I applied around 30 patches on top of SPDK 18.10.x to make it work with DPDK 18.11. Ten of those patches are required to make SPDK 18.10.x work on our CI, as various changes were made after the SPDK 18.10 release. Those changes mostly modify setup.sh to ignore OCSSD NVMe devices and prevent them from being used in our regular NVMe tests.
The other 20 patches specifically enable DPDK 18.11+ and then finally update the dpdk submodule.
The top of the entire series is here and it's passing the tests: https://review.gerrithub.io/c/spdk/spdk/+/447851, but I still need to reorder some patches and modify their commit messages to make it clear they were cherry-picked.
For now only those 10 initial patches are fully ready: https://review.gerrithub.io/c/spdk/spdk/+/447952
Obviously only the last patch in this series passes the tests; the others will need to be merged without a +1 from CI.
If you know of any additional bugfixes that should be backported to SPDK 18.10.2, please add an "18.10.2" hashtag on corresponding gerrithub patches from master.
[1] https://mails.dpdk.org/archives/dev/2019-March/126394.html
D.
1 year, 11 months