Hi,


On our system we make extensive use of hugepages, so only a fraction of them is available for SPDK, and the memory it allocates may be fragmented at the hugepage level.

Initially we used "--socket-mem=2048,0", but init time was very long, probably because DPDK builds its hugepage info from all the hugepages on the system.
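
For context, the option reaches DPDK through rte_eal_init(); a minimal sketch of that call (the argv below is only an illustration, not our actual command line):

#include <stdio.h>
#include <rte_eal.h>

int main(int argc, char **argv)
{
	/* Illustrative EAL arguments: one core, 2048 MB on socket 0 only. */
	char *eal_argv[] = {
		argv[0],
		"-c", "0x1",
		"--socket-mem=2048,0",
	};
	int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

	/*
	 * rte_eal_init() walks sysfs/hugetlbfs and builds its hugepage
	 * tables; with many non-DPDK hugepages on the system this step
	 * is what dominates startup time.
	 */
	if (rte_eal_init(eal_argc, eal_argv) < 0) {
		fprintf(stderr, "rte_eal_init failed\n");
		return 1;
	}
	return 0;
}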


Currently I am working around the long init time with this patch to DPDK:

diff --git a/lib/librte_eal/linuxapp/eal/eal_hugepage_info.c b/lib/librte_eal/linuxapp/eal/eal_hugepage_info.c
index 18858e2..f7e8199 100644
--- a/lib/librte_eal/linuxapp/eal/eal_hugepage_info.c
+++ b/lib/librte_eal/linuxapp/eal/eal_hugepage_info.c
@@ -97,6 +97,10 @@ get_num_hugepages(const char *subdir)
        if (num_pages > UINT32_MAX)
                num_pages = UINT32_MAX;
 
+#define MAX_NUM_HUGEPAGES (2048)
+        if (num_pages > MAX_NUM_HUGEPAGES)
+                num_pages = MAX_NUM_HUGEPAGES;
+
        return num_pages;
 }


To deal with the fragmentation, I run a small program that initializes DPDK before the rest of the hugepage consumers start allocating their pages; a rough sketch of the idea follows.
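
The sketch is simplified: what happens after init (keeping the reservation alive and handing it over to the real application) is illustrative only.

#include <unistd.h>
#include <rte_eal.h>

int main(int argc, char **argv)
{
	/* Same illustrative arguments as above: 2048 MB on socket 0. */
	char *eal_argv[] = { argv[0], "-c", "0x1", "--socket-mem=2048,0" };
	int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

	/* Grab (hopefully contiguous) hugepages before anyone else does. */
	if (rte_eal_init(eal_argc, eal_argv) < 0)
		return 1;

	/*
	 * Illustrative only: keep the process (and its hugepage mappings)
	 * around; the actual handoff to the main application is not shown.
	 */
	pause();
	return 0;
}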

Is there a better way to limit the number of hugepages that DPDK works on, and to preallocate a contiguous range of hugepages?

Shahar