Raj

What is your ReactorMask set to?

Here is an example of the settings I have used on my system to assign work items to specific cores.

1)     Set ReactorMask 0xF000000 in the conf file so that SPDK uses cores 24, 25, 26 and 27.

The ReactorMask restricts work items to running only on the cores included in the mask.

2)     Put the acceptor on core 24 in the conf file: AcceptorCore 24

3)     Put the subsystems on cores 25 and 26:

[Subsystem1]
  NQN nqn.2016-06.io.spdk:cnode1
  Core 25
  Mode Direct
  Listen RDMA 192.168.100.8:4420
  Host nqn.2016-06.io.spdk:init
  NVMe 0000:81:00.0

 

# Multiple subsystems are allowed.

[Subsystem2]
  NQN nqn.2016-06.io.spdk:cnode2
  Core 26
  Mode Direct
  Listen RDMA 192.168.100.9:4420
  Host nqn.2016-06.io.spdk:init
  NVMe 0000:86:00.0

 

4)     Put the master lcore on core 27 using -p on the command line: ./nvmf_tgt -c nvmf.conf.coreaffinity -p 27
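Putting steps 1-4 together, the relevant pieces of my conf file and command line look roughly like the sketch below. Treat it as a sketch rather than a drop-in file: the section placement of ReactorMask and AcceptorCore can differ between SPDK versions, so check the nvmf.conf.in shipped with your tree, and substitute your own core numbers.

# 0xF000000 = (1 << 24) | (1 << 25) | (1 << 26) | (1 << 27), i.e. cores 24-27.
# I am assuming ReactorMask belongs in the [Global] section here; verify
# against the example conf for your SPDK version.
[Global]
  ReactorMask 0xF000000

# Assuming AcceptorCore is read from the [Nvmf] section in your version;
# it pins the connection acceptor poller to one of the reactor cores.
[Nvmf]
  AcceptorCore 24

# One subsystem per core, as in the full [Subsystem1]/[Subsystem2] entries above.
[Subsystem1]
  Core 25
  ...

[Subsystem2]
  Core 26
  ...

Then start the target with the master lcore pinned to the remaining core:

  ./nvmf_tgt -c nvmf.conf.coreaffinity -p 27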

 

When the nvmf target starts, I get the following output:

Starting Intel(R) DPDK initialization ...
[ DPDK EAL parameters: nvmf -c f000000 -n 4 -m 2048 --master-lcore=27 --file-prefix=rte0 --proc-type=auto ]
EAL: Detected 96 lcore(s)
EAL: Auto-detected process type: PRIMARY
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
done.
Occupied cpu core mask is 0xf000000
Occupied cpu socket mask is 0x2
Ioat Copy Engine Offload Enabled
Total cores available: 4
Reactor started on core 24 on socket 1
Reactor started on core 25 on socket 1
Reactor started on core 27 on socket 1
Reactor started on core 26 on socket 1
*** RDMA Transport Init ***
*** RDMA Transport Init ***
allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 27 on socket 1
allocated subsystem nqn.2016-06.io.spdk:cnode1 on lcore 25 on socket 1
*** NVMf Target Listening on 192.168.100.8 port 4420 ***
EAL: PCI device 0000:81:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 SPDK NVMe
EAL: PCI device 0000:86:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 SPDK NVMe
Attaching NVMe device 0x7f9256c38b80 at 0:81:0.0 to subsystem nqn.2016-06.io.spdk:cnode1
allocated subsystem nqn.2016-06.io.spdk:cnode2 on lcore 26 on socket 1
*** NVMf Target Listening on 192.168.100.9 port 4420 ***
EAL: PCI device 0000:81:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 SPDK NVMe
EAL: PCI device 0000:86:00.0 on NUMA socket 1
EAL:   probe driver: 8086:953 SPDK NVMe
Attaching NVMe device 0x7f9256c17880 at 0:86:0.0 to subsystem nqn.2016-06.io.spdk:cnode2
Acceptor running on core 24 on socket 1
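If you want to double-check where everything ended up once the target is running, one simple way (plain Linux tooling, nothing SPDK-specific; substitute your own process name) is to list the target's threads along with the processor each one last ran on:

  # psr is the CPU each thread last ran on; with the settings above the
  # reactor threads should show up on cores 24-27.
  ps -Lo tid,psr,comm -p $(pidof nvmf_tgt)

taskset -cp <tid> on an individual thread id will additionally show the allowed-CPU list, if you want to confirm the affinity mask rather than just the current placement.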

 

Hope this helps.

 

From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Raj (Rajinikanth) Pandurangan
Sent: Thursday, November 17, 2016 12:52 PM
To: Storage Performance Development Kit <spdk@lists.01.org>
Subject: [SPDK] nvmf.conf: AcceptorCore Vs Core

 

Hello,

 

I have a server with two NUMA nodes. A NIC is configured on each node.

 

In the nvmf.conf file, I would like to assign the right lcore based on the node configuration.

 

Here is a snippet of nvmf.conf:

..

[Subsystem1]
NQN nqn.2016-06.io.spdk:cnode1
Core 0
Mode Direct
Listen RDMA 100.10.10.180:4420
NVMe 0000:06:00.0

 

 

[Subsystem2]
NQN nqn.2016-06.io.spdk:cnode2
Core 1
Mode Direct
Listen RDMA 101.10.10.180:4420
NVMe 0000:86:00.0

 

 

But I noticed that it always uses core 0 for both subsystems, no matter what value is assigned to "Core" in the [Subsystem] sections.

 

The following warning confirms that it uses lcore 0:

 

allocated subsystem nqn.2016-06.io.spdk:cnode2 on lcore 0

“Subsystem nqn.2016-06.io.spdk:cnode2 is configured to run on a CPU core belonging to a different NUMA node than the associated NVMe device. This may result in reduced performance.”

 

I also get a segmentation fault if I try to set any non-zero value for "AcceptorCore".

 

It would be nice if any of you could give more insight into "AcceptorCore" and "Core <lcore>".

 

Thanks,