Raj,

 

At startup, SPDK checks whether the RNIC device, the NVMe device, and the core a subsystem runs on are all on the same NUMA node, and prints a warning like the one you saw in your environment when they are not.

 

From your information:

NVMe 0000:06:00.0: Socket 0

NVMe 0000:07:00.0: Socket 0

NVMe 0000:88:00.0: Socket 1

NVMe 0000:89:00.0: Socket 1

 

So can you check which NUMA node your RNIC devices are on?
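 

For example, a device's NUMA node can be read directly from sysfs (the RDMA device name below is only an example; substitute your own):

  cat /sys/class/infiniband/mlx5_0/device/numa_node    # RDMA device name is just an example
  cat /sys/bus/pci/devices/0000:06:00.0/numa_node      # same check by PCI address, e.g. one of your NVMe devices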

 

From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Raj (Rajinikanth) Pandurangan
Sent: Friday, November 18, 2016 7:12 AM
To: Storage Performance Development Kit <spdk@lists.01.org>
Subject: Re: [SPDK] nvmf.conf: AcceptorCore Vs Core

 

Thanks for the details, John.  Though it helps, I think I'm still missing something.

 

Here is the latest output from nvmf_tgt.

 

/rajp/spdk# app/nvmf_tgt/nvmf_tgt -c etc/spdk/nvmf.conf -p 15

Starting Intel(R) DPDK initialization ...

[ DPDK EAL parameters: nvmf -c f000 -n 4 -m 2048 --master-lcore=15 --file-prefix=rte0 --proc-type=auto ]

EAL: Detected 48 lcore(s)

EAL: Auto-detected process type: PRIMARY

EAL: No free hugepages reported in hugepages-1048576kB

EAL: Probing VFIO support...

done.

Occupied cpu core mask is 0xf000

Occupied cpu socket mask is 0x3

Ioat Copy Engine Offload Enabled

Total cores available: 4

Reactor started on core 0xc

Reactor started on core 0xd

Reactor started on core 0xe

Reactor started on core 0xf

*** RDMA Transport Init ***

allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 15

allocated subsystem nqn.2016-06.io.spdk:cnode1 on lcore 13

*** NVMf Target Listening on 101.10.10.180 port 4420 ***

EAL: PCI device 0000:06:00.0 on NUMA socket 0

EAL:   probe driver: 144d:a821 SPDK NVMe

EAL: PCI device 0000:07:00.0 on NUMA socket 0

EAL:   probe driver: 144d:a821 SPDK NVMe

EAL: PCI device 0000:88:00.0 on NUMA socket 1

EAL:   probe driver: 144d:a821 SPDK NVMe

EAL: PCI device 0000:89:00.0 on NUMA socket 1

EAL:   probe driver: 144d:a821 SPDK NVMe

Attaching NVMe device 0x7ff98d65b6c0 at 0:6:0.0 to subsystem 0x1acf750

allocated subsystem nqn.2016-06.io.spdk:cnode2 on lcore 14

Subsystem nqn.2016-06.io.spdk:cnode2 is configured to run on a CPU core belonging to a different NUMA node than the associated NIC. This may result in reduced performance.

*** NVMf Target Listening on 100.10.10.180 port 4420 ***

EAL: PCI device 0000:06:00.0 on NUMA socket 0

EAL:   probe driver: 144d:a821 SPDK NVMe

EAL: PCI device 0000:07:00.0 on NUMA socket 0

EAL:   probe driver: 144d:a821 SPDK NVMe

EAL: PCI device 0000:88:00.0 on NUMA socket 1

EAL:   probe driver: 144d:a821 SPDK NVMe

EAL: PCI device 0000:89:00.0 on NUMA socket 1

EAL:   probe driver: 144d:a821 SPDK NVMe

Attaching NVMe device 0x7ff98d637700 at 0:88:0.0 to subsystem 0x1ad5900

Subsystem nqn.2016-06.io.spdk:cnode2 is configured to run on a CPU core belonging to a different NUMA node than the associated NVMe device. This may result in reduced performance.

Acceptor running on core 12

 

 

Here is the conf file:

 

# NVMf Target Configuration File

#

# Please write all parameters using ASCII.

# The parameter must be quoted if it includes whitespace.

#

# Configuration syntax:

# Leading whitespace is ignored.

# Lines starting with '#' are comments.

# Lines ending with '\' are concatenated with the next line.

# Bracketed ([]) names define sections

 

[Global]

  # Users can restrict work items to only run on certain cores by

  #  specifying a ReactorMask.  The default ReactorMask is defined by the

  #  -c option in the 'ealargs' setting at the beginning of nvmf_tgt.c.

  #ReactorMask 0x00FF

  ReactorMask 0x00F000

 

  # Tracepoint group mask for spdk trace buffers

  # Default: 0x0 (all tracepoint groups disabled)

  # Set to 0xFFFFFFFFFFFFFFFF to enable all tracepoint groups.

  #TpointGroupMask 0x0

 

  # syslog facility

  LogFacility "local7"

 

[Rpc]

  # Defines whether to enable configuration via RPC.

  # Default is disabled.  Note that the RPC interface is not

  # authenticated, so users should be careful about enabling

  # RPC in non-trusted environments.

  Enable No

 

# Users may change this section to create a different number or size of

# malloc LUNs.

# This will generate 8 LUNs with a malloc-allocated backend.

# Each LUN will be size 64MB and these will be named

# Malloc0 through Malloc7.  Not all LUNs defined here are necessarily

#  used below.

[Malloc]

  NumberOfLuns 8

  LunSizeInMB 64

 

# Define NVMf protocol global options

[Nvmf]

  # Set the maximum number of submission and completion queues per session.

  # Setting this to '8', for example, allows for 8 submission and 8 completion queues

  # per session.

  MaxQueuesPerSession 128

 

  # Set the maximum number of outstanding I/O per queue.

  #MaxQueueDepth 128

 

  # Set the maximum in-capsule data size. Must be a multiple of 16.

  #InCapsuleDataSize 4096

 

  # Set the maximum I/O size. Must be a multiple of 4096.

  #MaxIOSize 131072

 

  # Set the global acceptor lcore ID, lcores are numbered starting at 0.

  AcceptorCore 12

 

  # Set how often the acceptor polls for incoming connections. The acceptor is also

  # responsible for polling existing connections that have gone idle. 0 means continuously

  # poll. Units in microseconds.

  #AcceptorPollRate  1000

  AcceptorPollRate  0

 

# Define an NVMf Subsystem.

# Direct controller

[Subsystem1]

  NQN nqn.2016-06.io.spdk:cnode1

  Core 13

  Mode Direct

  Listen RDMA 101.10.10.180:4420

#  Host nqn.2016-06.io.spdk:init

  NVMe 0000:06:00.0

 

[Subsystem2]

  NQN nqn.2016-06.io.spdk:cnode2

  Core 14

  Mode Direct

  Listen RDMA 100.10.10.180:4420

#  Host nqn.2016-06.io.spdk:init

  NVMe 0000:88:00.0

 

 

My NUMA nodes and cores:

NUMA node0 CPU(s):     0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46

NUMA node1 CPU(s):     1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47

 

Thanks,

-Rajinikanth

From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Kariuki, John K
Sent: Thursday, November 17, 2016 1:34 PM
To: Storage Performance Development Kit
Subject: Re: [SPDK] nvmf.conf: AcceptorCore Vs Core

 

Raj

What is your Reactor Mask?

Here is an example of settings that I have used in my system to successfully assign work items to different cores.

1)     Set the reactor mask to ReactorMask 0xF000000 in the conf file to use cores 24, 25, 26 and 27 for SPDK.

The ReactorMask restricts work items to only run on certain cores (a quick way to double-check which cores a given mask selects is shown after step 4).

2)     Put the acceptor on core 24 in the conf file: AcceptorCore 24

3)     Put my subsystems on cores 25 and 26:

[Subsystem1]

  NQN nqn.2016-06.io.spdk:cnode1

  Core 25

  Mode Direct

  Listen RDMA 192.168.100.8:4420

  Host nqn.2016-06.io.spdk:init

  NVMe 0000:81:00.0

 

# Multiple subsystems are allowed.

[Subsystem2]

  NQN nqn.2016-06.io.spdk:cnode2

  Core 26

  Mode Direct

  Listen RDMA 192.168.100.9:4420

  Host nqn.2016-06.io.spdk:init

  NVMe 0000:86:00.0

 

4)     Put the master on core 27 using -p on the command line: ./nvmf_tgt -c nvmf.conf.coreaffinity -p 27
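 

As a quick sanity check (not SPDK-specific, just plain bash), you can expand a hex core mask to see exactly which cores it selects; for 0xF000000 this prints cores 24 through 27:

  mask=0xF000000; for i in $(seq 0 95); do [ $(( (mask >> i) & 1 )) -eq 1 ] && echo "core $i"; done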

 

When the nvmf target starts, I get the following output:

Starting Intel(R) DPDK initialization ...

[ DPDK EAL parameters: nvmf -c f000000 -n 4 -m 2048 --master-lcore=27 --file-prefix=rte0 --proc-type=auto ]

EAL: Detected 96 lcore(s)

EAL: Auto-detected process type: PRIMARY

EAL: No free hugepages reported in hugepages-1048576kB

EAL: Probing VFIO support...

done.

Occupied cpu core mask is 0xf000000

Occupied cpu socket mask is 0x2

Ioat Copy Engine Offload Enabled

Total cores available: 4

Reactor started on core 24 on socket 1

Reactor started on core 25 on socket 1

Reactor started on core 27 on socket 1

Reactor started on core 26 on socket 1

*** RDMA Transport Init ***

*** RDMA Transport Init ***

allocated subsystem nqn.2014-08.org.nvmexpress.discovery on lcore 27 on socket 1

allocated subsystem nqn.2016-06.io.spdk:cnode1 on lcore 25 on socket 1

*** NVMf Target Listening on 192.168.100.8 port 4420 ***

EAL: PCI device 0000:81:00.0 on NUMA socket 1

EAL:   probe driver: 8086:953 SPDK NVMe

EAL: PCI device 0000:86:00.0 on NUMA socket 1

EAL:   probe driver: 8086:953 SPDK NVMe

Attaching NVMe device 0x7f9256c38b80 at 0:81:0.0 to subsystem nqn.2016-06.io.spdk:cnode1

allocated subsystem nqn.2016-06.io.spdk:cnode2 on lcore 26 on socket 1

*** NVMf Target Listening on 192.168.100.9 port 4420 ***

EAL: PCI device 0000:81:00.0 on NUMA socket 1

EAL:   probe driver: 8086:953 SPDK NVMe

EAL: PCI device 0000:86:00.0 on NUMA socket 1

EAL:   probe driver: 8086:953 SPDK NVMe

Attaching NVMe device 0x7f9256c17880 at 0:86:0.0 to subsystem nqn.2016-06.io.spdk:cnode2

Acceptor running on core 24 on socket 1

 

Hope this helps.

 

From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Raj (Rajinikanth) Pandurangan
Sent: Thursday, November 17, 2016 12:52 PM
To: Storage Performance Development Kit <spdk@lists.01.org>
Subject: [SPDK] nvmf.conf: AcceptorCore Vs Core

 

Hello,

 

I have a server with two NUMA nodes, with a NIC configured on each node.

 

In the nvmf.conf file, I would like to assign the right lcore based on the node configuration.

 

Here is a snippet of nvmf.conf:

..

[Subsystem1]

NQN nqn.2016-06.io.spdk:cnode1

Core 0

Mode Direct

Listen RDMA 100.10.10.180:4420

NVMe 0000:06:00.0

 

 

[Subsystem2]

NQN nqn.2016-06.io.spdk:cnode2

Core 1

Mode Direct

Listen RDMA 101.10.10.180:4420

NVMe 0000:86:00.0

 

 

But I noticed that it always uses “core 0” for both subsystems, no matter what value is assigned to “Core” under the “Subsystem” section.

 

The following warning confirms that it uses lcore 0:

 

allocated subsystem nqn.2016-06.io.spdk:cnode2 on lcore 0

“Subsystem nqn.2016-06.io.spdk:cnode2 is configured to run on a CPU core belonging to a different NUMA node than the associated NVMe device. This may result in reduced performance.”

 

I also get a segmentation fault if I try to set any non-zero value for “AcceptorCore”.

 

It would be nice if any of you could give more insight into “AcceptorCore” and “Core <lcore>”.

 

Thanks,