Hi Tomasz,
Thank you very much for your response.
Isaac is currently on vacation.
You said:
One host is using SPDK iSCSI target with VPP (or posix for comparison)
and the other SPDK iSCSI initiator.
Is there an SPDK iSCSI initiator? (In our test, we
set up the iSCSI initiator without SPDK installed on the initiator machine.)
You said:
Since this is evaluation of iSCSI target Session API, please keep in
mind that null bdevs were used to eliminate other types of bdevs from affecting the
results.
Regarding the null bdevs that you mentioned, did you mean Malloc
bdev devices?
The following is how we set up the iSCSI target with VPP.
Please check whether our procedure looks fine.
On iSCSI target (CentOS 7.4):
============
VPP was configured through /etc/vpp/startup.conf as:
cpu {
...
main-core 1
...
corelist-workers 2-5
...
workers 4
}
dpdk {
# One 10G NIC with its PCI address and parameters:
dev 0000:82:00.0 {
num-rx-queues 4
num-rx-desc 1024
}
}
Then, we start VPP and set the interface as below:
ifdown enp130s0f0
/root/spdk_vpp_pr/spdk/dpdk/usertools/dpdk-devbind.py --bind=uio_pci_generic 82:00.0
systemctl start vpp
vppctl set interface state TenGigabitEthernet82/0/0 up
vppctl set interface ip address TenGigabitEthernet82/0/0 192.168.2.10/24
Then we start the iSCSI target and construct 4 malloc block devices, each 256MB with a 512B
sector size:
/root/spdk_vpp_pr/spdk/app/iscsi_tgt/iscsi_tgt -m 0x01 -c /usr/local/etc/spdk/iscsi.conf
python /root/spdk_vpp_pr/spdk/scripts/rpc.py add_portal_group 1 192.168.2.10:3260
python /root/spdk_vpp_pr/spdk/scripts/rpc.py add_initiator_group 2 ANY 192.168.2.50/24
python /root/spdk_vpp_pr/spdk/scripts/rpc.py construct_malloc_bdev -b Malloc0 256 512
python /root/spdk_vpp_pr/spdk/scripts/rpc.py construct_malloc_bdev -b Malloc1 256 512
python /root/spdk_vpp_pr/spdk/scripts/rpc.py construct_malloc_bdev -b Malloc2 256 512
python /root/spdk_vpp_pr/spdk/scripts/rpc.py construct_malloc_bdev -b Malloc3 256 512
python /root/spdk_vpp_pr/spdk/scripts/rpc.py construct_target_node disk1 "Data Disk1" "Malloc0:0 Malloc1:1 Malloc2:2 Malloc3:3" 1:2 64 -d
On iSCSI initiator (CentOS 7.4):
iscsiadm -m discovery -t sendtargets -p 192.168.2.10
iscsiadm -m node --login (after login, /dev/sdd - /dev/sdg are added)
Then, we run fio through the following job file, fio_vpp_randread.txt, as:
[global]
ioengine=libaio
direct=1
ramp_time=15
runtime=60
iodepth=32
randrepeat=0
bs=4K
group_reporting
time_based
[job1]
rw=randread
filename=/dev/sdd
name=raw-random-read
[job2]
rw=randread
filename=/dev/sde
name=raw-random-read
[job3]
rw=randread
filename=/dev/sdf
name=raw-random-read
[job4]
rw=randread
filename=/dev/sdg
name=raw-random-read
The fio job report:
[root@gluster3 ~]# fio fio_vpp_randread.txt
raw-random-read: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B,
ioengine=libaio, iodepth=32
raw-random-read: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B,
ioengine=libaio, iodepth=32
raw-random-read: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B,
ioengine=libaio, iodepth=32
raw-random-read: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B,
ioengine=libaio, iodepth=32
fio-3.1
Starting 4 processes
Jobs: 4 (f=4): [r(4)][100.0%][r=338MiB/s,w=0KiB/s][r=86.6k,w=0 IOPS][eta 00m:00s]
raw-random-read: (groupid=0, jobs=4): err= 0: pid=5355: Wed Sep 5 12:20:11 2018
read: IOPS=89.2k, BW=348MiB/s (365MB/s)(20.4GiB/60002msec)
slat (nsec): min=1543, max=1054.4k, avg=9189.80, stdev=11823.44
clat (usec): min=304, max=8993, avg=1422.99, stdev=526.70
lat (usec): min=318, max=9000, avg=1432.50, stdev=525.91
clat percentiles (usec):
| 1.00th=[ 652], 5.00th=[ 799], 10.00th=[ 898], 20.00th=[ 1004],
| 30.00th=[ 1106], 40.00th=[ 1205], 50.00th=[ 1303], 60.00th=[ 1434],
| 70.00th=[ 1582], 80.00th=[ 1778], 90.00th=[ 2114], 95.00th=[ 2409],
| 99.00th=[ 3163], 99.50th=[ 3556], 99.90th=[ 4490], 99.95th=[ 4883],
| 99.99th=[ 5735]
bw ( KiB/s): min=68987, max=134923, per=25.08%, avg=89483.28, stdev=14270.72,
samples=480
iops : min=17246, max=33730, avg=22370.52, stdev=3567.67, samples=480
lat (usec) : 500=0.02%, 750=3.37%, 1000=15.86%
lat (msec) : 2=68.19%, 4=12.33%, 10=0.22%
cpu : usr=6.75%, sys=20.85%, ctx=2226569, majf=0, minf=749
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=127.2%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued rwt: total=5352130,0,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
READ: bw=348MiB/s (365MB/s), 348MiB/s-348MiB/s (365MB/s-365MB/s), io=20.4GiB (21.9GB),
run=60002-60002msec
Disk stats (read/write):
sdd: ios=1732347/0, merge=19057/0, ticks=2248152/0, in_queue=2247789, util=99.91%
sde: ios=1558604/0, merge=18701/0, ticks=2314201/0, in_queue=2314047, util=99.92%
sdf: ios=1719023/0, merge=19309/0, ticks=2257093/0, in_queue=2256728, util=99.94%
sdg: ios=1723574/0, merge=18756/0, ticks=2253409/0, in_queue=2253174, util=99.95%
Thanks for any advice.
Regards,
Edward
-----Original Message-----
From: Zawadzki, Tomasz [mailto:tomasz.zawadzki@intel.com]
Sent: Friday, August 31, 2018 12:16 AM
To: Otsiabah, Isaac <IOtsiabah(a)us.fujitsu.com>; Storage Performance
Development Kit <spdk(a)lists.01.org>; 'Isaac Otsiabah'
Cc: Yang, Edward <Edward.Yang(a)us.fujitsu.com>; Von-Stamwitz, Paul
<PVonStamwitz(a)us.fujitsu.com>; Yang, Edward <Edward.Yang(a)us.fujitsu.com>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
Hello Isaac,
Sorry for the delayed response.
Please keep in mind that the patch for the switch from VCL to the Session API is in
active development, with changes being applied regularly.
https://urldefense.proofpoint.com/v2/url?u=https-3A__review.gerrithub.io_-
23_c_spdk_spdk_-2B_417056_&d=DwIFAg&c=09aR81AqZjK9FqV5BSCPBw&r=-
Vl2krtpQKVbTcyqQsihSQLZVp3NEoqJsxIT3rBIgJk&m=zBi2rfFfwt1hJFK1mCXeH7
BK1b9GjLxSZ8CEmjaP-
Dk&s=0ggEYmwl4WBiSgplNGj9Cqya9bDKmXoQJubuybSiCdg&e=
We are still trying to work out best practices and a recommended setup for the SPDK
iSCSI target running alongside VPP.
Our test environment at this time consists of Fedora 26 machines, using either
one or two 40Gb/s interfaces per host. One host runs the SPDK iSCSI target with
VPP (or posix for comparison) and the other the SPDK iSCSI initiator.
After the switch to the Session API we were able to saturate a single 40Gb/s interface
with much lower core usage in VPP compared to VCL, as well as reduce the
number of SPDK iSCSI target cores used in such a setup. Both the Session API and the
posix implementation were able to saturate 40Gb/s while having similar CPU
efficiency. We are working on evaluating higher throughputs (80Gb/s and
more), as well as looking at optimizations to the usage of the Session API within SPDK.
For our setup, we have not seen much change from modifying most VPP config parameters
from their defaults at this time. We keep num-mbufs at its default and socket-mem
at 1024, and mostly change the parameters governing the number of worker
cores and num-rx-queues.
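For illustration only, the relevant part of a startup.conf in that spirit would look roughly like this (the core list and PCI address are placeholders, not our exact values):
cpu {
  main-core 1
  corelist-workers 2-3
}
dpdk {
  socket-mem 1024
  dev 0000:xx:00.0 {
    num-rx-queues 2
  }
}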
As for the iSCSI parameters, for both posix and the Session API, at certain throughputs
we needed to increase the number of targets/LUNs within the portal group. We were
doing our comparisons at around 32-128 targets/LUNs. Since this is an evaluation of
the iSCSI target Session API, please keep in mind that null bdevs were used to
eliminate other bdev types from affecting the results.
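As a rough sketch of that setup (names and sizes are illustrative, assuming the construct_null_bdev RPC, which takes a name, size in MB and block size):
scripts/rpc.py construct_null_bdev Null0 1024 512
scripts/rpc.py construct_null_bdev Null1 1024 512
scripts/rpc.py construct_target_node Target1 "Target 1" "Null0:0 Null1:1" 1:2 64 -d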
Besides that, the key point for higher throughputs is having the iSCSI initiator actually
able to generate enough traffic for the iSCSI target.
May I ask what kind of setup you are using for the comparisons? Are you
targeting 10Gb/s interfaces, as noted in previous emails?
Tomek
-----Original Message-----
From: IOtsiabah(a)us.fujitsu.com [mailto:IOtsiabah@us.fujitsu.com]
Sent: Friday, August 31, 2018 12:51 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>; 'Isaac Otsiabah'; Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com>
Cc: Verkamp, Daniel <daniel.verkamp(a)intel.com>;
Edward.Yang(a)us.fujitsu.com; PVonStamwitz(a)us.fujitsu.com;
Edward.Yang(a)us.fujitsu.com
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
Hi Tomasz, you were probably on vacation and are back now. Previously, I
sent you the two emails below. Could you please respond to them for us? Thank
you.
Isaac
-----Original Message-----
From: Otsiabah, Isaac
Sent: Tuesday, August 14, 2018 12:40 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>; 'Isaac
Otsiabah'; Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com>
Cc: Verkamp, Daniel <daniel.verkamp(a)intel.com>; Yang, Edward
<Edward.Yang(a)us.fujitsu.com>; Von-Stamwitz, Paul
<PVonStamwitz(a)us.fujitsu.com>; Yang, Edward <Edward.Yang(a)us.fujitsu.com>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
Tomasz, we were able to increase the amount of hugepages used by vpp by increasing
the dpdk parameters
socket-mem 1024
num-mbufs 65536
however, there was no improvement in the fio performance test results. We are
running our tests on CentOS 7. Are you testing vpp on Fedora instead? Can you
share your test environment information with us?
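For reference, these go into the dpdk section of /etc/vpp/startup.conf, i.e. roughly:
dpdk {
  socket-mem 1024
  num-mbufs 65536
}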
Isaac/Edward
-----Original Message-----
From: Otsiabah, Isaac
Sent: Tuesday, August 14, 2018 10:15 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>; 'Isaac
Otsiabah'; Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com>
Cc: Verkamp, Daniel <daniel.verkamp(a)intel.com>; Yang, Edward
<Edward.Yang(a)us.fujitsu.com>; Von-Stamwitz, Paul
<PVonStamwitz(a)us.fujitsu.com>; Yang, Edward
<Edward.Yang(a)us.fujitsu.com>; Otsiabah, Isaac <IOtsiabah(a)us.fujitsu.com>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
Hi Tomasz, we obtained your vpp patch 417056 (git fetch
https://urldefense.proofpoint.com/v2/url?u=https-
3A__review.gerrithub.io_spdk_spdk&d=DwIFAg&c=09aR81AqZjK9FqV5BSCPBw
&r=-
Vl2krtpQKVbTcyqQsihSQLZVp3NEoqJsxIT3rBIgJk&m=zBi2rfFfwt1hJFK1mCXeH7
BK1b9GjLxSZ8CEmjaP-
Dk&s=Je6FxQzCr9VtkhD7ayQzCaSkQp_yXz_166aHDjWwAOA&e=
refs/changes/56/417056/16:test16). We are testing it and have a few questions.
1. Please, can you share with us your test results and your test environment
setup or configuration?
2. From experiments, we see that vpp always uses 105 pages from the
hugepages available in the system, regardless of the amount available. Is there
a way to increase the number of hugepages vpp uses?
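(Host hugepages themselves can of course be raised, e.g.
sysctl -w vm.nr_hugepages=4096
but the question here is specifically how many of those vpp will actually map.)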
Isaac
From: Isaac Otsiabah
Sent: Tuesday, April 17, 2018 11:46 AM
To: 'Zawadzki, Tomasz' <tomasz.zawadzki(a)intel.com>;
'spdk(a)lists.01.org'
<spdk(a)lists.01.org>
Cc: Harris, James R <james.r.harris(a)intel.com>; Verkamp, Daniel
<daniel.verkamp(a)intel.com>; Paul Von-Stamwitz
<PVonStamwitz(a)us.fujitsu.com>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
Hi Tomasz, I got the SPDK patch. My network topology is simple, but I have not been
able to make the network IP address accessible to both the iscsi_tgt application and
vpp. From my understanding, vpp is started first on the target host and
then the iscsi_tgt application is started after the network setup is done (please
correct me if this is not the case).
 -----------
| initiator |  192.168.2.10
 -----------
      |
      |
 ---------------------------------------  192.168.2.0
      |
      |
 -------------  192.168.2.20
| vpp, vppctl |
| iscsi_tgt   |
 -------------
Both systems have a 10Gb NIC.
(On the target server):
I set up the vpp environment variables through sysctl.
I unbound the kernel driver and loaded the DPDK uio_pci_generic driver for the
first 10Gb NIC (device address 0000:82:00.0).
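Roughly (the dpdk-devbind.py path depends on where DPDK is checked out):
modprobe uio_pci_generic
./usertools/dpdk-devbind.py --bind=uio_pci_generic 0000:82:00.0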
That worked, so I started the vpp application; from the startup output, the
NIC is in use by vpp:
[root@spdk2 ~]# vpp -c /etc/vpp/startup.conf
vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development
Kit (DPDK))
load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator
addressing for IPv6)
load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
load_one_plugin:184: Loaded plugin: kubeproxy_plugin.so (kube-proxy data
plane)
load_one_plugin:184: Loaded plugin: l2e_plugin.so (L2 Emulation)
load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid
Deployment on IPv4 Infrastructure (RFC5969))
load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory
Interface (experimetal))
load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address
Translation)
load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)
load_one_plugin:184: Loaded plugin: stn_plugin.so (VPP Steals the NIC for
Container integration)
vpp[4168]: load_one_plugin:63: Loaded plugin:
/usr/lib/vpp_api_test_plugins/acl_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin:
/usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin:
/usr/lib/vpp_api_test_plugins/flowprobe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin:
/usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin:
/usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin:
/usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin:
/usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin:
/usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin:
/usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin:
/usr/lib/vpp_api_test_plugins/lb_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin:
/usr/lib/vpp_api_test_plugins/memif_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin:
/usr/lib/vpp_api_test_plugins/nat_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin:
/usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin:
/usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin:
/usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so
vpp[4168]: dpdk_config:1240: EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w 0000:82:00.0 --master-lcore 0 --socket-mem 64,64
EAL: No free hugepages reported in hugepages-1048576kB
EAL: VFIO support initialized
DPDK physical memory layout:
Segment 0: IOVA:0x2200000, len:2097152, virt:0x7f919c800000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: IOVA:0x3e000000, len:16777216, virt:0x7f919b600000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 2: IOVA:0x3fc00000, len:2097152, virt:0x7f919b200000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 3: IOVA:0x54c00000, len:46137344, virt:0x7f917ae00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 4: IOVA:0x1f2e400000, len:67108864, virt:0x7f8f9c200000, socket_id:1, hugepage_sz:2097152, nchannel:0, nran
STEP 1:
Then, from the vppctl command prompt, I set the IP address for the 10G interface
and brought it up. From vpp, I can ping the initiator machine and vice versa, as shown
below.
vpp# show int
Name Idx State Counter Count
TenGigabitEthernet82/0/0 1 down
local0 0 down
vpp# set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24
vpp# set interface state TenGigabitEthernet82/0/0 up
vpp# show int
Name Idx State Counter Count
TenGigabitEthernet82/0/0 1 up
local0 0 down
vpp# show int address
TenGigabitEthernet82/0/0 (up):
192.168.2.20/24
local0 (dn):
/* ping initiator from vpp */
vpp# ping 192.168.2.10
64 bytes from 192.168.2.10: icmp_seq=1 ttl=64 time=.0779 ms
64 bytes from 192.168.2.10: icmp_seq=2 ttl=64 time=.0396 ms
64 bytes from 192.168.2.10: icmp_seq=3 ttl=64 time=.0316 ms
64 bytes from 192.168.2.10: icmp_seq=4 ttl=64 time=.0368 ms
64 bytes from 192.168.2.10: icmp_seq=5 ttl=64 time=.0327 ms
(On Initiator):
/* ping vpp interface from initiator*/
[root@spdk1 ~]# ping -c 2 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.031 ms
STEP 2:
However, when I start the iscsi_tgt server, it does not have access to the above
192.168.2.x subnet, so I ran these commands on the target server to create a veth
pair and then connect it to a vpp host-interface:
ip link add name vpp1out type veth peer name vpp1host
ip link set dev vpp1out up
ip link set dev vpp1host up
ip addr add 192.168.2.201/24 dev vpp1host
vpp# create host-interface name vpp1out
vpp# set int state host-vpp1out up
vpp# set int ip address host-vpp1out 192.168.2.202
vpp# show int addr
TenGigabitEthernet82/0/0 (up):
192.168.2.20/24
host-vpp1out (up):
192.168.2.202/24
local0 (dn):
vpp# trace add af-packet-input 10
/* From host, ping vpp */
[root@spdk2 ~]# ping -c 2 192.168.2.202
PING 192.168.2.202 (192.168.2.202) 56(84) bytes of data.
64 bytes from 192.168.2.202: icmp_seq=1 ttl=64 time=0.130 ms
64 bytes from 192.168.2.202: icmp_seq=2 ttl=64 time=0.067 ms
/* From vpp, ping host */
vpp# ping 192.168.2.201
64 bytes from 192.168.2.201: icmp_seq=1 ttl=64 time=.1931 ms
64 bytes from 192.168.2.201: icmp_seq=2 ttl=64 time=.1581 ms
64 bytes from 192.168.2.201: icmp_seq=3 ttl=64 time=.1235 ms
64 bytes from 192.168.2.201: icmp_seq=4 ttl=64 time=.1032 ms
64 bytes from 192.168.2.201: icmp_seq=5 ttl=64 time=.0688 ms
Statistics: 5 sent, 5 received, 0% packet loss
From the target host, I still cannot ping the initiator (192.168.2.10); the traffic does not
go through the vpp interface, so my vpp host-interface connection is not correct.
How does one create the vpp host interface and connect it so that host
applications (i.e. iscsi_tgt) can communicate on the 192.168.2 subnet? In STEP 2,
should I use a different subnet like 192.168.3.x, turn on IP forwarding, and add a
route to the routing table?
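For example, if that is the right direction, I imagine something along these lines, with the addresses purely illustrative:
# on the target host (kernel side of the veth)
ip addr add 192.168.3.1/24 dev vpp1host
ip route add 192.168.2.0/24 via 192.168.3.2
# in vpp
vpp# set int ip address host-vpp1out 192.168.3.2/24
# on the initiator, a return route
ip route add 192.168.3.0/24 via 192.168.2.20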
Isaac
From: Zawadzki, Tomasz [mailto:tomasz.zawadzki@intel.com]
Sent: Thursday, April 12, 2018 12:27 AM
To: Isaac Otsiabah <IOtsiabah@us.fujitsu.com>
Cc: Harris, James R <james.r.harris@intel.com>; Verkamp, Daniel <daniel.verkamp@intel.com>; Paul Von-Stamwitz <PVonStamwitz@us.fujitsu.com>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
Hello Isaac,
Are you using the following patch? (I suggest cherry-picking it.)
https://urldefense.proofpoint.com/v2/url?u=https-3A__review.gerrithub.io_-
23_c_389566_&d=DwICAg&c=09aR81AqZjK9FqV5BSCPBw&r=ng2zwXQCOCuiU
sKl9hqVJf45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeGWugYxlqcFCeU-
xkYWjIUTA2lTbCWuTg&s=FE90i1g4fLqz2TZ_eM5V21BWuBXg2eB7L18qpVk7DS
M&e=
The SPDK iSCSI target can be started without a specific interface to bind on, by not
specifying any target nodes or portal groups. They can be added later via RPC:
https://urldefense.proofpoint.com/v2/url?u=http-
3A__www.spdk.io_doc_iscsi.html-23iscsi-
5Frpc&d=DwICAg&c=09aR81AqZjK9FqV5BSCPBw&r=ng2zwXQCOCuiUsKl9hqVJ
f45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeGWugYxlqcFCeU-
xkYWjIUTA2lTbCWuTg&s=KFyVzoGGQQYWVZZkv1DNAelTF-
h5zZerTcOn1D9wfxM&e=.
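For example, roughly (addresses and names are illustrative):
scripts/rpc.py add_portal_group 1 10.0.0.1:3260
scripts/rpc.py add_initiator_group 2 ANY 10.0.0.0/24
scripts/rpc.py construct_malloc_bdev -b Malloc0 64 512
scripts/rpc.py construct_target_node Target1 "Target 1" "Malloc0:0" 1:2 64 -d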
Please see
https://urldefense.proofpoint.com/v2/url?u=https-
3A__github.com_spdk_spdk_blob_master_test_iscsi-
5Ftgt_lvol_iscsi.conf&d=DwICAg&c=09aR81AqZjK9FqV5BSCPBw&r=ng2zwXQC
OCuiUsKl9hqVJf45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeGWugYxlqcFCeU-
xkYWjIUTA2lTbCWuTg&s=jSKH9IX5rn3DlmRDFR35I4V5I-
bT1xxWSqSp1pIXygw&e= for an example of a minimal iSCSI config.
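That minimal config is essentially just the global iSCSI section, something along these lines (with portal groups and target nodes added over RPC afterwards):
[iSCSI]
  NodeBase "iqn.2016-06.io.spdk"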
The suggested flow for starting up the applications is:
1. Unbind interfaces from the kernel
2. Start VPP and configure the interface via vppctl
3. Start SPDK
4. Configure the iSCSI target via RPC; at this point it should be possible to
use the interface configured in VPP
Please note, there is some leeway here. The only requirement is having the VPP
app started before the SPDK app.
Interfaces in VPP can be created (like tap or veth) and configured at runtime,
and are available for use in SPDK as well.
Let me know if you have any questions.
Tomek
From: Isaac Otsiabah [mailto:IOtsiabah@us.fujitsu.com]
Sent: Wednesday, April 11, 2018 8:47 PM
To: Zawadzki, Tomasz <tomasz.zawadzki@intel.com>
Cc: Harris, James R <james.r.harris@intel.com>; Verkamp, Daniel <daniel.verkamp@intel.com>; Paul Von-Stamwitz <PVonStamwitz@us.fujitsu.com>
Subject: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
Hi Tomasz, Daniel and Jim, I am trying to test VPP, so I built VPP on CentOS
7.4 (x86_64), built SPDK, and tried to run the ./app/iscsi_tgt/iscsi_tgt
application.
For VPP, I first unbind the NIC from the kernel and start the VPP application:
./usertools/dpdk-devbind.py -u 0000:07:00.0
vpp unix {cli-listen /run/vpp/cli.sock}
Unbinding the NIC takes down the interface; however,
the ./app/iscsi_tgt/iscsi_tgt -m 0x101 application needs an interface to bind
to during startup, so it fails to start. The information at:
"Running SPDK with VPP
VPP application has to be started before SPDK iSCSI target, in order to enable
usage of network interfaces. After SPDK iSCSI target initialization finishes,
interfaces configured within VPP will be available to be configured as portal
addresses. Please refer to Configuring iSCSI Target via RPC
method<https://urldefense.proofpoint.com/v2/url?u=http-
3A__www.spdk.io_doc_iscsi.html-23iscsi-
5Frpc&d=DwICAg&c=09aR81AqZjK9FqV5BSCPBw&r=ng2zwXQCOCuiUsKl9hqVJ
f45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeGWugYxlqcFCeU-
xkYWjIUTA2lTbCWuTg&s=KFyVzoGGQQYWVZZkv1DNAelTF-
h5zZerTcOn1D9wfxM&e=>."
is not clear, because the instructions in "Configuring iSCSI Target via RPC
method" assume the iscsi_tgt server is already running so that one can execute
the RPC commands; but how do I get the iscsi_tgt server running without an
interface to bind to during its initialization?
Please, can any of you help explain how to run the SPDK iscsi_tgt
application with VPP (for instance: what should change in iscsi.conf? After
unbinding the NIC, how do I get the iscsi_tgt server to start without an interface
to bind to? What address should be assigned to the Portal in iscsi.conf? etc.)?
I would appreciate it if anyone could help. Thank you.
Isaac
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://urldefense.proofpoint.com/v2/url?u=https-
3A__lists.01.org_mailman_listinfo_spdk&d=DwICAg&c=09aR81AqZjK9FqV5BSC
PBw&r=ng2zwXQCOCuiUsKl9hqVJf45sjO6jr46tTh5PBLLcls&m=5e7YYsGGsVeG
WugYxlqcFCeU-xkYWjIUTA2lTbCWuTg&s=2iHpVGzaloMHLL179exqyisY-
BLZOoEFh5Y4Z7SArYs&e=