Well, good that things are working. A few others will be online tomorrow too and may be
able to help clear up the mystery - I've only set up the init/tgt; I haven't spent
any time in the code...
Saw the fio note as well; I'll try to help with that this week if nobody else jumps
on it. I'm out for the rest of today...
-Paul
-----Original Message-----
From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Jason Messer
Sent: Wednesday, December 26, 2018 12:09 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] Trouble Setting Up NVMeoF Target
Sorry, I failed to mention that create_transport was the only thing I was missing as well.
________________________________________
From: SPDK [spdk-bounces(a)lists.01.org] on behalf of Gruher, Joseph R
[joseph.r.gruher(a)intel.com]
Sent: Wednesday, December 26, 2018 2:00 PM
To: Storage Performance Development Kit
Subject: Re: [SPDK] Trouble Setting Up NVMeoF Target
Paul's steps:
sudo ~/spdk/app/nvmf_tgt/nvmf_tgt
sudo ~/spdk/scripts/rpc.py nvmf_create_transport -t RDMA -u 8192 -p 4 -c 0
sudo ~/spdk/scripts/rpc.py construct_nvme_bdev -b NVMe1 -t PCIe -a 0000:0d:00.0
sudo ~/spdk/scripts/rpc.py nvmf_subsystem_create nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
sudo ~/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NVMe1n1
sudo ~/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
My steps:
sudo /home/rsd/install/spdk/app/nvmf_tgt/nvmf_tgt -m 0xF
sudo /home/rsd/install/spdk/scripts/rpc.py construct_nvme_bdev -b d0 -t pcie -a 0000:d8:00.0
sudo /home/rsd/install/spdk/scripts/rpc.py nvmf_subsystem_create -a nqn.2018-12.io.spdk:nqn0
sudo /home/rsd/install/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-12.io.spdk:nqn0 d0n1
sudo /home/rsd/install/spdk/scripts/rpc.py nvmf_subsystem_add_listener -t rdma -a 192.168.200.4 -f ipv4 -s 4420 nqn.2018-12.io.spdk:nqn0
These are pretty similar. The difference I see is that I don't have an explicit
create_transport step, so maybe that had something to do with the problem. But my steps
are working for me now without a create_transport, so I'm not sure what to make of
that... Example:
(start target)
rsd@nvme:~$
rsd@nvme:~$ sudo /home/rsd/install/spdk/scripts/rpc.py construct_nvme_bdev -b d0 -t pcie -a 0000:d8:00.0
d0n1
rsd@nvme:~$ sudo /home/rsd/install/spdk/scripts/rpc.py nvmf_subsystem_create -a nqn.2018-12.io.spdk:nqn0
rsd@nvme:~$ sudo /home/rsd/install/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-12.io.spdk:nqn0 d0n1
rsd@nvme:~$ sudo /home/rsd/install/spdk/scripts/rpc.py nvmf_subsystem_add_listener -t rdma -a 192.168.200.4 -f ipv4 -s 4420 nqn.2018-12.io.spdk:nqn0
rsd@nvme:~$ sudo nvme discover -t rdma -a 192.168.200.4
Discovery Log Number of Records 1, Generation counter 3
=====Discovery Log Entry 0======
trtype: rdma
adrfam: ipv4
subtype: nvme subsystem
treq: not specified
portid: 0
trsvcid: 4420
subnqn: nqn.2018-12.io.spdk:nqn0
traddr: 192.168.200.4
rdma_prtype: not specified
rdma_qptype: connected
rdma_cms: rdma-cm
rdma_pkey: 0x0000
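(For anyone following along: once discovery looks like the above, connecting from the
initiator should just be the standard nvme-cli call, something like
sudo nvme connect -t rdma -n nqn.2018-12.io.spdk:nqn0 -a 192.168.200.4 -s 4420
though I haven't re-verified that step as part of this exact sequence.)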
-----Original Message-----
From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Jason
Messer
Sent: Wednesday, December 26, 2018 10:29 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] Trouble Setting Up NVMeoF Target
I just assumed it was due to deprecation of specific config options,
because I see the following message when starting the nvmf_tgt app:
conf.c: 167:spdk_nvmf_parse_nvmf_tgt: *ERROR*: Deprecated options detected
for the NVMe-oF target. The following options are no longer controlled by
the target and should be set in the transport on a per-transport basis:
MaxQueueDepth, MaxQueuesPerSession, InCapsuleDataSize, MaxIOSize, IOUnitSize
This can be accomplished by setting the options through the
create_nvmf_transport RPC. You may also continue to configure these
options in the conf file under each transport.
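For what it's worth, I believe those options map onto nvmf_create_transport
flags roughly like this (judging from rpc.py's help text, so please
double-check against your tree; the numbers below are just placeholders,
not recommendations):
sudo ~/spdk/scripts/rpc.py nvmf_create_transport -t RDMA -q 128 -p 4 -c 4096 -i 131072 -u 8192
where -q is MaxQueueDepth, -p is MaxQueuesPerSession, -c is
InCapsuleDataSize, -i is MaxIOSize, and -u is IOUnitSize.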
________________________________________
From: SPDK [spdk-bounces(a)lists.01.org] on behalf of Jason Messer
[JasonM(a)ami.com]
Sent: Wednesday, December 26, 2018 1:23 PM
To: Storage Performance Development Kit
Subject: Re: [SPDK] Trouble Setting Up NVMeoF Target
I had the same problem on Ubuntu 18.04 (SPDK version 19.1.0, Mellanox
MCX413A-BCAT NIC) and had to perform the same steps Paul provided.
-Jason
________________________________________
From: SPDK [spdk-bounces(a)lists.01.org] on behalf of Gruher, Joseph R
[joseph.r.gruher(a)intel.com]
Sent: Wednesday, December 26, 2018 1:14 PM
To: Storage Performance Development Kit
Subject: Re: [SPDK] Trouble Setting Up NVMeoF Target
Based on an old linux-rdma mailing list thread I dug up, I think the
problem may be that I'm using newer CX5 NICs and the version of
rdma_core that Ubuntu
16.04 provides via APT is too old and doesn't have support for CX5.
So, I cloned and built the latest release of rdma_core from the
rdma_core git. Then SPDK complained about a version problem with
ibverbs, so I removed and reinstalled 'ibverbs-utils libibverbs-dev libibverbs1'
using APT, and now everything works.
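For anyone who hits the same thing, the rough sequence was as follows (from
memory, so treat it as a sketch rather than exact commands - rdma_core's
build.sh does a local build, and I don't recall exactly how I installed the
result):
git clone https://github.com/linux-rdma/rdma-core.git
cd rdma-core
bash build.sh
sudo apt-get remove ibverbs-utils libibverbs-dev libibverbs1
sudo apt-get install ibverbs-utils libibverbs-dev libibverbs1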
So these steps worked around the problem, but my theory about the root
cause could be incorrect, and I don't really know if this was the best
approach to resolution.
It would be interesting to know if anyone is testing SPDK with Ubuntu
16.04 and Mellanox CX5 NICs and if any such steps were required.
I'm now having some trouble getting the FIO plugin to work with 18.10.1. It
was working fine for me in 18.10. I'll start a different thread for that issue.
> -----Original Message-----
> From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Luse,
> Paul E
> Sent: Monday, December 24, 2018 2:46 PM
> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Subject: Re: [SPDK] Trouble Setting Up NVMeoF Target
>
> Hi Joe,
>
> I can help you out more at the end of the week when I'm in the
> office, as I just went through this not too long ago. FYI, here's the
> sequence I used; it looks pretty close to what you're doing... If
> someone doesn't jump in between now and Thursday, I'll run through
> this again on my setup and make sure it works with 18.10.1.
>
> sudo ~/spdk/app/nvmf_tgt/nvmf_tgt
> sudo ~/spdk/scripts/rpc.py nvmf_create_transport -t RDMA -u 8192 -p 4 -c 0
> sudo ~/spdk/scripts/rpc.py construct_nvme_bdev -b NVMe1 -t PCIe -a 0000:0d:00.0
> sudo ~/spdk/scripts/rpc.py nvmf_subsystem_create nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
> sudo ~/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NVMe1n1
> sudo ~/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
>
> Happy Holidays!
> Paul
>
> -----Original Message-----
> From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Gruher,
> Joseph R
> Sent: Monday, December 24, 2018 3:34 PM
> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Subject: [SPDK] Trouble Setting Up NVMeoF Target
>
> Hi everyone. I'm running Ubuntu 16.04 with SPDK v18.10.1. I'm
> having trouble setting up the NVMeoF target. I can't seem to spot
> the problem, so maybe someone can help me out here. I'm pretty sure
> I'm either formatting the command incorrectly or I've screwed up the
> IB/RDMA related packages in my Ubuntu install.
>
> I have a Mellanox CX5 NIC at 192.168.200.4, interface is up and linked:
>
> rsd@nvme:~$ ifconfig enp6s0
> enp6s0 Link encap:Ethernet HWaddr 98:03:9b:1e:de:ec
> inet addr:192.168.200.4 Bcast:192.168.200.255 Mask:255.255.255.0
> inet6 addr: fe80::9a03:9bff:fe1e:deec/64 Scope:Link
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> RX packets:138 errors:0 dropped:0 overruns:0 frame:0
> TX packets:135 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:1000
> RX bytes:19029 (19.0 KB) TX bytes:24279 (24.2 KB)
> rsd@nvme:~$ sudo ethtool enp6s0
> Settings for enp6s0:
> Supported ports: [ Backplane ]
> Supported link modes: 1000baseKX/Full
> 10000baseKR/Full
> 40000baseKR4/Full
> 40000baseCR4/Full
> 40000baseSR4/Full
> 40000baseLR4/Full
> Supported pause frame use: Symmetric
> Supports auto-negotiation: Yes
> Advertised link modes: 1000baseKX/Full
> 10000baseKR/Full
> 40000baseKR4/Full
> 40000baseCR4/Full
> 40000baseSR4/Full
> 40000baseLR4/Full
> Advertised pause frame use: Symmetric
> Advertised auto-negotiation: Yes
> Link partner advertised link modes: Not reported
> Link partner advertised pause frame use: No
> Link partner advertised auto-negotiation: Yes
> Speed: 100000Mb/s
> Duplex: Full
> Port: Direct Attach Copper
> PHYAD: 0
> Transceiver: internal
> Auto-negotiation: on
> Supports Wake-on: d
> Wake-on: d
> Current message level: 0x00000004 (4)
> link
> Link detected: yes
>
> I ran setup and then started the target and gave it four cores:
> rsd@nvme:~$ sudo ~/install/spdk/scripts/setup.sh
> [sudo] password for rsd:
> 0000:d8:00.0 (8086 0a54): nvme -> uio_pci_generic
> 0000:d9:00.0 (8086 0a54): nvme -> uio_pci_generic
> 0000:da:00.0 (8086 0a54): nvme -> uio_pci_generic
> 0000:db:00.0 (8086 0a54): nvme -> uio_pci_generic
> 0000:00:04.0 (8086 2021): ioatdma -> uio_pci_generic
> 0000:00:04.1 (8086 2021): ioatdma -> uio_pci_generic
> 0000:00:04.2 (8086 2021): ioatdma -> uio_pci_generic
> 0000:00:04.3 (8086 2021): ioatdma -> uio_pci_generic
> 0000:00:04.4 (8086 2021): ioatdma -> uio_pci_generic
> 0000:00:04.5 (8086 2021): ioatdma -> uio_pci_generic
> 0000:00:04.6 (8086 2021): ioatdma -> uio_pci_generic
> 0000:00:04.7 (8086 2021): ioatdma -> uio_pci_generic
> 0000:80:04.0 (8086 2021): ioatdma -> uio_pci_generic
> 0000:80:04.1 (8086 2021): ioatdma -> uio_pci_generic
> 0000:80:04.2 (8086 2021): ioatdma -> uio_pci_generic
> 0000:80:04.3 (8086 2021): ioatdma -> uio_pci_generic
> 0000:80:04.4 (8086 2021): ioatdma -> uio_pci_generic
> 0000:80:04.5 (8086 2021): ioatdma -> uio_pci_generic
> 0000:80:04.6 (8086 2021): ioatdma -> uio_pci_generic
> 0000:80:04.7 (8086 2021): ioatdma -> uio_pci_generic
> rsd@nvme:~$ sudo /home/rsd/install/spdk/app/nvmf_tgt/nvmf_tgt -m 0xF
> Starting SPDK v18.10.1 / DPDK 18.08.0 initialization...
> [ DPDK EAL parameters: nvmf --no-shconf -c 0xF --file-prefix=spdk_pid42493 ]
> EAL: Detected 80 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: No free hugepages reported in hugepages-1048576kB
> EAL: Probing VFIO support...
> app.c: 602:spdk_app_start: *NOTICE*: Total cores available: 4
> reactor.c: 703:spdk_reactors_init: *NOTICE*: Occupied cpu socket mask is 0x1
> reactor.c: 490:_spdk_reactor_run: *NOTICE*: Reactor started on core 1 on socket 0
> reactor.c: 490:_spdk_reactor_run: *NOTICE*: Reactor started on core 2 on socket 0
> reactor.c: 490:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on socket 0
> reactor.c: 490:_spdk_reactor_run: *NOTICE*: Reactor started on core 3 on socket 0
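> (The -m 0xF argument is a hex core mask - binary 1111, so cores 0-3. For
> example, -m 0x3 would give just cores 0 and 1.)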
>
> I created my bdevs from my local NVMe devices:
> rsd@nvme:~$ sudo /home/rsd/install/spdk/scripts/rpc.py construct_nvme_bdev -b d0 -t pcie -a 0000:d8:00.0
> d0n1
> rsd@nvme:~$ sudo /home/rsd/install/spdk/scripts/rpc.py construct_nvme_bdev -b d1 -t pcie -a 0000:d9:00.0
> d1n1
> rsd@nvme:~$ sudo /home/rsd/install/spdk/scripts/rpc.py construct_nvme_bdev -b d2 -t pcie -a 0000:da:00.0
> d2n1
> rsd@nvme:~$ sudo /home/rsd/install/spdk/scripts/rpc.py construct_nvme_bdev -b d3 -t pcie -a 0000:db:00.0
> d3n1
>
> Matching prints from the target:
> EAL: PCI device 0000:d8:00.0 on NUMA socket 1
> EAL: probe driver: 8086:a54 spdk_nvme
> EAL: PCI device 0000:d9:00.0 on NUMA socket 1
> EAL: probe driver: 8086:a54 spdk_nvme
> EAL: PCI device 0000:da:00.0 on NUMA socket 1
> EAL: probe driver: 8086:a54 spdk_nvme
> EAL: PCI device 0000:db:00.0 on NUMA socket 1
> EAL: probe driver: 8086:a54 spdk_nvme
>
> Then the problem starts when I try to create my NVMeoF subsystem.
> If I use this deprecated method I get an invalid parameters error:
> rsd@nvme:~$ sudo /home/rsd/install/spdk/scripts/rpc.py construct_nvmf_subsystem -a -n d0n1 -m 1 nqn.2018-12.io.spdk:nqn0 "trtype:RDMA traddr:192.168.200.4 trsvcid:4420" ""
> Got JSON-RPC error response
> request:
> {
> "jsonrpc": "2.0",
> "params": {
> "max_namespaces": 1,
> "nqn": "nqn.2018-12.io.spdk:nqn0",
> "allow_any_host": true,
> "serial_number": "00000000000000000000",
> "listen_addresses": [
> {
> "traddr": "192.168.200.4",
> "trtype": "RDMA",
> "trsvcid": "4420"
> }
> ],
> "namespaces": [
> {
> "bdev_name": "d0n1"
> }
> ]
> },
> "method": "construct_nvmf_subsystem",
> "id": 1
> }
> response:
> {
> "message": "Invalid parameters",
> "code": -32602
> }
>
> Matching unhappy prints on the target:
> nvmf_rpc_deprecated.c: 490:spdk_rpc_construct_nvmf_subsystem: *WARNING*: The construct_nvmf_subsystem RPC is deprecated. Use nvmf_subsystem_create instead.
> libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs1
> libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs0
> rdma.c:1595:spdk_nvmf_rdma_create: *ERROR*: rdma_create_event_channel() failed, No such device
> transport.c: 93:spdk_nvmf_transport_create: *ERROR*: Unable to create new transport of type RDMA
> nvmf.c: 526:spdk_nvmf_tgt_listen: *ERROR*: Transport initialization failed
>
> If I try to use the newer method, something similar happens, this part goes OK:
> rsd@nvme:~$ sudo /home/rsd/install/spdk/scripts/rpc.py nvmf_subsystem_create nqn.2018-12.io.spdk:nqn0
> rsd@nvme:~$ sudo /home/rsd/install/spdk/scripts/rpc.py nvmf_subsystem_create nqn.2018-12.io.spdk:nqn1
> rsd@nvme:~$ sudo /home/rsd/install/spdk/scripts/rpc.py nvmf_subsystem_create nqn.2018-12.io.spdk:nqn2
> rsd@nvme:~$ sudo /home/rsd/install/spdk/scripts/rpc.py nvmf_subsystem_create nqn.2018-12.io.spdk:nqn3
> rsd@nvme:~$ sudo /home/rsd/install/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-12.io.spdk:nqn0 d0n1
> rsd@nvme:~$ sudo /home/rsd/install/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-12.io.spdk:nqn1 d1n1
> rsd@nvme:~$ sudo /home/rsd/install/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-12.io.spdk:nqn2 d2n1
> rsd@nvme:~$ sudo /home/rsd/install/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-12.io.spdk:nqn3 d3n1
>
> But then I can't add the listener:
> rsd@nvme:~/install/nvme-cli-1.6$ sudo /home/rsd/install/spdk/scripts/rpc.py nvmf_subsystem_add_listener -t rdma -a 192.168.200.4 nqn.2018-12.io.spdk:nqn0
> Got JSON-RPC error response
> request:
> {
> "method": "nvmf_subsystem_add_listener",
> "params": {
> "nqn": "nqn.2018-12.io.spdk:nqn0",
> "listen_address": {
> "trtype": "rdma",
> "traddr": "192.168.200.4",
> "trsvcid": null
> }
> },
> "jsonrpc": "2.0",
> "id": 1
> }
> response:
> {
> "code": -32602,
> "message": "Invalid parameters"
> }
>
> Which causes these prints on the target:
> nvmf_rpc.c: 517:decode_rpc_listen_address: *ERROR*: spdk_json_decode_object failed
> nvmf_rpc.c: 703:nvmf_rpc_subsystem_add_listener: *ERROR*: spdk_json_decode_object failed
>
> Any ideas where I'm going wrong here?
>
> I also tried providing more of the optional arguments:
> rsd@nvme:~/install/nvme-cli-1.6$ sudo /home/rsd/install/spdk/scripts/rpc.py nvmf_subsystem_add_listener -t rdma -a 192.168.200.4 -f ipv4 -s 4420 nqn.2018-12.io.spdk:nqn0
> Got JSON-RPC error response
> request:
> {
> "params": {
> "listen_address": {
> "adrfam": "ipv4",
> "trsvcid": "4420",
> "traddr": "192.168.200.4",
> "trtype": "rdma"
> },
> "nqn": "nqn.2018-12.io.spdk:nqn0"
> },
> "method": "nvmf_subsystem_add_listener",
> "id": 1,
> "jsonrpc": "2.0"
> }
> response:
> {
> "code": -32602,
> "message": "Invalid parameters"
> }
>
> Matching prints on target:
> libibverbs: Warning: couldn't load driver '/usr/lib/libibverbs/libmlx4': /usr/lib/libibverbs/libmlx4-rdmav2.so: cannot open shared object file: No such file or directory
> libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs1
> libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs0
> rdma.c:1595:spdk_nvmf_rdma_create: *ERROR*: rdma_create_event_channel() failed, No such device
> transport.c: 93:spdk_nvmf_transport_create: *ERROR*: Unable to create new transport of type RDMA
> nvmf.c: 526:spdk_nvmf_tgt_listen: *ERROR*: Transport initialization failed
>
> The no userspace driver message is certainly concerning. I did
> tinker about with the Ubuntu packages a bit trying to get ib_send_bw
> to work at one point and I wonder if I broke something along the
> way. Any ideas how to recover if that is the case (other than a
> clean new install of Ubuntu)?
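> (I assume the quick way to check for a userspace driver is the
> ibverbs-utils tools:
> ibv_devices
> ibv_devinfo
> which should list the mlx5 devices and port state when the driver is
> present - someone correct me if that's the wrong check.)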
>
> The SPDK pkgdep script does run to completion without any obvious
> issues and doesn't resolve the problem.
>
> rsd@nvme:~$ sudo /home/rsd/install/spdk/scripts/pkgdep.sh
> Reading package lists... Done
> Building dependency tree
> Reading state information... Done
> g++ is already the newest version (4:5.3.1-1ubuntu1).
> gcc is already the newest version (4:5.3.1-1ubuntu1).
> libaio-dev is already the newest version (0.3.110-2).
> libiscsi-dev is already the newest version (1.12.0-2).
> make is already the newest version (4.1-6).
> sg3-utils is already the newest version (1.40-0ubuntu1).
> astyle is already the newest version (2.05.1-0ubuntu1).
> lcov is already the newest version (1.12-2).
> libcunit1-dev is already the newest version (2.1-3-dfsg-2).
> pep8 is already the newest version (1.7.0-2).
> git is already the newest version (1:2.7.4-0ubuntu1.6).
> libssl-dev is already the newest version (1.0.2g-1ubuntu4.14).
> pciutils is already the newest version (1:3.3.1-1.1ubuntu1.2).
> uuid-dev is already the newest version (2.27.1-6ubuntu3.6).
> clang is already the newest version (1:3.8-33ubuntu3.1).
> 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
> Reading package lists... Done
> Building dependency tree
> Reading state information... Done
> libunwind-dev is already the newest version (1.1-4.1).
> 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
> Reading package lists... Done
> Building dependency tree
> Reading state information... Done
> libibverbs-dev is already the newest version (1.1.8-1.1ubuntu2).
> librdmacm-dev is already the newest version (1.0.21-1).
> 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
> Reading package lists... Done
> Building dependency tree
> Reading state information... Done
> libnuma-dev is already the newest version (2.0.11-1ubuntu1.1).
> nasm is already the newest version (2.11.08-1ubuntu0.1).
> 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
> Reading package lists... Done
> Building dependency tree
> Reading state information... Done
> doxygen is already the newest version (1.8.11-1).
> mscgen is already the newest version (0.20-5).
> graphviz is already the newest version (2.38.0-12ubuntu2.1).
> 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
> Reading package lists... Done
> Building dependency tree
> Reading state information... Done
> python-pip is already the newest version (8.1.1-2ubuntu0.4).
> python3-pip is already the newest version (8.1.1-2ubuntu0.4).
> 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
> The directory '/home/rsd/.cache/pip/http' or its parent directory is
> not owned by the current user and the cache has been disabled.
> Please check the permissions and owner of that directory. If
> executing pip with sudo, you may want sudo's -H flag.
> The directory '/home/rsd/.cache/pip' or its parent directory is not
> owned by the current user and caching wheels has been disabled.
> check the permissions and owner of that directory. If executing pip
> with sudo, you may want sudo's -H flag.
> Requirement already satisfied (use --upgrade to upgrade):
> configshell_fb in /usr/local/lib/python2.7/dist-packages
> Requirement already satisfied (use --upgrade to upgrade): pexpect in
> /usr/local/lib/python2.7/dist-packages
> Requirement already satisfied (use --upgrade to upgrade): pyparsing
> in /usr/lib/python2.7/dist-packages (from configshell_fb)
> Requirement already satisfied (use --upgrade to upgrade): six in
> /usr/lib/python2.7/dist-packages (from configshell_fb)
> Requirement already satisfied (use --upgrade to upgrade):
> ptyprocess>=0.5 in /usr/local/lib/python2.7/dist-packages (from pexpect)
> You are using pip version 8.1.1, however version 18.1 is available.
> You should consider upgrading via the 'pip install --upgrade pip' command.
> The directory '/home/rsd/.cache/pip/http' or its parent directory is
> not owned by the current user and the cache has been disabled.
> Please check the permissions and owner of that directory. If
> executing pip with sudo, you may want sudo's -H flag.
> The directory '/home/rsd/.cache/pip' or its parent directory is not
> owned by the current user and caching wheels has been disabled.
> check the permissions and owner of that directory. If executing pip
> with sudo, you may want sudo's -H flag.
> Requirement already satisfied (use --upgrade to upgrade):
> configshell_fb in /usr/local/lib/python3.5/dist-packages
> Requirement already satisfied (use --upgrade to upgrade): pexpect in
> /usr/local/lib/python3.5/dist-packages
> Requirement already satisfied (use --upgrade to upgrade): pyparsing
> in /usr/local/lib/python3.5/dist-packages (from configshell_fb)
> Requirement already satisfied (use --upgrade to upgrade): six in
> /usr/lib/python3/dist-packages (from configshell_fb)
> Requirement already satisfied (use --upgrade to upgrade):
> ptyprocess>=0.5 in /usr/local/lib/python3.5/dist-packages (from pexpect)
> You are using pip version 8.1.1, however version 18.1 is available.
> You should consider upgrading via the 'pip install --upgrade pip' command.
> Crypto requires NASM version 2.12.02 or newer. Please install or
> upgrade and re-run this script if you are going to use Crypto.
>
> Thanks,
> Joe
_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://lists.01.org/mailman/listinfo/spdk