It looks like there are now consistent failures in the iscsi and spdk-nvme-cli tests. I tried
to retrigger, and the failures happened again:
spdk nvme cli:
https://ci.spdk.io/spdk/builds/review/253dd179d38ac2b608f5adf1edad56e1ec6...
iscsi:
https://ci.spdk.io/spdk/builds/review/253dd179d38ac2b608f5adf1edad56e1ec6...
________________________________
From: Harris, James R <james.r.harris@intel.com>
Sent: Tuesday, January 29, 2019 6:09 PM
To: Storage Performance Development Kit; Shahar Salzman
Subject: Re: [SPDK] Strange CI failure
Thanks Shahar. For now, you can reply to your own patch on GerritHub with just the word
"retrigger" - it will re-run your patch through the test pool. That will get
your patch unblocked while Paul looks at the intermittent test failure.
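(If the web UI is awkward, the same comment can probably also be posted through
Gerrit's standard "Set Review" REST endpoint. Rough, untested sketch below; the
change number 123456 is made up, and you'd need an HTTP password configured on
GerritHub:)
# Post a "retrigger" comment on a change via Gerrit's review REST endpoint.
# 123456 is a hypothetical change number; substitute your own change and credentials.
curl -s -u "$GERRIT_USER:$GERRIT_HTTP_PASSWORD" \
  -H "Content-Type: application/json" \
  -d '{"message": "retrigger"}' \
  https://review.gerrithub.io/a/changes/123456/revisions/current/review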
-Jim
On 1/29/19, 8:48 AM, "SPDK on behalf of Luse, Paul E"
<spdk-bounces@lists.01.org on behalf of paul.e.luse@intel.com> wrote:
Thanks! I've got a few hours of meetings coming up, but here's what I see. If
you can repro, that'd be great; we can get a GitHub issue up and going. If not, I can
look deeper into this later if someone else doesn't jump in by then with an
"aha" moment :)
Starting SPDK v19.01-pre / DPDK 18.11.0 initialization...
[ DPDK EAL parameters: identify -c 0x1 -n 1 -m 0 --base-virtaddr=0x200000000000
--file-prefix=spdk0 --proc-type=auto ]
EAL: Detected 16 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Auto-detected process type: SECONDARY
EAL: Multi-process socket /var/run/dpdk/spdk0/mp_socket_835807_c029d817e596b
EAL: Probing VFIO support...
EAL: VFIO support initialized
test/nvme/nvme.sh: line 108: 835807 Segmentation fault (core dumped)
$rootdir/examples/nvme/identify/identify -i 0
08:50:18 # trap - ERR
08:50:18 # print_backtrace
08:50:18 # [[ ehxBE =~ e ]]
08:50:18 # local shell_options=ehxBE
08:50:18 # set +x
========== Backtrace start: ==========
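(For whoever picks this up: since identify dumped core, a backtrace from the core
file should narrow this down. A rough sketch, assuming the SPDK tree is at $rootdir
and the core landed in the current directory; adjust the core path if the test
machine uses systemd-coredump:)
# Load the crashed identify binary together with its core file and dump a backtrace.
# The core file location is an assumption; it depends on the test machine's setup.
gdb --batch \
    -ex "bt full" \
    -ex "info registers" \
    $rootdir/examples/nvme/identify/identify \
    ./core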
From: Shahar Salzman [mailto:shahar.salzman@kaminario.com]
Sent: Tuesday, January 29, 2019 8:35 AM
To: Luse, Paul E <paul.e.luse@intel.com>; Storage Performance Development Kit
<spdk@lists.01.org>
Subject: Re: Strange CI failure
https://ci.spdk.io/spdk-jenkins/results/autotest-per-patch/builds/21382/a...
I can copy-paste it if you cannot reach the link.
________________________________
From: SPDK <spdk-bounces@lists.01.org> on behalf of Luse, Paul E <paul.e.luse@intel.com>
Sent: Tuesday, January 29, 2019 5:22 PM
To: Storage Performance Development Kit
Subject: Re: [SPDK] Strange CI failure
Can you send a link to the full log?
-----Original Message-----
From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Shahar Salzman
Sent: Tuesday, January 29, 2019 8:21 AM
To: Storage Performance Development Kit
<spdk@lists.01.org>
Subject: [SPDK] Strange CI failure
Hi,
I have encountered a CI failure that has nothing to do with my code.
I know it is unrelated because the change under review is just a gdb
macro.
Do we know that this test machine is unstable?
Here is the backtrace:
========== Backtrace start: ==========
in test/nvme/nvme.sh:108 -> main()
...
103 report_test_completion "nightly_nvme_reset"
104 timing_exit reset
105 fi
106
107 timing_enter identify
=> 108 $rootdir/examples/nvme/identify/identify -i 0
109 for bdf in $(iter_pci_class_code 01 08 02); do
110 $rootdir/examples/nvme/identify/identify -r "trtype:PCIe
traddr:${bdf}" -i 0
111 done
112 timing_exit identify
113
...
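(In case it helps to reproduce outside the test harness, the failing step boils down
to running the identify example directly. A rough sketch, assuming a locally built
SPDK tree; 0000:5e:00.0 is a made-up PCIe address, use one from your own machine:)
# Run identify the same way nvme.sh does (shared memory group id 0),
# then against one specific controller, mirroring the loop on lines 109-111.
sudo ./examples/nvme/identify/identify -i 0
sudo ./examples/nvme/identify/identify -r "trtype:PCIe traddr:0000:5e:00.0" -i 0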
Shahar
_______________________________________________
SPDK mailing list
SPDK@lists.01.org
https://lists.01.org/mailman/listinfo/spdk