[ptrace] 201766a20e: kernel_selftests.seccomp.make_fail
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 201766a20e30f982ccfe36bebfad9602c3ff574a ("ptrace: add PTRACE_GET_SYSCALL_INFO request")
https://kernel.googlesource.com/pub/scm/linux/kernel/git/torvalds/linux.git master
in testcase: kernel_selftests
with the following parameters:
group: kselftests-02
test-description: The kernel contains a set of "self tests" under the tools/testing/selftests/ directory. These are intended to be small unit tests to exercise individual code paths in the kernel.
test-url: https://www.kernel.org/doc/Documentation/kselftest.txt
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen@intel.com>
2019-07-26 16:52:03 make run_tests -C seccomp
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.6-201766a20e30f982ccfe36bebfad9602c3ff574a/tools/testing/selftests/seccomp'
gcc -Wl,-no-as-needed -Wall seccomp_bpf.c -lpthread -o seccomp_bpf
In file included from seccomp_bpf.c:51:0:
seccomp_bpf.c: In function ‘tracer_ptrace’:
seccomp_bpf.c:1787:20: error: ‘PTRACE_EVENTMSG_SYSCALL_ENTRY’ undeclared (first use in this function)
EXPECT_EQ(entry ? PTRACE_EVENTMSG_SYSCALL_ENTRY
^
../kselftest_harness.h:608:13: note: in definition of macro ‘__EXPECT’
__typeof__(_expected) __exp = (_expected); \
^~~~~~~~~
seccomp_bpf.c:1787:2: note: in expansion of macro ‘EXPECT_EQ’
EXPECT_EQ(entry ? PTRACE_EVENTMSG_SYSCALL_ENTRY
^~~~~~~~~
seccomp_bpf.c:1787:20: note: each undeclared identifier is reported only once for each function it appears in
EXPECT_EQ(entry ? PTRACE_EVENTMSG_SYSCALL_ENTRY
^
../kselftest_harness.h:608:13: note: in definition of macro ‘__EXPECT’
__typeof__(_expected) __exp = (_expected); \
^~~~~~~~~
seccomp_bpf.c:1787:2: note: in expansion of macro ‘EXPECT_EQ’
EXPECT_EQ(entry ? PTRACE_EVENTMSG_SYSCALL_ENTRY
^~~~~~~~~
seccomp_bpf.c:1788:6: error: ‘PTRACE_EVENTMSG_SYSCALL_EXIT’ undeclared (first use in this function)
: PTRACE_EVENTMSG_SYSCALL_EXIT, msg);
^
../kselftest_harness.h:608:13: note: in definition of macro ‘__EXPECT’
__typeof__(_expected) __exp = (_expected); \
^~~~~~~~~
seccomp_bpf.c:1787:2: note: in expansion of macro ‘EXPECT_EQ’
EXPECT_EQ(entry ? PTRACE_EVENTMSG_SYSCALL_ENTRY
^~~~~~~~~
Makefile:12: recipe for target 'seccomp_bpf' failed
make: *** [seccomp_bpf] Error 1
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.6-201766a20e30f982ccfe36bebfad9602c3ff574a/tools/testing/selftests/seccomp'
To reproduce:
# build kernel
cd linux
cp config-5.2.0-10889-g201766a20e30f9 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
[btrfs] 8d47a0d8f7: fio.write_bw_MBps -28.6% regression
by kernel test robot
Greetings,
FYI, we noticed a -28.6% regression of fio.write_bw_MBps due to commit:
commit: 8d47a0d8f7947422dd359ac8e462687f81a7a137 ("btrfs: Do mandatory tree block check before submitting bio")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: fio-basic
on test machine: 72 threads Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz with 256G memory
with the following parameters:
disk: 2pmem
fs: btrfs
runtime: 200s
nr_task: 50%
time_based: tb
rw: randwrite
bs: 4k
ioengine: libaio
test_size: 100G
cpufreq_governor: performance
ucode: 0x3d
test-description: Fio is a tool that will spawn a number of threads or processes doing a particular type of I/O action as specified by the user.
test-url: https://github.com/axboe/fio
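For readers without the lkp harness, the parameter list above corresponds roughly to a fio job file like the following sketch (the section name, directory, and numjobs value are illustrative assumptions; the option names are standard fio job-file options):

```ini
; hedged sketch of a fio job matching the parameters above
[randwrite]
directory=/fs/pmem0   ; assumption: one of the 2pmem btrfs mounts
rw=randwrite
bs=4k
ioengine=libaio
size=100G
runtime=200
time_based
numjobs=36            ; assumption: nr_task=50% of the 72 threads
```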
Details are below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/time_based/ucode:
4k/gcc-7/performance/2pmem/btrfs/libaio/x86_64-rhel-7.6/50%/debian-x86_64-2018-04-03.cgz/200s/randwrite/lkp-hsw-ep2/100G/fio-basic/tb/0x3d
commit:
ff2ac107fa ("btrfs: tree-checker: Remove comprehensive root owner check")
8d47a0d8f7 ("btrfs: Do mandatory tree block check before submitting bio")
ff2ac107fae2440b 8d47a0d8f7947422dd359ac8e46
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 dmesg.BUG:unable_to_handle_kernel
:4 25% 1:4 dmesg.Kernel_panic-not_syncing:Fatal_exception_in_interrupt
:4 25% 1:4 dmesg.Oops:#[##]
:4 25% 1:4 dmesg.RIP:cpuidle_enter_state
:4 25% 1:4 dmesg.RIP:native_write_msr
:4 25% 1:4 dmesg.RIP:perf_prepare_sample
1:4 -25% :4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
%stddev %change %stddev
\ | \
0.01 ± 21% +0.3 0.28 ± 20% fio.latency_1000ms%
4.14 ± 13% +1.8 5.95 ± 9% fio.latency_10ms%
8.73 ± 15% +3.0 11.68 ± 8% fio.latency_20ms%
1.91 ± 5% -0.4 1.52 ± 3% fio.latency_250ms%
66.81 -5.9 60.90 ± 2% fio.latency_4ms%
0.18 ± 15% +0.1 0.29 ± 16% fio.latency_750us%
1.017e+08 -28.5% 72701525 fio.time.file_system_outputs
109.75 -26.8% 80.33 ± 2% fio.time.percent_of_cpu_this_job_got
197.97 -26.4% 145.79 ± 2% fio.time.system_time
38.66 ± 4% -24.0% 29.39 ± 3% fio.time.user_time
12388015 -28.5% 8858878 fio.time.voluntary_context_switches
12711059 -28.5% 9083954 fio.workload
247.91 -28.6% 176.96 fio.write_bw_MBps
309248 ± 2% +71.7% 531114 ± 4% fio.write_clat_99%_us
17574 +40.0% 24609 fio.write_clat_mean_us
57989 ± 4% +82.2% 105661 fio.write_clat_stddev
63463 -28.6% 45302 fio.write_iops
564.61 +40.2% 791.54 fio.write_slat_mean_us
9375 ± 7% +96.3% 18406 fio.write_slat_stddev
21.25 +8.5% 23.06 ± 9% boot-time.dhcp
93.32 +1.4% 94.58 iostat.cpu.idle
6.43 -18.6% 5.23 ± 2% iostat.cpu.system
6.48 -1.2 5.27 ± 2% mpstat.cpu.all.sys%
0.24 ± 5% -0.1 0.18 ± 2% mpstat.cpu.all.usr%
93.00 +1.1% 94.00 vmstat.cpu.id
591200 -30.0% 413921 vmstat.io.bo
31653776 ± 2% -10.9% 28196588 ± 2% vmstat.memory.cache
11927341 ± 7% +29.8% 15486805 ± 4% vmstat.memory.free
5.00 -26.7% 3.67 ± 12% vmstat.procs.r
635910 ± 2% -27.6% 460384 vmstat.system.cs
7822384 ± 2% -25.0% 5870498 numa-numastat.node0.local_node
7839861 ± 2% -24.8% 5893785 numa-numastat.node0.numa_hit
224988 ± 47% -93.5% 14675 ± 48% numa-numastat.node0.numa_miss
242465 ± 44% -84.3% 37964 ± 18% numa-numastat.node0.other_node
7762623 ± 12% -26.8% 5681358 ± 6% numa-numastat.node1.local_node
224988 ± 47% -93.5% 14675 ± 48% numa-numastat.node1.numa_foreign
7768618 ± 12% -26.9% 5681573 ± 6% numa-numastat.node1.numa_hit
9.429e+08 ± 10% -35.4% 6.094e+08 cpuidle.C1.time
53504154 ± 6% -31.8% 36504731 cpuidle.C1.usage
1.093e+09 ± 22% -50.3% 5.435e+08 ± 3% cpuidle.C1E.time
16541925 ± 13% -45.0% 9103775 cpuidle.C1E.usage
5.366e+09 ± 36% +71.7% 9.214e+09 ± 18% cpuidle.C3.time
16163664 ± 33% +57.0% 25379738 ± 5% cpuidle.C3.usage
9697055 ± 16% -35.8% 6225168 ± 19% cpuidle.C6.usage
19313595 ± 17% -16.3% 16166123 cpuidle.POLL.time
10291838 ± 9% -26.8% 7537045 ± 7% meminfo.Active
10034782 ± 9% -27.4% 7280710 ± 7% meminfo.Active(file)
31524905 ± 2% -10.9% 28082249 ± 2% meminfo.Cached
115157 ± 4% +61.1% 185560 ± 3% meminfo.CmaFree
6430720 ± 8% +19.1% 7660885 ± 6% meminfo.DirectMap2M
534230 ± 5% -12.8% 465932 ± 6% meminfo.Dirty
11737322 ± 7% +30.2% 15281189 ± 4% meminfo.MemFree
37604344 ± 2% -9.4% 34060483 meminfo.Memused
806.50 ± 37% +137.4% 1914 ± 8% meminfo.Writeback
370.50 ± 11% +36.8% 507.00 slabinfo.bdev_cache.active_objs
370.50 ± 11% +36.8% 507.00 slabinfo.bdev_cache.num_objs
15147 ± 5% -8.3% 13897 ± 2% slabinfo.blkdev_ioc.active_objs
17839 ± 6% -11.7% 15745 ± 2% slabinfo.blkdev_ioc.num_objs
12178 ± 4% -7.4% 11275 ± 2% slabinfo.kmalloc-512.num_objs
7794 ± 3% +25.4% 9774 slabinfo.proc_inode_cache.active_objs
8017 ± 2% +23.4% 9895 slabinfo.proc_inode_cache.num_objs
2366 ± 23% -32.6% 1595 ± 5% slabinfo.task_group.active_objs
2366 ± 23% -32.6% 1595 ± 5% slabinfo.task_group.num_objs
263.75 ± 2% -15.1% 224.00 ± 2% turbostat.Avg_MHz
9.46 ± 2% -1.4 8.07 ± 2% turbostat.Busy%
53502248 ± 6% -31.8% 36503898 turbostat.C1
5.86 ± 8% -2.0 3.82 turbostat.C1%
16540396 ± 13% -45.0% 9102733 turbostat.C1E
6.80 ± 22% -3.4 3.40 ± 4% turbostat.C1E%
16163332 ± 33% +57.0% 25379351 ± 5% turbostat.C3
33.55 ± 36% +24.1 57.61 ± 18% turbostat.C3%
9689445 ± 16% -35.8% 6221033 ± 19% turbostat.C6
18.04 ± 26% +49.1% 26.90 ± 9% turbostat.CPU%c3
53.25 ± 3% +10.8% 59.00 ± 4% turbostat.CoreTmp
4.67 ± 14% +62.7% 7.59 ± 26% turbostat.Pkg%pc2
56.50 +9.7% 62.00 ± 2% turbostat.PkgTmp
11.84 -4.8% 11.28 turbostat.RAMWatt
7232478 ± 10% -37.2% 4543037 ± 22% numa-meminfo.node0.Active
7072564 ± 10% -37.6% 4410135 ± 23% numa-meminfo.node0.Active(file)
308167 ± 6% -24.7% 232152 ± 10% numa-meminfo.node0.Dirty
8784028 ± 13% +21.7% 10686327 ± 2% numa-meminfo.node0.Inactive
8669548 ± 12% +20.7% 10465444 ± 2% numa-meminfo.node0.Inactive(file)
572.50 ± 22% +115.9% 1236 ± 25% numa-meminfo.node0.Writeback
15098140 ± 3% -17.7% 12425670 ± 3% numa-meminfo.node1.FilePages
11599549 ± 7% -22.3% 9018051 ± 4% numa-meminfo.node1.Inactive
105067 ± 92% -99.3% 725.33 ± 51% numa-meminfo.node1.Inactive(anon)
11494482 ± 7% -21.6% 9017325 ± 4% numa-meminfo.node1.Inactive(file)
5504 ± 9% -12.5% 4819 ± 2% numa-meminfo.node1.KernelStack
6709938 ± 11% +45.8% 9781880 ± 5% numa-meminfo.node1.MemFree
18029136 ± 4% -17.0% 14957200 ± 3% numa-meminfo.node1.MemUsed
4427 ± 69% -77.5% 996.00 ± 8% numa-meminfo.node1.PageTables
857660 ± 32% -37.2% 538646 ± 8% numa-meminfo.node1.SUnreclaim
109232 ± 88% -96.1% 4274 ± 10% numa-meminfo.node1.Shmem
986740 ± 35% -40.0% 592452 ± 10% numa-meminfo.node1.Slab
575.21 ± 15% +30.1% 748.58 ± 2% sched_debug.cfs_rq:/.runnable_load_avg.max
100.45 ± 15% +28.6% 129.19 ± 5% sched_debug.cfs_rq:/.runnable_load_avg.stddev
830.19 ± 9% +15.7% 960.25 ± 6% sched_debug.cfs_rq:/.util_avg.max
29.97 ± 16% -25.2% 22.43 ± 31% sched_debug.cfs_rq:/.util_est_enqueued.avg
591.17 ± 13% -22.2% 459.83 ± 18% sched_debug.cfs_rq:/.util_est_enqueued.max
107.52 ± 13% -23.0% 82.84 ± 23% sched_debug.cfs_rq:/.util_est_enqueued.stddev
2.06 ± 5% -9.3% 1.87 sched_debug.cpu.clock.stddev
2.06 ± 5% -9.3% 1.87 sched_debug.cpu.clock_task.stddev
24.51 ± 7% +21.5% 29.77 ± 8% sched_debug.cpu.cpu_load[0].avg
541.29 ± 15% +38.3% 748.58 ± 2% sched_debug.cpu.cpu_load[0].max
93.97 ± 7% +27.8% 120.09 ± 2% sched_debug.cpu.cpu_load[0].stddev
519.08 ± 6% +40.5% 729.42 ± 4% sched_debug.cpu.cpu_load[1].max
87.45 ± 5% +28.5% 112.39 ± 4% sched_debug.cpu.cpu_load[1].stddev
480.83 ± 7% +39.5% 670.92 ± 13% sched_debug.cpu.cpu_load[2].max
80.92 ± 5% +25.7% 101.75 ± 8% sched_debug.cpu.cpu_load[2].stddev
408.83 ± 4% +54.0% 629.58 ± 19% sched_debug.cpu.cpu_load[3].max
71.62 ± 2% +30.7% 93.61 ± 11% sched_debug.cpu.cpu_load[3].stddev
381.50 ± 8% +66.7% 636.00 ± 18% sched_debug.cpu.cpu_load[4].max
64.40 ± 6% +39.7% 89.96 ± 12% sched_debug.cpu.cpu_load[4].stddev
106.75 ± 20% -88.4% 12.33 ± 10% proc-vmstat.kswapd_high_wmark_hit_quickly
2508660 ± 9% -27.5% 1819662 ± 7% proc-vmstat.nr_active_file
33315360 -30.4% 23192301 proc-vmstat.nr_dirtied
133447 ± 5% -12.6% 116580 ± 6% proc-vmstat.nr_dirty
7881043 ± 2% -10.9% 7019533 ± 2% proc-vmstat.nr_file_pages
28790 ± 4% +61.1% 46392 ± 3% proc-vmstat.nr_free_cma
2934503 ± 7% +30.2% 3821369 ± 4% proc-vmstat.nr_free_pages
5044168 -3.4% 4872965 proc-vmstat.nr_inactive_file
430282 -6.0% 404589 proc-vmstat.nr_slab_unreclaimable
197.75 ± 38% +136.5% 467.67 ± 10% proc-vmstat.nr_writeback
33243480 -30.4% 23143756 proc-vmstat.nr_written
2508665 ± 9% -27.5% 1819662 ± 7% proc-vmstat.nr_zone_active_file
5044399 -3.4% 4873026 proc-vmstat.nr_zone_inactive_file
134972 ± 5% -12.4% 118262 ± 5% proc-vmstat.nr_zone_write_pending
15632301 ± 5% -25.8% 11598535 ± 3% proc-vmstat.numa_hit
15608826 ± 5% -25.8% 11575029 ± 3% proc-vmstat.numa_local
2300530 ± 10% -31.1% 1585363 ± 10% proc-vmstat.pgactivate
16955745 -26.3% 12499611 ± 2% proc-vmstat.pgalloc_normal
17488000 -25.9% 12959812 ± 2% proc-vmstat.pgfree
1.33e+08 -30.4% 92575440 proc-vmstat.pgpgout
5728177 ± 6% -78.5% 1232433 ± 25% proc-vmstat.pgscan_kswapd
5727967 ± 6% -78.5% 1232244 ± 25% proc-vmstat.pgsteal_kswapd
32760 -7.6% 30285 proc-vmstat.slabs_scanned
1768211 ± 10% -37.7% 1102275 ± 23% numa-vmstat.node0.nr_active_file
10410188 ± 7% -32.5% 7023939 ± 7% numa-vmstat.node0.nr_dirtied
76995 ± 6% -24.6% 58035 ± 10% numa-vmstat.node0.nr_dirty
2167457 ± 12% +20.7% 2616147 ± 2% numa-vmstat.node0.nr_inactive_file
142.00 ± 24% +120.4% 313.00 ± 24% numa-vmstat.node0.nr_writeback
10308758 ± 7% -32.6% 6949497 ± 7% numa-vmstat.node0.nr_written
1768216 ± 10% -37.7% 1102275 ± 23% numa-vmstat.node0.nr_zone_active_file
2167563 ± 12% +20.7% 2616189 ± 2% numa-vmstat.node0.nr_zone_inactive_file
77786 ± 7% -24.3% 58899 ± 10% numa-vmstat.node0.nr_zone_write_pending
5889240 ± 3% -14.3% 5046874 ± 3% numa-vmstat.node0.numa_hit
5837851 ± 4% -14.0% 5023120 ± 3% numa-vmstat.node0.numa_local
68376 ± 48% -96.9% 2149 ± 46% numa-vmstat.node0.numa_miss
119766 ± 42% -78.4% 25904 ± 3% numa-vmstat.node0.numa_other
30.50 ± 38% -100.0% 0.00 numa-vmstat.node0.workingset_nodes
6829126 ± 6% -22.8% 5269898 ± 10% numa-vmstat.node1.nr_dirtied
3774645 ± 3% -17.7% 3106081 ± 3% numa-vmstat.node1.nr_file_pages
28796 ± 4% +61.1% 46405 ± 3% numa-vmstat.node1.nr_free_cma
1677332 ± 11% +45.8% 2445778 ± 5% numa-vmstat.node1.nr_free_pages
26266 ± 92% -99.3% 180.67 ± 51% numa-vmstat.node1.nr_inactive_anon
2873702 ± 7% -21.6% 2254116 ± 4% numa-vmstat.node1.nr_inactive_file
5506 ± 9% -12.5% 4820 ± 2% numa-vmstat.node1.nr_kernel_stack
29031 ± 85% -88.2% 3436 numa-vmstat.node1.nr_mapped
1106 ± 69% -77.5% 249.00 ± 8% numa-vmstat.node1.nr_page_table_pages
27308 ± 88% -96.1% 1068 ± 10% numa-vmstat.node1.nr_shmem
214418 ± 32% -37.2% 134663 ± 8% numa-vmstat.node1.nr_slab_unreclaimable
75.00 ± 35% +145.3% 184.00 ± 18% numa-vmstat.node1.nr_writeback
6761224 ± 6% -23.1% 5202586 ± 10% numa-vmstat.node1.nr_written
26266 ± 92% -99.3% 180.67 ± 51% numa-vmstat.node1.nr_zone_inactive_anon
2873810 ± 7% -21.6% 2254135 ± 4% numa-vmstat.node1.nr_zone_inactive_file
68381 ± 48% -96.9% 2152 ± 46% numa-vmstat.node1.numa_foreign
5035283 ± 15% -26.9% 3679318 ± 7% numa-vmstat.node1.numa_hit
4905249 ± 15% -28.2% 3522229 ± 8% numa-vmstat.node1.numa_local
18.43 ± 6% -28.3% 13.21 ± 3% perf-stat.i.MPKI
2.205e+09 ± 3% +10.5% 2.437e+09 perf-stat.i.branch-instructions
1.83 ± 5% -0.6 1.24 ± 7% perf-stat.i.branch-miss-rate%
37019551 ± 3% -29.4% 26140844 ± 5% perf-stat.i.branch-misses
28932020 ± 6% -31.1% 19933819 ± 3% perf-stat.i.cache-misses
2.03e+08 ± 4% -24.8% 1.526e+08 perf-stat.i.cache-references
645248 ± 2% -27.4% 468301 perf-stat.i.context-switches
1.97 ± 7% -22.1% 1.53 ± 2% perf-stat.i.cpi
1.983e+10 ± 3% -15.1% 1.684e+10 perf-stat.i.cpu-cycles
1.03 ± 5% -0.4 0.64 ± 10% perf-stat.i.dTLB-load-miss-rate%
31879502 ± 6% -37.9% 19807598 ± 9% perf-stat.i.dTLB-load-misses
3.201e+09 ± 3% +18.8% 3.804e+09 perf-stat.i.dTLB-loads
0.13 ± 7% -0.0 0.09 ± 10% perf-stat.i.dTLB-store-miss-rate%
2516181 ± 10% -28.3% 1805272 ± 10% perf-stat.i.dTLB-store-misses
2.143e+09 ± 3% +19.8% 2.568e+09 ± 2% perf-stat.i.dTLB-stores
13781489 ± 3% -28.4% 9866239 ± 3% perf-stat.i.iTLB-loads
1.164e+10 ± 3% +21.6% 1.415e+10 perf-stat.i.instructions
8435 ± 17% +42.5% 12023 ± 14% perf-stat.i.instructions-per-iTLB-miss
0.57 ± 3% +43.6% 0.82 perf-stat.i.ipc
16050361 ± 9% -30.7% 11127721 ± 3% perf-stat.i.node-load-misses
8120610 ± 9% -31.3% 5580996 perf-stat.i.node-loads
2591649 ± 13% -31.1% 1785932 ± 6% perf-stat.i.node-store-misses
1809775 ± 12% -31.2% 1244360 ± 2% perf-stat.i.node-stores
17.45 ± 4% -38.2% 10.79 perf-stat.overall.MPKI
1.68 ± 3% -0.6 1.07 ± 5% perf-stat.overall.branch-miss-rate%
1.70 ± 3% -30.2% 1.19 perf-stat.overall.cpi
689.69 ± 9% +22.6% 845.48 ± 2% perf-stat.overall.cycles-between-cache-misses
0.99 ± 5% -0.5 0.52 ± 10% perf-stat.overall.dTLB-load-miss-rate%
0.12 ± 8% -0.0 0.07 ± 10% perf-stat.overall.dTLB-store-miss-rate%
10.81 ± 14% +2.3 13.09 ± 10% perf-stat.overall.iTLB-load-miss-rate%
7119 ± 14% +35.9% 9679 ± 13% perf-stat.overall.instructions-per-iTLB-miss
0.59 ± 3% +43.1% 0.84 perf-stat.overall.ipc
202912 ± 2% +68.8% 342615 ± 3% perf-stat.overall.path-length
2.195e+09 ± 3% +10.5% 2.425e+09 perf-stat.ps.branch-instructions
36847010 ± 3% -29.4% 26015686 ± 5% perf-stat.ps.branch-misses
28799092 ± 6% -31.1% 19837306 ± 3% perf-stat.ps.cache-misses
2.021e+08 ± 4% -24.8% 1.519e+08 perf-stat.ps.cache-references
642266 ± 2% -27.4% 465968 perf-stat.ps.context-switches
1.974e+10 ± 3% -15.1% 1.676e+10 perf-stat.ps.cpu-cycles
31729472 ± 6% -37.9% 19715594 ± 9% perf-stat.ps.dTLB-load-misses
3.186e+09 ± 3% +18.8% 3.785e+09 perf-stat.ps.dTLB-loads
2504443 ± 10% -28.2% 1797154 ± 10% perf-stat.ps.dTLB-store-misses
2.133e+09 ± 3% +19.8% 2.556e+09 ± 2% perf-stat.ps.dTLB-stores
13717699 ± 3% -28.4% 9816979 ± 3% perf-stat.ps.iTLB-loads
1.159e+10 ± 3% +21.5% 1.408e+10 perf-stat.ps.instructions
15979371 ± 9% -30.7% 11073468 ± 3% perf-stat.ps.node-load-misses
8081205 ± 9% -31.3% 5554114 perf-stat.ps.node-loads
2580198 ± 13% -31.1% 1777169 ± 6% perf-stat.ps.node-store-misses
1801590 ± 12% -31.3% 1238321 ± 2% perf-stat.ps.node-stores
2.579e+12 +20.6% 3.111e+12 ± 2% perf-stat.total.instructions
36149 ± 4% -26.2% 26686 ± 5% softirqs.CPU0.RCU
37255 ± 3% -26.8% 27256 ± 2% softirqs.CPU1.RCU
37834 ± 5% -26.5% 27801 ± 9% softirqs.CPU10.RCU
39075 ± 3% -24.5% 29494 ± 2% softirqs.CPU11.RCU
38137 ± 6% -23.1% 29340 ± 4% softirqs.CPU12.RCU
37480 ± 3% -20.5% 29797 ± 2% softirqs.CPU13.RCU
37690 ± 5% -28.7% 26861 ± 4% softirqs.CPU14.RCU
33863 ± 9% -29.5% 23885 ± 5% softirqs.CPU15.RCU
34306 ± 9% -31.1% 23641 ± 12% softirqs.CPU16.RCU
34374 ± 9% -29.0% 24402 ± 7% softirqs.CPU17.RCU
44013 ± 9% -31.7% 30059 softirqs.CPU18.RCU
43114 ± 8% -29.9% 30219 softirqs.CPU19.RCU
36276 ± 8% -23.3% 27825 ± 5% softirqs.CPU2.RCU
67049 ± 7% +39.1% 93267 ± 18% softirqs.CPU2.TIMER
39721 ± 13% -31.4% 27233 ± 3% softirqs.CPU20.RCU
42340 ± 5% -28.6% 30215 ± 6% softirqs.CPU21.RCU
40896 ± 7% -30.3% 28506 ± 2% softirqs.CPU22.RCU
42422 ± 7% -33.2% 28357 ± 2% softirqs.CPU23.RCU
42330 ± 9% -33.5% 28150 ± 5% softirqs.CPU24.RCU
41406 ± 9% -31.4% 28398 softirqs.CPU25.RCU
39754 ± 5% -29.4% 28063 softirqs.CPU26.RCU
40207 ± 6% -27.9% 28995 ± 12% softirqs.CPU27.RCU
40351 ± 9% -29.4% 28487 ± 3% softirqs.CPU28.RCU
41880 ± 5% -29.6% 29472 softirqs.CPU29.RCU
36293 ± 4% -17.7% 29864 ± 7% softirqs.CPU3.RCU
39341 ± 9% -34.3% 25847 softirqs.CPU30.RCU
39687 ± 9% -33.5% 26400 ± 2% softirqs.CPU31.RCU
38062 ± 14% -39.9% 22878 ± 10% softirqs.CPU32.RCU
39769 ± 13% -33.8% 26314 ± 5% softirqs.CPU33.RCU
39504 ± 10% -37.2% 24824 ± 6% softirqs.CPU34.RCU
39591 ± 11% -38.6% 24300 ± 5% softirqs.CPU35.RCU
40398 ± 8% -27.0% 29488 ± 12% softirqs.CPU36.RCU
39464 ± 12% -25.3% 29478 ± 10% softirqs.CPU37.RCU
41177 ± 9% -34.0% 27190 ± 19% softirqs.CPU38.RCU
38734 ± 13% -24.8% 29145 ± 5% softirqs.CPU39.RCU
36804 ± 5% -23.0% 28354 softirqs.CPU4.RCU
40421 ± 12% -28.8% 28776 ± 6% softirqs.CPU40.RCU
40371 ± 11% -31.1% 27834 ± 8% softirqs.CPU41.RCU
40633 ± 10% -29.9% 28469 ± 10% softirqs.CPU42.RCU
41018 ± 9% -33.6% 27252 ± 6% softirqs.CPU43.RCU
38181 ± 12% -27.0% 27856 ± 10% softirqs.CPU44.RCU
39605 ± 12% -32.3% 26803 ± 12% softirqs.CPU45.RCU
42686 ± 11% -29.7% 30016 ± 9% softirqs.CPU47.RCU
40433 ± 9% -31.8% 27586 ± 11% softirqs.CPU48.RCU
38371 ± 12% -23.0% 29528 ± 7% softirqs.CPU49.RCU
37172 ± 2% -25.9% 27562 ± 3% softirqs.CPU5.RCU
39660 ± 12% -27.5% 28752 ± 11% softirqs.CPU50.RCU
40375 ± 10% -26.7% 29582 ± 3% softirqs.CPU51.RCU
40568 ± 9% -28.8% 28886 ± 6% softirqs.CPU52.RCU
39967 ± 10% -27.3% 29073 ± 11% softirqs.CPU53.RCU
40874 ± 11% -37.1% 25725 ± 6% softirqs.CPU54.RCU
41227 ± 14% -37.9% 25606 ± 6% softirqs.CPU55.RCU
38856 ± 13% -34.7% 25380 ± 8% softirqs.CPU56.RCU
39618 ± 13% -39.1% 24119 ± 4% softirqs.CPU57.RCU
39412 ± 12% -36.6% 24970 ± 3% softirqs.CPU58.RCU
41572 ± 14% -41.3% 24399 ± 10% softirqs.CPU59.RCU
36927 ± 3% -24.4% 27923 ± 22% softirqs.CPU6.RCU
37057 ± 10% -31.8% 25261 ± 2% softirqs.CPU60.RCU
36584 ± 8% -27.8% 26419 ± 2% softirqs.CPU61.RCU
35859 ± 4% -28.2% 25729 ± 5% softirqs.CPU62.RCU
35849 ± 7% -33.2% 23933 ± 5% softirqs.CPU63.RCU
35379 ± 6% -26.4% 26041 ± 4% softirqs.CPU64.RCU
36908 ± 6% -26.5% 27110 ± 2% softirqs.CPU65.RCU
37595 ± 3% -26.1% 27783 ± 4% softirqs.CPU66.RCU
36327 ± 2% -27.0% 26533 ± 2% softirqs.CPU67.RCU
35136 ± 7% -31.7% 23998 ± 9% softirqs.CPU68.RCU
35638 ± 4% -28.6% 25435 ± 4% softirqs.CPU69.RCU
37268 ± 6% -25.4% 27786 ± 4% softirqs.CPU7.RCU
36693 ± 4% -29.2% 25986 ± 5% softirqs.CPU70.RCU
36859 ± 3% -28.8% 26260 ± 4% softirqs.CPU71.RCU
37539 ± 7% -27.7% 27144 ± 4% softirqs.CPU8.RCU
37584 ± 4% -26.0% 27802 ± 15% softirqs.CPU9.RCU
2795313 ± 7% -29.7% 1964363 ± 4% softirqs.RCU
249.50 ± 76% -50.4% 123.67 ± 2% interrupts.41:PCI-MSI.1572870-edge.eth0-TxRx-6
154.50 ± 13% -10.9% 137.67 ± 10% interrupts.42:PCI-MSI.1572871-edge.eth0-TxRx-7
150.50 ± 43% -28.0% 108.33 interrupts.84:PCI-MSI.1572913-edge.eth0-TxRx-49
10393 ± 10% -26.4% 7653 ± 10% interrupts.CPU1.RES:Rescheduling_interrupts
12518 ± 15% -31.0% 8642 ± 21% interrupts.CPU10.RES:Rescheduling_interrupts
945.25 ± 79% -75.4% 232.67 ±141% interrupts.CPU11.NMI:Non-maskable_interrupts
945.25 ± 79% -75.4% 232.67 ±141% interrupts.CPU11.PMI:Performance_monitoring_interrupts
13068 ± 18% -30.6% 9075 ± 23% interrupts.CPU11.RES:Rescheduling_interrupts
11949 ± 11% -49.8% 6001 ± 29% interrupts.CPU12.RES:Rescheduling_interrupts
13711 ± 8% -19.4% 11052 ± 24% interrupts.CPU13.RES:Rescheduling_interrupts
13361 ± 6% -38.8% 8170 ± 25% interrupts.CPU14.RES:Rescheduling_interrupts
7628 ± 12% -29.2% 5401 ± 7% interrupts.CPU18.RES:Rescheduling_interrupts
7909 ± 25% -30.4% 5501 ± 37% interrupts.CPU19.RES:Rescheduling_interrupts
1482 ± 39% -100.0% 0.00 interrupts.CPU2.NMI:Non-maskable_interrupts
1482 ± 39% -100.0% 0.00 interrupts.CPU2.PMI:Performance_monitoring_interrupts
2116 +23.2% 2606 ± 10% interrupts.CPU20.CAL:Function_call_interrupts
194.75 ±173% +380.1% 935.00 ± 30% interrupts.CPU20.NMI:Non-maskable_interrupts
194.75 ±173% +380.1% 935.00 ± 30% interrupts.CPU20.PMI:Performance_monitoring_interrupts
2096 ± 4% +18.0% 2474 ± 9% interrupts.CPU21.CAL:Function_call_interrupts
11313 ± 20% -56.4% 4935 ± 23% interrupts.CPU21.RES:Rescheduling_interrupts
2111 ± 3% +20.1% 2534 ± 11% interrupts.CPU22.CAL:Function_call_interrupts
2109 ± 4% +22.6% 2585 ± 10% interrupts.CPU23.CAL:Function_call_interrupts
10305 ± 16% -40.7% 6115 ± 17% interrupts.CPU23.RES:Rescheduling_interrupts
2128 ± 2% +17.8% 2507 ± 10% interrupts.CPU24.CAL:Function_call_interrupts
12271 ± 20% -39.0% 7488 ± 29% interrupts.CPU24.RES:Rescheduling_interrupts
2123 +19.7% 2541 ± 7% interrupts.CPU25.CAL:Function_call_interrupts
9780 ± 21% -32.9% 6563 ± 25% interrupts.CPU25.RES:Rescheduling_interrupts
980.75 ± 89% -88.9% 109.00 ±141% interrupts.CPU26.NMI:Non-maskable_interrupts
980.75 ± 89% -88.9% 109.00 ±141% interrupts.CPU26.PMI:Performance_monitoring_interrupts
11163 ± 28% -49.8% 5605 ± 13% interrupts.CPU27.RES:Rescheduling_interrupts
11071 ± 12% -56.3% 4835 ± 26% interrupts.CPU29.RES:Rescheduling_interrupts
728.50 ± 66% -100.0% 0.00 interrupts.CPU3.NMI:Non-maskable_interrupts
728.50 ± 66% -100.0% 0.00 interrupts.CPU3.PMI:Performance_monitoring_interrupts
10599 ± 10% -38.7% 6501 ± 16% interrupts.CPU31.RES:Rescheduling_interrupts
2213 ± 4% +14.2% 2527 ± 7% interrupts.CPU32.CAL:Function_call_interrupts
8862 ± 16% -39.6% 5354 ± 28% interrupts.CPU32.RES:Rescheduling_interrupts
2156 ± 6% +20.5% 2599 ± 9% interrupts.CPU33.CAL:Function_call_interrupts
2169 ± 5% +19.9% 2601 ± 9% interrupts.CPU34.CAL:Function_call_interrupts
10483 ± 18% -48.8% 5371 ± 33% interrupts.CPU34.RES:Rescheduling_interrupts
2049 +24.4% 2549 ± 8% interrupts.CPU35.CAL:Function_call_interrupts
10929 ± 15% -49.0% 5578 ± 14% interrupts.CPU35.RES:Rescheduling_interrupts
1993 ± 4% +28.5% 2561 ± 12% interrupts.CPU36.CAL:Function_call_interrupts
161.75 ±161% +717.9% 1323 ± 43% interrupts.CPU36.NMI:Non-maskable_interrupts
161.75 ±161% +717.9% 1323 ± 43% interrupts.CPU36.PMI:Performance_monitoring_interrupts
1987 ± 4% +30.5% 2594 ± 13% interrupts.CPU37.CAL:Function_call_interrupts
1863 ± 10% +35.7% 2529 ± 14% interrupts.CPU38.CAL:Function_call_interrupts
1984 ± 6% +26.5% 2509 ± 12% interrupts.CPU41.CAL:Function_call_interrupts
12830 ± 30% -26.4% 9445 ± 9% interrupts.CPU43.RES:Rescheduling_interrupts
2067 ± 2% +27.4% 2632 ± 9% interrupts.CPU44.CAL:Function_call_interrupts
2066 ± 3% +27.2% 2629 ± 9% interrupts.CPU45.CAL:Function_call_interrupts
2158 ± 3% +19.6% 2582 ± 9% interrupts.CPU46.CAL:Function_call_interrupts
2036 ± 6% +30.3% 2652 ± 10% interrupts.CPU47.CAL:Function_call_interrupts
150.50 ± 43% -28.0% 108.33 interrupts.CPU49.84:PCI-MSI.1572913-edge.eth0-TxRx-49
11823 ± 12% -36.9% 7460 ± 18% interrupts.CPU49.RES:Rescheduling_interrupts
1139 ± 58% -75.5% 278.67 ±141% interrupts.CPU5.NMI:Non-maskable_interrupts
1139 ± 58% -75.5% 278.67 ±141% interrupts.CPU5.PMI:Performance_monitoring_interrupts
11748 ± 22% -40.0% 7048 ± 27% interrupts.CPU5.RES:Rescheduling_interrupts
2102 ± 5% +19.6% 2515 ± 7% interrupts.CPU52.CAL:Function_call_interrupts
2115 ± 6% +19.5% 2527 ± 7% interrupts.CPU53.CAL:Function_call_interrupts
11318 ± 17% -20.4% 9004 ± 26% interrupts.CPU53.RES:Rescheduling_interrupts
2177 ± 5% +17.4% 2556 ± 9% interrupts.CPU54.CAL:Function_call_interrupts
10968 ± 23% -62.3% 4134 ± 21% interrupts.CPU54.RES:Rescheduling_interrupts
2161 ± 5% +20.2% 2598 ± 9% interrupts.CPU55.CAL:Function_call_interrupts
9334 ± 19% -55.4% 4158 ± 34% interrupts.CPU55.RES:Rescheduling_interrupts
1443 ± 22% -100.0% 0.00 interrupts.CPU56.NMI:Non-maskable_interrupts
1443 ± 22% -100.0% 0.00 interrupts.CPU56.PMI:Performance_monitoring_interrupts
11377 ± 31% -56.9% 4907 ± 43% interrupts.CPU56.RES:Rescheduling_interrupts
601.00 ± 99% -79.6% 122.67 ±141% interrupts.CPU57.NMI:Non-maskable_interrupts
601.00 ± 99% -79.6% 122.67 ±141% interrupts.CPU57.PMI:Performance_monitoring_interrupts
8295 ± 31% -47.7% 4340 ± 24% interrupts.CPU57.RES:Rescheduling_interrupts
9182 ± 37% -59.7% 3701 ± 46% interrupts.CPU58.RES:Rescheduling_interrupts
249.50 ± 76% -50.4% 123.67 ± 2% interrupts.CPU6.41:PCI-MSI.1572870-edge.eth0-TxRx-6
12473 ± 27% -44.4% 6940 ± 13% interrupts.CPU6.RES:Rescheduling_interrupts
7886 ± 12% -42.2% 4555 ± 31% interrupts.CPU60.RES:Rescheduling_interrupts
971.50 ± 47% -86.8% 128.33 ±141% interrupts.CPU61.NMI:Non-maskable_interrupts
971.50 ± 47% -86.8% 128.33 ±141% interrupts.CPU61.PMI:Performance_monitoring_interrupts
9253 ± 26% -56.4% 4033 ± 16% interrupts.CPU61.RES:Rescheduling_interrupts
2103 ± 5% +19.1% 2505 ± 11% interrupts.CPU63.CAL:Function_call_interrupts
2059 ± 2% +20.6% 2483 ± 10% interrupts.CPU64.CAL:Function_call_interrupts
8545 ± 17% -41.2% 5025 ± 8% interrupts.CPU65.RES:Rescheduling_interrupts
7378 ± 23% -51.0% 3614 ± 30% interrupts.CPU68.RES:Rescheduling_interrupts
8573 ± 28% -50.1% 4277 ± 6% interrupts.CPU69.RES:Rescheduling_interrupts
154.50 ± 13% -10.9% 137.67 ± 10% interrupts.CPU7.42:PCI-MSI.1572871-edge.eth0-TxRx-7
9910 ± 28% -51.2% 4839 ± 40% interrupts.CPU70.RES:Rescheduling_interrupts
8836 ± 26% -52.7% 4180 ± 6% interrupts.CPU71.RES:Rescheduling_interrupts
911.50 ± 37% -93.2% 61.67 ±141% interrupts.CPU8.NMI:Non-maskable_interrupts
911.50 ± 37% -93.2% 61.67 ±141% interrupts.CPU8.PMI:Performance_monitoring_interrupts
12326 ± 15% -32.3% 8350 ± 10% interrupts.CPU8.RES:Rescheduling_interrupts
1392 ± 58% -100.0% 0.00 interrupts.CPU9.NMI:Non-maskable_interrupts
1392 ± 58% -100.0% 0.00 interrupts.CPU9.PMI:Performance_monitoring_interrupts
60185 ± 19% -30.5% 41822 ± 11% interrupts.NMI:Non-maskable_interrupts
60185 ± 19% -30.5% 41822 ± 11% interrupts.PMI:Performance_monitoring_interrupts
749419 -31.9% 510225 ± 3% interrupts.RES:Rescheduling_interrupts
15.05 ± 6% -4.1 10.97 ± 20% perf-profile.calltrace.cycles-pp.normal_work_helper.process_one_work.worker_thread.kthread.ret_from_fork
12.66 ± 5% -4.0 8.68 ± 20% perf-profile.calltrace.cycles-pp.btrfs_finish_ordered_io.normal_work_helper.process_one_work.worker_thread.kthread
5.99 ± 8% -1.6 4.37 ± 6% perf-profile.calltrace.cycles-pp.io_submit
5.83 ± 8% -1.6 4.25 ± 6% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.io_submit
5.82 ± 8% -1.6 4.24 ± 6% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.io_submit
5.81 ± 8% -1.6 4.23 ± 6% perf-profile.calltrace.cycles-pp.__x64_sys_io_submit.do_syscall_64.entry_SYSCALL_64_after_hwframe.io_submit
5.76 ± 8% -1.6 4.19 ± 6% perf-profile.calltrace.cycles-pp.io_submit_one.__x64_sys_io_submit.do_syscall_64.entry_SYSCALL_64_after_hwframe.io_submit
5.65 ± 6% -1.6 4.08 ± 20% perf-profile.calltrace.cycles-pp.btrfs_mark_extent_written.btrfs_finish_ordered_io.normal_work_helper.process_one_work.worker_thread
4.52 ± 3% -1.5 2.99 ± 19% perf-profile.calltrace.cycles-pp.add_pending_csums.btrfs_finish_ordered_io.normal_work_helper.process_one_work.worker_thread
6.90 ± 8% -1.5 5.38 ± 10% perf-profile.calltrace.cycles-pp.extent_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes.wb_writeback
6.90 ± 8% -1.5 5.38 ± 10% perf-profile.calltrace.cycles-pp.extent_write_cache_pages.extent_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes
5.58 ± 8% -1.5 4.07 ± 6% perf-profile.calltrace.cycles-pp.aio_write.io_submit_one.__x64_sys_io_submit.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.50 ± 3% -1.5 2.99 ± 19% perf-profile.calltrace.cycles-pp.btrfs_csum_file_blocks.add_pending_csums.btrfs_finish_ordered_io.normal_work_helper.process_one_work
5.44 ± 8% -1.5 3.96 ± 7% perf-profile.calltrace.cycles-pp.btrfs_file_write_iter.aio_write.io_submit_one.__x64_sys_io_submit.do_syscall_64
6.63 ± 8% -1.5 5.17 ± 10% perf-profile.calltrace.cycles-pp.__extent_writepage.extent_write_cache_pages.extent_writepages.do_writepages.__writeback_single_inode
5.34 ± 9% -1.4 3.90 ± 6% perf-profile.calltrace.cycles-pp.btrfs_buffered_write.btrfs_file_write_iter.aio_write.io_submit_one.__x64_sys_io_submit
5.55 ± 8% -1.2 4.32 ± 10% perf-profile.calltrace.cycles-pp.writepage_delalloc.__extent_writepage.extent_write_cache_pages.extent_writepages.do_writepages
4.98 ± 8% -1.1 3.86 ± 10% perf-profile.calltrace.cycles-pp.run_delalloc_nocow.btrfs_run_delalloc_range.writepage_delalloc.__extent_writepage.extent_write_cache_pages
5.00 ± 8% -1.1 3.88 ± 10% perf-profile.calltrace.cycles-pp.btrfs_run_delalloc_range.writepage_delalloc.__extent_writepage.extent_write_cache_pages.extent_writepages
2.80 ± 4% -1.0 1.78 ± 22% perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io.normal_work_helper.process_one_work
1.45 ± 3% -0.7 0.79 ± 18% perf-profile.calltrace.cycles-pp.__btrfs_cow_block.btrfs_cow_block.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io
1.45 ± 3% -0.7 0.79 ± 18% perf-profile.calltrace.cycles-pp.btrfs_cow_block.btrfs_search_slot.btrfs_mark_extent_written.btrfs_finish_ordered_io.normal_work_helper
2.32 ± 3% -0.6 1.72 ± 18% perf-profile.calltrace.cycles-pp.btrfs_insert_empty_items.btrfs_csum_file_blocks.add_pending_csums.btrfs_finish_ordered_io.normal_work_helper
0.91 ± 8% -0.5 0.42 ± 73% perf-profile.calltrace.cycles-pp.btrfs_lookup_csum.btrfs_csum_file_blocks.add_pending_csums.btrfs_finish_ordered_io.normal_work_helper
0.88 ± 8% -0.5 0.41 ± 74% perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_lookup_csum.btrfs_csum_file_blocks.add_pending_csums.btrfs_finish_ordered_io
0.83 ± 5% -0.4 0.41 ± 71% perf-profile.calltrace.cycles-pp.split_leaf.btrfs_search_slot.btrfs_insert_empty_items.btrfs_csum_file_blocks.add_pending_csums
1.90 ± 20% -0.4 1.48 ± 29% perf-profile.calltrace.cycles-pp.btrfs_run_delayed_refs.flush_space.btrfs_async_reclaim_metadata_space.process_one_work.worker_thread
1.89 ± 20% -0.4 1.48 ± 29% perf-profile.calltrace.cycles-pp.__btrfs_run_delayed_refs.btrfs_run_delayed_refs.flush_space.btrfs_async_reclaim_metadata_space.process_one_work
0.82 ± 4% -0.4 0.42 ± 71% perf-profile.calltrace.cycles-pp.run_one_async_done.normal_work_helper.process_one_work.worker_thread.kthread
0.81 ± 4% -0.4 0.41 ± 71% perf-profile.calltrace.cycles-pp.btrfs_map_bio.run_one_async_done.normal_work_helper.process_one_work.worker_thread
0.74 ± 6% -0.4 0.36 ± 70% perf-profile.calltrace.cycles-pp.push_leaf_right.split_leaf.btrfs_search_slot.btrfs_insert_empty_items.btrfs_csum_file_blocks
1.19 ± 3% -0.4 0.83 ± 16% perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_insert_empty_items.btrfs_csum_file_blocks.add_pending_csums.btrfs_finish_ordered_io
1.00 ± 6% -0.4 0.65 ± 7% perf-profile.calltrace.cycles-pp.lock_and_cleanup_extent_if_need.btrfs_buffered_write.btrfs_file_write_iter.aio_write.io_submit_one
1.27 ± 5% -0.4 0.92 ± 7% perf-profile.calltrace.cycles-pp.btrfs_dirty_pages.btrfs_buffered_write.btrfs_file_write_iter.aio_write.io_submit_one
1.25 ± 6% -0.3 0.90 ± 7% perf-profile.calltrace.cycles-pp.create_io_em.run_delalloc_nocow.btrfs_run_delalloc_range.writepage_delalloc.__extent_writepage
0.93 ± 8% -0.3 0.59 ± 7% perf-profile.calltrace.cycles-pp.__set_extent_bit.lock_extent_bits.lock_and_cleanup_extent_if_need.btrfs_buffered_write.btrfs_file_write_iter
0.94 ± 7% -0.3 0.60 ± 6% perf-profile.calltrace.cycles-pp.lock_extent_bits.lock_and_cleanup_extent_if_need.btrfs_buffered_write.btrfs_file_write_iter.aio_write
0.70 ± 8% -0.3 0.37 ± 70% perf-profile.calltrace.cycles-pp.__lookup_extent_mapping.btrfs_get_extent.btrfs_dirty_pages.btrfs_buffered_write.btrfs_file_write_iter
0.79 ± 6% -0.3 0.47 ± 71% perf-profile.calltrace.cycles-pp.setup_items_for_insert.btrfs_duplicate_item.btrfs_mark_extent_written.btrfs_finish_ordered_io.normal_work_helper
0.71 ± 21% -0.3 0.43 ± 74% perf-profile.calltrace.cycles-pp.__btrfs_free_extent.__btrfs_run_delayed_refs.btrfs_run_delayed_refs.flush_space.btrfs_async_reclaim_metadata_space
1.15 ± 11% -0.3 0.88 ± 9% perf-profile.calltrace.cycles-pp.btrfs_lookup_csums_range.csum_exist_in_range.run_delalloc_nocow.btrfs_run_delalloc_range.writepage_delalloc
1.15 ± 11% -0.3 0.88 ± 8% perf-profile.calltrace.cycles-pp.csum_exist_in_range.run_delalloc_nocow.btrfs_run_delalloc_range.writepage_delalloc.__extent_writepage
1.43 ± 13% -0.3 1.17 ± 5% perf-profile.calltrace.cycles-pp.btrfs_delalloc_reserve_metadata.btrfs_buffered_write.btrfs_file_write_iter.aio_write.io_submit_one
1.06 ± 5% -0.3 0.81 ± 7% perf-profile.calltrace.cycles-pp.btrfs_drop_extent_cache.create_io_em.run_delalloc_nocow.btrfs_run_delalloc_range.writepage_delalloc
1.30 ± 15% -0.2 1.06 ± 4% perf-profile.calltrace.cycles-pp.reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.btrfs_buffered_write.btrfs_file_write_iter.aio_write
1.03 ± 13% -0.2 0.79 ± 9% perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_lookup_csums_range.csum_exist_in_range.run_delalloc_nocow.btrfs_run_delalloc_range
1.20 ± 4% -0.2 0.96 ± 15% perf-profile.calltrace.cycles-pp.__schedule.schedule_idle.do_idle.cpu_startup_entry.start_secondary
1.24 ± 3% -0.2 1.01 ± 15% perf-profile.calltrace.cycles-pp.schedule_idle.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
1.05 ± 8% -0.2 0.82 ± 12% perf-profile.calltrace.cycles-pp.__extent_writepage_io.__extent_writepage.extent_write_cache_pages.extent_writepages.do_writepages
0.88 ± 13% -0.2 0.71 ± 2% perf-profile.calltrace.cycles-pp.wait_reserve_ticket.reserve_metadata_bytes.btrfs_delalloc_reserve_metadata.btrfs_buffered_write.btrfs_file_write_iter
1.01 ± 9% -0.2 0.84 ± 9% perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_lookup_file_extent.run_delalloc_nocow.btrfs_run_delalloc_range.writepage_delalloc
1.01 ± 10% -0.2 0.85 ± 9% perf-profile.calltrace.cycles-pp.btrfs_lookup_file_extent.run_delalloc_nocow.btrfs_run_delalloc_range.writepage_delalloc.__extent_writepage
0.72 ± 7% -0.2 0.56 ± 6% perf-profile.calltrace.cycles-pp.btrfs_get_extent.btrfs_dirty_pages.btrfs_buffered_write.btrfs_file_write_iter.aio_write
0.75 ± 8% -0.1 0.60 ± 10% perf-profile.calltrace.cycles-pp.submit_extent_page.__extent_writepage_io.__extent_writepage.extent_write_cache_pages.extent_writepages
0.92 ± 15% +0.2 1.14 ± 3% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
1.01 ± 15% +0.2 1.25 ± 4% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
1.15 ± 15% +0.3 1.44 ± 6% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
0.27 ±100% +0.4 0.63 ± 4% perf-profile.calltrace.cycles-pp.rebalance_domains.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
1.74 ± 9% +0.4 2.11 ± 2% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
2.63 ± 10% +0.6 3.26 perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
0.00 +0.8 0.75 ± 18% perf-profile.calltrace.cycles-pp.map_private_extent_buffer.btrfs_get_token_64.check_leaf.btree_csum_one_bio.btree_submit_bio_hook
2.37 ± 19% +0.8 3.19 ± 5% perf-profile.calltrace.cycles-pp.btrfs_async_reclaim_metadata_space.process_one_work.worker_thread.kthread.ret_from_fork
2.33 ± 19% +0.8 3.17 ± 5% perf-profile.calltrace.cycles-pp.flush_space.btrfs_async_reclaim_metadata_space.process_one_work.worker_thread.kthread
0.00 +0.9 0.90 ± 18% perf-profile.calltrace.cycles-pp.map_private_extent_buffer.btrfs_get_token_32.check_leaf.btree_csum_one_bio.btree_submit_bio_hook
4.71 ± 11% +1.0 5.74 perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry
5.04 ± 10% +1.2 6.19 perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
0.00 +1.3 1.30 ± 19% perf-profile.calltrace.cycles-pp.submit_extent_page.write_one_eb.btree_write_cache_pages.do_writepages.__filemap_fdatawrite_range
0.00 +1.4 1.36 ± 19% perf-profile.calltrace.cycles-pp.write_one_eb.btree_write_cache_pages.do_writepages.__filemap_fdatawrite_range.btrfs_write_marked_extents
0.29 ±101% +1.4 1.67 ± 18% perf-profile.calltrace.cycles-pp.btrfs_commit_transaction.flush_space.btrfs_async_reclaim_metadata_space.process_one_work.worker_thread
0.00 +1.4 1.39 ± 19% perf-profile.calltrace.cycles-pp.do_writepages.__filemap_fdatawrite_range.btrfs_write_marked_extents.btrfs_write_and_wait_transaction.btrfs_commit_transaction
0.00 +1.4 1.39 ± 19% perf-profile.calltrace.cycles-pp.btree_write_cache_pages.do_writepages.__filemap_fdatawrite_range.btrfs_write_marked_extents.btrfs_write_and_wait_transaction
0.00 +1.4 1.39 ± 19% perf-profile.calltrace.cycles-pp.__filemap_fdatawrite_range.btrfs_write_marked_extents.btrfs_write_and_wait_transaction.btrfs_commit_transaction.flush_space
0.00 +1.4 1.39 ± 19% perf-profile.calltrace.cycles-pp.btrfs_write_and_wait_transaction.btrfs_commit_transaction.flush_space.btrfs_async_reclaim_metadata_space.process_one_work
0.00 +1.4 1.39 ± 19% perf-profile.calltrace.cycles-pp.btrfs_write_marked_extents.btrfs_write_and_wait_transaction.btrfs_commit_transaction.flush_space.btrfs_async_reclaim_metadata_space
0.00 +1.5 1.52 ± 18% perf-profile.calltrace.cycles-pp.btrfs_get_token_64.check_leaf.btree_csum_one_bio.btree_submit_bio_hook.submit_one_bio
0.00 +2.2 2.20 ± 14% perf-profile.calltrace.cycles-pp.btrfs_get_token_32.check_leaf.btree_csum_one_bio.btree_submit_bio_hook.submit_one_bio
8.37 ± 7% +3.3 11.65 ± 16% perf-profile.calltrace.cycles-pp.wb_writeback.wb_workfn.process_one_work.worker_thread.kthread
8.37 ± 7% +3.3 11.65 ± 16% perf-profile.calltrace.cycles-pp.wb_workfn.process_one_work.worker_thread.kthread.ret_from_fork
7.97 ± 7% +3.5 11.44 ± 15% perf-profile.calltrace.cycles-pp.do_writepages.__writeback_single_inode.writeback_sb_inodes.wb_writeback.wb_workfn
7.97 ± 7% +3.5 11.44 ± 15% perf-profile.calltrace.cycles-pp.writeback_sb_inodes.wb_writeback.wb_workfn.process_one_work.worker_thread
7.97 ± 7% +3.5 11.44 ± 15% perf-profile.calltrace.cycles-pp.__writeback_single_inode.writeback_sb_inodes.wb_writeback.wb_workfn.process_one_work
49.45 ± 2% +4.3 53.79 ± 4% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
61.98 ± 2% +4.4 66.38 ± 3% perf-profile.calltrace.cycles-pp.secondary_startup_64
1.07 ± 18% +5.0 6.06 ± 20% perf-profile.calltrace.cycles-pp.btree_write_cache_pages.do_writepages.__writeback_single_inode.writeback_sb_inodes.wb_writeback
0.93 ± 18% +5.0 5.95 ± 20% perf-profile.calltrace.cycles-pp.write_one_eb.btree_write_cache_pages.do_writepages.__writeback_single_inode.writeback_sb_inodes
61.26 ± 2% +5.1 66.36 ± 3% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
61.27 ± 2% +5.1 66.37 ± 3% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
61.21 ± 2% +5.1 66.32 ± 3% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.33 ±100% +5.3 5.68 ± 20% perf-profile.calltrace.cycles-pp.submit_extent_page.write_one_eb.btree_write_cache_pages.do_writepages.__writeback_single_inode
55.45 ± 2% +5.4 60.88 ± 3% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.00 +6.4 6.39 ± 13% perf-profile.calltrace.cycles-pp.check_leaf.btree_csum_one_bio.btree_submit_bio_hook.submit_one_bio.submit_extent_page
0.00 +6.5 6.50 ± 13% perf-profile.calltrace.cycles-pp.btree_csum_one_bio.btree_submit_bio_hook.submit_one_bio.submit_extent_page.write_one_eb
0.29 ±100% +6.6 6.92 ± 13% perf-profile.calltrace.cycles-pp.submit_one_bio.submit_extent_page.write_one_eb.btree_write_cache_pages.do_writepages
0.29 ±100% +6.6 6.92 ± 13% perf-profile.calltrace.cycles-pp.btree_submit_bio_hook.submit_one_bio.submit_extent_page.write_one_eb.btree_write_cache_pages
15.05 ± 6% -4.1 10.97 ± 20% perf-profile.children.cycles-pp.normal_work_helper
12.66 ± 5% -4.0 8.68 ± 20% perf-profile.children.cycles-pp.btrfs_finish_ordered_io
7.84 ± 5% -2.4 5.39 ± 10% perf-profile.children.cycles-pp.btrfs_search_slot
7.30 ± 7% -1.7 5.59 ± 11% perf-profile.children.cycles-pp.extent_writepages
7.30 ± 7% -1.7 5.59 ± 11% perf-profile.children.cycles-pp.extent_write_cache_pages
7.03 ± 7% -1.6 5.38 ± 12% perf-profile.children.cycles-pp.__extent_writepage
6.19 ± 8% -1.6 4.56 ± 7% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
6.17 ± 8% -1.6 4.55 ± 7% perf-profile.children.cycles-pp.do_syscall_64
6.01 ± 8% -1.6 4.39 ± 7% perf-profile.children.cycles-pp.io_submit
5.81 ± 8% -1.6 4.23 ± 6% perf-profile.children.cycles-pp.__x64_sys_io_submit
5.76 ± 8% -1.6 4.19 ± 6% perf-profile.children.cycles-pp.io_submit_one
5.66 ± 5% -1.6 4.09 ± 20% perf-profile.children.cycles-pp.btrfs_mark_extent_written
4.52 ± 3% -1.5 2.99 ± 19% perf-profile.children.cycles-pp.add_pending_csums
5.59 ± 8% -1.5 4.07 ± 6% perf-profile.children.cycles-pp.aio_write
4.51 ± 3% -1.5 2.99 ± 19% perf-profile.children.cycles-pp.btrfs_csum_file_blocks
5.44 ± 8% -1.5 3.96 ± 7% perf-profile.children.cycles-pp.btrfs_file_write_iter
5.34 ± 9% -1.4 3.90 ± 6% perf-profile.children.cycles-pp.btrfs_buffered_write
5.88 ± 7% -1.4 4.47 ± 11% perf-profile.children.cycles-pp.writepage_delalloc
5.30 ± 8% -1.3 4.01 ± 11% perf-profile.children.cycles-pp.btrfs_run_delalloc_range
5.28 ± 8% -1.3 3.99 ± 11% perf-profile.children.cycles-pp.run_delalloc_nocow
2.55 ± 2% -0.9 1.67 ± 16% perf-profile.children.cycles-pp.btrfs_cow_block
2.54 ± 2% -0.9 1.66 ± 16% perf-profile.children.cycles-pp.__btrfs_cow_block
2.15 ± 13% -0.8 1.31 ± 8% perf-profile.children.cycles-pp.__lookup_extent_mapping
1.87 ± 12% -0.8 1.04 ± 12% perf-profile.children.cycles-pp.__etree_search
0.94 ± 51% -0.7 0.22 ± 34% perf-profile.children.cycles-pp.printk
0.94 ± 51% -0.7 0.22 ± 34% perf-profile.children.cycles-pp.vprintk_emit
2.60 ± 2% -0.7 1.92 ± 14% perf-profile.children.cycles-pp.btrfs_insert_empty_items
3.00 ± 5% -0.7 2.33 ± 6% perf-profile.children.cycles-pp._raw_spin_lock
1.89 ± 2% -0.7 1.23 ± 7% perf-profile.children.cycles-pp.__set_extent_bit
2.78 ± 4% -0.6 2.14 ± 14% perf-profile.children.cycles-pp.__schedule
1.57 ± 2% -0.5 1.02 ± 9% perf-profile.children.cycles-pp.lock_extent_bits
1.19 ± 14% -0.5 0.70 ± 11% perf-profile.children.cycles-pp.__clear_extent_bit
1.51 ± 7% -0.5 1.04 ± 9% perf-profile.children.cycles-pp.read_block_for_search
2.08 ± 5% -0.5 1.62 ± 18% perf-profile.children.cycles-pp.setup_items_for_insert
2.08 ± 17% -0.4 1.67 ± 24% perf-profile.children.cycles-pp.btrfs_run_delayed_refs
2.08 ± 17% -0.4 1.67 ± 24% perf-profile.children.cycles-pp.__btrfs_run_delayed_refs
1.60 ± 6% -0.4 1.20 ± 12% perf-profile.children.cycles-pp.schedule
1.95 ± 11% -0.4 1.58 ± 12% perf-profile.children.cycles-pp.try_to_wake_up
1.31 ± 6% -0.4 0.94 ± 9% perf-profile.children.cycles-pp.create_io_em
0.99 ± 4% -0.4 0.63 ± 13% perf-profile.children.cycles-pp.alloc_tree_block_no_bg_flush
1.00 ± 6% -0.4 0.65 ± 7% perf-profile.children.cycles-pp.lock_and_cleanup_extent_if_need
1.27 ± 9% -0.4 0.92 ± 11% perf-profile.children.cycles-pp.__wake_up_common_lock
0.98 ± 3% -0.4 0.63 ± 13% perf-profile.children.cycles-pp.btrfs_alloc_tree_block
1.27 ± 5% -0.4 0.92 ± 7% perf-profile.children.cycles-pp.btrfs_dirty_pages
1.34 ± 7% -0.3 0.99 ± 10% perf-profile.children.cycles-pp.btrfs_map_bio
1.11 ± 9% -0.3 0.77 ± 11% perf-profile.children.cycles-pp.generic_bin_search
1.16 ± 6% -0.3 0.82 ± 9% perf-profile.children.cycles-pp.find_extent_buffer
0.91 ± 7% -0.3 0.57 ± 21% perf-profile.children.cycles-pp.btrfs_lookup_csum
1.21 ± 11% -0.3 0.90 ± 10% perf-profile.children.cycles-pp.btrfs_lookup_csums_range
1.21 ± 11% -0.3 0.91 ± 9% perf-profile.children.cycles-pp.csum_exist_in_range
1.10 ± 10% -0.3 0.80 ± 9% perf-profile.children.cycles-pp.__wake_up_common
0.37 ± 70% -0.3 0.07 ± 18% perf-profile.children.cycles-pp.test_range_bit
1.61 ± 4% -0.3 1.32 ± 15% perf-profile.children.cycles-pp.push_leaf_right
0.60 ± 4% -0.3 0.31 ± 25% perf-profile.children.cycles-pp.btrfs_extend_item
1.02 ± 9% -0.3 0.73 ± 10% perf-profile.children.cycles-pp.autoremove_wake_function
1.12 ± 4% -0.3 0.84 ± 9% perf-profile.children.cycles-pp.btrfs_drop_extent_cache
0.82 ± 6% -0.3 0.55 ± 4% perf-profile.children.cycles-pp.btrfs_release_path
0.85 ± 8% -0.3 0.59 ± 17% perf-profile.children.cycles-pp.copy_extent_buffer_full
0.99 ± 4% -0.3 0.73 ± 13% perf-profile.children.cycles-pp.btrfs_set_token_32
1.44 ± 13% -0.3 1.18 ± 5% perf-profile.children.cycles-pp.btrfs_delalloc_reserve_metadata
0.84 ± 8% -0.3 0.59 ± 17% perf-profile.children.cycles-pp.copy_page
0.87 ± 11% -0.3 0.62 ± 11% perf-profile.children.cycles-pp.pagecache_get_page
0.84 ± 7% -0.2 0.60 ± 11% perf-profile.children.cycles-pp.kmem_cache_alloc
1.12 ± 7% -0.2 0.88 ± 17% perf-profile.children.cycles-pp.__extent_writepage_io
1.30 ± 15% -0.2 1.06 ± 5% perf-profile.children.cycles-pp.reserve_metadata_bytes
0.82 ± 4% -0.2 0.58 ± 13% perf-profile.children.cycles-pp.run_one_async_done
1.25 ± 3% -0.2 1.01 ± 15% perf-profile.children.cycles-pp.schedule_idle
0.34 ± 53% -0.2 0.10 ± 28% perf-profile.children.cycles-pp.delay_tsc
0.70 ± 13% -0.2 0.47 ± 11% perf-profile.children.cycles-pp.prepare_pages
0.50 ± 22% -0.2 0.28 ± 13% perf-profile.children.cycles-pp.clear_state_bit
0.84 ± 5% -0.2 0.63 ± 6% perf-profile.children.cycles-pp.btrfs_get_extent
0.54 ± 11% -0.2 0.34 ± 4% perf-profile.children.cycles-pp.btrfs_free_path
1.16 ± 11% -0.2 0.96 ± 19% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.53 ± 9% -0.2 0.33 ± 18% perf-profile.children.cycles-pp.unpin_extent_cache
1.07 ± 10% -0.2 0.88 ± 10% perf-profile.children.cycles-pp.btrfs_lookup_file_extent
0.50 ± 4% -0.2 0.31 ± 15% perf-profile.children.cycles-pp.clear_extent_bit
0.45 ± 13% -0.2 0.26 ± 11% perf-profile.children.cycles-pp.kmem_cache_free
0.40 ± 14% -0.2 0.21 ± 50% perf-profile.children.cycles-pp.__writeback_inodes_wb
0.64 ± 7% -0.2 0.45 ± 10% perf-profile.children.cycles-pp.dequeue_task_fair
0.50 ± 8% -0.2 0.33 ± 23% perf-profile.children.cycles-pp.btrfs_add_delayed_tree_ref
0.88 ± 13% -0.2 0.71 ± 2% perf-profile.children.cycles-pp.wait_reserve_ticket
0.52 ± 9% -0.2 0.35 ± 7% perf-profile.children.cycles-pp.__radix_tree_lookup
0.52 ± 8% -0.2 0.36 ± 8% perf-profile.children.cycles-pp.xas_load
0.78 ± 19% -0.2 0.62 ± 23% perf-profile.children.cycles-pp.__btrfs_free_extent
0.56 ± 17% -0.1 0.42 ± 15% perf-profile.children.cycles-pp.btrfs_del_items
0.51 ± 5% -0.1 0.36 ± 9% perf-profile.children.cycles-pp.dequeue_entity
0.39 ± 7% -0.1 0.24 ± 6% perf-profile.children.cycles-pp.__set_page_dirty_nobuffers
0.82 ± 14% -0.1 0.68 ± 18% perf-profile.children.cycles-pp.ttwu_do_activate
0.55 ± 8% -0.1 0.41 ± 8% perf-profile.children.cycles-pp.btrfs_submit_bio_hook
0.53 ± 5% -0.1 0.40 ± 13% perf-profile.children.cycles-pp.update_load_avg
0.40 ± 8% -0.1 0.27 ± 27% perf-profile.children.cycles-pp.add_delayed_ref_head
0.35 ± 13% -0.1 0.22 ± 3% perf-profile.children.cycles-pp.set_extent_bit
0.52 ± 9% -0.1 0.39 ± 7% perf-profile.children.cycles-pp.btrfs_wq_submit_bio
0.35 ± 3% -0.1 0.23 ± 32% perf-profile.children.cycles-pp.queued_write_lock_slowpath
0.33 ± 6% -0.1 0.21 ± 15% perf-profile.children.cycles-pp.__slab_alloc
0.63 ± 19% -0.1 0.51 ± 13% perf-profile.children.cycles-pp.sched_ttwu_pending
0.31 ± 11% -0.1 0.19 ± 28% perf-profile.children.cycles-pp.btrfs_update_inode
0.31 ± 11% -0.1 0.19 ± 28% perf-profile.children.cycles-pp.btrfs_update_inode_fallback
0.57 ± 5% -0.1 0.46 ± 10% perf-profile.children.cycles-pp.find_lock_delalloc_range
0.45 ± 5% -0.1 0.34 ± 9% perf-profile.children.cycles-pp.btrfs_cross_ref_exist
0.29 ± 3% -0.1 0.18 ± 7% perf-profile.children.cycles-pp.free_extent_buffer
0.38 ± 3% -0.1 0.26 ± 8% perf-profile.children.cycles-pp.btrfs_mark_buffer_dirty
0.46 ± 5% -0.1 0.35 ± 13% perf-profile.children.cycles-pp.find_delalloc_range
0.41 ± 9% -0.1 0.30 ± 2% perf-profile.children.cycles-pp.btrfs_copy_from_user
0.40 ± 9% -0.1 0.29 ± 2% perf-profile.children.cycles-pp.iov_iter_copy_from_user_atomic
0.38 ± 10% -0.1 0.27 perf-profile.children.cycles-pp.find_get_entry
0.31 ± 12% -0.1 0.20 ± 31% perf-profile.children.cycles-pp.btrfs_remove_ordered_extent
0.30 ± 6% -0.1 0.20 ± 14% perf-profile.children.cycles-pp.___slab_alloc
0.26 ± 8% -0.1 0.16 ± 7% perf-profile.children.cycles-pp.merge_state
0.38 ± 7% -0.1 0.28 ± 3% perf-profile.children.cycles-pp.copyin
0.26 ± 17% -0.1 0.15 ± 26% perf-profile.children.cycles-pp.btrfs_delayed_update_inode
0.25 ± 4% -0.1 0.15 ± 5% perf-profile.children.cycles-pp.insert_state
0.40 ± 13% -0.1 0.30 ± 13% perf-profile.children.cycles-pp.extent_clear_unlock_delalloc
0.36 ± 9% -0.1 0.26 ± 3% perf-profile.children.cycles-pp.alloc_extent_state
0.38 ± 7% -0.1 0.29 ± 3% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
0.39 ± 3% -0.1 0.29 ± 14% perf-profile.children.cycles-pp.alloc_extent_buffer
0.31 ± 4% -0.1 0.22 ± 11% perf-profile.children.cycles-pp.set_extent_buffer_dirty
0.26 ± 11% -0.1 0.17 ± 23% perf-profile.children.cycles-pp.unlock_up
0.32 ± 3% -0.1 0.24 ± 13% perf-profile.children.cycles-pp.__btrfs_map_block
0.23 ± 13% -0.1 0.15 ± 28% perf-profile.children.cycles-pp.btrfs_add_delayed_data_ref
0.25 ± 12% -0.1 0.17 ± 2% perf-profile.children.cycles-pp.__slab_free
0.47 ± 6% -0.1 0.39 ± 15% perf-profile.children.cycles-pp.__push_leaf_right
0.26 ± 9% -0.1 0.18 ± 19% perf-profile.children.cycles-pp.__test_set_page_writeback
0.32 ± 11% -0.1 0.24 ± 27% perf-profile.children.cycles-pp.btrfs_add_ordered_extent
0.15 ± 8% -0.1 0.07 ± 6% perf-profile.children.cycles-pp.__xa_set_mark
0.23 ± 7% -0.1 0.15 ± 3% perf-profile.children.cycles-pp.release_extent_buffer
0.18 ± 8% -0.1 0.11 ± 26% perf-profile.children.cycles-pp.alloc_extent_map
0.32 ± 10% -0.1 0.24 ± 27% perf-profile.children.cycles-pp.__btrfs_add_ordered_extent
0.32 ± 17% -0.1 0.24 ± 10% perf-profile.children.cycles-pp.btrfs_tree_read_lock_atomic
0.20 ± 13% -0.1 0.13 ± 27% perf-profile.children.cycles-pp.btrfs_inc_extent_ref
0.14 ± 21% -0.1 0.06 ± 19% perf-profile.children.cycles-pp.find_ref_head
0.13 ± 32% -0.1 0.06 ± 19% perf-profile.children.cycles-pp.rb_erase_cached
0.21 ± 7% -0.1 0.14 ± 15% perf-profile.children.cycles-pp.btrfs_submit_bio_start
0.53 ± 12% -0.1 0.46 ± 16% perf-profile.children.cycles-pp.btrfs_tree_read_lock
0.21 ± 7% -0.1 0.14 ± 15% perf-profile.children.cycles-pp.btrfs_csum_one_bio
0.17 ± 16% -0.1 0.10 ± 8% perf-profile.children.cycles-pp.rb_erase
0.21 ± 7% -0.1 0.14 ± 16% perf-profile.children.cycles-pp.run_one_async_start
0.22 ± 3% -0.1 0.15 ± 22% perf-profile.children.cycles-pp.btrfs_try_tree_write_lock
0.21 ± 9% -0.1 0.15 ± 21% perf-profile.children.cycles-pp.btrfs_leaf_free_space
0.15 ± 23% -0.1 0.09 ± 35% perf-profile.children.cycles-pp.free_extent_map
0.20 ± 9% -0.1 0.14 ± 20% perf-profile.children.cycles-pp.leaf_space_used
0.19 ± 8% -0.1 0.13 ± 13% perf-profile.children.cycles-pp.check_delayed_ref
0.17 ± 13% -0.1 0.12 ± 31% perf-profile.children.cycles-pp.get_page_from_freelist
0.13 ± 8% -0.1 0.08 ± 26% perf-profile.children.cycles-pp.__update_load_avg_se
0.17 ± 2% -0.1 0.12 ± 18% perf-profile.children.cycles-pp.btrfs_set_range_writeback
0.15 ± 8% -0.1 0.09 ± 20% perf-profile.children.cycles-pp.xas_set_mark
0.18 ± 6% -0.1 0.13 ± 16% perf-profile.children.cycles-pp.___might_sleep
0.20 ± 11% -0.1 0.15 ± 35% perf-profile.children.cycles-pp.__alloc_pages_nodemask
0.10 ± 9% -0.1 0.04 ± 71% perf-profile.children.cycles-pp.btrfs_tree_unlock
0.14 ± 11% -0.1 0.09 ± 28% perf-profile.children.cycles-pp.btrfs_reserve_extent
0.10 ± 33% -0.1 0.05 ± 72% perf-profile.children.cycles-pp.__btrfs_release_delayed_node
0.22 ± 9% -0.0 0.17 ± 25% perf-profile.children.cycles-pp.mark_extent_buffer_accessed
0.09 ± 13% -0.0 0.04 ± 71% perf-profile.children.cycles-pp.btrfs_tree_read_unlock_blocking
0.18 ± 8% -0.0 0.14 ± 24% perf-profile.children.cycles-pp.__list_del_entry_valid
0.09 ± 13% -0.0 0.04 ± 73% perf-profile.children.cycles-pp.verify_parent_transid
0.11 ± 19% -0.0 0.07 ± 11% perf-profile.children.cycles-pp.entry_SYSCALL_64
0.22 ± 5% -0.0 0.18 ± 12% perf-profile.children.cycles-pp.check_committed_ref
0.11 ± 19% -0.0 0.07 ± 23% perf-profile.children.cycles-pp.select_idle_sibling
0.14 ± 11% -0.0 0.09 ± 5% perf-profile.children.cycles-pp.do_io_getevents
0.15 ± 7% -0.0 0.11 ± 4% perf-profile.children.cycles-pp.__x64_sys_io_getevents
0.08 ± 10% -0.0 0.04 ± 71% perf-profile.children.cycles-pp.btrfs_release_extent_buffer_pages
0.17 ± 18% -0.0 0.13 ± 9% perf-profile.children.cycles-pp.clear_page_dirty_for_io
0.14 ± 5% -0.0 0.10 ± 25% perf-profile.children.cycles-pp.block_group_cache_tree_search
0.18 ± 4% -0.0 0.14 ± 11% perf-profile.children.cycles-pp.btrfs_get_chunk_map
0.11 ± 13% -0.0 0.08 ± 12% perf-profile.children.cycles-pp.btrfs_buffer_uptodate
0.15 ± 15% -0.0 0.11 ± 19% perf-profile.children.cycles-pp.xas_find_marked
0.12 ± 15% -0.0 0.08 ± 20% perf-profile.children.cycles-pp.btrfs_unlock_up_safe
0.12 ± 14% -0.0 0.08 ± 14% perf-profile.children.cycles-pp.btrfs_set_path_blocking
0.11 ± 11% -0.0 0.07 ± 28% perf-profile.children.cycles-pp.find_free_extent
0.24 ± 5% -0.0 0.20 ± 6% perf-profile.children.cycles-pp.btrfs_root_node
0.07 ± 6% -0.0 0.03 ± 70% perf-profile.children.cycles-pp.extent_mergeable
0.13 ± 12% -0.0 0.10 ± 9% perf-profile.children.cycles-pp.__switch_to
0.10 ± 18% -0.0 0.07 ± 7% perf-profile.children.cycles-pp.rb_insert_color_cached
0.07 ± 14% -0.0 0.04 ± 71% perf-profile.children.cycles-pp.rb_prev
0.11 ± 11% -0.0 0.08 ± 12% perf-profile.children.cycles-pp.get_io_u
0.10 ± 14% -0.0 0.07 ± 23% perf-profile.children.cycles-pp.available_idle_cpu
0.17 ± 10% -0.0 0.13 ± 3% perf-profile.children.cycles-pp._raw_write_lock
0.10 ± 11% -0.0 0.06 ± 7% perf-profile.children.cycles-pp.__might_sleep
0.17 ± 7% -0.0 0.14 ± 12% perf-profile.children.cycles-pp._raw_read_lock
0.17 ± 10% -0.0 0.14 ± 12% perf-profile.children.cycles-pp.can_overcommit
0.10 ± 11% -0.0 0.07 perf-profile.children.cycles-pp.__list_add_valid
0.09 ± 7% -0.0 0.06 ± 7% perf-profile.children.cycles-pp.aio_read_events
0.11 ± 10% -0.0 0.08 ± 10% perf-profile.children.cycles-pp.___perf_sw_event
0.11 ± 9% -0.0 0.09 ± 18% perf-profile.children.cycles-pp.get_alloc_profile
0.11 ± 12% -0.0 0.08 ± 14% perf-profile.children.cycles-pp.pick_next_task_idle
0.10 ± 13% -0.0 0.07 ± 6% perf-profile.children.cycles-pp.read_events
0.08 ± 6% -0.0 0.06 ± 13% perf-profile.children.cycles-pp.replace_extent_mapping
0.10 ± 5% -0.0 0.08 ± 10% perf-profile.children.cycles-pp.menu_reflect
0.07 ± 17% +0.0 0.09 ± 5% perf-profile.children.cycles-pp.run_posix_cpu_timers
0.05 ± 58% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.rcu_irq_enter
0.19 ± 11% +0.1 0.26 ± 17% perf-profile.children.cycles-pp.native_irq_return_iret
0.20 ± 14% +0.1 0.27 ± 8% perf-profile.children.cycles-pp.btrfs_comp_cpu_keys
0.38 ± 9% +0.1 0.46 perf-profile.children.cycles-pp.load_balance
0.30 ± 8% +0.1 0.38 ± 15% perf-profile.children.cycles-pp.irq_enter
0.21 ± 24% +0.1 0.30 ± 9% perf-profile.children.cycles-pp.rcu_sched_clock_irq
0.22 ± 3% +0.1 0.32 ± 9% perf-profile.children.cycles-pp.lapic_next_deadline
0.52 ± 12% +0.1 0.63 ± 3% perf-profile.children.cycles-pp.rebalance_domains
1.90 ± 10% +0.3 2.23 ± 2% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.23 ± 12% +0.4 0.67 ± 15% perf-profile.children.cycles-pp.read_extent_buffer
2.88 ± 11% +0.5 3.41 perf-profile.children.cycles-pp.hrtimer_interrupt
0.09 ± 12% +0.7 0.83 ± 15% perf-profile.children.cycles-pp.btrfs_get_token_8
2.37 ± 19% +0.8 3.19 ± 5% perf-profile.children.cycles-pp.btrfs_async_reclaim_metadata_space
2.33 ± 19% +0.8 3.17 ± 5% perf-profile.children.cycles-pp.flush_space
0.14 ± 34% +1.3 1.39 ± 19% perf-profile.children.cycles-pp.btrfs_write_and_wait_transaction
0.14 ± 34% +1.3 1.39 ± 19% perf-profile.children.cycles-pp.btrfs_write_marked_extents
0.14 ± 34% +1.3 1.39 ± 19% perf-profile.children.cycles-pp.__filemap_fdatawrite_range
0.42 ± 43% +1.3 1.67 ± 18% perf-profile.children.cycles-pp.btrfs_commit_transaction
0.25 ± 5% +1.9 2.17 ± 10% perf-profile.children.cycles-pp.btrfs_get_token_64
1.71 ± 5% +2.1 3.79 ± 5% perf-profile.children.cycles-pp.btrfs_get_token_32
0.43 ± 8% +2.4 2.84 ± 10% perf-profile.children.cycles-pp.map_private_extent_buffer
8.37 ± 7% +3.3 11.65 ± 16% perf-profile.children.cycles-pp.wb_writeback
8.37 ± 7% +3.3 11.65 ± 16% perf-profile.children.cycles-pp.writeback_sb_inodes
8.37 ± 7% +3.3 11.65 ± 16% perf-profile.children.cycles-pp.__writeback_single_inode
8.37 ± 7% +3.3 11.65 ± 16% perf-profile.children.cycles-pp.wb_workfn
50.06 ± 2% +3.8 53.82 ± 4% perf-profile.children.cycles-pp.intel_idle
61.99 ± 2% +4.4 66.39 ± 3% perf-profile.children.cycles-pp.do_idle
61.98 ± 2% +4.4 66.38 ± 3% perf-profile.children.cycles-pp.secondary_startup_64
61.98 ± 2% +4.4 66.38 ± 3% perf-profile.children.cycles-pp.cpu_startup_entry
8.51 ± 7% +4.5 13.04 ± 12% perf-profile.children.cycles-pp.do_writepages
56.43 ± 3% +4.9 61.30 ± 3% perf-profile.children.cycles-pp.cpuidle_enter_state
61.27 ± 2% +5.1 66.37 ± 3% perf-profile.children.cycles-pp.start_secondary
1.43 ± 7% +6.2 7.64 ± 13% perf-profile.children.cycles-pp.submit_extent_page
1.20 ± 17% +6.2 7.45 ± 13% perf-profile.children.cycles-pp.btree_write_cache_pages
1.05 ± 17% +6.3 7.31 ± 13% perf-profile.children.cycles-pp.write_one_eb
1.11 ± 7% +6.3 7.37 ± 13% perf-profile.children.cycles-pp.submit_one_bio
0.55 ± 16% +6.4 6.95 ± 13% perf-profile.children.cycles-pp.btree_submit_bio_hook
0.00 +6.5 6.48 ± 13% perf-profile.children.cycles-pp.check_leaf
0.00 +6.5 6.53 ± 13% perf-profile.children.cycles-pp.btree_csum_one_bio
2.12 ± 11% -0.8 1.29 ± 7% perf-profile.self.cycles-pp.__lookup_extent_mapping
1.83 ± 12% -0.8 1.03 ± 13% perf-profile.self.cycles-pp.__etree_search
2.06 ± 5% -0.5 1.59 ± 5% perf-profile.self.cycles-pp._raw_spin_lock
0.81 ± 9% -0.3 0.52 ± 10% perf-profile.self.cycles-pp.generic_bin_search
0.84 ± 7% -0.3 0.58 ± 17% perf-profile.self.cycles-pp.copy_page
0.33 ± 54% -0.2 0.10 ± 28% perf-profile.self.cycles-pp.delay_tsc
0.89 ± 9% -0.2 0.69 ± 14% perf-profile.self.cycles-pp.btrfs_set_token_32
0.81 ± 4% -0.2 0.63 ± 14% perf-profile.self.cycles-pp.__schedule
0.51 ± 9% -0.2 0.35 ± 6% perf-profile.self.cycles-pp.__radix_tree_lookup
0.45 ± 8% -0.1 0.32 ± 10% perf-profile.self.cycles-pp.xas_load
0.43 ± 4% -0.1 0.31 ± 6% perf-profile.self.cycles-pp.find_extent_buffer
0.77 ± 5% -0.1 0.66 ± 11% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.31 ± 10% -0.1 0.20 ± 28% perf-profile.self.cycles-pp.add_delayed_ref_head
0.39 ± 16% -0.1 0.29 ± 16% perf-profile.self.cycles-pp.try_to_wake_up
0.31 ± 4% -0.1 0.20 ± 26% perf-profile.self.cycles-pp.queued_write_lock_slowpath
0.38 ± 7% -0.1 0.29 ± 3% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.20 ± 17% -0.1 0.11 ± 14% perf-profile.self.cycles-pp.kmem_cache_free
0.25 ± 11% -0.1 0.17 ± 2% perf-profile.self.cycles-pp.__slab_free
0.16 ± 9% -0.1 0.08 ± 5% perf-profile.self.cycles-pp.free_extent_buffer
0.28 ± 17% -0.1 0.20 ± 11% perf-profile.self.cycles-pp.btrfs_search_slot
0.31 ± 4% -0.1 0.24 ± 20% perf-profile.self.cycles-pp.kmem_cache_alloc
0.14 ± 21% -0.1 0.06 ± 19% perf-profile.self.cycles-pp.find_ref_head
0.16 ± 10% -0.1 0.09 ± 5% perf-profile.self.cycles-pp.insert_state
0.12 ± 8% -0.1 0.06 ± 72% perf-profile.self.cycles-pp.__update_load_avg_se
0.16 ± 15% -0.1 0.10 ± 8% perf-profile.self.cycles-pp.rb_erase
0.10 ± 9% -0.1 0.04 ± 71% perf-profile.self.cycles-pp.btrfs_tree_unlock
0.15 ± 13% -0.1 0.10 ± 12% perf-profile.self.cycles-pp.___slab_alloc
0.28 ± 13% -0.1 0.22 ± 5% perf-profile.self.cycles-pp.queued_read_lock_slowpath
0.15 ± 21% -0.1 0.09 ± 35% perf-profile.self.cycles-pp.free_extent_map
0.18 ± 6% -0.1 0.12 ± 11% perf-profile.self.cycles-pp.update_load_avg
0.10 ± 11% -0.1 0.04 ± 70% perf-profile.self.cycles-pp.btrfs_extend_item
0.15 ± 23% -0.1 0.09 ± 9% perf-profile.self.cycles-pp.add_extent_mapping
0.15 ± 8% -0.1 0.09 ± 20% perf-profile.self.cycles-pp.xas_set_mark
0.18 ± 6% -0.1 0.13 ± 16% perf-profile.self.cycles-pp.___might_sleep
0.18 ± 6% -0.0 0.13 ± 22% perf-profile.self.cycles-pp.__list_del_entry_valid
0.18 ± 4% -0.0 0.14 ± 22% perf-profile.self.cycles-pp.pick_next_task_fair
0.09 ± 13% -0.0 0.04 ± 73% perf-profile.self.cycles-pp.verify_parent_transid
0.09 ± 12% -0.0 0.04 ± 71% perf-profile.self.cycles-pp.btrfs_tree_read_unlock_blocking
0.16 ± 13% -0.0 0.12 ± 24% perf-profile.self.cycles-pp.mark_page_accessed
0.12 ± 14% -0.0 0.08 ± 20% perf-profile.self.cycles-pp.dequeue_task_fair
0.10 ± 19% -0.0 0.07 ± 18% perf-profile.self.cycles-pp.entry_SYSCALL_64
0.08 ± 14% -0.0 0.04 ± 73% perf-profile.self.cycles-pp.unlock_up
0.15 ± 15% -0.0 0.11 ± 19% perf-profile.self.cycles-pp.xas_find_marked
0.24 ± 6% -0.0 0.20 ± 8% perf-profile.self.cycles-pp.btrfs_root_node
0.17 ± 6% -0.0 0.13 ± 10% perf-profile.self.cycles-pp._raw_read_lock
0.10 ± 13% -0.0 0.06 ± 14% perf-profile.self.cycles-pp.btrfs_csum_one_bio
0.16 ± 7% -0.0 0.13 ± 9% perf-profile.self.cycles-pp.dequeue_entity
0.07 -0.0 0.04 ± 71% perf-profile.self.cycles-pp.io_u_queued_complete
0.08 -0.0 0.05 ± 70% perf-profile.self.cycles-pp.mutex_lock
0.11 ± 11% -0.0 0.08 ± 12% perf-profile.self.cycles-pp.get_io_u
0.16 ± 9% -0.0 0.13 perf-profile.self.cycles-pp._raw_write_lock
0.07 ± 17% -0.0 0.04 ± 73% perf-profile.self.cycles-pp.btrfs_tree_read_lock
0.12 ± 8% -0.0 0.09 ± 13% perf-profile.self.cycles-pp.__switch_to
0.10 ± 11% -0.0 0.06 ± 14% perf-profile.self.cycles-pp.__clear_extent_bit
0.07 ± 12% -0.0 0.04 ± 71% perf-profile.self.cycles-pp.rb_prev
0.10 ± 15% -0.0 0.07 ± 23% perf-profile.self.cycles-pp.find_get_pages_range_tag
0.10 ± 12% -0.0 0.07 ± 23% perf-profile.self.cycles-pp.available_idle_cpu
0.10 ± 11% -0.0 0.07 ± 7% perf-profile.self.cycles-pp.__list_add_valid
0.08 ± 13% -0.0 0.06 ± 8% perf-profile.self.cycles-pp.__might_sleep
0.12 ± 17% -0.0 0.09 ± 15% perf-profile.self.cycles-pp.release_extent_buffer
0.09 ± 17% -0.0 0.07 ± 14% perf-profile.self.cycles-pp.___perf_sw_event
0.08 ± 5% -0.0 0.06 ± 13% perf-profile.self.cycles-pp.__push_leaf_right
0.11 ± 7% -0.0 0.09 ± 9% perf-profile.self.cycles-pp.set_extent_buffer_dirty
0.11 ± 4% -0.0 0.08 ± 11% perf-profile.self.cycles-pp.__set_extent_bit
0.09 ± 7% +0.0 0.11 ± 4% perf-profile.self.cycles-pp.hrtimer_interrupt
0.07 ± 17% +0.0 0.09 ± 5% perf-profile.self.cycles-pp.run_posix_cpu_timers
0.07 ± 7% +0.0 0.09 ± 5% perf-profile.self.cycles-pp.load_balance
0.01 ±173% +0.1 0.07 ± 17% perf-profile.self.cycles-pp.lapic_next_deadline
0.00 +0.1 0.06 ± 23% perf-profile.self.cycles-pp.find_next_and_bit
0.19 ± 11% +0.1 0.26 ± 17% perf-profile.self.cycles-pp.native_irq_return_iret
0.19 ± 24% +0.1 0.28 ± 7% perf-profile.self.cycles-pp.rcu_sched_clock_irq
0.50 ± 8% +0.2 0.69 ± 17% perf-profile.self.cycles-pp.cpuidle_enter_state
0.07 ± 11% +0.4 0.46 ± 17% perf-profile.self.cycles-pp.btrfs_get_token_8
0.22 ± 13% +0.4 0.66 ± 16% perf-profile.self.cycles-pp.read_extent_buffer
0.00 +0.6 0.65 ± 14% perf-profile.self.cycles-pp.check_leaf
0.15 ± 11% +1.0 1.13 ± 11% perf-profile.self.cycles-pp.btrfs_get_token_64
1.48 ± 7% +1.0 2.51 ± 5% perf-profile.self.cycles-pp.btrfs_get_token_32
0.41 ± 8% +2.2 2.61 ± 9% perf-profile.self.cycles-pp.map_private_extent_buffer
49.99 ± 2% +3.7 53.74 ± 4% perf-profile.self.cycles-pp.intel_idle
[ASCII trend plots condensed (original gnuplot dumb-terminal charts, misaligned in this copy):
 fio.write_bw_MBps, fio.write_iops, fio.write_clat_mean_us, fio.write_clat_stddev,
 fio.write_slat_mean_us, fio.write_slat_stddev, fio.latency_1000ms_, fio.workload,
 fio.time.voluntary_context_switches, fio.time.file_system_outputs.
 Across the sampled runs, the bisect-bad samples sit consistently below the
 bisect-good baseline for write bandwidth, IOPS, workload, voluntary context
 switches, and file system outputs, and above it for the completion/submission
 latency means and stddevs.]
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[drm/mgag200] 90f479ae51: vm-scalability.median -18.8% regression
by kernel test robot
Greetings,
FYI, we noticed a -18.8% regression of vm-scalability.median due to commit:
commit: 90f479ae51afa45efab97afdde9b94b9660dd3e4 ("drm/mgag200: Replace struct mga_fbdev with generic framebuffer emulation")
https://kernel.googlesource.com/pub/scm/linux/kernel/git/next/linux-next.git master
in testcase: vm-scalability
on test machine: 288 threads Intel(R) Xeon Phi(TM) CPU 7295 @ 1.50GHz with 80G memory
with following parameters:
runtime: 300s
size: 8T
test: anon-cow-seq-hugetlb
cpufreq_governor: performance
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2019-05-14.cgz/300s/8T/lkp-knm01/anon-cow-seq-hugetlb/vm-scalability
commit:
f1f8555dfb ("drm/bochs: Use shadow buffer for bochs framebuffer console")
90f479ae51 ("drm/mgag200: Replace struct mga_fbdev with generic framebuffer emulation")
f1f8555dfb9a70a2 90f479ae51afa45efab97afdde9
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
2:4 -50% :4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
:4 25% 1:4 dmesg.WARNING:at_ip___perf_sw_event/0x
:4 25% 1:4 dmesg.WARNING:at_ip__fsnotify_parent/0x
%stddev %change %stddev
\ | \
43955 ± 2% -18.8% 35691 vm-scalability.median
0.06 ± 7% +193.0% 0.16 ± 2% vm-scalability.median_stddev
14906559 ± 2% -17.9% 12237079 vm-scalability.throughput
87651 ± 2% -17.4% 72374 vm-scalability.time.involuntary_context_switches
2086168 -23.6% 1594224 vm-scalability.time.minor_page_faults
15082 ± 2% -10.4% 13517 vm-scalability.time.percent_of_cpu_this_job_got
29987 -8.9% 27327 vm-scalability.time.system_time
15755 -12.4% 13795 vm-scalability.time.user_time
122011 -19.3% 98418 vm-scalability.time.voluntary_context_switches
3.034e+09 -23.6% 2.318e+09 vm-scalability.workload
242478 ± 12% +68.5% 408518 ± 23% cpuidle.POLL.time
2788 ± 21% +117.4% 6062 ± 26% cpuidle.POLL.usage
56653 ± 10% +64.4% 93144 ± 20% meminfo.Mapped
120392 ± 7% +14.0% 137212 ± 4% meminfo.Shmem
47221 ± 11% +77.1% 83634 ± 22% numa-meminfo.node0.Mapped
120465 ± 7% +13.9% 137205 ± 4% numa-meminfo.node0.Shmem
2885513 -16.5% 2409384 numa-numastat.node0.local_node
2885471 -16.5% 2409354 numa-numastat.node0.numa_hit
11813 ± 11% +76.3% 20824 ± 22% numa-vmstat.node0.nr_mapped
30096 ± 7% +13.8% 34238 ± 4% numa-vmstat.node0.nr_shmem
43.72 ± 2% +5.5 49.20 mpstat.cpu.all.idle%
0.03 ± 4% +0.0 0.05 ± 6% mpstat.cpu.all.soft%
19.51 -2.4 17.08 mpstat.cpu.all.usr%
1012 -7.9% 932.75 turbostat.Avg_MHz
32.38 ± 10% +25.8% 40.73 turbostat.CPU%c1
145.51 -3.1% 141.01 turbostat.PkgWatt
15.09 -19.2% 12.19 turbostat.RAMWatt
43.50 ± 2% +13.2% 49.25 vmstat.cpu.id
18.75 ± 2% -13.3% 16.25 ± 2% vmstat.cpu.us
152.00 ± 2% -9.5% 137.50 vmstat.procs.r
4800 -13.1% 4173 vmstat.system.cs
156170 -11.9% 137594 slabinfo.anon_vma.active_objs
3395 -11.9% 2991 slabinfo.anon_vma.active_slabs
156190 -11.9% 137606 slabinfo.anon_vma.num_objs
3395 -11.9% 2991 slabinfo.anon_vma.num_slabs
1716 ± 5% +11.5% 1913 ± 8% slabinfo.dmaengine-unmap-16.active_objs
1716 ± 5% +11.5% 1913 ± 8% slabinfo.dmaengine-unmap-16.num_objs
1767 ± 2% -19.0% 1431 ± 2% slabinfo.hugetlbfs_inode_cache.active_objs
1767 ± 2% -19.0% 1431 ± 2% slabinfo.hugetlbfs_inode_cache.num_objs
3597 ± 5% -16.4% 3006 ± 3% slabinfo.skbuff_ext_cache.active_objs
3597 ± 5% -16.4% 3006 ± 3% slabinfo.skbuff_ext_cache.num_objs
1330122 -23.6% 1016557 proc-vmstat.htlb_buddy_alloc_success
77214 ± 3% +6.4% 82128 ± 2% proc-vmstat.nr_active_anon
67277 +2.9% 69246 proc-vmstat.nr_anon_pages
218.50 ± 3% -10.6% 195.25 proc-vmstat.nr_dirtied
288628 +1.4% 292755 proc-vmstat.nr_file_pages
360.50 -2.7% 350.75 proc-vmstat.nr_inactive_file
14225 ± 9% +63.8% 23304 ± 20% proc-vmstat.nr_mapped
30109 ± 7% +13.8% 34259 ± 4% proc-vmstat.nr_shmem
99870 -1.3% 98597 proc-vmstat.nr_slab_unreclaimable
204.00 ± 4% -12.1% 179.25 proc-vmstat.nr_written
77214 ± 3% +6.4% 82128 ± 2% proc-vmstat.nr_zone_active_anon
360.50 -2.7% 350.75 proc-vmstat.nr_zone_inactive_file
8810 ± 19% -66.1% 2987 ± 42% proc-vmstat.numa_hint_faults
8810 ± 19% -66.1% 2987 ± 42% proc-vmstat.numa_hint_faults_local
2904082 -16.4% 2427026 proc-vmstat.numa_hit
2904081 -16.4% 2427025 proc-vmstat.numa_local
6.828e+08 -23.5% 5.221e+08 proc-vmstat.pgalloc_normal
2900008 -17.2% 2400195 proc-vmstat.pgfault
6.827e+08 -23.5% 5.22e+08 proc-vmstat.pgfree
1.635e+10 -17.0% 1.357e+10 perf-stat.i.branch-instructions
1.53 ± 4% -0.1 1.45 ± 3% perf-stat.i.branch-miss-rate%
2.581e+08 ± 3% -20.5% 2.051e+08 ± 2% perf-stat.i.branch-misses
12.66 +1.1 13.78 perf-stat.i.cache-miss-rate%
72720849 -12.0% 63958986 perf-stat.i.cache-misses
5.766e+08 -18.6% 4.691e+08 perf-stat.i.cache-references
4674 ± 2% -13.0% 4064 perf-stat.i.context-switches
4.29 +12.5% 4.83 perf-stat.i.cpi
2.573e+11 -7.4% 2.383e+11 perf-stat.i.cpu-cycles
231.35 -21.5% 181.56 perf-stat.i.cpu-migrations
3522 +4.4% 3677 perf-stat.i.cycles-between-cache-misses
0.09 ± 13% +0.0 0.12 ± 5% perf-stat.i.iTLB-load-miss-rate%
5.894e+10 -15.8% 4.961e+10 perf-stat.i.iTLB-loads
5.901e+10 -15.8% 4.967e+10 perf-stat.i.instructions
1291 ± 14% -21.8% 1010 perf-stat.i.instructions-per-iTLB-miss
0.24 -11.0% 0.21 perf-stat.i.ipc
9476 -17.5% 7821 perf-stat.i.minor-faults
9478 -17.5% 7821 perf-stat.i.page-faults
9.76 -3.6% 9.41 perf-stat.overall.MPKI
1.59 ± 4% -0.1 1.52 perf-stat.overall.branch-miss-rate%
12.61 +1.1 13.71 perf-stat.overall.cache-miss-rate%
4.38 +10.5% 4.83 perf-stat.overall.cpi
3557 +5.3% 3747 perf-stat.overall.cycles-between-cache-misses
0.08 ± 12% +0.0 0.10 perf-stat.overall.iTLB-load-miss-rate%
1268 ± 15% -23.0% 976.22 perf-stat.overall.instructions-per-iTLB-miss
0.23 -9.5% 0.21 perf-stat.overall.ipc
5815 +9.7% 6378 perf-stat.overall.path-length
1.634e+10 -17.5% 1.348e+10 perf-stat.ps.branch-instructions
2.595e+08 ± 3% -21.2% 2.043e+08 ± 2% perf-stat.ps.branch-misses
72565205 -12.2% 63706339 perf-stat.ps.cache-misses
5.754e+08 -19.2% 4.646e+08 perf-stat.ps.cache-references
4640 ± 2% -12.5% 4060 perf-stat.ps.context-switches
2.581e+11 -7.5% 2.387e+11 perf-stat.ps.cpu-cycles
229.91 -22.0% 179.42 perf-stat.ps.cpu-migrations
5.889e+10 -16.3% 4.927e+10 perf-stat.ps.iTLB-loads
5.899e+10 -16.3% 4.938e+10 perf-stat.ps.instructions
9388 -18.2% 7677 perf-stat.ps.minor-faults
9389 -18.2% 7677 perf-stat.ps.page-faults
1.764e+13 -16.2% 1.479e+13 perf-stat.total.instructions
46803 ± 3% -18.8% 37982 ± 6% sched_debug.cfs_rq:/.exec_clock.min
5320 ± 3% +23.7% 6581 ± 3% sched_debug.cfs_rq:/.exec_clock.stddev
6737 ± 14% +58.1% 10649 ± 10% sched_debug.cfs_rq:/.load.avg
587978 ± 17% +58.2% 930382 ± 9% sched_debug.cfs_rq:/.load.max
46952 ± 16% +64.8% 77388 ± 11% sched_debug.cfs_rq:/.load.stddev
7.12 ± 4% +49.1% 10.62 ± 6% sched_debug.cfs_rq:/.load_avg.avg
474.40 ± 23% +67.5% 794.60 ± 10% sched_debug.cfs_rq:/.load_avg.max
37.70 ± 11% +74.8% 65.90 ± 9% sched_debug.cfs_rq:/.load_avg.stddev
13424269 ± 4% -15.6% 11328098 ± 2% sched_debug.cfs_rq:/.min_vruntime.avg
15411275 ± 3% -12.4% 13505072 ± 2% sched_debug.cfs_rq:/.min_vruntime.max
7939295 ± 6% -17.5% 6551322 ± 7% sched_debug.cfs_rq:/.min_vruntime.min
21.44 ± 7% -56.1% 9.42 ± 4% sched_debug.cfs_rq:/.nr_spread_over.avg
117.45 ± 11% -60.6% 46.30 ± 14% sched_debug.cfs_rq:/.nr_spread_over.max
19.33 ± 8% -66.4% 6.49 ± 9% sched_debug.cfs_rq:/.nr_spread_over.stddev
4.32 ± 15% +84.4% 7.97 ± 3% sched_debug.cfs_rq:/.runnable_load_avg.avg
353.85 ± 29% +118.8% 774.35 ± 11% sched_debug.cfs_rq:/.runnable_load_avg.max
27.30 ± 24% +118.5% 59.64 ± 9% sched_debug.cfs_rq:/.runnable_load_avg.stddev
6729 ± 14% +58.2% 10644 ± 10% sched_debug.cfs_rq:/.runnable_weight.avg
587978 ± 17% +58.2% 930382 ± 9% sched_debug.cfs_rq:/.runnable_weight.max
46950 ± 16% +64.8% 77387 ± 11% sched_debug.cfs_rq:/.runnable_weight.stddev
5305069 ± 4% -17.4% 4380376 ± 7% sched_debug.cfs_rq:/.spread0.avg
7328745 ± 3% -9.9% 6600897 ± 3% sched_debug.cfs_rq:/.spread0.max
2220837 ± 4% +55.8% 3460596 ± 5% sched_debug.cpu.avg_idle.avg
4590666 ± 9% +76.8% 8117037 ± 15% sched_debug.cpu.avg_idle.max
485052 ± 7% +80.3% 874679 ± 10% sched_debug.cpu.avg_idle.stddev
561.50 ± 26% +37.7% 773.30 ± 15% sched_debug.cpu.clock.stddev
561.50 ± 26% +37.7% 773.30 ± 15% sched_debug.cpu.clock_task.stddev
3.20 ± 10% +109.6% 6.70 ± 3% sched_debug.cpu.cpu_load[0].avg
309.10 ± 20% +150.3% 773.75 ± 12% sched_debug.cpu.cpu_load[0].max
21.02 ± 14% +160.8% 54.80 ± 9% sched_debug.cpu.cpu_load[0].stddev
3.19 ± 8% +109.8% 6.70 ± 3% sched_debug.cpu.cpu_load[1].avg
299.75 ± 19% +158.0% 773.30 ± 12% sched_debug.cpu.cpu_load[1].max
20.32 ± 12% +168.7% 54.62 ± 9% sched_debug.cpu.cpu_load[1].stddev
3.20 ± 8% +109.1% 6.69 ± 4% sched_debug.cpu.cpu_load[2].avg
288.90 ± 20% +167.0% 771.40 ± 12% sched_debug.cpu.cpu_load[2].max
19.70 ± 12% +175.4% 54.27 ± 9% sched_debug.cpu.cpu_load[2].stddev
3.16 ± 8% +110.9% 6.66 ± 6% sched_debug.cpu.cpu_load[3].avg
275.50 ± 24% +178.4% 766.95 ± 12% sched_debug.cpu.cpu_load[3].max
18.92 ± 15% +184.2% 53.77 ± 10% sched_debug.cpu.cpu_load[3].stddev
3.08 ± 8% +115.7% 6.65 ± 7% sched_debug.cpu.cpu_load[4].avg
263.55 ± 28% +188.7% 760.85 ± 12% sched_debug.cpu.cpu_load[4].max
18.03 ± 18% +196.6% 53.46 ± 11% sched_debug.cpu.cpu_load[4].stddev
14543 -9.6% 13150 sched_debug.cpu.curr->pid.max
5293 ± 16% +74.7% 9248 ± 11% sched_debug.cpu.load.avg
587978 ± 17% +58.2% 930382 ± 9% sched_debug.cpu.load.max
40887 ± 19% +78.3% 72891 ± 9% sched_debug.cpu.load.stddev
1141679 ± 4% +56.9% 1790907 ± 5% sched_debug.cpu.max_idle_balance_cost.avg
2432100 ± 9% +72.6% 4196779 ± 13% sched_debug.cpu.max_idle_balance_cost.max
745656 +29.3% 964170 ± 5% sched_debug.cpu.max_idle_balance_cost.min
239032 ± 9% +81.9% 434806 ± 10% sched_debug.cpu.max_idle_balance_cost.stddev
0.00 ± 27% +92.1% 0.00 ± 31% sched_debug.cpu.next_balance.stddev
1030 ± 4% -10.4% 924.00 ± 2% sched_debug.cpu.nr_switches.min
0.04 ± 26% +139.0% 0.09 ± 41% sched_debug.cpu.nr_uninterruptible.avg
830.35 ± 6% -12.0% 730.50 ± 2% sched_debug.cpu.sched_count.min
912.00 ± 2% -9.5% 825.38 sched_debug.cpu.ttwu_count.avg
433.05 ± 3% -19.2% 350.05 ± 3% sched_debug.cpu.ttwu_count.min
160.70 ± 3% -12.5% 140.60 ± 4% sched_debug.cpu.ttwu_local.min
9072 ± 11% -36.4% 5767 ± 8% softirqs.CPU1.RCU
12769 ± 5% +15.3% 14718 ± 3% softirqs.CPU101.SCHED
13198 +11.5% 14717 ± 3% softirqs.CPU102.SCHED
12981 ± 4% +13.9% 14788 ± 3% softirqs.CPU105.SCHED
13486 ± 3% +11.8% 15071 ± 4% softirqs.CPU111.SCHED
12794 ± 4% +14.1% 14601 ± 9% softirqs.CPU112.SCHED
12999 ± 4% +10.1% 14314 ± 4% softirqs.CPU115.SCHED
12844 ± 4% +10.6% 14202 ± 2% softirqs.CPU120.SCHED
13336 ± 3% +9.4% 14585 ± 3% softirqs.CPU122.SCHED
12639 ± 4% +20.2% 15195 softirqs.CPU123.SCHED
13040 ± 5% +15.2% 15024 ± 5% softirqs.CPU126.SCHED
13123 +15.1% 15106 ± 5% softirqs.CPU127.SCHED
9188 ± 6% -35.7% 5911 ± 2% softirqs.CPU13.RCU
13054 ± 3% +13.1% 14761 ± 5% softirqs.CPU130.SCHED
13158 ± 2% +13.9% 14985 ± 5% softirqs.CPU131.SCHED
12797 ± 6% +13.5% 14524 ± 3% softirqs.CPU133.SCHED
12452 ± 5% +14.8% 14297 softirqs.CPU134.SCHED
13078 ± 3% +10.4% 14439 ± 3% softirqs.CPU138.SCHED
12617 ± 2% +14.5% 14442 ± 5% softirqs.CPU139.SCHED
12974 ± 3% +13.7% 14752 ± 4% softirqs.CPU142.SCHED
12579 ± 4% +19.1% 14983 ± 3% softirqs.CPU143.SCHED
9122 ± 24% -44.6% 5053 ± 5% softirqs.CPU144.RCU
13366 ± 2% +11.1% 14848 ± 3% softirqs.CPU149.SCHED
13246 ± 2% +22.0% 16162 ± 7% softirqs.CPU150.SCHED
13452 ± 3% +20.5% 16210 ± 7% softirqs.CPU151.SCHED
13507 +10.1% 14869 softirqs.CPU156.SCHED
13808 ± 3% +9.2% 15079 ± 4% softirqs.CPU157.SCHED
13442 ± 2% +13.4% 15248 ± 4% softirqs.CPU160.SCHED
13311 +12.1% 14920 ± 2% softirqs.CPU162.SCHED
13544 ± 3% +8.5% 14695 ± 4% softirqs.CPU163.SCHED
13648 ± 3% +11.2% 15179 ± 2% softirqs.CPU166.SCHED
13404 ± 4% +12.5% 15079 ± 3% softirqs.CPU168.SCHED
13421 ± 6% +16.0% 15568 ± 8% softirqs.CPU169.SCHED
13115 ± 3% +23.1% 16139 ± 10% softirqs.CPU171.SCHED
13424 ± 6% +10.4% 14822 ± 3% softirqs.CPU175.SCHED
13274 ± 3% +13.7% 15087 ± 9% softirqs.CPU185.SCHED
13409 ± 3% +12.3% 15063 ± 3% softirqs.CPU190.SCHED
13181 ± 7% +13.4% 14946 ± 3% softirqs.CPU196.SCHED
13578 ± 3% +10.9% 15061 softirqs.CPU197.SCHED
13323 ± 5% +24.8% 16627 ± 6% softirqs.CPU198.SCHED
14072 ± 2% +12.3% 15798 ± 7% softirqs.CPU199.SCHED
12604 ± 13% +17.9% 14865 softirqs.CPU201.SCHED
13380 ± 4% +14.8% 15356 ± 3% softirqs.CPU203.SCHED
13481 ± 8% +14.2% 15390 ± 3% softirqs.CPU204.SCHED
12921 ± 2% +13.8% 14710 ± 3% softirqs.CPU206.SCHED
13468 +13.0% 15218 ± 2% softirqs.CPU208.SCHED
13253 ± 2% +13.1% 14992 softirqs.CPU209.SCHED
13319 ± 2% +14.3% 15225 ± 7% softirqs.CPU210.SCHED
13673 ± 5% +16.3% 15895 ± 3% softirqs.CPU211.SCHED
13290 +17.0% 15556 ± 5% softirqs.CPU212.SCHED
13455 ± 4% +14.4% 15392 ± 3% softirqs.CPU213.SCHED
13454 ± 4% +14.3% 15377 ± 3% softirqs.CPU215.SCHED
13872 ± 7% +9.7% 15221 ± 5% softirqs.CPU220.SCHED
13555 ± 4% +17.3% 15896 ± 5% softirqs.CPU222.SCHED
13411 ± 4% +20.8% 16197 ± 6% softirqs.CPU223.SCHED
8472 ± 21% -44.8% 4680 ± 3% softirqs.CPU224.RCU
13141 ± 3% +16.2% 15265 ± 7% softirqs.CPU225.SCHED
14084 ± 3% +8.2% 15242 ± 2% softirqs.CPU226.SCHED
13528 ± 4% +11.3% 15063 ± 4% softirqs.CPU228.SCHED
13218 ± 3% +16.3% 15377 ± 4% softirqs.CPU229.SCHED
14031 ± 4% +10.2% 15467 ± 2% softirqs.CPU231.SCHED
13770 ± 3% +14.0% 15700 ± 3% softirqs.CPU232.SCHED
13456 ± 3% +12.3% 15105 ± 3% softirqs.CPU233.SCHED
13137 ± 4% +13.5% 14909 ± 3% softirqs.CPU234.SCHED
13318 ± 2% +14.7% 15280 ± 2% softirqs.CPU235.SCHED
13690 ± 2% +13.7% 15563 ± 7% softirqs.CPU238.SCHED
13771 ± 5% +20.8% 16634 ± 7% softirqs.CPU241.SCHED
13317 ± 7% +19.5% 15919 ± 9% softirqs.CPU243.SCHED
8234 ± 16% -43.9% 4616 ± 5% softirqs.CPU244.RCU
13845 ± 6% +13.0% 15643 ± 3% softirqs.CPU244.SCHED
13179 ± 3% +16.3% 15323 softirqs.CPU246.SCHED
13754 +12.2% 15438 ± 3% softirqs.CPU248.SCHED
13769 ± 4% +10.9% 15276 ± 2% softirqs.CPU252.SCHED
13702 +10.5% 15147 ± 2% softirqs.CPU254.SCHED
13315 ± 2% +12.5% 14980 ± 3% softirqs.CPU255.SCHED
13785 ± 3% +12.9% 15568 ± 5% softirqs.CPU256.SCHED
13307 ± 3% +15.0% 15298 ± 3% softirqs.CPU257.SCHED
13864 ± 3% +10.5% 15313 ± 2% softirqs.CPU259.SCHED
13879 ± 2% +11.4% 15465 softirqs.CPU261.SCHED
13815 +13.6% 15687 ± 5% softirqs.CPU264.SCHED
119574 ± 2% +11.8% 133693 ± 11% softirqs.CPU266.TIMER
13688 +10.9% 15180 ± 6% softirqs.CPU267.SCHED
11716 ± 4% +19.3% 13974 ± 8% softirqs.CPU27.SCHED
13866 ± 3% +13.7% 15765 ± 4% softirqs.CPU271.SCHED
13887 ± 5% +12.5% 15621 softirqs.CPU272.SCHED
13383 ± 3% +19.8% 16031 ± 2% softirqs.CPU274.SCHED
13347 +14.1% 15232 ± 3% softirqs.CPU275.SCHED
12884 ± 2% +21.0% 15593 ± 4% softirqs.CPU276.SCHED
13131 ± 5% +13.4% 14891 ± 5% softirqs.CPU277.SCHED
12891 ± 2% +19.2% 15371 ± 4% softirqs.CPU278.SCHED
13313 ± 4% +13.0% 15049 ± 2% softirqs.CPU279.SCHED
13514 ± 3% +10.2% 14897 ± 2% softirqs.CPU280.SCHED
13501 ± 3% +13.7% 15346 softirqs.CPU281.SCHED
13261 +17.5% 15577 softirqs.CPU282.SCHED
8076 ± 15% -43.7% 4546 ± 5% softirqs.CPU283.RCU
13686 ± 3% +12.6% 15413 ± 2% softirqs.CPU284.SCHED
13439 ± 2% +9.2% 14670 ± 4% softirqs.CPU285.SCHED
8878 ± 9% -35.4% 5735 ± 4% softirqs.CPU35.RCU
11690 ± 2% +13.6% 13274 ± 5% softirqs.CPU40.SCHED
11714 ± 2% +19.3% 13975 ± 13% softirqs.CPU41.SCHED
11763 +12.5% 13239 ± 4% softirqs.CPU45.SCHED
11662 ± 2% +9.4% 12757 ± 3% softirqs.CPU46.SCHED
11805 ± 2% +9.3% 12902 ± 2% softirqs.CPU50.SCHED
12158 ± 3% +12.3% 13655 ± 8% softirqs.CPU55.SCHED
11716 ± 4% +8.8% 12751 ± 3% softirqs.CPU58.SCHED
11922 ± 2% +9.9% 13100 ± 4% softirqs.CPU64.SCHED
9674 ± 17% -41.8% 5625 ± 6% softirqs.CPU66.RCU
11818 +12.0% 13237 softirqs.CPU66.SCHED
124682 ± 7% -6.1% 117088 ± 5% softirqs.CPU66.TIMER
8637 ± 9% -34.0% 5700 ± 7% softirqs.CPU70.RCU
11624 ± 2% +11.0% 12901 ± 2% softirqs.CPU70.SCHED
12372 ± 2% +13.2% 14003 ± 3% softirqs.CPU71.SCHED
9949 ± 25% -33.9% 6574 ± 31% softirqs.CPU72.RCU
10392 ± 26% -35.1% 6745 ± 35% softirqs.CPU73.RCU
12766 ± 3% +11.1% 14188 ± 3% softirqs.CPU76.SCHED
12611 ± 2% +18.8% 14984 ± 5% softirqs.CPU78.SCHED
12786 ± 3% +17.9% 15079 ± 7% softirqs.CPU79.SCHED
11947 ± 4% +9.7% 13103 ± 4% softirqs.CPU8.SCHED
13379 ± 7% +11.8% 14962 ± 4% softirqs.CPU83.SCHED
13438 ± 5% +9.7% 14738 ± 2% softirqs.CPU84.SCHED
12768 +19.4% 15241 ± 6% softirqs.CPU88.SCHED
8604 ± 13% -39.3% 5222 ± 3% softirqs.CPU89.RCU
13077 ± 2% +17.1% 15308 ± 7% softirqs.CPU89.SCHED
11887 ± 3% +20.1% 14272 ± 5% softirqs.CPU9.SCHED
12723 ± 3% +11.3% 14165 ± 4% softirqs.CPU90.SCHED
8439 ± 12% -38.9% 5153 ± 4% softirqs.CPU91.RCU
13429 ± 3% +10.3% 14806 ± 2% softirqs.CPU95.SCHED
12852 ± 4% +10.3% 14174 ± 5% softirqs.CPU96.SCHED
13010 ± 2% +14.4% 14888 ± 5% softirqs.CPU97.SCHED
2315644 ± 4% -36.2% 1477200 ± 4% softirqs.RCU
1572 ± 10% +63.9% 2578 ± 39% interrupts.CPU0.NMI:Non-maskable_interrupts
1572 ± 10% +63.9% 2578 ± 39% interrupts.CPU0.PMI:Performance_monitoring_interrupts
252.00 ± 11% -35.2% 163.25 ± 13% interrupts.CPU104.RES:Rescheduling_interrupts
2738 ± 24% +52.4% 4173 ± 19% interrupts.CPU105.NMI:Non-maskable_interrupts
2738 ± 24% +52.4% 4173 ± 19% interrupts.CPU105.PMI:Performance_monitoring_interrupts
245.75 ± 19% -31.0% 169.50 ± 7% interrupts.CPU105.RES:Rescheduling_interrupts
228.75 ± 13% -24.7% 172.25 ± 19% interrupts.CPU106.RES:Rescheduling_interrupts
2243 ± 15% +66.3% 3730 ± 35% interrupts.CPU113.NMI:Non-maskable_interrupts
2243 ± 15% +66.3% 3730 ± 35% interrupts.CPU113.PMI:Performance_monitoring_interrupts
2703 ± 31% +67.0% 4514 ± 33% interrupts.CPU118.NMI:Non-maskable_interrupts
2703 ± 31% +67.0% 4514 ± 33% interrupts.CPU118.PMI:Performance_monitoring_interrupts
2613 ± 25% +42.2% 3715 ± 24% interrupts.CPU121.NMI:Non-maskable_interrupts
2613 ± 25% +42.2% 3715 ± 24% interrupts.CPU121.PMI:Performance_monitoring_interrupts
311.50 ± 23% -47.7% 163.00 ± 9% interrupts.CPU122.RES:Rescheduling_interrupts
266.75 ± 19% -31.6% 182.50 ± 15% interrupts.CPU124.RES:Rescheduling_interrupts
293.75 ± 33% -32.3% 198.75 ± 19% interrupts.CPU125.RES:Rescheduling_interrupts
2601 ± 36% +43.2% 3724 ± 29% interrupts.CPU127.NMI:Non-maskable_interrupts
2601 ± 36% +43.2% 3724 ± 29% interrupts.CPU127.PMI:Performance_monitoring_interrupts
2258 ± 21% +68.2% 3797 ± 29% interrupts.CPU13.NMI:Non-maskable_interrupts
2258 ± 21% +68.2% 3797 ± 29% interrupts.CPU13.PMI:Performance_monitoring_interrupts
3338 ± 29% +54.6% 5160 ± 9% interrupts.CPU139.NMI:Non-maskable_interrupts
3338 ± 29% +54.6% 5160 ± 9% interrupts.CPU139.PMI:Performance_monitoring_interrupts
219.50 ± 27% -23.0% 169.00 ± 21% interrupts.CPU139.RES:Rescheduling_interrupts
290.25 ± 25% -32.5% 196.00 ± 11% interrupts.CPU14.RES:Rescheduling_interrupts
243.50 ± 4% -16.0% 204.50 ± 12% interrupts.CPU140.RES:Rescheduling_interrupts
1797 ± 15% +135.0% 4223 ± 46% interrupts.CPU147.NMI:Non-maskable_interrupts
1797 ± 15% +135.0% 4223 ± 46% interrupts.CPU147.PMI:Performance_monitoring_interrupts
2537 ± 22% +89.6% 4812 ± 28% interrupts.CPU15.NMI:Non-maskable_interrupts
2537 ± 22% +89.6% 4812 ± 28% interrupts.CPU15.PMI:Performance_monitoring_interrupts
292.25 ± 34% -33.9% 193.25 ± 6% interrupts.CPU15.RES:Rescheduling_interrupts
424.25 ± 37% -58.5% 176.25 ± 14% interrupts.CPU158.RES:Rescheduling_interrupts
312.50 ± 42% -54.2% 143.00 ± 18% interrupts.CPU159.RES:Rescheduling_interrupts
725.00 ±118% -75.7% 176.25 ± 14% interrupts.CPU163.RES:Rescheduling_interrupts
2367 ± 6% +59.9% 3786 ± 24% interrupts.CPU177.NMI:Non-maskable_interrupts
2367 ± 6% +59.9% 3786 ± 24% interrupts.CPU177.PMI:Performance_monitoring_interrupts
239.50 ± 30% -46.6% 128.00 ± 14% interrupts.CPU179.RES:Rescheduling_interrupts
320.75 ± 15% -24.0% 243.75 ± 20% interrupts.CPU20.RES:Rescheduling_interrupts
302.50 ± 17% -47.2% 159.75 ± 8% interrupts.CPU200.RES:Rescheduling_interrupts
2166 ± 5% +92.0% 4157 ± 40% interrupts.CPU207.NMI:Non-maskable_interrupts
2166 ± 5% +92.0% 4157 ± 40% interrupts.CPU207.PMI:Performance_monitoring_interrupts
217.00 ± 11% -34.6% 142.00 ± 12% interrupts.CPU214.RES:Rescheduling_interrupts
2610 ± 36% +47.4% 3848 ± 35% interrupts.CPU215.NMI:Non-maskable_interrupts
2610 ± 36% +47.4% 3848 ± 35% interrupts.CPU215.PMI:Performance_monitoring_interrupts
2046 ± 13% +118.6% 4475 ± 43% interrupts.CPU22.NMI:Non-maskable_interrupts
2046 ± 13% +118.6% 4475 ± 43% interrupts.CPU22.PMI:Performance_monitoring_interrupts
289.50 ± 28% -41.1% 170.50 ± 8% interrupts.CPU22.RES:Rescheduling_interrupts
2232 ± 6% +33.0% 2970 ± 24% interrupts.CPU221.NMI:Non-maskable_interrupts
2232 ± 6% +33.0% 2970 ± 24% interrupts.CPU221.PMI:Performance_monitoring_interrupts
4552 ± 12% -27.6% 3295 ± 15% interrupts.CPU222.NMI:Non-maskable_interrupts
4552 ± 12% -27.6% 3295 ± 15% interrupts.CPU222.PMI:Performance_monitoring_interrupts
2013 ± 15% +80.9% 3641 ± 27% interrupts.CPU226.NMI:Non-maskable_interrupts
2013 ± 15% +80.9% 3641 ± 27% interrupts.CPU226.PMI:Performance_monitoring_interrupts
2575 ± 49% +67.1% 4302 ± 34% interrupts.CPU227.NMI:Non-maskable_interrupts
2575 ± 49% +67.1% 4302 ± 34% interrupts.CPU227.PMI:Performance_monitoring_interrupts
248.00 ± 36% -36.3% 158.00 ± 19% interrupts.CPU228.RES:Rescheduling_interrupts
2441 ± 24% +43.0% 3490 ± 30% interrupts.CPU23.NMI:Non-maskable_interrupts
2441 ± 24% +43.0% 3490 ± 30% interrupts.CPU23.PMI:Performance_monitoring_interrupts
404.25 ± 69% -65.5% 139.50 ± 17% interrupts.CPU236.RES:Rescheduling_interrupts
566.50 ± 40% -73.6% 149.50 ± 31% interrupts.CPU237.RES:Rescheduling_interrupts
243.50 ± 26% -37.1% 153.25 ± 21% interrupts.CPU248.RES:Rescheduling_interrupts
258.25 ± 12% -53.5% 120.00 ± 18% interrupts.CPU249.RES:Rescheduling_interrupts
2888 ± 27% +49.4% 4313 ± 30% interrupts.CPU253.NMI:Non-maskable_interrupts
2888 ± 27% +49.4% 4313 ± 30% interrupts.CPU253.PMI:Performance_monitoring_interrupts
2468 ± 44% +67.3% 4131 ± 37% interrupts.CPU256.NMI:Non-maskable_interrupts
2468 ± 44% +67.3% 4131 ± 37% interrupts.CPU256.PMI:Performance_monitoring_interrupts
425.00 ± 59% -60.3% 168.75 ± 34% interrupts.CPU258.RES:Rescheduling_interrupts
1859 ± 16% +106.3% 3834 ± 44% interrupts.CPU268.NMI:Non-maskable_interrupts
1859 ± 16% +106.3% 3834 ± 44% interrupts.CPU268.PMI:Performance_monitoring_interrupts
2684 ± 28% +61.2% 4326 ± 36% interrupts.CPU269.NMI:Non-maskable_interrupts
2684 ± 28% +61.2% 4326 ± 36% interrupts.CPU269.PMI:Performance_monitoring_interrupts
2171 ± 6% +108.8% 4533 ± 20% interrupts.CPU270.NMI:Non-maskable_interrupts
2171 ± 6% +108.8% 4533 ± 20% interrupts.CPU270.PMI:Performance_monitoring_interrupts
2262 ± 14% +61.8% 3659 ± 37% interrupts.CPU273.NMI:Non-maskable_interrupts
2262 ± 14% +61.8% 3659 ± 37% interrupts.CPU273.PMI:Performance_monitoring_interrupts
2203 ± 11% +50.7% 3320 ± 38% interrupts.CPU279.NMI:Non-maskable_interrupts
2203 ± 11% +50.7% 3320 ± 38% interrupts.CPU279.PMI:Performance_monitoring_interrupts
2433 ± 17% +52.9% 3721 ± 25% interrupts.CPU280.NMI:Non-maskable_interrupts
2433 ± 17% +52.9% 3721 ± 25% interrupts.CPU280.PMI:Performance_monitoring_interrupts
2778 ± 33% +63.1% 4531 ± 36% interrupts.CPU283.NMI:Non-maskable_interrupts
2778 ± 33% +63.1% 4531 ± 36% interrupts.CPU283.PMI:Performance_monitoring_interrupts
331.75 ± 32% -39.8% 199.75 ± 17% interrupts.CPU29.RES:Rescheduling_interrupts
2178 ± 22% +53.9% 3353 ± 31% interrupts.CPU3.NMI:Non-maskable_interrupts
2178 ± 22% +53.9% 3353 ± 31% interrupts.CPU3.PMI:Performance_monitoring_interrupts
298.50 ± 30% -39.7% 180.00 ± 6% interrupts.CPU34.RES:Rescheduling_interrupts
2490 ± 3% +58.7% 3953 ± 28% interrupts.CPU35.NMI:Non-maskable_interrupts
2490 ± 3% +58.7% 3953 ± 28% interrupts.CPU35.PMI:Performance_monitoring_interrupts
270.50 ± 24% -31.1% 186.25 ± 3% interrupts.CPU36.RES:Rescheduling_interrupts
2493 ± 7% +57.0% 3915 ± 27% interrupts.CPU43.NMI:Non-maskable_interrupts
2493 ± 7% +57.0% 3915 ± 27% interrupts.CPU43.PMI:Performance_monitoring_interrupts
286.75 ± 36% -32.4% 193.75 ± 7% interrupts.CPU45.RES:Rescheduling_interrupts
259.00 ± 12% -23.6% 197.75 ± 13% interrupts.CPU46.RES:Rescheduling_interrupts
244.00 ± 21% -35.6% 157.25 ± 11% interrupts.CPU47.RES:Rescheduling_interrupts
230.00 ± 7% -21.3% 181.00 ± 11% interrupts.CPU48.RES:Rescheduling_interrupts
281.00 ± 13% -27.4% 204.00 ± 15% interrupts.CPU53.RES:Rescheduling_interrupts
256.75 ± 5% -18.4% 209.50 ± 12% interrupts.CPU54.RES:Rescheduling_interrupts
2433 ± 9% +68.4% 4098 ± 35% interrupts.CPU58.NMI:Non-maskable_interrupts
2433 ± 9% +68.4% 4098 ± 35% interrupts.CPU58.PMI:Performance_monitoring_interrupts
316.00 ± 25% -41.4% 185.25 ± 13% interrupts.CPU59.RES:Rescheduling_interrupts
2703 ± 38% +56.0% 4217 ± 31% interrupts.CPU60.NMI:Non-maskable_interrupts
2703 ± 38% +56.0% 4217 ± 31% interrupts.CPU60.PMI:Performance_monitoring_interrupts
2425 ± 16% +39.9% 3394 ± 27% interrupts.CPU61.NMI:Non-maskable_interrupts
2425 ± 16% +39.9% 3394 ± 27% interrupts.CPU61.PMI:Performance_monitoring_interrupts
2388 ± 18% +69.5% 4047 ± 29% interrupts.CPU66.NMI:Non-maskable_interrupts
2388 ± 18% +69.5% 4047 ± 29% interrupts.CPU66.PMI:Performance_monitoring_interrupts
2322 ± 11% +93.4% 4491 ± 35% interrupts.CPU67.NMI:Non-maskable_interrupts
2322 ± 11% +93.4% 4491 ± 35% interrupts.CPU67.PMI:Performance_monitoring_interrupts
319.00 ± 40% -44.7% 176.25 ± 9% interrupts.CPU67.RES:Rescheduling_interrupts
2512 ± 8% +28.1% 3219 ± 25% interrupts.CPU70.NMI:Non-maskable_interrupts
2512 ± 8% +28.1% 3219 ± 25% interrupts.CPU70.PMI:Performance_monitoring_interrupts
2290 ± 39% +78.7% 4094 ± 28% interrupts.CPU74.NMI:Non-maskable_interrupts
2290 ± 39% +78.7% 4094 ± 28% interrupts.CPU74.PMI:Performance_monitoring_interrupts
2446 ± 40% +94.8% 4764 ± 23% interrupts.CPU75.NMI:Non-maskable_interrupts
2446 ± 40% +94.8% 4764 ± 23% interrupts.CPU75.PMI:Performance_monitoring_interrupts
426.75 ± 61% -67.7% 138.00 ± 8% interrupts.CPU75.RES:Rescheduling_interrupts
192.50 ± 13% +45.6% 280.25 ± 45% interrupts.CPU76.RES:Rescheduling_interrupts
274.25 ± 34% -42.2% 158.50 ± 34% interrupts.CPU77.RES:Rescheduling_interrupts
2357 ± 9% +73.0% 4078 ± 23% interrupts.CPU78.NMI:Non-maskable_interrupts
2357 ± 9% +73.0% 4078 ± 23% interrupts.CPU78.PMI:Performance_monitoring_interrupts
348.50 ± 53% -47.3% 183.75 ± 29% interrupts.CPU80.RES:Rescheduling_interrupts
2650 ± 43% +46.2% 3874 ± 36% interrupts.CPU84.NMI:Non-maskable_interrupts
2650 ± 43% +46.2% 3874 ± 36% interrupts.CPU84.PMI:Performance_monitoring_interrupts
2235 ± 10% +117.8% 4867 ± 10% interrupts.CPU90.NMI:Non-maskable_interrupts
2235 ± 10% +117.8% 4867 ± 10% interrupts.CPU90.PMI:Performance_monitoring_interrupts
2606 ± 33% +38.1% 3598 ± 21% interrupts.CPU92.NMI:Non-maskable_interrupts
2606 ± 33% +38.1% 3598 ± 21% interrupts.CPU92.PMI:Performance_monitoring_interrupts
408.75 ± 58% -56.8% 176.75 ± 25% interrupts.CPU92.RES:Rescheduling_interrupts
399.00 ± 64% -63.6% 145.25 ± 16% interrupts.CPU93.RES:Rescheduling_interrupts
314.75 ± 36% -44.2% 175.75 ± 13% interrupts.CPU94.RES:Rescheduling_interrupts
191.00 ± 15% -29.1% 135.50 ± 9% interrupts.CPU97.RES:Rescheduling_interrupts
94.00 ± 8% +50.0% 141.00 ± 12% interrupts.IWI:IRQ_work_interrupts
841457 ± 7% +16.6% 980751 ± 3% interrupts.NMI:Non-maskable_interrupts
841457 ± 7% +16.6% 980751 ± 3% interrupts.PMI:Performance_monitoring_interrupts
12.75 ± 11% -4.1 8.67 ± 31% perf-profile.calltrace.cycles-pp.do_rw_once
1.02 ± 16% -0.6 0.47 ± 59% perf-profile.calltrace.cycles-pp.sched_clock.sched_clock_cpu.cpuidle_enter_state.cpuidle_enter.do_idle
1.10 ± 15% -0.4 0.66 ± 14% perf-profile.calltrace.cycles-pp.sched_clock_cpu.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
1.05 ± 16% -0.4 0.61 ± 14% perf-profile.calltrace.cycles-pp.native_sched_clock.sched_clock.sched_clock_cpu.cpuidle_enter_state.cpuidle_enter
1.58 ± 4% +0.3 1.91 ± 7% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.copy_page
0.79 ± 26% +0.5 1.27 ± 18% perf-profile.calltrace.cycles-pp.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.79 ± 26% +0.5 1.27 ± 18% perf-profile.calltrace.cycles-pp.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.79 ± 26% +0.5 1.27 ± 18% perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.11 ± 4% +0.5 2.60 ± 7% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.osq_lock.__mutex_lock.hugetlb_fault.handle_mm_fault
0.83 ± 26% +0.5 1.32 ± 18% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
0.83 ± 26% +0.5 1.32 ± 18% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.90 ± 5% +0.6 2.45 ± 7% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.copy_page.copy_subpage
0.65 ± 62% +0.6 1.20 ± 15% perf-profile.calltrace.cycles-pp.alloc_fresh_huge_page.alloc_surplus_huge_page.alloc_huge_page.hugetlb_cow.hugetlb_fault
0.60 ± 62% +0.6 1.16 ± 18% perf-profile.calltrace.cycles-pp.free_huge_page.release_pages.tlb_flush_mmu.tlb_finish_mmu.exit_mmap
0.95 ± 17% +0.6 1.52 ± 8% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.mutex_spin_on_owner
0.61 ± 62% +0.6 1.18 ± 18% perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu.tlb_finish_mmu.exit_mmap.mmput
0.61 ± 62% +0.6 1.19 ± 19% perf-profile.calltrace.cycles-pp.tlb_finish_mmu.exit_mmap.mmput.do_exit.do_group_exit
0.61 ± 62% +0.6 1.19 ± 19% perf-profile.calltrace.cycles-pp.tlb_flush_mmu.tlb_finish_mmu.exit_mmap.mmput.do_exit
0.64 ± 61% +0.6 1.23 ± 18% perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.64 ± 61% +0.6 1.23 ± 18% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__x64_sys_exit_group
1.30 ± 9% +0.6 1.92 ± 8% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.mutex_spin_on_owner.__mutex_lock
0.19 ±173% +0.7 0.89 ± 20% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.free_huge_page.release_pages.tlb_flush_mmu
0.19 ±173% +0.7 0.90 ± 20% perf-profile.calltrace.cycles-pp._raw_spin_lock.free_huge_page.release_pages.tlb_flush_mmu.tlb_finish_mmu
0.00 +0.8 0.77 ± 30% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.prep_new_huge_page.alloc_fresh_huge_page.alloc_surplus_huge_page
0.00 +0.8 0.78 ± 30% perf-profile.calltrace.cycles-pp._raw_spin_lock.prep_new_huge_page.alloc_fresh_huge_page.alloc_surplus_huge_page.alloc_huge_page
0.00 +0.8 0.79 ± 29% perf-profile.calltrace.cycles-pp.prep_new_huge_page.alloc_fresh_huge_page.alloc_surplus_huge_page.alloc_huge_page.hugetlb_cow
0.82 ± 67% +0.9 1.72 ± 22% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.alloc_huge_page.hugetlb_cow.hugetlb_fault
0.84 ± 66% +0.9 1.74 ± 20% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.alloc_surplus_huge_page.alloc_huge_page.hugetlb_cow
2.52 ± 6% +0.9 3.44 ± 9% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.copy_page.copy_subpage.copy_user_huge_page
0.83 ± 67% +0.9 1.75 ± 21% perf-profile.calltrace.cycles-pp._raw_spin_lock.alloc_huge_page.hugetlb_cow.hugetlb_fault.handle_mm_fault
0.84 ± 66% +0.9 1.77 ± 20% perf-profile.calltrace.cycles-pp._raw_spin_lock.alloc_surplus_huge_page.alloc_huge_page.hugetlb_cow.hugetlb_fault
1.64 ± 12% +1.0 2.67 ± 7% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.mutex_spin_on_owner.__mutex_lock.hugetlb_fault
1.65 ± 45% +1.3 2.99 ± 18% perf-profile.calltrace.cycles-pp.alloc_surplus_huge_page.alloc_huge_page.hugetlb_cow.hugetlb_fault.handle_mm_fault
1.74 ± 13% +1.4 3.16 ± 6% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.mutex_spin_on_owner.__mutex_lock.hugetlb_fault.handle_mm_fault
2.56 ± 48% +2.2 4.81 ± 19% perf-profile.calltrace.cycles-pp.alloc_huge_page.hugetlb_cow.hugetlb_fault.handle_mm_fault.__do_page_fault
12.64 ± 14% +3.6 16.20 ± 8% perf-profile.calltrace.cycles-pp.mutex_spin_on_owner.__mutex_lock.hugetlb_fault.handle_mm_fault.__do_page_fault
2.97 ± 7% +3.8 6.74 ± 9% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.copy_page.copy_subpage.copy_user_huge_page.hugetlb_cow
19.99 ± 9% +4.1 24.05 ± 6% perf-profile.calltrace.cycles-pp.hugetlb_cow.hugetlb_fault.handle_mm_fault.__do_page_fault.do_page_fault
1.37 ± 15% -0.5 0.83 ± 13% perf-profile.children.cycles-pp.sched_clock_cpu
1.31 ± 16% -0.5 0.78 ± 13% perf-profile.children.cycles-pp.sched_clock
1.29 ± 16% -0.5 0.77 ± 13% perf-profile.children.cycles-pp.native_sched_clock
1.80 ± 2% -0.3 1.47 ± 10% perf-profile.children.cycles-pp.task_tick_fair
0.73 ± 2% -0.2 0.54 ± 11% perf-profile.children.cycles-pp.update_curr
0.42 ± 17% -0.2 0.27 ± 16% perf-profile.children.cycles-pp.account_process_tick
0.73 ± 10% -0.2 0.58 ± 9% perf-profile.children.cycles-pp.rcu_sched_clock_irq
0.27 ± 6% -0.1 0.14 ± 14% perf-profile.children.cycles-pp.__acct_update_integrals
0.27 ± 18% -0.1 0.16 ± 13% perf-profile.children.cycles-pp.rcu_segcblist_ready_cbs
0.40 ± 12% -0.1 0.30 ± 14% perf-profile.children.cycles-pp.__next_timer_interrupt
0.47 ± 7% -0.1 0.39 ± 13% perf-profile.children.cycles-pp.update_rq_clock
0.29 ± 12% -0.1 0.21 ± 15% perf-profile.children.cycles-pp.cpuidle_governor_latency_req
0.21 ± 7% -0.1 0.14 ± 12% perf-profile.children.cycles-pp.account_system_index_time
0.38 ± 2% -0.1 0.31 ± 12% perf-profile.children.cycles-pp.timerqueue_add
0.26 ± 11% -0.1 0.20 ± 13% perf-profile.children.cycles-pp.find_next_bit
0.23 ± 15% -0.1 0.17 ± 15% perf-profile.children.cycles-pp.rcu_dynticks_eqs_exit
0.14 ± 8% -0.1 0.07 ± 14% perf-profile.children.cycles-pp.account_user_time
0.17 ± 6% -0.0 0.12 ± 10% perf-profile.children.cycles-pp.cpuacct_charge
0.18 ± 20% -0.0 0.13 ± 3% perf-profile.children.cycles-pp.irq_work_tick
0.11 ± 13% -0.0 0.07 ± 25% perf-profile.children.cycles-pp.tick_sched_do_timer
0.12 ± 10% -0.0 0.08 ± 15% perf-profile.children.cycles-pp.get_cpu_device
0.07 ± 11% -0.0 0.04 ± 58% perf-profile.children.cycles-pp.raise_softirq
0.12 ± 3% -0.0 0.09 ± 8% perf-profile.children.cycles-pp.write
0.11 ± 13% +0.0 0.14 ± 8% perf-profile.children.cycles-pp.native_write_msr
0.09 ± 9% +0.0 0.11 ± 7% perf-profile.children.cycles-pp.finish_task_switch
0.10 ± 10% +0.0 0.13 ± 5% perf-profile.children.cycles-pp.schedule_idle
0.07 ± 6% +0.0 0.10 ± 12% perf-profile.children.cycles-pp.__read_nocancel
0.04 ± 58% +0.0 0.07 ± 15% perf-profile.children.cycles-pp.__free_pages_ok
0.06 ± 7% +0.0 0.09 ± 13% perf-profile.children.cycles-pp.perf_read
0.07 +0.0 0.11 ± 14% perf-profile.children.cycles-pp.perf_evsel__read_counter
0.07 +0.0 0.11 ± 13% perf-profile.children.cycles-pp.cmd_stat
0.07 +0.0 0.11 ± 13% perf-profile.children.cycles-pp.__run_perf_stat
0.07 +0.0 0.11 ± 13% perf-profile.children.cycles-pp.process_interval
0.07 +0.0 0.11 ± 13% perf-profile.children.cycles-pp.read_counters
0.07 ± 22% +0.0 0.11 ± 19% perf-profile.children.cycles-pp.__handle_mm_fault
0.07 ± 19% +0.1 0.13 ± 8% perf-profile.children.cycles-pp.rb_erase
0.03 ±100% +0.1 0.09 ± 9% perf-profile.children.cycles-pp.smp_call_function_single
0.01 ±173% +0.1 0.08 ± 11% perf-profile.children.cycles-pp.perf_event_read
0.00 +0.1 0.07 ± 13% perf-profile.children.cycles-pp.__perf_event_read_value
0.00 +0.1 0.07 ± 7% perf-profile.children.cycles-pp.__intel_pmu_enable_all
0.08 ± 17% +0.1 0.15 ± 8% perf-profile.children.cycles-pp.native_apic_msr_eoi_write
0.04 ±103% +0.1 0.13 ± 58% perf-profile.children.cycles-pp.shmem_getpage_gfp
0.38 ± 14% +0.1 0.51 ± 6% perf-profile.children.cycles-pp.run_timer_softirq
0.11 ± 4% +0.3 0.37 ± 32% perf-profile.children.cycles-pp.worker_thread
0.20 ± 5% +0.3 0.48 ± 25% perf-profile.children.cycles-pp.ret_from_fork
0.20 ± 4% +0.3 0.48 ± 25% perf-profile.children.cycles-pp.kthread
0.00 +0.3 0.29 ± 38% perf-profile.children.cycles-pp.memcpy_erms
0.00 +0.3 0.29 ± 38% perf-profile.children.cycles-pp.drm_fb_helper_dirty_work
0.00 +0.3 0.31 ± 37% perf-profile.children.cycles-pp.process_one_work
0.47 ± 48% +0.4 0.91 ± 19% perf-profile.children.cycles-pp.prep_new_huge_page
0.70 ± 29% +0.5 1.16 ± 18% perf-profile.children.cycles-pp.free_huge_page
0.73 ± 29% +0.5 1.19 ± 18% perf-profile.children.cycles-pp.tlb_flush_mmu
0.72 ± 29% +0.5 1.18 ± 18% perf-profile.children.cycles-pp.release_pages
0.73 ± 29% +0.5 1.19 ± 18% perf-profile.children.cycles-pp.tlb_finish_mmu
0.76 ± 27% +0.5 1.23 ± 18% perf-profile.children.cycles-pp.exit_mmap
0.77 ± 27% +0.5 1.24 ± 18% perf-profile.children.cycles-pp.mmput
0.79 ± 26% +0.5 1.27 ± 18% perf-profile.children.cycles-pp.__x64_sys_exit_group
0.79 ± 26% +0.5 1.27 ± 18% perf-profile.children.cycles-pp.do_group_exit
0.79 ± 26% +0.5 1.27 ± 18% perf-profile.children.cycles-pp.do_exit
1.28 ± 29% +0.5 1.76 ± 9% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.77 ± 28% +0.5 1.26 ± 13% perf-profile.children.cycles-pp.alloc_fresh_huge_page
1.53 ± 15% +0.7 2.26 ± 14% perf-profile.children.cycles-pp.do_syscall_64
1.53 ± 15% +0.7 2.27 ± 14% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
1.13 ± 3% +0.9 2.07 ± 14% perf-profile.children.cycles-pp.interrupt_entry
0.79 ± 9% +1.0 1.76 ± 5% perf-profile.children.cycles-pp.perf_event_task_tick
1.71 ± 39% +1.4 3.08 ± 16% perf-profile.children.cycles-pp.alloc_surplus_huge_page
2.66 ± 42% +2.3 4.94 ± 17% perf-profile.children.cycles-pp.alloc_huge_page
2.89 ± 45% +2.7 5.54 ± 18% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
3.34 ± 35% +2.7 6.02 ± 17% perf-profile.children.cycles-pp._raw_spin_lock
12.77 ± 14% +3.9 16.63 ± 7% perf-profile.children.cycles-pp.mutex_spin_on_owner
20.12 ± 9% +4.0 24.16 ± 6% perf-profile.children.cycles-pp.hugetlb_cow
15.40 ± 10% -3.6 11.84 ± 28% perf-profile.self.cycles-pp.do_rw_once
4.02 ± 9% -1.3 2.73 ± 30% perf-profile.self.cycles-pp.do_access
2.00 ± 14% -0.6 1.41 ± 13% perf-profile.self.cycles-pp.cpuidle_enter_state
1.26 ± 16% -0.5 0.74 ± 13% perf-profile.self.cycles-pp.native_sched_clock
0.42 ± 17% -0.2 0.27 ± 16% perf-profile.self.cycles-pp.account_process_tick
0.27 ± 19% -0.2 0.12 ± 17% perf-profile.self.cycles-pp.timerqueue_del
0.53 ± 3% -0.1 0.38 ± 11% perf-profile.self.cycles-pp.update_curr
0.27 ± 6% -0.1 0.14 ± 14% perf-profile.self.cycles-pp.__acct_update_integrals
0.27 ± 18% -0.1 0.16 ± 13% perf-profile.self.cycles-pp.rcu_segcblist_ready_cbs
0.61 ± 4% -0.1 0.51 ± 8% perf-profile.self.cycles-pp.task_tick_fair
0.20 ± 8% -0.1 0.12 ± 14% perf-profile.self.cycles-pp.account_system_index_time
0.23 ± 15% -0.1 0.16 ± 17% perf-profile.self.cycles-pp.rcu_dynticks_eqs_exit
0.25 ± 11% -0.1 0.18 ± 14% perf-profile.self.cycles-pp.find_next_bit
0.10 ± 11% -0.1 0.03 ±100% perf-profile.self.cycles-pp.tick_sched_do_timer
0.29 -0.1 0.23 ± 11% perf-profile.self.cycles-pp.timerqueue_add
0.12 ± 10% -0.1 0.06 ± 17% perf-profile.self.cycles-pp.account_user_time
0.22 ± 15% -0.1 0.16 ± 6% perf-profile.self.cycles-pp.scheduler_tick
0.17 ± 6% -0.0 0.12 ± 10% perf-profile.self.cycles-pp.cpuacct_charge
0.18 ± 20% -0.0 0.13 ± 3% perf-profile.self.cycles-pp.irq_work_tick
0.07 ± 13% -0.0 0.03 ±100% perf-profile.self.cycles-pp.update_process_times
0.12 ± 7% -0.0 0.08 ± 15% perf-profile.self.cycles-pp.get_cpu_device
0.07 ± 11% -0.0 0.04 ± 58% perf-profile.self.cycles-pp.raise_softirq
0.12 ± 11% -0.0 0.09 ± 7% perf-profile.self.cycles-pp.tick_nohz_get_sleep_length
0.11 ± 11% +0.0 0.14 ± 6% perf-profile.self.cycles-pp.native_write_msr
0.10 ± 5% +0.1 0.15 ± 8% perf-profile.self.cycles-pp.__remove_hrtimer
0.07 ± 23% +0.1 0.13 ± 8% perf-profile.self.cycles-pp.rb_erase
0.08 ± 17% +0.1 0.15 ± 7% perf-profile.self.cycles-pp.native_apic_msr_eoi_write
0.00 +0.1 0.08 ± 10% perf-profile.self.cycles-pp.smp_call_function_single
0.32 ± 17% +0.1 0.42 ± 7% perf-profile.self.cycles-pp.run_timer_softirq
0.22 ± 5% +0.1 0.34 ± 4% perf-profile.self.cycles-pp.ktime_get_update_offsets_now
0.45 ± 15% +0.2 0.60 ± 12% perf-profile.self.cycles-pp.rcu_irq_enter
0.31 ± 8% +0.2 0.46 ± 16% perf-profile.self.cycles-pp.irq_enter
0.29 ± 10% +0.2 0.44 ± 16% perf-profile.self.cycles-pp.apic_timer_interrupt
0.71 ± 30% +0.2 0.92 ± 8% perf-profile.self.cycles-pp.perf_mux_hrtimer_handler
0.00 +0.3 0.28 ± 37% perf-profile.self.cycles-pp.memcpy_erms
1.12 ± 3% +0.9 2.02 ± 15% perf-profile.self.cycles-pp.interrupt_entry
0.79 ± 9% +0.9 1.73 ± 5% perf-profile.self.cycles-pp.perf_event_task_tick
2.49 ± 45% +2.1 4.55 ± 20% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
10.95 ± 15% +2.7 13.61 ± 8% perf-profile.self.cycles-pp.mutex_spin_on_owner
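As a reading aid (an illustration, not part of the robot's output): the tables above use two different kinds of delta column. Plain metric rows (vmstat, interrupts, perf-stat) report a relative %change between the parent and the tested commit, while perf-profile rows report an absolute difference in percentage points of cycles. A minimal sketch of both, checked against the interrupts.NMI and do_rw_once rows above:

```python
# Sketch (illustration only): arithmetic behind the comparison columns.
def pct_change(base, new):
    """Relative delta, as in the %change column of plain metric rows."""
    return (new - base) / base * 100

def pp_delta(base, new):
    """Absolute percentage-point delta, as in perf-profile rows."""
    return new - base

# interrupts.NMI: 841457 -> 980751 in the table above
print(round(pct_change(841457, 980751), 1))  # 16.6
# perf-profile ... do_rw_once: 12.75% -> 8.67% of cycles
print(round(pp_delta(12.75, 8.67), 1))       # -4.1
```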
vm-scalability.throughput
1.6e+07 +-+---------------------------------------------------------------+
|..+.+ +..+.+..+.+. +. +..+.+..+.+..+.+..+.+..+ + |
1.4e+07 +-+ : : O O O O |
1.2e+07 O-+O O O O O O O O O O O O O O O O O O
| : : O O O O |
1e+07 +-+ : : |
| : : |
8e+06 +-+ : : |
| : : |
6e+06 +-+ : : |
4e+06 +-+ : : |
| :: |
2e+06 +-+ : |
| : |
0 +-+---------------------------------------------------------------+
vm-scalability.time.minor_page_faults
2.5e+06 +-+---------------------------------------------------------------+
| |
|..+.+ +..+.+..+.+..+.+..+.+.. .+. .+.+..+.+..+.+..+.+..+ |
2e+06 +-+ : : +. +. |
O O O: O O O O O O O O O O |
| : : O O O O O O O O O O O O O O
1.5e+06 +-+ : : |
| : : |
1e+06 +-+ : : |
| : : |
| : : |
500000 +-+ : : |
| : |
| : |
0 +-+---------------------------------------------------------------+
vm-scalability.workload
3.5e+09 +-+---------------------------------------------------------------+
| .+. .+.+.. .+.. |
3e+09 +-+ + +..+.+..+.+..+.+. +..+.+..+.+..+.+..+.+..+ + |
| : : O O O |
2.5e+09 O-+O O: O O O O O O O O O |
| : : O O O O O O O O O O O O
2e+09 +-+ : : |
| : : |
1.5e+09 +-+ : : |
| : : |
1e+09 +-+ : : |
| : : |
5e+08 +-+ : |
| : |
0 +-+---------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[cpuidle] 259231a045: will-it-scale.per_process_ops -12.6% regression
by kernel test robot
Greetings,
FYI, we noticed a -12.6% regression of will-it-scale.per_process_ops due to commit:
commit: 259231a045616c4101d023a8f4dcc8379af265a6 ("cpuidle: add poll_limit_ns to cpuidle_device structure")
https://kernel.googlesource.com/pub/scm/linux/kernel/git/next/linux-next.git master
in testcase: will-it-scale
on test machine: 288 threads Intel(R) Xeon Phi(TM) CPU 7295 @ 1.50GHz with 80G memory
with following parameters:
nr_task: 100%
mode: process
test: mmap1
cpufreq_governor: performance
test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see if the testcase will scale. It builds both a process and threads based test in order to see any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale
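As a rough cross-check (an assumption about how LKP derives the metric, not stated in this report): per_process_ops is approximately the total workload divided by the number of parallel tasks, which at nr_task=100% on this 288-thread machine is 288 copies. The headline numbers in the table below are consistent with that:

```python
# Sketch (assumption): per_process_ops ~= total workload / parallel task count.
def per_process_ops(workload, nr_tasks):
    return workload // nr_tasks

# Figures from the comparison table in this report (288 threads, nr_task=100%):
print(per_process_ops(464144, 288))  # 1611 (parent commit)
print(per_process_ops(405580, 288))  # 1408 (tested commit, -12.6%)
```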
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen@intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.6/process/100%/debian-x86_64-2019-05-14.cgz/lkp-knm01/mmap1/will-it-scale
commit:
fa86ee90eb ("add cpuidle-haltpoll driver")
259231a045 ("cpuidle: add poll_limit_ns to cpuidle_device structure")
fa86ee90eb111126 259231a045616c4101d023a8f4d
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 dmesg.WARNING:at#for_ip_swapgs_restore_regs_and_return_to_usermode/0x
%stddev %change %stddev
\ | \
1611 -12.6% 1408 will-it-scale.per_process_ops
464144 -12.6% 405580 will-it-scale.workload
1581 ± 2% +3.3% 1633 vmstat.system.cs
35.13 -1.0% 34.78 boot-time.dhcp
11888 -1.0% 11765 boot-time.idle
5207 ± 4% +25.7% 6547 ± 7% slabinfo.kmalloc-rcl-64.active_objs
5207 ± 4% +25.7% 6547 ± 7% slabinfo.kmalloc-rcl-64.num_objs
1.07 ± 53% -38.8% 0.66 ± 6% turbostat.CPU%c6
2.63 -14.9% 2.23 ± 2% turbostat.RAMWatt
874.11 ± 2% +10.7% 967.81 ± 5% sched_debug.cfs_rq:/.exec_clock.stddev
4672082 ± 24% +60.3% 7488441 ± 26% sched_debug.cpu.avg_idle.max
662.84 ± 3% +21.4% 804.72 ± 6% sched_debug.cpu.clock.stddev
662.84 ± 3% +21.4% 804.72 ± 6% sched_debug.cpu.clock_task.stddev
1185029 ± 14% +51.4% 1793617 ± 28% sched_debug.cpu.max_idle_balance_cost.max
75638 ± 25% +62.8% 123124 ± 17% sched_debug.cpu.max_idle_balance_cost.stddev
0.00 ± 2% +21.3% 0.00 ± 6% sched_debug.cpu.next_balance.stddev
1.16 ± 6% -0.1 1.05 perf-stat.i.branch-miss-rate%
1.104e+08 -5.6% 1.042e+08 perf-stat.i.branch-misses
45924834 -3.9% 44147120 perf-stat.i.cache-misses
1455 +2.1% 1485 perf-stat.i.context-switches
9825 +4.4% 10255 perf-stat.i.cycles-between-cache-misses
0.18 ± 8% -0.0 0.16 perf-stat.i.iTLB-load-miss-rate%
67884172 -6.4% 63509655 perf-stat.i.iTLB-load-misses
596.76 +7.3% 640.20 perf-stat.i.instructions-per-iTLB-miss
1.12 -0.1 1.05 perf-stat.overall.branch-miss-rate%
9817 +4.3% 10239 perf-stat.overall.cycles-between-cache-misses
0.17 -0.0 0.16 perf-stat.overall.iTLB-load-miss-rate%
596.10 +7.1% 638.47 perf-stat.overall.instructions-per-iTLB-miss
26461122 +13.8% 30121468 perf-stat.overall.path-length
1.098e+08 -5.7% 1.035e+08 perf-stat.ps.branch-misses
45700304 -3.9% 43918015 perf-stat.ps.cache-misses
67526222 -6.5% 63149942 perf-stat.ps.iTLB-load-misses
481381 ± 18% +17.7% 566437 ± 7% interrupts.CPU0.LOC:Local_timer_interrupts
79.50 ±111% -76.4% 18.75 ± 99% interrupts.CPU100.RES:Rescheduling_interrupts
5783 ± 33% -17.7% 4757 ± 34% interrupts.CPU103.NMI:Non-maskable_interrupts
5783 ± 33% -17.7% 4757 ± 34% interrupts.CPU103.PMI:Performance_monitoring_interrupts
128.00 ±105% -92.8% 9.25 ± 37% interrupts.CPU107.RES:Rescheduling_interrupts
5769 ± 33% -33.5% 3837 interrupts.CPU109.NMI:Non-maskable_interrupts
5769 ± 33% -33.5% 3837 interrupts.CPU109.PMI:Performance_monitoring_interrupts
6774 ± 24% -29.4% 4785 ± 34% interrupts.CPU111.NMI:Non-maskable_interrupts
6774 ± 24% -29.4% 4785 ± 34% interrupts.CPU111.PMI:Performance_monitoring_interrupts
140.50 ± 43% -78.3% 30.50 ±118% interrupts.CPU118.RES:Rescheduling_interrupts
5776 ± 33% -33.7% 3830 interrupts.CPU120.NMI:Non-maskable_interrupts
5776 ± 33% -33.7% 3830 interrupts.CPU120.PMI:Performance_monitoring_interrupts
5801 ± 33% -17.7% 4776 ± 34% interrupts.CPU126.NMI:Non-maskable_interrupts
5801 ± 33% -17.7% 4776 ± 34% interrupts.CPU126.PMI:Performance_monitoring_interrupts
5772 ± 33% -17.7% 4749 ± 34% interrupts.CPU138.NMI:Non-maskable_interrupts
5772 ± 33% -17.7% 4749 ± 34% interrupts.CPU138.PMI:Performance_monitoring_interrupts
5786 ± 33% -18.4% 4722 ± 34% interrupts.CPU14.NMI:Non-maskable_interrupts
5786 ± 33% -18.4% 4722 ± 34% interrupts.CPU14.PMI:Performance_monitoring_interrupts
3844 +74.8% 6718 ± 24% interrupts.CPU167.NMI:Non-maskable_interrupts
3844 +74.8% 6718 ± 24% interrupts.CPU167.PMI:Performance_monitoring_interrupts
96.50 ± 98% -82.9% 16.50 ± 98% interrupts.CPU172.RES:Rescheduling_interrupts
5757 ± 33% -33.8% 3809 interrupts.CPU175.NMI:Non-maskable_interrupts
5757 ± 33% -33.8% 3809 interrupts.CPU175.PMI:Performance_monitoring_interrupts
481026 ± 19% +18.0% 567722 ± 7% interrupts.CPU18.LOC:Local_timer_interrupts
58.50 ± 91% -71.4% 16.75 ± 94% interrupts.CPU180.RES:Rescheduling_interrupts
6737 ± 24% -43.7% 3795 interrupts.CPU184.NMI:Non-maskable_interrupts
6737 ± 24% -43.7% 3795 interrupts.CPU184.PMI:Performance_monitoring_interrupts
5770 ± 32% -18.1% 4728 ± 34% interrupts.CPU188.NMI:Non-maskable_interrupts
5770 ± 32% -18.1% 4728 ± 34% interrupts.CPU188.PMI:Performance_monitoring_interrupts
281.00 ± 87% -80.5% 54.75 ± 49% interrupts.CPU188.RES:Rescheduling_interrupts
529.50 ±124% -95.6% 23.50 ± 81% interrupts.CPU189.RES:Rescheduling_interrupts
7713 -50.7% 3803 interrupts.CPU192.NMI:Non-maskable_interrupts
7713 -50.7% 3803 interrupts.CPU192.PMI:Performance_monitoring_interrupts
5762 ± 33% -33.9% 3809 interrupts.CPU203.NMI:Non-maskable_interrupts
5762 ± 33% -33.9% 3809 interrupts.CPU203.PMI:Performance_monitoring_interrupts
5783 ± 33% -17.9% 4750 ± 35% interrupts.CPU217.NMI:Non-maskable_interrupts
5783 ± 33% -17.9% 4750 ± 35% interrupts.CPU217.PMI:Performance_monitoring_interrupts
16.75 ± 49% +443.3% 91.00 ±129% interrupts.CPU224.RES:Rescheduling_interrupts
6779 ± 24% -43.5% 3830 interrupts.CPU239.NMI:Non-maskable_interrupts
6779 ± 24% -43.5% 3830 interrupts.CPU239.PMI:Performance_monitoring_interrupts
478215 ± 19% +17.7% 562671 ± 8% interrupts.CPU258.LOC:Local_timer_interrupts
350.50 ± 89% -88.3% 41.00 ±121% interrupts.CPU259.RES:Rescheduling_interrupts
493.00 ±139% -95.1% 24.25 ± 69% interrupts.CPU261.RES:Rescheduling_interrupts
6726 ± 24% -43.3% 3813 interrupts.CPU263.NMI:Non-maskable_interrupts
6726 ± 24% -43.3% 3813 interrupts.CPU263.PMI:Performance_monitoring_interrupts
5755 ± 33% -34.2% 3789 interrupts.CPU270.NMI:Non-maskable_interrupts
5755 ± 33% -34.2% 3789 interrupts.CPU270.PMI:Performance_monitoring_interrupts
5776 ± 32% -33.8% 3825 interrupts.CPU275.NMI:Non-maskable_interrupts
5776 ± 32% -33.8% 3825 interrupts.CPU275.PMI:Performance_monitoring_interrupts
5799 ± 33% -17.8% 4769 ± 35% interrupts.CPU276.NMI:Non-maskable_interrupts
5799 ± 33% -17.8% 4769 ± 35% interrupts.CPU276.PMI:Performance_monitoring_interrupts
6708 ± 24% -29.1% 4754 ± 34% interrupts.CPU277.NMI:Non-maskable_interrupts
6708 ± 24% -29.1% 4754 ± 34% interrupts.CPU277.PMI:Performance_monitoring_interrupts
5809 ± 32% -34.2% 3820 interrupts.CPU3.NMI:Non-maskable_interrupts
5809 ± 32% -34.2% 3820 interrupts.CPU3.PMI:Performance_monitoring_interrupts
150.25 ± 27% -61.9% 57.25 ± 21% interrupts.CPU3.RES:Rescheduling_interrupts
7721 -50.0% 3858 interrupts.CPU34.NMI:Non-maskable_interrupts
7721 -50.0% 3858 interrupts.CPU34.PMI:Performance_monitoring_interrupts
135.00 ± 57% -77.6% 30.25 ± 24% interrupts.CPU36.RES:Rescheduling_interrupts
6764 ± 24% -43.4% 3831 interrupts.CPU39.NMI:Non-maskable_interrupts
6764 ± 24% -43.4% 3831 interrupts.CPU39.PMI:Performance_monitoring_interrupts
7661 -38.2% 4733 ± 33% interrupts.CPU4.NMI:Non-maskable_interrupts
7661 -38.2% 4733 ± 33% interrupts.CPU4.PMI:Performance_monitoring_interrupts
117.25 ± 52% -66.1% 39.75 ± 76% interrupts.CPU40.RES:Rescheduling_interrupts
480972 ± 18% +17.6% 565776 ± 7% interrupts.CPU42.LOC:Local_timer_interrupts
1034 ± 65% -85.0% 154.75 ±127% interrupts.CPU45.RES:Rescheduling_interrupts
24.00 ± 68% +2232.3% 559.75 ±103% interrupts.CPU50.RES:Rescheduling_interrupts
4794 ± 34% +20.8% 5790 ± 33% interrupts.CPU51.NMI:Non-maskable_interrupts
4794 ± 34% +20.8% 5790 ± 33% interrupts.CPU51.PMI:Performance_monitoring_interrupts
48.25 ± 96% +590.7% 333.25 ±110% interrupts.CPU51.RES:Rescheduling_interrupts
6767 ± 24% -43.3% 3837 interrupts.CPU53.NMI:Non-maskable_interrupts
6767 ± 24% -43.3% 3837 interrupts.CPU53.PMI:Performance_monitoring_interrupts
16.25 ± 88% +696.9% 129.50 ±137% interrupts.CPU53.RES:Rescheduling_interrupts
479539 ± 19% +17.4% 562923 ± 8% interrupts.CPU58.LOC:Local_timer_interrupts
6744 ± 24% -43.2% 3830 interrupts.CPU58.NMI:Non-maskable_interrupts
6744 ± 24% -43.2% 3830 interrupts.CPU58.PMI:Performance_monitoring_interrupts
6731 ± 24% -29.5% 4747 ± 35% interrupts.CPU6.NMI:Non-maskable_interrupts
6731 ± 24% -29.5% 4747 ± 35% interrupts.CPU6.PMI:Performance_monitoring_interrupts
7693 -50.2% 3834 interrupts.CPU65.NMI:Non-maskable_interrupts
7693 -50.2% 3834 interrupts.CPU65.PMI:Performance_monitoring_interrupts
5821 ± 33% -34.2% 3829 interrupts.CPU66.NMI:Non-maskable_interrupts
5821 ± 33% -34.2% 3829 interrupts.CPU66.PMI:Performance_monitoring_interrupts
18.75 ± 35% +392.0% 92.25 ± 75% interrupts.CPU72.RES:Rescheduling_interrupts
6782 ± 24% -43.4% 3837 interrupts.CPU80.NMI:Non-maskable_interrupts
6782 ± 24% -43.4% 3837 interrupts.CPU80.PMI:Performance_monitoring_interrupts
5776 ± 33% -17.5% 4764 ± 33% interrupts.CPU97.NMI:Non-maskable_interrupts
5776 ± 33% -17.5% 4764 ± 33% interrupts.CPU97.PMI:Performance_monitoring_interrupts
48.96 -0.3 48.70 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.mmap64
48.94 -0.3 48.67 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.mmap64
49.21 -0.3 48.95 perf-profile.calltrace.cycles-pp.mmap64
46.12 -0.2 45.89 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.percpu_counter_add_batch.__vm_enough_memory.mmap_region.do_mmap
46.08 -0.2 45.85 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.percpu_counter_add_batch.__vm_enough_memory.mmap_region
48.68 -0.2 48.47 perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.mmap64
48.65 -0.2 48.45 perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.mmap64
48.47 -0.2 48.27 perf-profile.calltrace.cycles-pp.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.27 -0.1 1.16 perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
1.19 -0.1 1.09 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.__do_munmap.__vm_munmap
1.61 -0.1 1.54 perf-profile.calltrace.cycles-pp.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
50.20 +0.2 50.37 perf-profile.calltrace.cycles-pp.munmap
49.95 +0.2 50.13 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
49.96 +0.2 50.15 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.munmap
46.38 +0.2 46.56 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.percpu_counter_add_batch.__do_munmap.__vm_munmap
49.67 +0.2 49.88 perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
49.64 +0.2 49.85 perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
49.51 +0.3 49.76 perf-profile.calltrace.cycles-pp.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
47.52 +0.4 47.88 perf-profile.calltrace.cycles-pp.percpu_counter_add_batch.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
0.00 +1.0 1.03 ± 4% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt._raw_spin_unlock_irqrestore.percpu_counter_add_batch.__vm_enough_memory
0.00 +1.0 1.03 ± 6% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt._raw_spin_unlock_irqrestore.percpu_counter_add_batch.__do_munmap
0.00 +1.1 1.10 ± 4% perf-profile.calltrace.cycles-pp.apic_timer_interrupt._raw_spin_unlock_irqrestore.percpu_counter_add_batch.__vm_enough_memory.mmap_region
0.00 +1.1 1.10 ± 6% perf-profile.calltrace.cycles-pp.apic_timer_interrupt._raw_spin_unlock_irqrestore.percpu_counter_add_batch.__do_munmap.__vm_munmap
0.00 +1.1 1.12 ± 4% perf-profile.calltrace.cycles-pp._raw_spin_unlock_irqrestore.percpu_counter_add_batch.__vm_enough_memory.mmap_region.do_mmap
0.00 +1.1 1.13 ± 5% perf-profile.calltrace.cycles-pp._raw_spin_unlock_irqrestore.percpu_counter_add_batch.__do_munmap.__vm_munmap.__x64_sys_munmap
0.00 +1.4 1.40 ± 4% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt._raw_spin_unlock_irqrestore
0.00 +1.7 1.72 ± 5% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt._raw_spin_unlock_irqrestore.percpu_counter_add_batch
49.22 -0.3 48.96 perf-profile.children.cycles-pp.mmap64
48.68 -0.2 48.47 perf-profile.children.cycles-pp.ksys_mmap_pgoff
48.66 -0.2 48.45 perf-profile.children.cycles-pp.vm_mmap_pgoff
48.47 -0.2 48.27 perf-profile.children.cycles-pp.do_mmap
1.27 -0.1 1.16 perf-profile.children.cycles-pp.unmap_vmas
0.36 ± 3% -0.1 0.25 ± 5% perf-profile.children.cycles-pp.perf_event_mmap
1.23 -0.1 1.13 perf-profile.children.cycles-pp.unmap_page_range
1.62 -0.1 1.54 perf-profile.children.cycles-pp.unmap_region
0.20 ± 6% -0.1 0.14 ± 8% perf-profile.children.cycles-pp.perf_iterate_sb
0.55 ± 2% -0.0 0.51 ± 2% perf-profile.children.cycles-pp.___might_sleep
0.17 ± 6% -0.0 0.13 ± 3% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.22 -0.0 0.19 ± 2% perf-profile.children.cycles-pp.get_unmapped_area
0.16 ± 6% -0.0 0.14 ± 3% perf-profile.children.cycles-pp.entry_SYSCALL_64
0.17 -0.0 0.15 ± 3% perf-profile.children.cycles-pp.arch_get_unmapped_area_topdown
0.30 ± 2% -0.0 0.28 perf-profile.children.cycles-pp._cond_resched
0.19 ± 3% -0.0 0.17 ± 2% perf-profile.children.cycles-pp.free_p4d_range
0.11 -0.0 0.10 ± 5% perf-profile.children.cycles-pp.unmapped_area_topdown
0.12 +0.0 0.14 perf-profile.children.cycles-pp.security_vm_enough_memory_mm
0.08 +0.0 0.10 ± 4% perf-profile.children.cycles-pp.selinux_vm_enough_memory
0.07 +0.0 0.10 ± 4% perf-profile.children.cycles-pp.cred_has_capability
0.04 ± 57% +0.0 0.07 ± 7% perf-profile.children.cycles-pp.__might_sleep
0.11 ± 4% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.15 ± 12% +0.0 0.19 ± 6% perf-profile.children.cycles-pp.update_curr
0.20 ± 33% +0.1 0.26 ± 30% perf-profile.children.cycles-pp.vfs_write
0.00 +0.1 0.06 ± 7% perf-profile.children.cycles-pp.perf_event_task_tick
0.00 +0.1 0.06 ± 6% perf-profile.children.cycles-pp.avc_has_perm_noaudit
0.53 ± 5% +0.1 0.61 ± 3% perf-profile.children.cycles-pp.task_tick_fair
0.00 +0.1 0.09 perf-profile.children.cycles-pp.lru_add_drain_cpu
0.00 +0.1 0.10 ± 11% perf-profile.children.cycles-pp.fpregs_assert_state_consistent
0.01 ±173% +0.1 0.11 ± 7% perf-profile.children.cycles-pp.irq_enter
0.00 +0.1 0.10 ± 4% perf-profile.children.cycles-pp.lru_add_drain
0.69 ± 7% +0.1 0.82 ± 4% perf-profile.children.cycles-pp.scheduler_tick
0.87 ± 6% +0.2 1.02 ± 4% perf-profile.children.cycles-pp.update_process_times
0.94 ± 6% +0.2 1.09 ± 4% perf-profile.children.cycles-pp.tick_sched_timer
0.89 ± 6% +0.2 1.04 ± 4% perf-profile.children.cycles-pp.tick_sched_handle
50.23 +0.2 50.40 perf-profile.children.cycles-pp.munmap
1.28 ± 7% +0.2 1.48 ± 4% perf-profile.children.cycles-pp.__hrtimer_run_queues
49.67 +0.2 49.88 perf-profile.children.cycles-pp.__x64_sys_munmap
49.64 +0.2 49.86 perf-profile.children.cycles-pp.__vm_munmap
49.52 +0.3 49.77 perf-profile.children.cycles-pp.__do_munmap
94.81 +0.3 95.07 perf-profile.children.cycles-pp.percpu_counter_add_batch
1.54 ± 9% +0.3 1.83 ± 5% perf-profile.children.cycles-pp.hrtimer_interrupt
1.83 ± 9% +0.4 2.18 ± 5% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
1.95 ± 10% +0.4 2.34 ± 5% perf-profile.children.cycles-pp.apic_timer_interrupt
0.00 +2.4 2.38 ± 5% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.15 ± 5% -0.0 0.10 ± 11% perf-profile.self.cycles-pp.perf_iterate_sb
0.17 ± 6% -0.0 0.13 perf-profile.self.cycles-pp.syscall_return_via_sysret
0.51 -0.0 0.46 ± 2% perf-profile.self.cycles-pp.unmap_page_range
0.52 ± 2% -0.0 0.48 ± 2% perf-profile.self.cycles-pp.___might_sleep
0.10 ± 7% -0.0 0.06 perf-profile.self.cycles-pp.perf_event_mmap
0.15 ± 4% -0.0 0.12 ± 4% perf-profile.self.cycles-pp.entry_SYSCALL_64
0.15 ± 3% -0.0 0.13 ± 3% perf-profile.self.cycles-pp._cond_resched
0.10 ± 4% -0.0 0.09 perf-profile.self.cycles-pp.unmapped_area_topdown
0.09 -0.0 0.08 perf-profile.self.cycles-pp.vma_compute_subtree_gap
0.08 +0.0 0.10 ± 4% perf-profile.self.cycles-pp.mmap_region
0.05 +0.0 0.08 perf-profile.self.cycles-pp.mmap64
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.task_tick_fair
0.00 +0.1 0.06 ± 9% perf-profile.self.cycles-pp.irq_enter
0.04 ± 59% +0.1 0.10 ± 12% perf-profile.self.cycles-pp.hrtimer_interrupt
0.00 +0.1 0.06 ± 7% perf-profile.self.cycles-pp.perf_event_task_tick
0.00 +0.1 0.06 ± 6% perf-profile.self.cycles-pp.avc_has_perm_noaudit
0.01 ±173% +0.1 0.08 ± 5% perf-profile.self.cycles-pp.munmap
0.00 +0.1 0.09 ± 5% perf-profile.self.cycles-pp.lru_add_drain_cpu
0.00 +0.1 0.10 ± 11% perf-profile.self.cycles-pp.fpregs_assert_state_consistent
0.00 +0.1 0.15 ± 5% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
will-it-scale.per_process_ops
1800 +-+------------------------------------------------------------------+
| .+ .++.++.++. .++.++.++.+ .++.++.++.+ .+ .+|
1600 +-++.++ + ++.+ ++.+ ++.++ +.++ + + |
1400 +-+O OO OO OO OO OO :O :O OO OO O OO OO OO OO OO |
O O OO O O : : O |
1200 +-+ : : : : |
1000 +-+ : : : : |
| : : : : |
800 +-+ :: :: |
600 +-+ :: :: |
| :: :: |
400 +-+ :: :: |
200 +-+ : : |
| : : |
0 +-+------------------------------------------------------------------+
will-it-scale.workload
500000 +-+----------------------------------------------------------------+
450000 +-+++.++.++.++.+++.++.+ ++.+ ++.+++.++.++.++.++.+++.++.++.++.++.+|
| O O : : : : O O |
400000 OO+OO O OO O OOO OO OO OO OO OO OO O OO OO OO O |
350000 +-+ : : : : |
| : : : : |
300000 +-+ : : : : |
250000 +-+ : : : : |
200000 +-+ :: :: |
| :: :: |
150000 +-+ :: :: |
100000 +-+ : : |
| : : |
50000 +-+ : : |
0 +-+----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[mm] 87eaceb3fa: stress-ng.madvise.ops_per_sec -19.6% regression

by kernel test robot
Greetings,
FYI, we noticed a -19.6% regression of stress-ng.madvise.ops_per_sec due to commit:
commit: 87eaceb3faa59b9b4d940ec9554ce251325d83fe ("mm: thp: make deferred split shrinker memcg aware")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: stress-ng
on test machine: 72 threads Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz with 192G memory
with the following parameters:
nr_threads: 100%
disk: 1HDD
testtime: 1s
class: vm
ucode: 0x200005e
cpufreq_governor: performance
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen@intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
vm/gcc-7/performance/1HDD/x86_64-rhel-7.6/100%/debian-x86_64-2019-05-14.cgz/lkp-skl-2sp8/stress-ng/1s/0x200005e
commit:
0a432dcbeb ("mm: shrinker: make shrinker not depend on memcg kmem")
87eaceb3fa ("mm: thp: make deferred split shrinker memcg aware")
0a432dcbeb32edcd 87eaceb3faa59b9b4d940ec9554
---------------- ---------------------------
%stddev %change %stddev
\ | \
6457 -19.5% 5198 stress-ng.madvise.ops
6409 -19.6% 5154 stress-ng.madvise.ops_per_sec
3575 -26.8% 2618 ± 6% stress-ng.mremap.ops
3575 -26.9% 2613 ± 6% stress-ng.mremap.ops_per_sec
15.77 -5.8% 14.85 ± 2% iostat.cpu.user
3427944 ± 4% -9.3% 3109984 meminfo.AnonPages
33658 ± 22% +69791.3% 23524535 ±165% sched_debug.cfs_rq:/.load.max
19951 ± 7% +13.3% 22611 ± 3% softirqs.CPU54.TIMER
109.94 -4.1% 105.41 turbostat.RAMWatt
5.89 ± 62% -3.1 2.78 ±173% perf-profile.calltrace.cycles-pp.page_fault
3.39 ±101% -0.6 2.78 ±173% perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
3.39 ±101% -0.6 2.78 ±173% perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
5.28 ±100% -5.3 0.00 perf-profile.children.cycles-pp.___might_sleep
5.28 ±100% -5.3 0.00 perf-profile.self.cycles-pp.___might_sleep
885704 ± 5% -11.0% 787888 ± 6% proc-vmstat.nr_anon_pages
1.742e+08 ± 5% -9.6% 1.576e+08 ± 2% proc-vmstat.pgalloc_normal
1.741e+08 ± 5% -9.6% 1.575e+08 ± 2% proc-vmstat.pgfree
375505 -19.4% 302688 proc-vmstat.pglazyfree
55236 ± 38% -55.5% 24552 ± 41% proc-vmstat.thp_deferred_split_page
55234 ± 38% -55.6% 24543 ± 41% proc-vmstat.thp_fault_alloc
3218 -19.4% 2595 proc-vmstat.thp_split_page
12163 ± 7% -22.1% 9473 ± 12% proc-vmstat.thp_split_pmd
8193516 +3.2% 8459146 proc-vmstat.unevictable_pgs_scanned
79085 ± 10% -7.8% 72890 interrupts.CAL:Function_call_interrupts
1139 ± 9% -10.7% 1018 ± 3% interrupts.CPU0.CAL:Function_call_interrupts
3596 ± 3% -13.8% 3100 ± 8% interrupts.CPU20.TLB:TLB_shootdowns
3602 ± 4% -12.6% 3149 ± 10% interrupts.CPU23.TLB:TLB_shootdowns
3512 ± 5% -9.9% 3163 ± 9% interrupts.CPU25.TLB:TLB_shootdowns
3512 ± 3% -12.1% 3088 ± 9% interrupts.CPU26.TLB:TLB_shootdowns
3610 ± 5% -13.2% 3134 ± 5% interrupts.CPU29.TLB:TLB_shootdowns
3602 ± 5% -17.4% 2973 ± 8% interrupts.CPU31.TLB:TLB_shootdowns
3548 ± 4% -12.7% 3098 ± 6% interrupts.CPU32.TLB:TLB_shootdowns
3637 ± 5% -15.2% 3085 ± 7% interrupts.CPU35.TLB:TLB_shootdowns
3588 ± 3% -12.7% 3131 ± 9% interrupts.CPU56.TLB:TLB_shootdowns
3664 ± 5% -14.3% 3142 ± 10% interrupts.CPU59.TLB:TLB_shootdowns
3542 ± 6% -13.0% 3082 ± 5% interrupts.CPU64.TLB:TLB_shootdowns
3539 ± 5% +12.4% 3977 ± 11% interrupts.CPU7.TLB:TLB_shootdowns
3485 ± 5% -13.0% 3033 ± 10% interrupts.CPU70.TLB:TLB_shootdowns
3651 ± 4% -16.1% 3062 ± 9% interrupts.CPU71.TLB:TLB_shootdowns
1.557e+10 ± 2% -8.1% 1.431e+10 ± 3% perf-stat.i.branch-instructions
1.887e+08 ± 9% -21.4% 1.484e+08 perf-stat.i.cache-misses
5.026e+08 ± 2% -8.6% 4.595e+08 ± 4% perf-stat.i.cache-references
2609 ± 3% +6.0% 2766 perf-stat.i.cycles-between-cache-misses
7.344e+09 -6.6% 6.861e+09 ± 3% perf-stat.i.dTLB-stores
6.969e+10 -7.6% 6.44e+10 ± 3% perf-stat.i.instructions
0.37 ± 2% -7.2% 0.34 perf-stat.i.ipc
43.29 ± 5% +3.7 46.94 ± 4% perf-stat.i.node-load-miss-rate%
15474576 ± 8% -23.9% 11782653 ± 13% perf-stat.i.node-load-misses
28.50 ± 6% +3.2 31.74 ± 3% perf-stat.i.node-store-miss-rate%
26447212 ± 5% -11.6% 23382361 ± 4% perf-stat.i.node-stores
0.61 +0.0 0.65 perf-stat.overall.branch-miss-rate%
37.58 ± 8% -5.2 32.39 ± 4% perf-stat.overall.cache-miss-rate%
2.91 ± 2% +3.4% 3.00 perf-stat.overall.cpi
1091 ± 10% +19.5% 1303 ± 3% perf-stat.overall.cycles-between-cache-misses
0.17 +0.0 0.18 ± 3% perf-stat.overall.dTLB-store-miss-rate%
5922 -6.6% 5533 ± 2% perf-stat.overall.instructions-per-iTLB-miss
0.34 ± 2% -3.3% 0.33 perf-stat.overall.ipc
1.462e+10 ± 2% -5.3% 1.384e+10 ± 3% perf-stat.ps.branch-instructions
1.765e+08 ± 10% -18.7% 1.435e+08 perf-stat.ps.cache-misses
6.926e+09 -4.0% 6.648e+09 ± 3% perf-stat.ps.dTLB-stores
6.555e+10 -5.0% 6.229e+10 ± 3% perf-stat.ps.instructions
14736035 ± 8% -21.6% 11547658 ± 14% perf-stat.ps.node-load-misses
2.703e+12 -4.4% 2.585e+12 ± 2% perf-stat.total.instructions
stress-ng.madvise.ops
7000 +-+------------------------------------------------------------------+
6800 +-+ + |
| : |
6600 +-+.+ ++.+ +.+ ++ + ++. : :+.+ +. ++ .++ +. ++ + +. + |
6400 +-+ ++ ++ + +.+ + ++ + ++ + : + ++ + ++.+ + ++ :+.++|
| + + |
6200 +-+ |
6000 +-+ |
5800 +-+ |
| |
5600 +-+ |
5400 +-+ |
OO OOO O O O O O O O O O O |
5200 +-O O O OO OO OO O OOO O OO O O OOOO O O |
5000 +-+------------------------------------------------------------------+
stress-ng.madvise.ops_per_sec
7000 +-+------------------------------------------------------------------+
6800 +-+ + |
| : |
6600 +-+ : : |
6400 +-+.+++++.++++.+++++.+++++.++ ++.++++.+ +++.+++++.+++++.++++.++ ++.++|
| + + |
6200 +-+ |
6000 +-+ |
5800 +-+ |
| |
5600 +-+ |
5400 +-+ |
|O O O O O O |
5200 O-O O OOO O OO OO OOOOO OOOOO OOOO OO O O O |
5000 +-+---------O---------------------------OO---------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[mm/lruvec] 3145e78472: vm-scalability.median -18.4% regression
by kernel test robot
Greetings,
FYI, we noticed a -18.4% regression of vm-scalability.median due to commit:
commit: 3145e78472f7ad746e15180092a9a60e34e5455b ("mm/lruvec: add irqsave flags into lruvec struct")
https://github.com/alexshi/linux.git lru_lock
in testcase: vm-scalability
on test machine: 104 threads Skylake with 192G memory
with the following parameters:
runtime: 300s
test: lru-file-readtwice
cpufreq_governor: performance
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
In addition to that, the commit also has a significant impact on the following tests:
+------------------+-----------------------------------------------------------------------+
| testcase: change | reaim: reaim.child_systime -2.7% undefined |
| test machine | 48 threads Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz with 64G memory |
| test parameters | cpufreq_governor=performance |
| | nr_task=1000t |
| | runtime=300s |
| | test=mem_rtns_1 |
| | ucode=0x42e |
+------------------+-----------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.median 2.2% improvement |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | cpufreq_governor=performance |
| | runtime=300s |
| | test=lru-file-readonce |
| | ucode=0xb000036 |
+------------------+-----------------------------------------------------------------------+
| testcase: change | reaim: boot-time.boot -10.9% improvement |
| test machine | 72 threads Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz with 256G memory |
| test parameters | cpufreq_governor=performance |
| | nr_task=1000 |
| | runtime=300s |
| | test=page_test |
| | ucode=0x43 |
+------------------+-----------------------------------------------------------------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen@intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2019-05-14.cgz/300s/lkp-skl-fpga01/lru-file-readtwice/vm-scalability
commit:
f17f33d34b ("mm/lru: add per lruvec lock for memcg")
3145e78472 ("mm/lruvec: add irqsave flags into lruvec struct")
f17f33d34bfab978 3145e78472f7ad746e15180092a
---------------- ---------------------------
%stddev %change %stddev
\ | \
74973 -18.4% 61174 vm-scalability.median
15723187 -18.7% 12782643 vm-scalability.throughput
9059 ± 2% -3.0% 8784 vm-scalability.time.percent_of_cpu_this_job_got
775.84 -11.7% 684.95 ± 2% vm-scalability.time.user_time
4.717e+09 -18.7% 3.835e+09 vm-scalability.workload
2.25 ± 2% -0.3 1.95 ± 2% mpstat.cpu.all.usr%
262971 ± 12% +1862.9% 5161896 ±146% cpuidle.C1.time
9396 ± 7% +398.9% 46873 ± 78% cpuidle.C1.usage
1.725e+09 ± 35% +133.2% 4.022e+09 ± 2% cpuidle.C1E.time
4854576 ± 19% +73.6% 8429348 ± 3% cpuidle.C1E.usage
2.678e+08 ± 3% -16.7% 2.23e+08 numa-numastat.node0.local_node
2.678e+08 ± 3% -16.7% 2.23e+08 numa-numastat.node0.numa_hit
2.567e+08 ± 4% -21.2% 2.022e+08 ± 2% numa-numastat.node1.local_node
2.567e+08 ± 4% -21.2% 2.022e+08 ± 2% numa-numastat.node1.numa_hit
4850776 ± 19% +73.7% 8424945 ± 3% turbostat.C1E
4.93 ± 36% +6.3 11.23 turbostat.C1E%
313.58 -2.2% 306.66 turbostat.PkgWatt
158.49 -7.5% 146.56 turbostat.RAMWatt
47477 ± 41% -64.2% 17019 ± 89% numa-meminfo.node0.AnonHugePages
163026 ± 15% -33.4% 108650 ± 15% numa-meminfo.node0.AnonPages
139320 ± 39% +70.0% 236813 ± 14% numa-meminfo.node1.Active(anon)
117671 ± 22% +45.1% 170695 ± 9% numa-meminfo.node1.AnonPages
60723 ±108% +143.3% 147737 ± 28% numa-meminfo.node1.Shmem
40729 ± 15% -33.5% 27100 ± 15% numa-vmstat.node0.nr_anon_pages
1.599e+08 ± 8% -15.6% 1.35e+08 ± 2% numa-vmstat.node0.numa_hit
1.599e+08 ± 8% -15.6% 1.349e+08 ± 2% numa-vmstat.node0.numa_local
1832813 ± 8% -42.1% 1061136 ± 2% numa-vmstat.node0.workingset_nodereclaim
34859 ± 40% +69.8% 59181 ± 14% numa-vmstat.node1.nr_active_anon
29378 ± 22% +45.0% 42593 ± 9% numa-vmstat.node1.nr_anon_pages
15244 ±108% +142.4% 36959 ± 28% numa-vmstat.node1.nr_shmem
34859 ± 40% +69.8% 59181 ± 14% numa-vmstat.node1.nr_zone_active_anon
1.506e+08 ± 4% -18.4% 1.229e+08 numa-vmstat.node1.numa_hit
1.504e+08 ± 4% -18.4% 1.228e+08 numa-vmstat.node1.numa_local
5.61 ± 18% +28.4% 7.20 ± 3% sched_debug.cfs_rq:/.load_avg.min
16155378 ± 8% -15.3% 13683878 sched_debug.cfs_rq:/.min_vruntime.avg
23982413 ± 9% -22.1% 18682333 ± 6% sched_debug.cfs_rq:/.min_vruntime.max
1553928 ± 12% -24.8% 1168658 ± 18% sched_debug.cfs_rq:/.min_vruntime.stddev
51.01 ± 32% +144.8% 124.85 ± 15% sched_debug.cfs_rq:/.nr_spread_over.avg
6.68 ± 50% +702.0% 53.60 ± 13% sched_debug.cfs_rq:/.nr_spread_over.min
8121602 ± 7% -27.1% 5924452 ± 24% sched_debug.cfs_rq:/.spread0.max
1553182 ± 12% -24.9% 1166573 ± 18% sched_debug.cfs_rq:/.spread0.stddev
1.52 ± 5% +9.9% 1.67 sched_debug.cpu.nr_running.avg
2.44 ± 4% +14.7% 2.80 ± 5% sched_debug.cpu.nr_running.max
0.40 ± 3% +11.0% 0.45 ± 3% sched_debug.cpu.nr_running.stddev
5648 ± 31% +39.4% 7874 interrupts.CPU102.NMI:Non-maskable_interrupts
5648 ± 31% +39.4% 7874 interrupts.CPU102.PMI:Performance_monitoring_interrupts
5634 ± 31% +40.0% 7888 interrupts.CPU12.NMI:Non-maskable_interrupts
5634 ± 31% +40.0% 7888 interrupts.CPU12.PMI:Performance_monitoring_interrupts
11638 ± 20% +30.1% 15142 ± 3% interrupts.CPU12.RES:Rescheduling_interrupts
3164 ± 5% -12.3% 2775 ± 3% interrupts.CPU20.CAL:Function_call_interrupts
10956 ± 22% +32.4% 14502 ± 3% interrupts.CPU20.RES:Rescheduling_interrupts
3140 ± 5% -9.3% 2849 ± 3% interrupts.CPU21.CAL:Function_call_interrupts
11003 ± 22% +43.8% 15819 ± 12% interrupts.CPU24.RES:Rescheduling_interrupts
10885 ± 22% +32.9% 14469 ± 3% interrupts.CPU25.RES:Rescheduling_interrupts
11238 ± 21% +36.2% 15306 ± 3% interrupts.CPU46.RES:Rescheduling_interrupts
11964 ± 18% +26.7% 15158 ± 4% interrupts.CPU64.RES:Rescheduling_interrupts
5655 ± 31% +39.5% 7886 interrupts.CPU70.NMI:Non-maskable_interrupts
5655 ± 31% +39.5% 7886 interrupts.CPU70.PMI:Performance_monitoring_interrupts
10880 ± 22% +32.7% 14437 ± 3% interrupts.CPU73.RES:Rescheduling_interrupts
5664 ± 30% +39.6% 7909 interrupts.CPU88.NMI:Non-maskable_interrupts
5664 ± 30% +39.6% 7909 interrupts.CPU88.PMI:Performance_monitoring_interrupts
4683 ± 27% +68.2% 7875 interrupts.CPU98.NMI:Non-maskable_interrupts
4683 ± 27% +68.2% 7875 interrupts.CPU98.PMI:Performance_monitoring_interrupts
5642 ± 31% +39.4% 7866 interrupts.CPU99.NMI:Non-maskable_interrupts
5642 ± 31% +39.4% 7866 interrupts.CPU99.PMI:Performance_monitoring_interrupts
30916 ± 19% +29.1% 39910 ± 6% proc-vmstat.allocstall_normal
19250571 ± 70% -69.5% 5862524 ±115% proc-vmstat.kswapd_inodesteal
1659 ± 3% -12.4% 1453 ± 2% proc-vmstat.nr_isolated_file
504201 -4.2% 483130 proc-vmstat.nr_slab_reclaimable
80964875 ± 2% -17.4% 66838802 proc-vmstat.numa_foreign
5.245e+08 -18.9% 4.253e+08 proc-vmstat.numa_hit
5.244e+08 -18.9% 4.252e+08 proc-vmstat.numa_local
80964875 ± 2% -17.4% 66838802 proc-vmstat.numa_miss
80998443 ± 2% -17.4% 66872510 proc-vmstat.numa_other
44865 ± 24% -44.7% 24813 ± 55% proc-vmstat.numa_pte_updates
5.782e+08 -18.6% 4.708e+08 proc-vmstat.pgactivate
3998466 -14.7% 3411922 ± 2% proc-vmstat.pgalloc_dma32
6.02e+08 -18.7% 4.892e+08 proc-vmstat.pgalloc_normal
5.359e+08 -20.5% 4.258e+08 proc-vmstat.pgdeactivate
6.062e+08 -18.7% 4.928e+08 proc-vmstat.pgfree
5.359e+08 -20.5% 4.258e+08 proc-vmstat.pgrefill
4.438e+08 ± 3% -16.0% 3.725e+08 proc-vmstat.pgscan_direct
1.075e+08 ± 17% -38.6% 65986419 ± 9% proc-vmstat.pgscan_kswapd
4.438e+08 ± 3% -16.0% 3.725e+08 proc-vmstat.pgsteal_direct
1.075e+08 ± 17% -38.6% 65986278 ± 9% proc-vmstat.pgsteal_kswapd
5655090 ± 4% -30.4% 3935762 ± 2% proc-vmstat.slabs_scanned
1799903 ± 8% -41.9% 1045192 ± 2% proc-vmstat.workingset_nodereclaim
2538960 -6.7% 2369106 proc-vmstat.workingset_nodes
46802538 ± 5% -15.1% 39750899 perf-stat.i.branch-misses
1.731e+08 -20.8% 1.371e+08 perf-stat.i.cache-misses
1444 ± 3% +22.2% 1764 perf-stat.i.cycles-between-cache-misses
0.11 ± 13% -0.0 0.09 ± 5% perf-stat.i.dTLB-load-miss-rate%
12694366 ± 3% -21.6% 9954799 ± 5% perf-stat.i.dTLB-load-misses
0.04 ± 9% -0.0 0.03 ± 2% perf-stat.i.dTLB-store-miss-rate%
2419175 ± 5% -13.6% 2089064 perf-stat.i.dTLB-store-misses
4.367e+09 ± 7% -15.1% 3.709e+09 perf-stat.i.dTLB-stores
62.78 ± 5% +14.0 76.82 ± 3% perf-stat.i.iTLB-load-miss-rate%
7043117 -20.7% 5588038 perf-stat.i.iTLB-load-misses
3445769 ± 11% -69.1% 1064065 ± 21% perf-stat.i.iTLB-loads
0.24 +4.0% 0.25 perf-stat.i.ipc
33398454 ± 7% -14.8% 28456341 ± 3% perf-stat.i.node-load-misses
43937966 ± 4% -21.3% 34560695 perf-stat.i.node-loads
59.03 -3.6 55.45 ± 4% perf-stat.i.node-store-miss-rate%
6486394 ± 4% -34.5% 4247840 perf-stat.i.node-store-misses
4894501 ± 3% -22.7% 3784507 perf-stat.i.node-stores
1488 ± 2% +23.4% 1836 perf-stat.overall.cycles-between-cache-misses
67.42 ± 4% +17.1 84.57 ± 3% perf-stat.overall.iTLB-load-miss-rate%
5960 ± 8% +12.8% 6723 perf-stat.overall.instructions-per-iTLB-miss
57.02 ± 2% -4.1 52.95 perf-stat.overall.node-store-miss-rate%
3013 ± 8% +12.7% 3396 perf-stat.overall.path-length
46913412 ± 4% -14.9% 39925738 perf-stat.ps.branch-misses
1.748e+08 -20.7% 1.387e+08 perf-stat.ps.cache-misses
12819357 ± 3% -21.4% 10074587 ± 4% perf-stat.ps.dTLB-load-misses
2359724 ± 5% -14.2% 2024579 perf-stat.ps.dTLB-store-misses
4.395e+09 ± 6% -14.8% 3.742e+09 perf-stat.ps.dTLB-stores
7106851 -20.5% 5648000 perf-stat.ps.iTLB-load-misses
3449581 ± 12% -69.9% 1039118 ± 22% perf-stat.ps.iTLB-loads
33724459 ± 6% -14.6% 28810854 ± 3% perf-stat.ps.node-load-misses
44344489 ± 5% -21.3% 34891557 perf-stat.ps.node-loads
6554717 ± 3% -34.4% 4301848 perf-stat.ps.node-store-misses
4938399 ± 3% -22.6% 3822265 perf-stat.ps.node-stores
16404 ± 9% +33.8% 21951 ± 10% softirqs.CPU100.RCU
17144 ± 6% +48.8% 25502 ± 13% softirqs.CPU103.RCU
16425 ± 9% +23.3% 20245 ± 6% softirqs.CPU11.RCU
16928 ± 8% +36.0% 23024 ± 22% softirqs.CPU21.RCU
17425 ± 8% +19.0% 20728 ± 5% softirqs.CPU25.RCU
17358 ± 11% +33.2% 23119 ± 15% softirqs.CPU28.RCU
17545 ± 9% +34.2% 23538 ± 13% softirqs.CPU30.RCU
7428 ± 15% +27.4% 9461 ± 11% softirqs.CPU33.SCHED
17479 ± 7% +21.4% 21214 ± 9% softirqs.CPU36.RCU
16701 ± 13% +25.3% 20925 ± 8% softirqs.CPU37.RCU
17165 ± 5% +17.1% 20109 ± 5% softirqs.CPU4.RCU
17614 ± 9% +20.2% 21168 ± 8% softirqs.CPU41.RCU
15981 ± 9% +31.6% 21033 ± 5% softirqs.CPU43.RCU
15882 ± 7% +42.1% 22562 ± 11% softirqs.CPU45.RCU
17174 ± 10% +41.9% 24371 ± 16% softirqs.CPU48.RCU
17769 ± 7% +21.8% 21643 ± 14% softirqs.CPU49.RCU
16707 ± 6% +17.5% 19634 ± 4% softirqs.CPU55.RCU
16743 ± 5% +14.1% 19096 softirqs.CPU56.RCU
16092 ± 9% +22.1% 19649 softirqs.CPU59.RCU
15956 ± 9% +19.9% 19132 softirqs.CPU60.RCU
16252 ± 8% +31.8% 21426 ± 16% softirqs.CPU61.RCU
16237 ± 9% +18.1% 19169 ± 2% softirqs.CPU63.RCU
15702 ± 9% +25.7% 19735 ± 8% softirqs.CPU70.RCU
16246 ± 10% +21.1% 19677 ± 3% softirqs.CPU71.RCU
16291 ± 7% +19.4% 19456 ± 3% softirqs.CPU73.RCU
16242 ± 6% +29.1% 20960 ± 10% softirqs.CPU77.RCU
16559 ± 8% +17.6% 19468 softirqs.CPU8.RCU
16827 ± 11% +21.7% 20482 ± 7% softirqs.CPU81.RCU
16796 ± 8% +41.3% 23736 ± 10% softirqs.CPU82.RCU
16968 ± 10% +20.9% 20509 ± 7% softirqs.CPU83.RCU
17763 ± 13% +21.3% 21552 ± 13% softirqs.CPU86.RCU
16916 ± 5% +28.4% 21728 ± 12% softirqs.CPU88.RCU
16041 ± 8% +21.1% 19420 ± 2% softirqs.CPU89.RCU
16277 ± 8% +29.3% 21053 ± 9% softirqs.CPU9.RCU
16742 ± 9% +18.6% 19853 ± 5% softirqs.CPU93.RCU
15877 ± 8% +22.4% 19436 ± 3% softirqs.CPU95.RCU
17233 ± 6% +19.3% 20560 ± 5% softirqs.CPU97.RCU
1844097 ± 7% +18.2% 2179875 softirqs.RCU
17.18 ± 6% -3.9 13.28 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.shrink_active_list.shrink_node_memcg.shrink_node
17.52 ± 7% -3.5 13.97 perf-profile.calltrace.cycles-pp.shrink_active_list.shrink_node_memcg.shrink_node.do_try_to_free_pages.try_to_free_pages
16.81 ± 7% -3.5 13.29 perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.shrink_active_list.shrink_node_memcg.shrink_node.do_try_to_free_pages
19.50 ± 6% -1.6 17.88 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.activate_page.mark_page_accessed.generic_file_read_iter
19.48 ± 6% -1.6 17.86 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.activate_page.mark_page_accessed
20.13 ± 5% -1.5 18.59 perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.activate_page.mark_page_accessed.generic_file_read_iter.xfs_file_buffered_aio_read
20.15 ± 5% -1.5 18.62 perf-profile.calltrace.cycles-pp.activate_page.mark_page_accessed.generic_file_read_iter.xfs_file_buffered_aio_read.xfs_file_read_iter
17.06 ± 5% -1.3 15.74 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.shrink_inactive_list.shrink_node_memcg.shrink_node
20.50 ± 5% -1.3 19.20 perf-profile.calltrace.cycles-pp.mark_page_accessed.generic_file_read_iter.xfs_file_buffered_aio_read.xfs_file_read_iter.new_sync_read
15.90 ± 6% -1.2 14.66 perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.shrink_inactive_list.shrink_node_memcg.shrink_node.do_try_to_free_pages
20.37 ± 4% -1.1 19.32 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.add_to_page_cache_lru.iomap_readpages_actor
20.36 ± 4% -1.1 19.30 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.add_to_page_cache_lru
20.91 ± 3% -1.0 19.93 perf-profile.calltrace.cycles-pp.__lru_cache_add.add_to_page_cache_lru.iomap_readpages_actor.iomap_apply.iomap_readpages
20.89 ± 3% -1.0 19.91 perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.add_to_page_cache_lru.iomap_readpages_actor.iomap_apply
18.30 ± 3% -0.8 17.53 perf-profile.calltrace.cycles-pp.shrink_inactive_list.shrink_node_memcg.shrink_node.do_try_to_free_pages.try_to_free_pages
2.06 -0.5 1.59 perf-profile.calltrace.cycles-pp.iomap_readpage_actor.iomap_readpages_actor.iomap_apply.iomap_readpages.read_pages
1.57 ± 3% -0.4 1.18 perf-profile.calltrace.cycles-pp.copy_page_to_iter.generic_file_read_iter.xfs_file_buffered_aio_read.xfs_file_read_iter.new_sync_read
1.46 ± 3% -0.4 1.10 perf-profile.calltrace.cycles-pp.copyout.copy_page_to_iter.generic_file_read_iter.xfs_file_buffered_aio_read.xfs_file_read_iter
1.45 ± 3% -0.4 1.09 perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyout.copy_page_to_iter.generic_file_read_iter.xfs_file_buffered_aio_read
1.95 -0.3 1.62 perf-profile.calltrace.cycles-pp.write
1.08 ± 2% -0.3 0.81 perf-profile.calltrace.cycles-pp.memset_erms.iomap_readpage_actor.iomap_readpages_actor.iomap_apply.iomap_readpages
0.75 ± 10% -0.2 0.53 ± 2% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_slowpath.__alloc_pages_nodemask.__do_page_cache_readahead.ondemand_readahead
0.92 -0.2 0.72 perf-profile.calltrace.cycles-pp.iomap_set_range_uptodate.iomap_readpage_actor.iomap_readpages_actor.iomap_apply.iomap_readpages
1.19 -0.2 1.03 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
1.15 -0.1 1.01 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
1.10 ± 10% +0.2 1.27 ± 6% perf-profile.calltrace.cycles-pp.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node.do_try_to_free_pages
0.38 ± 57% +0.2 0.55 perf-profile.calltrace.cycles-pp.__activate_page.pagevec_lru_move_fn.activate_page.mark_page_accessed.generic_file_read_iter
0.45 ± 57% +0.5 0.90 ± 5% perf-profile.calltrace.cycles-pp.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node
0.00 +0.5 0.53 ± 2% perf-profile.calltrace.cycles-pp.workingset_activation.mark_page_accessed.generic_file_read_iter.xfs_file_buffered_aio_read.xfs_file_read_iter
0.00 +0.5 0.54 ± 3% perf-profile.calltrace.cycles-pp.workingset_eviction.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_node_memcg
94.21 +0.6 94.86 perf-profile.calltrace.cycles-pp.read
93.43 +0.8 94.23 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
93.40 +0.8 94.21 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
92.64 +0.9 93.51 perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
92.53 +0.9 93.43 perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
92.17 +1.0 93.16 perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
92.12 +1.0 93.12 perf-profile.calltrace.cycles-pp.xfs_file_read_iter.new_sync_read.vfs_read.ksys_read.do_syscall_64
92.00 +1.0 93.02 perf-profile.calltrace.cycles-pp.xfs_file_buffered_aio_read.xfs_file_read_iter.new_sync_read.vfs_read.ksys_read
91.85 +1.1 92.91 perf-profile.calltrace.cycles-pp.generic_file_read_iter.xfs_file_buffered_aio_read.xfs_file_read_iter.new_sync_read.vfs_read
69.07 +2.9 71.93 perf-profile.calltrace.cycles-pp.__do_page_cache_readahead.ondemand_readahead.generic_file_read_iter.xfs_file_buffered_aio_read.xfs_file_read_iter
69.07 +2.9 71.93 perf-profile.calltrace.cycles-pp.ondemand_readahead.generic_file_read_iter.xfs_file_buffered_aio_read.xfs_file_read_iter.new_sync_read
23.87 ± 5% +4.1 27.99 ± 9% perf-profile.calltrace.cycles-pp.read_pages.__do_page_cache_readahead.ondemand_readahead.generic_file_read_iter.xfs_file_buffered_aio_read
23.86 ± 5% +4.1 27.99 ± 9% perf-profile.calltrace.cycles-pp.iomap_apply.iomap_readpages.read_pages.__do_page_cache_readahead.ondemand_readahead
23.86 ± 5% +4.1 27.99 ± 9% perf-profile.calltrace.cycles-pp.iomap_readpages.read_pages.__do_page_cache_readahead.ondemand_readahead.generic_file_read_iter
23.84 ± 5% +4.1 27.97 ± 9% perf-profile.calltrace.cycles-pp.iomap_readpages_actor.iomap_apply.iomap_readpages.read_pages.__do_page_cache_readahead
21.72 ± 5% +4.6 26.34 ± 9% perf-profile.calltrace.cycles-pp.add_to_page_cache_lru.iomap_readpages_actor.iomap_apply.iomap_readpages.read_pages
44.55 ± 4% +5.0 49.58 perf-profile.calltrace.cycles-pp.shrink_node_memcg.shrink_node.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath
44.72 ± 4% +5.1 49.78 perf-profile.calltrace.cycles-pp.shrink_node.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask
0.31 ±173% +5.8 6.06 ± 39% perf-profile.calltrace.cycles-pp.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_slab_page
0.31 ±173% +5.8 6.07 ± 39% perf-profile.calltrace.cycles-pp.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_slab_page.new_slab
0.31 ±173% +5.8 6.08 ± 39% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_slab_page.new_slab.___slab_alloc.__slab_alloc
0.31 ±173% +5.8 6.08 ± 39% perf-profile.calltrace.cycles-pp.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_slab_page.new_slab.___slab_alloc
0.31 ±173% +5.8 6.08 ± 39% perf-profile.calltrace.cycles-pp.__slab_alloc.kmem_cache_alloc.xas_nomem.__add_to_page_cache_locked.add_to_page_cache_lru
0.31 ±173% +5.8 6.08 ± 39% perf-profile.calltrace.cycles-pp.___slab_alloc.__slab_alloc.kmem_cache_alloc.xas_nomem.__add_to_page_cache_locked
0.31 ±173% +5.8 6.08 ± 39% perf-profile.calltrace.cycles-pp.kmem_cache_alloc.xas_nomem.__add_to_page_cache_locked.add_to_page_cache_lru.iomap_readpages_actor
0.31 ±173% +5.8 6.08 ± 39% perf-profile.calltrace.cycles-pp.new_slab.___slab_alloc.__slab_alloc.kmem_cache_alloc.xas_nomem
0.31 ±173% +5.8 6.08 ± 39% perf-profile.calltrace.cycles-pp.alloc_slab_page.new_slab.___slab_alloc.__slab_alloc.kmem_cache_alloc
0.31 ±173% +5.8 6.08 ± 39% perf-profile.calltrace.cycles-pp.xas_nomem.__add_to_page_cache_locked.add_to_page_cache_lru.iomap_readpages_actor.iomap_apply
0.53 ±121% +5.8 6.34 ± 38% perf-profile.calltrace.cycles-pp.__add_to_page_cache_locked.add_to_page_cache_lru.iomap_readpages_actor.iomap_apply.iomap_readpages
6.24 ± 42% +7.3 13.55 perf-profile.calltrace.cycles-pp.lruvec_lru_size.inactive_list_is_low.shrink_node_memcg.shrink_node.do_try_to_free_pages
7.94 ± 42% +9.1 17.09 perf-profile.calltrace.cycles-pp.inactive_list_is_low.shrink_node_memcg.shrink_node.do_try_to_free_pages.try_to_free_pages
77.11 ± 4% -7.8 69.28 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
35.47 ± 5% -5.2 30.22 perf-profile.children.cycles-pp._raw_spin_lock_irq
18.65 ± 6% -3.9 14.72 perf-profile.children.cycles-pp.shrink_active_list
41.19 ± 4% -2.4 38.77 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
42.19 ± 3% -2.3 39.92 perf-profile.children.cycles-pp.pagevec_lru_move_fn
20.15 ± 5% -1.5 18.62 perf-profile.children.cycles-pp.activate_page
20.50 ± 5% -1.3 19.20 perf-profile.children.cycles-pp.mark_page_accessed
20.29 ± 2% -1.0 19.29 perf-profile.children.cycles-pp.shrink_inactive_list
20.94 ± 3% -1.0 19.99 perf-profile.children.cycles-pp.__lru_cache_add
2.07 -0.5 1.59 perf-profile.children.cycles-pp.iomap_readpage_actor
1.57 ± 3% -0.4 1.19 perf-profile.children.cycles-pp.copy_page_to_iter
1.46 ± 3% -0.4 1.10 perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
1.47 ± 3% -0.4 1.11 perf-profile.children.cycles-pp.copyout
2.00 -0.3 1.67 perf-profile.children.cycles-pp.write
0.99 ± 6% -0.3 0.70 ± 3% perf-profile.children.cycles-pp.get_page_from_freelist
1.08 ± 2% -0.3 0.81 perf-profile.children.cycles-pp.memset_erms
0.92 -0.2 0.72 perf-profile.children.cycles-pp.iomap_set_range_uptodate
0.66 ± 3% -0.2 0.48 perf-profile.children.cycles-pp.__list_del_entry_valid
0.76 -0.2 0.60 perf-profile.children.cycles-pp.entry_SYSCALL_64
0.45 -0.2 0.29 ± 2% perf-profile.children.cycles-pp.free_unref_page_list
0.72 -0.1 0.57 perf-profile.children.cycles-pp.syscall_return_via_sysret
0.56 -0.1 0.41 ± 2% perf-profile.children.cycles-pp.isolate_lru_pages
0.34 ± 2% -0.1 0.25 ± 4% perf-profile.children.cycles-pp.security_file_permission
0.46 ± 2% -0.1 0.37 perf-profile.children.cycles-pp.pagecache_get_page
0.45 ± 2% -0.1 0.36 perf-profile.children.cycles-pp.find_get_entry
0.24 ± 5% -0.1 0.16 ± 2% perf-profile.children.cycles-pp.wake_all_kswapds
0.22 ± 5% -0.1 0.14 ± 3% perf-profile.children.cycles-pp.wakeup_kswapd
0.27 -0.1 0.20 ± 4% perf-profile.children.cycles-pp.selinux_file_permission
0.48 ± 2% -0.1 0.42 ± 2% perf-profile.children.cycles-pp.ksys_write
0.28 ± 2% -0.1 0.22 perf-profile.children.cycles-pp.xas_store
0.32 ± 3% -0.1 0.27 ± 3% perf-profile.children.cycles-pp.__delete_from_page_cache
0.17 ± 3% -0.0 0.12 ± 4% perf-profile.children.cycles-pp.__fdget_pos
0.28 ± 2% -0.0 0.24 ± 2% perf-profile.children.cycles-pp.xas_load
0.14 ± 3% -0.0 0.11 ± 4% perf-profile.children.cycles-pp.__fget_light
0.38 ± 2% -0.0 0.34 ± 4% perf-profile.children.cycles-pp.vfs_write
0.11 ± 4% -0.0 0.07 ± 5% perf-profile.children.cycles-pp.__isolate_lru_page
0.18 -0.0 0.15 ± 5% perf-profile.children.cycles-pp.__lock_text_start
0.09 ± 4% -0.0 0.07 ± 6% perf-profile.children.cycles-pp.__fsnotify_parent
0.31 -0.0 0.28 ± 3% perf-profile.children.cycles-pp.apic_timer_interrupt
0.17 ± 4% -0.0 0.15 perf-profile.children.cycles-pp.xas_create
0.12 ± 3% -0.0 0.10 perf-profile.children.cycles-pp.fsnotify
0.09 ± 4% -0.0 0.07 perf-profile.children.cycles-pp.xa_load
0.09 -0.0 0.07 ± 6% perf-profile.children.cycles-pp.__mod_memcg_state
0.06 -0.0 0.04 ± 57% perf-profile.children.cycles-pp.xas_init_marks
0.08 ± 8% -0.0 0.06 perf-profile.children.cycles-pp.__list_add_valid
0.08 ± 5% -0.0 0.06 ± 7% perf-profile.children.cycles-pp.__inode_security_revalidate
0.11 ± 4% -0.0 0.09 ± 4% perf-profile.children.cycles-pp.___might_sleep
0.09 ± 5% -0.0 0.07 ± 7% perf-profile.children.cycles-pp.xfs_ilock
0.08 ± 5% -0.0 0.06 ± 6% perf-profile.children.cycles-pp.__mod_node_page_state
0.08 ± 8% -0.0 0.06 ± 6% perf-profile.children.cycles-pp.page_evictable
0.08 -0.0 0.06 ± 6% perf-profile.children.cycles-pp.touch_atime
0.07 -0.0 0.05 ± 9% perf-profile.children.cycles-pp.mem_cgroup_commit_charge
0.09 -0.0 0.08 ± 6% perf-profile.children.cycles-pp._cond_resched
0.07 ± 7% -0.0 0.05 ± 8% perf-profile.children.cycles-pp.atime_needs_update
0.07 -0.0 0.06 perf-profile.children.cycles-pp.down_read
0.09 -0.0 0.08 perf-profile.children.cycles-pp.release_pages
0.21 ± 5% +0.0 0.23 perf-profile.children.cycles-pp.move_pages_to_lru
0.15 ± 27% +0.0 0.19 ± 2% perf-profile.children.cycles-pp.shrink_slab
0.15 ± 26% +0.0 0.19 ± 2% perf-profile.children.cycles-pp.do_shrink_slab
0.37 ± 11% +0.1 0.42 ± 2% perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.02 ±173% +0.1 0.07 ± 17% perf-profile.children.cycles-pp.search_binary_handler
0.02 ±173% +0.1 0.07 ± 17% perf-profile.children.cycles-pp.load_elf_binary
0.03 ±100% +0.1 0.08 ± 15% perf-profile.children.cycles-pp.new_sync_write
0.01 ±173% +0.1 0.07 ± 17% perf-profile.children.cycles-pp.generic_file_write_iter
0.01 ±173% +0.1 0.07 ± 17% perf-profile.children.cycles-pp.__generic_file_write_iter
0.01 ±173% +0.1 0.07 ± 17% perf-profile.children.cycles-pp.generic_perform_write
0.51 ± 5% +0.1 0.57 ± 2% perf-profile.children.cycles-pp.__activate_page
0.06 ± 58% +0.1 0.15 ± 3% perf-profile.children.cycles-pp.count_shadow_nodes
0.36 ± 9% +0.1 0.51 perf-profile.children.cycles-pp.__mod_lruvec_state
0.00 +0.2 0.18 ± 3% perf-profile.children.cycles-pp.mem_cgroup_page_lruvec
0.28 ± 4% +0.3 0.53 ± 2% perf-profile.children.cycles-pp.workingset_activation
1.38 ± 2% +0.3 1.65 ± 2% perf-profile.children.cycles-pp.shrink_page_list
0.71 ± 2% +0.5 1.17 ± 3% perf-profile.children.cycles-pp.__remove_mapping
0.17 ± 2% +0.5 0.70 ± 2% perf-profile.children.cycles-pp.workingset_eviction
94.27 +0.6 94.91 perf-profile.children.cycles-pp.read
92.70 +0.9 93.57 perf-profile.children.cycles-pp.ksys_read
92.59 +0.9 93.50 perf-profile.children.cycles-pp.vfs_read
92.17 +1.0 93.16 perf-profile.children.cycles-pp.new_sync_read
92.12 +1.0 93.12 perf-profile.children.cycles-pp.xfs_file_read_iter
92.01 +1.0 93.03 perf-profile.children.cycles-pp.xfs_file_buffered_aio_read
91.86 +1.1 92.92 perf-profile.children.cycles-pp.generic_file_read_iter
69.07 +2.9 71.93 perf-profile.children.cycles-pp.ondemand_readahead
69.07 +2.9 71.93 perf-profile.children.cycles-pp.__do_page_cache_readahead
23.87 ± 5% +4.1 27.99 ± 9% perf-profile.children.cycles-pp.read_pages
23.86 ± 5% +4.1 27.99 ± 9% perf-profile.children.cycles-pp.iomap_apply
23.86 ± 5% +4.1 27.99 ± 9% perf-profile.children.cycles-pp.iomap_readpages
23.84 ± 5% +4.1 27.98 ± 9% perf-profile.children.cycles-pp.iomap_readpages_actor
47.35 ± 3% +4.4 51.78 perf-profile.children.cycles-pp.shrink_node_memcg
47.51 ± 3% +4.5 51.98 perf-profile.children.cycles-pp.shrink_node
46.81 ± 4% +4.5 51.31 perf-profile.children.cycles-pp.__alloc_pages_nodemask
46.53 ± 4% +4.6 51.10 perf-profile.children.cycles-pp.__alloc_pages_slowpath
21.72 ± 5% +4.6 26.34 ± 9% perf-profile.children.cycles-pp.add_to_page_cache_lru
45.48 ± 3% +4.9 50.35 perf-profile.children.cycles-pp.try_to_free_pages
45.46 ± 3% +4.9 50.32 perf-profile.children.cycles-pp.do_try_to_free_pages
0.76 ± 62% +5.6 6.35 ± 38% perf-profile.children.cycles-pp.__add_to_page_cache_locked
0.43 ±109% +5.6 6.08 ± 39% perf-profile.children.cycles-pp.xas_nomem
0.44 ±109% +5.7 6.10 ± 39% perf-profile.children.cycles-pp.__slab_alloc
0.44 ±109% +5.7 6.10 ± 39% perf-profile.children.cycles-pp.___slab_alloc
0.43 ±109% +5.7 6.09 ± 39% perf-profile.children.cycles-pp.alloc_slab_page
0.44 ±110% +5.7 6.10 ± 39% perf-profile.children.cycles-pp.new_slab
0.43 ±108% +5.7 6.10 ± 39% perf-profile.children.cycles-pp.kmem_cache_alloc
6.80 ± 41% +7.5 14.31 perf-profile.children.cycles-pp.lruvec_lru_size
8.34 ± 41% +9.3 17.64 perf-profile.children.cycles-pp.inactive_list_is_low
77.11 ± 4% -7.8 69.28 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
1.46 ± 3% -0.4 1.10 perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
1.08 ± 2% -0.3 0.81 perf-profile.self.cycles-pp.memset_erms
0.91 -0.2 0.72 perf-profile.self.cycles-pp.iomap_set_range_uptodate
0.66 ± 2% -0.2 0.48 perf-profile.self.cycles-pp.__list_del_entry_valid
0.67 -0.2 0.52 perf-profile.self.cycles-pp.entry_SYSCALL_64
0.72 -0.1 0.57 perf-profile.self.cycles-pp.syscall_return_via_sysret
0.44 -0.1 0.32 ± 2% perf-profile.self.cycles-pp.get_page_from_freelist
1.37 -0.1 1.27 perf-profile.self.cycles-pp.do_syscall_64
0.22 ± 5% -0.1 0.14 ± 3% perf-profile.self.cycles-pp.wakeup_kswapd
0.29 ± 2% -0.1 0.23 ± 2% perf-profile.self.cycles-pp.find_get_entry
0.21 ± 2% -0.1 0.16 ± 4% perf-profile.self.cycles-pp.free_pcppages_bulk
0.18 -0.0 0.14 ± 3% perf-profile.self.cycles-pp.selinux_file_permission
0.06 -0.0 0.03 ±100% perf-profile.self.cycles-pp.shrink_active_list
0.11 ± 4% -0.0 0.07 ± 5% perf-profile.self.cycles-pp.__isolate_lru_page
0.17 ± 3% -0.0 0.13 ± 3% perf-profile.self.cycles-pp.xas_create
0.14 ± 3% -0.0 0.11 ± 4% perf-profile.self.cycles-pp.__fget_light
0.23 -0.0 0.20 perf-profile.self.cycles-pp.xas_load
0.12 -0.0 0.09 ± 4% perf-profile.self.cycles-pp.fsnotify
0.07 ± 7% -0.0 0.04 ± 57% perf-profile.self.cycles-pp.iomap_readpage_actor
0.11 ± 7% -0.0 0.09 ± 4% perf-profile.self.cycles-pp.xfs_file_read_iter
0.11 ± 4% -0.0 0.08 ± 5% perf-profile.self.cycles-pp.___might_sleep
0.09 -0.0 0.07 ± 7% perf-profile.self.cycles-pp.__fsnotify_parent
0.08 ± 5% -0.0 0.05 ± 9% perf-profile.self.cycles-pp.__list_add_valid
0.09 -0.0 0.07 ± 6% perf-profile.self.cycles-pp.__mod_memcg_state
0.13 ± 6% -0.0 0.11 ± 3% perf-profile.self.cycles-pp.generic_file_read_iter
0.10 ± 4% -0.0 0.09 ± 5% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.10 ± 5% -0.0 0.08 ± 5% perf-profile.self.cycles-pp.shrink_page_list
0.07 ± 6% -0.0 0.05 perf-profile.self.cycles-pp.mark_page_accessed
0.07 -0.0 0.06 ± 9% perf-profile.self.cycles-pp.read
0.08 ± 10% -0.0 0.06 ± 6% perf-profile.self.cycles-pp.__mod_node_page_state
0.09 -0.0 0.08 ± 6% perf-profile.self.cycles-pp.isolate_lru_pages
0.11 ± 4% -0.0 0.09 ± 4% perf-profile.self.cycles-pp.move_pages_to_lru
0.06 ± 6% -0.0 0.05 perf-profile.self.cycles-pp.write
0.06 -0.0 0.05 perf-profile.self.cycles-pp.release_pages
0.22 ± 7% +0.0 0.25 ± 3% perf-profile.self.cycles-pp.__activate_page
0.05 ± 58% +0.1 0.13 perf-profile.self.cycles-pp.count_shadow_nodes
0.24 ± 17% +0.2 0.41 ± 2% perf-profile.self.cycles-pp.__mod_lruvec_state
0.00 +0.2 0.18 ± 3% perf-profile.self.cycles-pp.mem_cgroup_page_lruvec
0.28 ± 2% +0.3 0.53 ± 2% perf-profile.self.cycles-pp.workingset_activation
0.17 ± 2% +0.5 0.70 ± 2% perf-profile.self.cycles-pp.workingset_eviction
1.25 ± 41% +1.9 3.10 perf-profile.self.cycles-pp.inactive_list_is_low
5.48 ± 41% +7.5 12.96 perf-profile.self.cycles-pp.lruvec_lru_size
vm-scalability.throughput
1.6e+07 +-+---------------------------------------------------------------+
|.+. +.+.+..+ +.+..+ +..+.+ +. : +..+ +..+ : +. |
1.4e+07 +-+ : : : : : : |
1.2e+07 +-O O O O O : O O O O : : : |
O O O O O O O O O O O O : : : : : : |
1e+07 +-+ : : : : : : |
| : : : : : : |
8e+06 +-+ O:O: : : : : |
| : : : : : : |
6e+06 +-+ : : : : : : |
4e+06 +-+ : : : : : : |
| : : : : |
2e+06 +-+ : : : : |
| : : : : |
0 +-+---------------------------------------------------------------+
vm-scalability.median
80000 +-+-----------------------------------------------------------------+
|.+..+. .+..+.+.+..+.+.+..+.+.+..+.+..+.+ +.+ +.+ +.+..+.|
70000 +-+ + : : : : : : |
60000 +-+ : O O O O : : : |
| O O O O O O O O O O O O O : : : : : : |
50000 O-+ O O O : : : : : : |
| : : : : : : |
40000 +-+ :O : : : : : |
| O: : : : : : |
30000 +-+ : : : : : : |
20000 +-+ : : : : : : |
| :: : : :: |
10000 +-+ : : : : |
| : : : : |
0 +-+-----------------------------------------------------------------+
vm-scalability.workload
5e+09 +-+---------------------------------------------------------------+
4.5e+09 +-+..+.+.+.+..+.+.+.+..+.+.+..+.+.+.+..+ +..+ +..+ +.+..+.|
| : : : : : : |
4e+09 +-O O : O O O O : : : |
3.5e+09 O-+ O O O O O O O O O O O O O O : : : : : : |
| : : : : : : |
3e+09 +-+ : : : : : : |
2.5e+09 +-+ O:O: : : : : |
2e+09 +-+ : : : : : : |
| : : : : : : |
1.5e+09 +-+ : : : : : : |
1e+09 +-+ : : : : |
| : : : : |
5e+08 +-+ : : : : |
0 +-+---------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-skl-fpga01: 104 threads Skylake with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2019-05-14.cgz/300s/lkp-skl-fpga01/lru-file-mmap-read/vm-scalability
commit:
f17f33d34b ("mm/lru: add per lruvec lock for memcg")
3145e78472 ("mm/lruvec: add irqsave flags into lruvec struct")
f17f33d34bfab978 3145e78472f7ad746e15180092a
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
0:2 5% 0:4 perf-profile.children.cycles-pp.error_entry
%stddev %change %stddev
\ | \
8788 -1.0% 8696 vm-scalability.time.percent_of_cpu_this_job_got
0.04 ± 26% -0.0 0.03 ± 2% mpstat.cpu.all.soft%
2422 -0.8% 2402 turbostat.Avg_MHz
13.00 +7.7% 14.00 vmstat.cpu.id
65254 ± 24% -27.9% 47062 ± 43% numa-meminfo.node0.AnonHugePages
9827329 ± 3% +12.6% 11060798 ± 5% numa-meminfo.node0.MemFree
35367734 ± 6% -10.9% 31510195 ± 8% numa-numastat.node0.numa_foreign
35367734 ± 6% -10.9% 31510195 ± 8% numa-numastat.node1.numa_miss
35381209 ± 6% -10.9% 31537133 ± 8% numa-numastat.node1.other_node
2506995 ± 2% +13.5% 2846006 ± 6% numa-vmstat.node0.nr_free_pages
392.50 -15.6% 331.25 ± 6% numa-vmstat.node0.nr_isolated_file
17986039 ± 5% -10.8% 16037734 ± 4% numa-vmstat.node0.numa_foreign
380.00 -14.8% 323.75 ± 6% numa-vmstat.node1.nr_isolated_file
17989920 ± 5% -10.8% 16041777 ± 4% numa-vmstat.node1.numa_miss
18168024 ± 5% -10.6% 16233205 ± 4% numa-vmstat.node1.numa_other
12452 +15.3% 14351 ± 10% softirqs.CPU19.RCU
16541 ± 14% -14.5% 14139 ± 10% softirqs.CPU34.RCU
19191 ± 17% -31.5% 13143 ± 10% softirqs.CPU51.RCU
12516 ± 2% +35.4% 16941 ± 8% softirqs.CPU54.RCU
22182 ± 50% -60.1% 8843 ± 6% softirqs.CPU55.SCHED
13333 ± 2% +16.7% 15556 ± 11% softirqs.CPU62.RCU
14664 ± 7% -11.2% 13018 ± 5% softirqs.CPU95.RCU
130615 -2.7% 127051 ± 2% proc-vmstat.allocstall_movable
471.50 +28.0% 603.50 ± 9% proc-vmstat.compact_fail
481.50 +26.8% 610.75 ± 8% proc-vmstat.compact_stall
785.00 -13.5% 679.25 ± 4% proc-vmstat.nr_isolated_file
38399333 -1.4% 37862915 proc-vmstat.nr_mapped
917688 -2.7% 892837 proc-vmstat.nr_page_table_pages
1353 ± 39% +177.7% 3757 ± 58% proc-vmstat.numa_pages_migrated
43414 ± 20% -46.4% 23285 ± 45% proc-vmstat.numa_pte_updates
2099 ± 4% -11.6% 1856 ± 8% slabinfo.UNIX.active_objs
2099 ± 4% -11.6% 1856 ± 8% slabinfo.UNIX.num_objs
533.00 ± 12% +39.0% 741.00 ± 9% slabinfo.kmem_cache_node.active_objs
576.00 ± 11% +36.0% 783.25 ± 8% slabinfo.kmem_cache_node.num_objs
7240 +9.8% 7950 ± 7% slabinfo.proc_inode_cache.active_objs
3249 ± 5% -9.5% 2940 ± 7% slabinfo.sock_inode_cache.active_objs
3249 ± 5% -9.5% 2940 ± 7% slabinfo.sock_inode_cache.num_objs
1413 ± 3% -14.5% 1207 ± 5% slabinfo.task_group.active_objs
1413 ± 3% -14.5% 1207 ± 5% slabinfo.task_group.num_objs
4.065e+08 +2.3% 4.16e+08 perf-stat.i.cache-references
2.493e+11 -0.8% 2.474e+11 perf-stat.i.cpu-cycles
1463950 -5.6% 1381844 perf-stat.i.dTLB-store-misses
194734 ± 4% -18.8% 158117 ± 13% perf-stat.i.instructions-per-iTLB-miss
5973676 +7.4% 6417581 perf-stat.i.node-load-misses
4563724 -3.2% 4415507 perf-stat.i.node-store-misses
5.53 +2.8% 5.69 perf-stat.overall.MPKI
28.10 -0.5 27.61 perf-stat.overall.cache-miss-rate%
0.03 ± 2% -0.0 0.03 perf-stat.overall.dTLB-store-miss-rate%
67.79 +1.0 68.82 perf-stat.overall.node-load-miss-rate%
3.99e+08 +2.0% 4.071e+08 perf-stat.ps.cache-references
1432695 ± 2% -6.0% 1347018 perf-stat.ps.dTLB-store-misses
5952404 +7.3% 6385677 perf-stat.ps.node-load-misses
4583491 -3.3% 4433048 perf-stat.ps.node-store-misses
10347990 -1.8% 10157225 perf-stat.ps.node-stores
929.00 ± 50% -48.5% 478.75 ± 70% interrupts.40:PCI-MSI.67633155-edge.eth0-TxRx-2
4344 ± 22% -23.5% 3321 ± 4% interrupts.CPU0.RES:Rescheduling_interrupts
1587 ± 6% +15.4% 1831 ± 4% interrupts.CPU12.RES:Rescheduling_interrupts
1828 ± 4% -6.3% 1712 ± 5% interrupts.CPU19.RES:Rescheduling_interrupts
7830 -28.1% 5630 ± 31% interrupts.CPU2.NMI:Non-maskable_interrupts
7830 -28.1% 5630 ± 31% interrupts.CPU2.PMI:Performance_monitoring_interrupts
1617 +12.7% 1823 ± 4% interrupts.CPU24.RES:Rescheduling_interrupts
1825 +11.2% 2030 ± 4% interrupts.CPU26.RES:Rescheduling_interrupts
1895 -8.9% 1726 ± 3% interrupts.CPU29.RES:Rescheduling_interrupts
929.00 ± 50% -48.5% 478.75 ± 70% interrupts.CPU32.40:PCI-MSI.67633155-edge.eth0-TxRx-2
1602 +11.6% 1788 ± 3% interrupts.CPU39.RES:Rescheduling_interrupts
1946 ± 9% -9.6% 1759 ± 3% interrupts.CPU4.RES:Rescheduling_interrupts
2113 ± 20% -21.1% 1667 ± 3% interrupts.CPU56.RES:Rescheduling_interrupts
3928 +94.1% 7622 ± 5% interrupts.CPU58.NMI:Non-maskable_interrupts
3928 +94.1% 7622 ± 5% interrupts.CPU58.PMI:Performance_monitoring_interrupts
1544 ± 5% +11.9% 1728 ± 4% interrupts.CPU67.RES:Rescheduling_interrupts
1623 ± 5% +14.4% 1857 ± 4% interrupts.CPU7.RES:Rescheduling_interrupts
7875 -30.9% 5438 ± 29% interrupts.CPU83.NMI:Non-maskable_interrupts
7875 -30.9% 5438 ± 29% interrupts.CPU83.PMI:Performance_monitoring_interrupts
1691 ± 2% +22.1% 2065 ± 14% interrupts.CPU9.RES:Rescheduling_interrupts
7875 -29.3% 5567 ± 30% interrupts.CPU98.NMI:Non-maskable_interrupts
7875 -29.3% 5567 ± 30% interrupts.CPU98.PMI:Performance_monitoring_interrupts
7854 -28.2% 5642 ± 31% interrupts.CPU99.NMI:Non-maskable_interrupts
7854 -28.2% 5642 ± 31% interrupts.CPU99.PMI:Performance_monitoring_interrupts
120751 +18.2% 142724 ± 8% sched_debug.cfs_rq:/.exec_clock.avg
121892 +17.9% 143740 ± 8% sched_debug.cfs_rq:/.exec_clock.max
117596 +19.5% 140476 ± 7% sched_debug.cfs_rq:/.exec_clock.min
34.40 ± 3% -8.0% 31.65 ± 2% sched_debug.cfs_rq:/.load_avg.avg
13304569 +18.2% 15728934 ± 8% sched_debug.cfs_rq:/.min_vruntime.avg
13797625 +18.1% 16288367 ± 7% sched_debug.cfs_rq:/.min_vruntime.max
691785 ± 3% +16.4% 805273 ± 5% sched_debug.cfs_rq:/.min_vruntime.stddev
14.28 ± 6% +19.3% 17.04 ± 6% sched_debug.cfs_rq:/.nr_spread_over.avg
204.80 -12.7% 178.80 ± 7% sched_debug.cfs_rq:/.removed.load_avg.max
9454 -12.8% 8245 ± 8% sched_debug.cfs_rq:/.removed.runnable_sum.max
107.20 ± 2% -24.1% 81.38 ± 24% sched_debug.cfs_rq:/.removed.util_avg.max
14.64 ± 29% -31.9% 9.96 ± 17% sched_debug.cfs_rq:/.removed.util_avg.stddev
-4004068 +18.3% -4735917 sched_debug.cfs_rq:/.spread0.min
690760 ± 3% +16.4% 804372 ± 5% sched_debug.cfs_rq:/.spread0.stddev
275.20 -38.3% 169.92 ± 40% sched_debug.cfs_rq:/.util_est_enqueued.min
164179 +12.7% 185002 ± 6% sched_debug.cpu.clock.avg
164305 +12.7% 185092 ± 6% sched_debug.cpu.clock.max
164106 +12.7% 184938 ± 6% sched_debug.cpu.clock.min
78.86 ± 11% -31.7% 53.88 ± 42% sched_debug.cpu.clock.stddev
164179 +12.7% 185002 ± 6% sched_debug.cpu.clock_task.avg
164305 +12.7% 185092 ± 6% sched_debug.cpu.clock_task.max
164106 +12.7% 184938 ± 6% sched_debug.cpu.clock_task.min
78.86 ± 11% -31.7% 53.88 ± 42% sched_debug.cpu.clock_task.stddev
464.84 +26.7% 589.08 ± 11% sched_debug.cpu.curr->pid.stddev
0.00 ± 10% -25.8% 0.00 ± 28% sched_debug.cpu.next_balance.stddev
12111 ± 4% +44.2% 17462 ± 19% sched_debug.cpu.sched_count.max
302.08 ± 8% -15.4% 255.59 ± 15% sched_debug.cpu.sched_goidle.stddev
1499 ± 2% +20.0% 1799 ± 8% sched_debug.cpu.ttwu_local.avg
164105 +12.7% 184938 ± 6% sched_debug.cpu_clk
161243 +12.9% 182076 ± 6% sched_debug.ktime
164476 +12.7% 185310 ± 6% sched_debug.sched_clk
73.02 -6.0 67.03 ± 4% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.__do_page_cache_readahead.ondemand_readahead.filemap_fault.__xfs_filemap_fault
64.54 -5.4 59.10 ± 6% perf-profile.calltrace.cycles-pp.__alloc_pages_slowpath.__alloc_pages_nodemask.__do_page_cache_readahead.ondemand_readahead.filemap_fault
85.22 -5.2 80.00 ± 2% perf-profile.calltrace.cycles-pp.ondemand_readahead.filemap_fault.__xfs_filemap_fault.__do_fault.__handle_mm_fault
85.22 -5.2 80.00 ± 2% perf-profile.calltrace.cycles-pp.__do_page_cache_readahead.ondemand_readahead.filemap_fault.__xfs_filemap_fault.__do_fault
62.77 -5.1 57.65 ± 6% perf-profile.calltrace.cycles-pp.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.__do_page_cache_readahead.ondemand_readahead
1.54 ± 2% -0.3 1.27 ± 7% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_slowpath.__alloc_pages_nodemask.__do_page_cache_readahead.ondemand_readahead
0.69 -0.3 0.43 ± 58% perf-profile.calltrace.cycles-pp.isolate_lru_pages.shrink_inactive_list.shrink_node_memcg.shrink_node.do_try_to_free_pages
1.37 ± 2% -0.2 1.12 ± 8% perf-profile.calltrace.cycles-pp._raw_spin_lock.get_page_from_freelist.__alloc_pages_slowpath.__alloc_pages_nodemask.__do_page_cache_readahead
1.36 ± 2% -0.2 1.12 ± 8% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.get_page_from_freelist.__alloc_pages_slowpath.__alloc_pages_nodemask
0.97 ± 10% -0.1 0.82 ± 12% perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.shrink_inactive_list.shrink_node_memcg.shrink_node.balance_pgdat
0.69 ± 4% +0.2 0.91 ± 10% perf-profile.calltrace.cycles-pp.rmap_walk_file.try_to_unmap.shrink_page_list.shrink_inactive_list.shrink_node_memcg
0.54 ± 4% +0.2 0.75 ± 10% perf-profile.calltrace.cycles-pp.try_to_unmap_one.rmap_walk_file.try_to_unmap.shrink_page_list.shrink_inactive_list
0.75 ± 5% +0.2 0.97 ± 9% perf-profile.calltrace.cycles-pp.try_to_unmap.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node
2.42 ± 12% +0.6 3.02 ± 3% perf-profile.calltrace.cycles-pp.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node
7.29 ± 6% +1.7 9.03 ± 15% perf-profile.calltrace.cycles-pp.free_pcppages_bulk.free_unref_page_list.shrink_page_list.shrink_inactive_list.shrink_node_memcg
7.45 ± 6% +1.8 9.20 ± 15% perf-profile.calltrace.cycles-pp.free_unref_page_list.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node
6.87 ± 6% +1.8 8.62 ± 16% perf-profile.calltrace.cycles-pp._raw_spin_lock.free_pcppages_bulk.free_unref_page_list.shrink_page_list.shrink_inactive_list
6.83 ± 6% +1.8 8.59 ± 16% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.free_pcppages_bulk.free_unref_page_list.shrink_page_list
18.80 ± 5% +3.5 22.30 ± 9% perf-profile.calltrace.cycles-pp.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node.do_try_to_free_pages
85.22 -5.2 80.00 ± 2% perf-profile.children.cycles-pp.ondemand_readahead
87.51 -4.5 82.97 ± 3% perf-profile.children.cycles-pp.__do_page_cache_readahead
0.79 -0.1 0.70 ± 2% perf-profile.children.cycles-pp.isolate_lru_pages
0.24 ± 4% -0.1 0.17 ± 26% perf-profile.children.cycles-pp.alloc_pages_vma
0.19 ± 10% -0.1 0.13 ± 20% perf-profile.children.cycles-pp.do_wp_page
0.19 ± 10% -0.1 0.13 ± 20% perf-profile.children.cycles-pp.wp_page_copy
0.15 ± 20% -0.0 0.10 ± 25% perf-profile.children.cycles-pp.__put_user_4
0.15 ± 20% -0.0 0.10 ± 25% perf-profile.children.cycles-pp.schedule_tail
0.22 ± 4% -0.0 0.18 ± 8% perf-profile.children.cycles-pp.wake_all_kswapds
0.23 -0.0 0.19 ± 4% perf-profile.children.cycles-pp.__isolate_lru_page
0.17 ± 2% -0.0 0.14 ± 7% perf-profile.children.cycles-pp.wakeup_kswapd
0.16 -0.0 0.14 ± 3% perf-profile.children.cycles-pp.___might_sleep
0.08 ± 6% -0.0 0.06 ± 6% perf-profile.children.cycles-pp.prepare_exit_to_usermode
0.10 +0.0 0.11 ± 4% perf-profile.children.cycles-pp.__lock_text_start
0.19 ± 5% +0.0 0.22 ± 6% perf-profile.children.cycles-pp.alloc_set_pte
0.28 ± 3% +0.0 0.32 ± 6% perf-profile.children.cycles-pp.__mod_lruvec_state
0.14 ± 7% +0.0 0.18 ± 4% perf-profile.children.cycles-pp.page_add_file_rmap
0.00 +0.1 0.07 ± 31% perf-profile.children.cycles-pp.__pmd_alloc
0.00 +0.1 0.08 ± 20% perf-profile.children.cycles-pp.search_binary_handler
0.00 +0.1 0.08 ± 20% perf-profile.children.cycles-pp.load_elf_binary
0.42 ± 11% +0.2 0.58 ± 18% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
0.42 ± 11% +0.2 0.58 ± 18% perf-profile.children.cycles-pp.do_syscall_64
0.14 ± 3% +0.3 0.39 ± 9% perf-profile.children.cycles-pp.page_remove_rmap
2.14 ± 4% +0.3 2.42 ± 3% perf-profile.children.cycles-pp.rmap_walk_file
0.65 ± 13% +0.3 0.94 ± 14% perf-profile.children.cycles-pp.try_to_unmap_one
0.88 ± 11% +0.3 1.18 ± 12% perf-profile.children.cycles-pp.try_to_unmap
0.14 ± 3% +0.3 0.45 ± 10% perf-profile.children.cycles-pp.workingset_eviction
2.59 ± 12% +0.7 3.31 ± 2% perf-profile.children.cycles-pp.__remove_mapping
7.72 ± 5% +1.8 9.54 ± 15% perf-profile.children.cycles-pp.free_pcppages_bulk
7.88 ± 5% +1.8 9.71 ± 15% perf-profile.children.cycles-pp.free_unref_page_list
20.23 ± 4% +3.4 23.60 ± 9% perf-profile.children.cycles-pp.shrink_page_list
0.19 -0.1 0.14 ± 6% perf-profile.self.cycles-pp.__remove_mapping
0.23 -0.0 0.19 ± 4% perf-profile.self.cycles-pp.__isolate_lru_page
0.17 ± 5% -0.0 0.14 ± 8% perf-profile.self.cycles-pp.wakeup_kswapd
0.16 -0.0 0.14 ± 3% perf-profile.self.cycles-pp.___might_sleep
0.07 -0.0 0.06 ± 7% perf-profile.self.cycles-pp.ptep_clear_flush_young
0.07 -0.0 0.06 perf-profile.self.cycles-pp.shrink_inactive_list
0.06 ± 9% +0.0 0.09 ± 7% perf-profile.self.cycles-pp.page_add_file_rmap
0.10 +0.1 0.15 ± 7% perf-profile.self.cycles-pp.__mod_lruvec_state
0.06 ± 9% +0.2 0.28 ± 9% perf-profile.self.cycles-pp.page_remove_rmap
0.14 ± 3% +0.3 0.45 ± 10% perf-profile.self.cycles-pp.workingset_eviction
***************************************************************************************************
lkp-ivb-2ep1: 48 threads Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz with 64G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/1000t/debian-x86_64-2019-05-14.cgz/300s/lkp-ivb-2ep1/mem_rtns_1/reaim/0x42e
commit:
f17f33d34b ("mm/lru: add per lruvec lock for memcg")
3145e78472 ("mm/lruvec: add irqsave flags into lruvec struct")
f17f33d34bfab978 3145e78472f7ad746e15180092a
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
2:2 149% 5:4 perf-profile.calltrace.cycles-pp.error_entry
2:2 154% 6:4 perf-profile.children.cycles-pp.error_entry
2:2 113% 4:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
1285 ± 2% -2.7% 1250 reaim.child_systime
70.00 +2.6% 71.81 reaim.jti
29.48 -6.4% 27.59 ± 2% reaim.std_dev_percent
9.27 -6.7% 8.65 reaim.std_dev_time
2988 -2.2% 2922 reaim.time.maximum_resident_set_size
10285 ± 2% -2.7% 10003 reaim.time.system_time
1081 -3.0% 1049 boot-time.idle
880.50 ± 90% -91.9% 71.00 ± 13% meminfo.Mlocked
0.00 +0.0 0.00 ± 24% mpstat.cpu.all.soft%
851904 +1415.0% 12906297 ±158% cpuidle.C1.time
24823 ± 4% +959.8% 263063 ±152% cpuidle.C1.usage
219.50 ± 90% -92.1% 17.25 ± 13% proc-vmstat.nr_mlock
388318 ± 4% -7.6% 358961 ± 6% proc-vmstat.numa_pte_updates
577.50 ± 7% -13.3% 500.75 ± 4% slabinfo.file_lock_cache.active_objs
577.50 ± 7% -13.3% 500.75 ± 4% slabinfo.file_lock_cache.num_objs
42239 ± 3% -5.0% 40110 ± 4% slabinfo.kmalloc-32.active_objs
13061 ± 5% -8.5% 11945 ± 4% slabinfo.kmalloc-512.active_objs
13325 ± 3% -10.0% 11992 ± 4% slabinfo.kmalloc-512.num_objs
423.06 ± 13% -33.5% 281.42 ± 10% sched_debug.cfs_rq:/.exec_clock.stddev
282.08 ± 20% -29.0% 200.25 ± 2% sched_debug.cfs_rq:/.load_avg.max
62.75 ± 24% -26.4% 46.17 ± 23% sched_debug.cfs_rq:/.nr_spread_over.max
1325 ± 4% -12.5% 1160 sched_debug.cfs_rq:/.util_avg.max
102.19 ± 6% -25.1% 76.55 ± 5% sched_debug.cfs_rq:/.util_avg.stddev
-17.83 -19.4% -14.38 sched_debug.cpu.nr_uninterruptible.min
8.55 ± 5% -17.0% 7.10 ± 8% sched_debug.cpu.nr_uninterruptible.stddev
116725 ± 5% -23.5% 89310 ± 7% sched_debug.cpu.sched_count.max
11450 ± 7% -30.4% 7968 ± 6% sched_debug.cpu.sched_count.stddev
12478 ± 22% -81.9% 2254 ± 89% numa-meminfo.node0.Inactive
12478 ± 22% -82.5% 2178 ± 94% numa-meminfo.node0.Inactive(anon)
36347 ± 9% -25.8% 26973 ± 10% numa-meminfo.node0.KReclaimable
14552 -32.3% 9849 numa-meminfo.node0.Mapped
36347 ± 9% -25.8% 26973 ± 10% numa-meminfo.node0.SReclaimable
18187 ± 44% -71.5% 5188 ± 76% numa-meminfo.node0.Shmem
3558 ± 80% +286.3% 13744 ± 14% numa-meminfo.node1.Inactive
3374 ± 85% +305.0% 13664 ± 15% numa-meminfo.node1.Inactive(anon)
24771 ± 13% +40.2% 34723 ± 9% numa-meminfo.node1.KReclaimable
9923 +46.4% 14531 numa-meminfo.node1.Mapped
24771 ± 13% +40.2% 34723 ± 9% numa-meminfo.node1.SReclaimable
3119 ± 22% -82.6% 543.00 ± 95% numa-vmstat.node0.nr_inactive_anon
3693 ± 2% -31.0% 2547 ± 2% numa-vmstat.node0.nr_mapped
94.00 ± 87% -90.2% 9.25 ± 19% numa-vmstat.node0.nr_mlock
4546 ± 44% -71.5% 1296 ± 76% numa-vmstat.node0.nr_shmem
9086 ± 9% -25.8% 6743 ± 10% numa-vmstat.node0.nr_slab_reclaimable
3119 ± 22% -82.6% 543.00 ± 95% numa-vmstat.node0.nr_zone_inactive_anon
843.50 ± 85% +305.2% 3417 ± 15% numa-vmstat.node1.nr_inactive_anon
2541 +44.1% 3663 numa-vmstat.node1.nr_mapped
6193 ± 13% +40.2% 8680 ± 9% numa-vmstat.node1.nr_slab_reclaimable
843.50 ± 85% +305.2% 3417 ± 15% numa-vmstat.node1.nr_zone_inactive_anon
156963 ± 5% -4.1% 150510 ± 2% numa-vmstat.node1.numa_other
337.00 ± 38% -46.1% 181.50 ± 8% interrupts.37:PCI-MSI.2621443-edge.eth0-TxRx-2
264.50 ± 2% +199.4% 792.00 ± 37% interrupts.CPU12.RES:Rescheduling_interrupts
6223 ± 32% -33.3% 4151 ± 70% interrupts.CPU18.NMI:Non-maskable_interrupts
6223 ± 32% -33.3% 4151 ± 70% interrupts.CPU18.PMI:Performance_monitoring_interrupts
8310 -50.0% 4157 ± 70% interrupts.CPU21.NMI:Non-maskable_interrupts
8310 -50.0% 4157 ± 70% interrupts.CPU21.PMI:Performance_monitoring_interrupts
6.00 ± 83% +35191.7% 2117 ± 97% interrupts.CPU24.NMI:Non-maskable_interrupts
6.00 ± 83% +35191.7% 2117 ± 97% interrupts.CPU24.PMI:Performance_monitoring_interrupts
337.00 ± 38% -46.1% 181.50 ± 8% interrupts.CPU26.37:PCI-MSI.2621443-edge.eth0-TxRx-2
2360 ± 76% -55.6% 1048 ±170% interrupts.CPU28.NMI:Non-maskable_interrupts
2360 ± 76% -55.6% 1048 ±170% interrupts.CPU28.PMI:Performance_monitoring_interrupts
789.50 ± 92% -54.4% 360.00 ±154% interrupts.CPU29.RES:Rescheduling_interrupts
1588 ± 20% -69.5% 484.25 ±138% interrupts.CPU30.RES:Rescheduling_interrupts
1480 ± 18% -68.6% 465.25 ±126% interrupts.CPU31.RES:Rescheduling_interrupts
484.50 ± 99% -99.9% 0.25 ±173% interrupts.CPU32.TLB:TLB_shootdowns
6217 ± 32% -49.8% 3123 ± 57% interrupts.CPU46.NMI:Non-maskable_interrupts
6217 ± 32% -49.8% 3123 ± 57% interrupts.CPU46.PMI:Performance_monitoring_interrupts
1390 ± 12% -75.4% 341.50 ±115% interrupts.CPU5.RES:Rescheduling_interrupts
2051 ± 2% -79.4% 421.75 ±107% interrupts.CPU6.RES:Rescheduling_interrupts
1152 ± 10% -73.0% 311.00 ± 89% interrupts.CPU7.RES:Rescheduling_interrupts
0.67 ± 2% +0.0 0.71 ± 3% perf-stat.i.branch-miss-rate%
74600669 +3.9% 77524328 perf-stat.i.branch-misses
3.38 +0.1 3.43 perf-stat.i.cache-miss-rate%
360.94 +6.3% 383.61 perf-stat.i.cpu-migrations
1.50 ± 6% -0.3 1.20 ± 8% perf-stat.i.dTLB-load-miss-rate%
2.612e+08 ± 6% -20.0% 2.09e+08 ± 9% perf-stat.i.dTLB-load-misses
0.43 -0.0 0.42 perf-stat.i.dTLB-store-miss-rate%
53834460 -2.8% 52300639 perf-stat.i.dTLB-store-misses
3705338 +5.0% 3891668 ± 2% perf-stat.i.iTLB-loads
1942698 ± 5% -5.1% 1844099 ± 5% perf-stat.i.node-store-misses
0.53 +0.0 0.54 perf-stat.overall.branch-miss-rate%
1.51 ± 6% -0.3 1.20 ± 9% perf-stat.overall.dTLB-load-miss-rate%
0.44 -0.0 0.42 perf-stat.overall.dTLB-store-miss-rate%
10.90 ± 5% -0.5 10.36 ± 3% perf-stat.overall.node-store-miss-rate%
74375503 +3.9% 77289109 perf-stat.ps.branch-misses
359.88 +6.2% 382.36 perf-stat.ps.cpu-migrations
2.604e+08 ± 6% -20.0% 2.083e+08 ± 9% perf-stat.ps.dTLB-load-misses
53671651 -2.9% 52140492 perf-stat.ps.dTLB-store-misses
3694140 +5.0% 3880003 ± 2% perf-stat.ps.iTLB-loads
1936830 ± 5% -5.1% 1838125 ± 5% perf-stat.ps.node-store-misses
1.13 -0.2 0.97 ± 3% perf-profile.calltrace.cycles-pp.find_vma.__do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.92 ± 2% -0.1 0.85 perf-profile.calltrace.cycles-pp.free_unref_page_list.release_pages.tlb_flush_mmu.tlb_finish_mmu.unmap_region
1.25 ± 2% -0.1 1.19 ± 2% perf-profile.calltrace.cycles-pp.unlink_anon_vmas.free_pgtables.unmap_region.__do_munmap.__x64_sys_brk
0.77 ± 5% -0.1 0.71 perf-profile.calltrace.cycles-pp.mem_cgroup_try_charge_delay.do_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
1.37 ± 2% -0.0 1.32 ± 2% perf-profile.calltrace.cycles-pp.free_pgtables.unmap_region.__do_munmap.__x64_sys_brk.do_syscall_64
0.57 ± 5% -0.0 0.53 ± 4% perf-profile.calltrace.cycles-pp.down_write_killable.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.54 -0.1 1.40 ± 3% perf-profile.children.cycles-pp.find_vma
0.99 ± 2% -0.1 0.92 perf-profile.children.cycles-pp.free_unref_page_list
1.27 -0.1 1.21 ± 2% perf-profile.children.cycles-pp.unlink_anon_vmas
0.41 -0.1 0.35 ± 2% perf-profile.children.cycles-pp.anon_vma_interval_tree_remove
1.39 ± 2% -0.1 1.33 ± 2% perf-profile.children.cycles-pp.free_pgtables
0.77 ± 5% -0.1 0.72 perf-profile.children.cycles-pp.mem_cgroup_try_charge_delay
0.23 ± 2% -0.1 0.17 ± 26% perf-profile.children.cycles-pp.get_mem_cgroup_from_mm
1.66 ± 2% -0.0 1.61 ± 2% perf-profile.children.cycles-pp.anon_vma_clone
0.79 -0.0 0.76 ± 3% perf-profile.children.cycles-pp.___perf_sw_event
0.10 -0.0 0.07 ± 10% perf-profile.children.cycles-pp.get_vma_policy
0.07 ± 7% -0.0 0.04 ± 57% perf-profile.children.cycles-pp.free_pcp_prepare
0.12 ± 12% -0.0 0.10 ± 7% perf-profile.children.cycles-pp.unmap_single_vma
0.14 ± 7% -0.0 0.12 ± 10% perf-profile.children.cycles-pp.pmd_devmap_trans_unstable
0.21 ± 2% -0.0 0.20 ± 5% perf-profile.children.cycles-pp.downgrade_write
0.09 -0.0 0.07 ± 14% perf-profile.children.cycles-pp.vmacache_update
0.09 -0.0 0.08 ± 6% perf-profile.children.cycles-pp.security_mmap_addr
0.08 -0.0 0.07 ± 6% perf-profile.children.cycles-pp.perf_exclude_event
0.07 ± 23% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.tlb_table_flush
0.81 ± 4% -0.1 0.70 ± 3% perf-profile.self.cycles-pp.find_vma
0.15 ± 20% -0.1 0.08 ± 73% perf-profile.self.cycles-pp.mem_cgroup_from_task
0.23 ± 8% -0.1 0.18 ± 2% perf-profile.self.cycles-pp.unmap_region
0.22 -0.1 0.17 ± 26% perf-profile.self.cycles-pp.get_mem_cgroup_from_mm
1.19 ± 2% -0.0 1.15 perf-profile.self.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.48 ± 4% -0.0 0.43 ± 4% perf-profile.self.cycles-pp.__might_sleep
0.82 -0.0 0.78 perf-profile.self.cycles-pp.__handle_mm_fault
0.51 ± 2% -0.0 0.46 ± 3% perf-profile.self.cycles-pp.anon_vma_clone
0.07 ± 7% -0.0 0.03 ±100% perf-profile.self.cycles-pp.free_pcp_prepare
0.34 -0.0 0.30 ± 4% perf-profile.self.cycles-pp.anon_vma_interval_tree_remove
0.17 ± 3% -0.0 0.14 ± 15% perf-profile.self.cycles-pp.pagevec_lru_move_fn
0.14 ± 3% -0.0 0.11 ± 12% perf-profile.self.cycles-pp.pmd_devmap_trans_unstable
0.18 -0.0 0.15 ± 3% perf-profile.self.cycles-pp.__lock_text_start
0.12 ± 8% -0.0 0.10 ± 8% perf-profile.self.cycles-pp.unmap_single_vma
0.08 ± 5% -0.0 0.06 ± 13% perf-profile.self.cycles-pp.vmacache_update
0.21 ± 4% -0.0 0.19 ± 6% perf-profile.self.cycles-pp.tlb_finish_mmu
0.21 ± 2% -0.0 0.19 ± 4% perf-profile.self.cycles-pp.downgrade_write
0.08 ± 6% -0.0 0.06 ± 7% perf-profile.self.cycles-pp.perf_exclude_event
0.13 -0.0 0.12 ± 3% perf-profile.self.cycles-pp.get_task_policy
0.16 +0.0 0.18 ± 3% perf-profile.self.cycles-pp.__perf_sw_event
0.17 +0.0 0.20 ± 7% perf-profile.self.cycles-pp.mem_cgroup_try_charge
0.24 ± 4% +0.0 0.27 ± 3% perf-profile.self.cycles-pp.cpumask_any_but
0.11 ± 4% +0.0 0.16 ± 5% perf-profile.self.cycles-pp.mem_cgroup_charge_statistics
0.04 ±100% +0.1 0.10 ± 4% perf-profile.self.cycles-pp.tlb_table_flush
***************************************************************************************************
lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2019-05-14.cgz/300s/lkp-bdw-ep6/lru-file-readonce/vm-scalability/0xb000036
commit:
f17f33d34b ("mm/lru: add per lruvec lock for memcg")
3145e78472 ("mm/lruvec: add irqsave flags into lruvec struct")
f17f33d34bfab978 3145e78472f7ad746e15180092a
---------------- ---------------------------
       fail:runs  %reproduction    fail:runs
           |             |             |
          :2            50%           1:4     dmesg.WARNING:at#for_ip_interrupt_entry/0x
          :2            50%           1:4     dmesg.WARNING:at_ip_read_pages/0x
         %stddev      %change         %stddev
             \           |                \
259812 +2.2% 265460 vm-scalability.median
22884878 +2.2% 23391337 vm-scalability.throughput
223.64 -2.3% 218.39 vm-scalability.time.elapsed_time
223.64 -2.3% 218.39 vm-scalability.time.elapsed_time.max
237854 +2.5% 243716 vm-scalability.time.involuntary_context_switches
15264 -2.3% 14916 vm-scalability.time.system_time
21.94 +2.3% 22.45 turbostat.RAMWatt
2926 +2.4% 2996 vmstat.system.cs
1124060 ± 3% -19.3% 907493 ± 25% cpuidle.C1E.time
12959 ± 7% -16.0% 10880 ± 24% cpuidle.C1E.usage
0.53 ±100% +457.6% 2.96 ± 75% sched_debug.cfs_rq:/.removed.util_avg.avg
46.75 ±100% +171.7% 127.00 ± 4% sched_debug.cfs_rq:/.removed.util_avg.max
4.96 ±100% +251.2% 17.40 ± 35% sched_debug.cfs_rq:/.removed.util_avg.stddev
591.00 ± 13% -18.9% 479.50 ± 17% slabinfo.kmalloc-rcl-128.active_objs
591.00 ± 13% -18.9% 479.50 ± 17% slabinfo.kmalloc-rcl-128.num_objs
1175 ± 3% +13.1% 1329 ± 3% slabinfo.task_group.active_objs
1175 ± 3% +13.1% 1329 ± 3% slabinfo.task_group.num_objs
131456 ± 4% +29.6% 170358 ± 23% numa-meminfo.node0.Active
131211 ± 4% +29.5% 169883 ± 23% numa-meminfo.node0.Active(anon)
168127 -24.4% 127152 ± 31% numa-meminfo.node1.Active
167827 -24.3% 127081 ± 31% numa-meminfo.node1.Active(anon)
31970 ± 29% -44.4% 17761 ± 72% numa-meminfo.node1.AnonHugePages
330.50 -30.0% 231.25 ± 3% proc-vmstat.kswapd_high_wmark_hit_quickly
415.00 ± 2% -47.6% 217.50 ± 4% proc-vmstat.nr_isolated_file
16308 -0.6% 16217 proc-vmstat.nr_kernel_stack
31444 -2.7% 30580 proc-vmstat.nr_shmem
7524 ± 21% -61.8% 2876 ± 49% proc-vmstat.numa_hint_faults
5213 ± 9% -92.8% 374.50 ± 61% proc-vmstat.numa_hint_faults_local
664296 -3.4% 641725 proc-vmstat.pgfault
32802 ± 4% +29.4% 42460 ± 23% numa-vmstat.node0.nr_active_anon
213.50 ± 2% -47.7% 111.75 ± 4% numa-vmstat.node0.nr_isolated_file
32802 ± 4% +29.4% 42460 ± 23% numa-vmstat.node0.nr_zone_active_anon
42032 -24.4% 31786 ± 32% numa-vmstat.node1.nr_active_anon
74.50 ± 74% -76.8% 17.25 ± 57% numa-vmstat.node1.nr_active_file
194.50 -45.8% 105.50 ± 6% numa-vmstat.node1.nr_isolated_file
42032 -24.4% 31786 ± 32% numa-vmstat.node1.nr_zone_active_anon
74.50 ± 74% -76.8% 17.25 ± 57% numa-vmstat.node1.nr_zone_active_file
21198 ± 10% -18.5% 17273 ± 11% softirqs.CPU0.RCU
22765 ± 13% -29.6% 16016 ± 14% softirqs.CPU10.RCU
18723 ± 7% -12.1% 16453 ± 6% softirqs.CPU11.RCU
20028 ± 3% -17.6% 16508 ± 15% softirqs.CPU12.RCU
25163 ± 19% -36.0% 16108 ± 17% softirqs.CPU13.RCU
19979 ± 11% -24.4% 15098 ± 9% softirqs.CPU18.RCU
30906 ± 38% -51.2% 15079 ± 8% softirqs.CPU19.RCU
20918 ± 2% -27.7% 15118 ± 9% softirqs.CPU20.RCU
17962 +27.1% 22833 ± 10% softirqs.CPU22.RCU
23341 ± 20% -24.2% 17697 ± 12% softirqs.CPU4.RCU
10386 ± 11% -15.7% 8758 softirqs.CPU46.SCHED
104887 ± 4% -7.9% 96570 ± 2% softirqs.CPU46.TIMER
17655 -12.9% 15380 ± 7% softirqs.CPU47.RCU
20640 ± 11% -15.6% 17412 ± 10% softirqs.CPU5.RCU
35464 ± 28% -56.3% 15506 ± 13% softirqs.CPU50.RCU
17606 ± 4% -8.6% 16095 ± 10% softirqs.CPU58.RCU
31517 ± 42% -50.4% 15642 ± 8% softirqs.CPU59.RCU
22560 ± 4% -32.7% 15176 ± 7% softirqs.CPU6.RCU
18926 ± 8% -13.5% 16369 ± 6% softirqs.CPU67.RCU
18276 ± 12% -16.0% 15345 ± 6% softirqs.CPU8.RCU
124918 -18.3% 102110 ± 11% softirqs.CPU87.TIMER
23209 ± 21% -29.6% 16333 ± 3% softirqs.CPU9.RCU
129.50 ± 4% +95.6% 253.25 ± 45% interrupts.35:IR-PCI-MSI.1572866-edge.eth0-TxRx-1
215.50 ± 28% -39.7% 130.00 ± 4% interrupts.39:IR-PCI-MSI.1572870-edge.eth0-TxRx-5
209649 -2.6% 204207 interrupts.CAL:Function_call_interrupts
1128 ± 24% -71.9% 316.75 ± 36% interrupts.CPU1.RES:Rescheduling_interrupts
2484 ± 4% -6.2% 2330 ± 3% interrupts.CPU10.CAL:Function_call_interrupts
740.00 ± 32% -77.1% 169.75 ± 58% interrupts.CPU10.RES:Rescheduling_interrupts
217.00 ± 37% -53.0% 102.00 ± 30% interrupts.CPU12.RES:Rescheduling_interrupts
129.50 ± 4% +95.6% 253.25 ± 45% interrupts.CPU14.35:IR-PCI-MSI.1572866-edge.eth0-TxRx-1
432.00 ± 71% -74.4% 110.50 ± 79% interrupts.CPU15.RES:Rescheduling_interrupts
215.50 ± 28% -39.7% 130.00 ± 4% interrupts.CPU18.39:IR-PCI-MSI.1572870-edge.eth0-TxRx-5
310.00 ± 3% -55.6% 137.50 ± 58% interrupts.CPU24.RES:Rescheduling_interrupts
97.00 ± 10% +151.8% 244.25 ± 18% interrupts.CPU33.RES:Rescheduling_interrupts
92.50 ± 36% +111.9% 196.00 ± 22% interrupts.CPU34.RES:Rescheduling_interrupts
471.50 ± 82% -71.3% 135.50 ± 77% interrupts.CPU38.RES:Rescheduling_interrupts
74.50 ± 3% +266.1% 272.75 ± 81% interrupts.CPU4.RES:Rescheduling_interrupts
592.50 ± 66% -78.9% 125.00 ± 44% interrupts.CPU42.RES:Rescheduling_interrupts
300.50 ± 5% -74.8% 75.75 ± 15% interrupts.CPU45.RES:Rescheduling_interrupts
299.50 ± 67% -49.9% 150.00 ± 97% interrupts.CPU46.RES:Rescheduling_interrupts
56.00 ± 3% +740.2% 470.50 ± 80% interrupts.CPU47.RES:Rescheduling_interrupts
1075 ± 41% -89.6% 111.50 ± 33% interrupts.CPU54.RES:Rescheduling_interrupts
62.00 +562.1% 410.50 ±141% interrupts.CPU60.RES:Rescheduling_interrupts
245.00 ± 33% -68.9% 76.25 ± 21% interrupts.CPU62.RES:Rescheduling_interrupts
57.00 ± 7% +47.4% 84.00 ± 21% interrupts.CPU63.RES:Rescheduling_interrupts
56.50 ± 2% +156.6% 145.00 ± 33% interrupts.CPU77.RES:Rescheduling_interrupts
275.00 ± 40% -67.0% 90.75 ± 26% interrupts.CPU80.RES:Rescheduling_interrupts
720.50 ± 18% -23.9% 548.25 ± 15% interrupts.TLB:TLB_shootdowns
57500480 +2.9% 59179457 perf-stat.i.cache-misses
9.452e+08 +2.9% 9.73e+08 perf-stat.i.cache-references
2900 +2.4% 2971 perf-stat.i.context-switches
3459 -4.5% 3303 perf-stat.i.cycles-between-cache-misses
10081151 +1.8% 10261886 perf-stat.i.dTLB-store-misses
7.364e+09 +2.1% 7.52e+09 perf-stat.i.dTLB-stores
2852 -1.1% 2821 perf-stat.i.minor-faults
50.71 +1.6 52.32 perf-stat.i.node-load-miss-rate%
4766441 +12.3% 5354740 ± 2% perf-stat.i.node-load-misses
4714568 +4.5% 4926710 perf-stat.i.node-loads
4052899 -1.5% 3992921 perf-stat.i.node-store-misses
13083127 +1.8% 13315856 perf-stat.i.node-stores
2852 -1.1% 2821 perf-stat.i.page-faults
15.02 +2.6% 15.42 perf-stat.overall.MPKI
3591 -2.9% 3489 perf-stat.overall.cycles-between-cache-misses
50.29 +1.8 52.09 perf-stat.overall.node-load-miss-rate%
23.66 -0.6 23.07 perf-stat.overall.node-store-miss-rate%
3275 -1.9% 3211 perf-stat.overall.path-length
57432784 +2.9% 59087179 perf-stat.ps.cache-misses
9.44e+08 +2.9% 9.715e+08 perf-stat.ps.cache-references
2893 +2.4% 2962 perf-stat.ps.context-switches
10065230 +1.8% 10243103 perf-stat.ps.dTLB-store-misses
7.354e+09 +2.1% 7.507e+09 perf-stat.ps.dTLB-stores
2834 -1.0% 2805 perf-stat.ps.minor-faults
4759803 +12.3% 5343683 ± 2% perf-stat.ps.node-load-misses
4704882 +4.4% 4914158 perf-stat.ps.node-loads
4051050 -1.5% 3988655 perf-stat.ps.node-store-misses
13070798 +1.7% 13297704 perf-stat.ps.node-stores
2835 -1.0% 2805 perf-stat.ps.page-faults
1.407e+13 -1.9% 1.38e+13 perf-stat.total.instructions
26.93 -16.9 10.02 ± 4% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.shrink_inactive_list.shrink_node_memcg.shrink_node
26.95 -16.9 10.04 ± 4% perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.shrink_inactive_list.shrink_node_memcg.shrink_node.do_try_to_free_pages
21.96 -1.6 20.33 perf-profile.calltrace.cycles-pp.iomap_apply.iomap_readpages.read_pages.__do_page_cache_readahead.ondemand_readahead
21.88 -1.6 20.25 perf-profile.calltrace.cycles-pp.iomap_readpages_actor.iomap_apply.iomap_readpages.read_pages.__do_page_cache_readahead
21.96 -1.6 20.34 perf-profile.calltrace.cycles-pp.iomap_readpages.read_pages.__do_page_cache_readahead.ondemand_readahead.generic_file_read_iter
21.99 -1.6 20.37 perf-profile.calltrace.cycles-pp.read_pages.__do_page_cache_readahead.ondemand_readahead.generic_file_read_iter.xfs_file_buffered_aio_read
13.11 -1.6 11.48 ± 3% perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.add_to_page_cache_lru.iomap_readpages_actor.iomap_apply
13.19 -1.6 11.57 ± 3% perf-profile.calltrace.cycles-pp.__lru_cache_add.add_to_page_cache_lru.iomap_readpages_actor.iomap_apply.iomap_readpages
14.74 -1.6 13.15 ± 2% perf-profile.calltrace.cycles-pp.add_to_page_cache_lru.iomap_readpages_actor.iomap_apply.iomap_readpages.read_pages
11.98 -1.6 10.40 ± 3% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.add_to_page_cache_lru.iomap_readpages_actor
11.95 -1.6 10.36 ± 3% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.add_to_page_cache_lru
47.93 -1.3 46.67 perf-profile.calltrace.cycles-pp.shrink_node.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask
47.94 -1.3 46.68 perf-profile.calltrace.cycles-pp.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.__do_page_cache_readahead.ondemand_readahead
47.94 -1.3 46.68 perf-profile.calltrace.cycles-pp.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.__do_page_cache_readahead
50.03 -1.1 48.89 perf-profile.calltrace.cycles-pp.__alloc_pages_slowpath.__alloc_pages_nodemask.__do_page_cache_readahead.ondemand_readahead.generic_file_read_iter
45.45 -0.9 44.56 perf-profile.calltrace.cycles-pp.shrink_inactive_list.shrink_node_memcg.shrink_node.do_try_to_free_pages.try_to_free_pages
45.48 -0.9 44.59 perf-profile.calltrace.cycles-pp.shrink_node_memcg.shrink_node.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath
0.76 -0.1 0.68 perf-profile.calltrace.cycles-pp.isolate_lru_pages.shrink_inactive_list.shrink_node_memcg.shrink_node.do_try_to_free_pages
0.81 -0.0 0.77 perf-profile.calltrace.cycles-pp.__pagevec_lru_add_fn.pagevec_lru_move_fn.__lru_cache_add.add_to_page_cache_lru.iomap_readpages_actor
0.56 -0.0 0.53 ± 2% perf-profile.calltrace.cycles-pp.security_file_permission.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.20 ± 3% +0.2 4.41 perf-profile.calltrace.cycles-pp.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node
0.27 ±100% +0.5 0.74 ± 13% perf-profile.calltrace.cycles-pp.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node.balance_pgdat
0.00 +0.7 0.71 ± 2% perf-profile.calltrace.cycles-pp.workingset_eviction.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_node_memcg
61.43 +1.6 62.98 perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.__do_page_cache_readahead.ondemand_readahead.generic_file_read_iter.xfs_file_buffered_aio_read
9.63 ± 6% +2.7 12.29 perf-profile.calltrace.cycles-pp._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.__do_page_cache_readahead.ondemand_readahead
9.56 ± 6% +2.7 12.22 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.__do_page_cache_readahead
11.08 ± 5% +2.7 13.77 perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.__do_page_cache_readahead.ondemand_readahead.generic_file_read_iter
11.16 +16.1 27.28 ± 2% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.free_pcppages_bulk.free_unref_page_list.shrink_page_list
11.21 +16.1 27.34 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock.free_pcppages_bulk.free_unref_page_list.shrink_page_list.shrink_inactive_list
17.48 +16.2 33.64 ± 2% perf-profile.calltrace.cycles-pp.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node.do_try_to_free_pages
12.11 +16.3 28.42 ± 2% perf-profile.calltrace.cycles-pp.free_pcppages_bulk.free_unref_page_list.shrink_page_list.shrink_inactive_list.shrink_node_memcg
12.40 +16.4 28.79 ± 2% perf-profile.calltrace.cycles-pp.free_unref_page_list.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node
29.85 -17.5 12.33 ± 5% perf-profile.children.cycles-pp._raw_spin_lock_irq
14.48 -1.7 12.80 ± 3% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
21.96 -1.6 20.33 perf-profile.children.cycles-pp.iomap_apply
21.96 -1.6 20.34 perf-profile.children.cycles-pp.iomap_readpages
13.21 -1.6 11.59 ± 3% perf-profile.children.cycles-pp.__lru_cache_add
21.99 -1.6 20.37 perf-profile.children.cycles-pp.read_pages
13.20 -1.6 11.57 ± 3% perf-profile.children.cycles-pp.pagevec_lru_move_fn
21.88 -1.6 20.26 perf-profile.children.cycles-pp.iomap_readpages_actor
14.76 -1.6 13.16 ± 2% perf-profile.children.cycles-pp.add_to_page_cache_lru
49.05 -1.3 47.79 perf-profile.children.cycles-pp.shrink_node
48.05 -1.2 46.80 perf-profile.children.cycles-pp.do_try_to_free_pages
48.05 -1.2 46.81 perf-profile.children.cycles-pp.try_to_free_pages
50.15 -1.1 49.02 perf-profile.children.cycles-pp.__alloc_pages_slowpath
46.58 -0.9 45.67 perf-profile.children.cycles-pp.shrink_inactive_list
46.59 -0.9 45.70 perf-profile.children.cycles-pp.shrink_node_memcg
0.81 -0.1 0.71 perf-profile.children.cycles-pp.isolate_lru_pages
0.78 -0.1 0.70 perf-profile.children.cycles-pp.__list_del_entry_valid
0.82 -0.0 0.79 perf-profile.children.cycles-pp.__pagevec_lru_add_fn
1.17 -0.0 1.14 perf-profile.children.cycles-pp.xas_store
0.49 ± 2% -0.0 0.47 perf-profile.children.cycles-pp.__isolate_lru_page
0.08 +0.0 0.09 perf-profile.children.cycles-pp.task_tick_fair
0.05 +0.0 0.06 perf-profile.children.cycles-pp.mem_cgroup_page_lruvec
0.13 +0.0 0.14 ± 3% perf-profile.children.cycles-pp.tick_sched_handle
0.17 ± 3% +0.0 0.18 ± 2% perf-profile.children.cycles-pp.__mod_lruvec_state
4.37 ± 3% +0.2 4.57 perf-profile.children.cycles-pp.__remove_mapping
0.42 +0.3 0.73 ± 2% perf-profile.children.cycles-pp.workingset_eviction
61.59 +1.6 63.16 perf-profile.children.cycles-pp.__alloc_pages_nodemask
12.19 ± 4% +2.8 14.96 perf-profile.children.cycles-pp.get_page_from_freelist
12.46 +16.1 28.60 ± 2% perf-profile.children.cycles-pp.free_pcppages_bulk
12.79 +16.2 29.00 ± 2% perf-profile.children.cycles-pp.free_unref_page_list
18.05 +16.4 34.48 ± 2% perf-profile.children.cycles-pp.shrink_page_list
21.77 ± 2% +18.8 40.58 ± 2% perf-profile.children.cycles-pp._raw_spin_lock
0.77 -0.1 0.69 ± 2% perf-profile.self.cycles-pp.__list_del_entry_valid
0.51 -0.0 0.46 perf-profile.self.cycles-pp.__pagevec_lru_add_fn
0.49 ± 2% -0.0 0.47 perf-profile.self.cycles-pp.__isolate_lru_page
0.10 -0.0 0.09 ± 4% perf-profile.self.cycles-pp.__inode_security_revalidate
0.07 +0.0 0.08 perf-profile.self.cycles-pp.free_unref_page_list
0.33 +0.0 0.34 perf-profile.self.cycles-pp.fsnotify
0.00 +0.1 0.06 perf-profile.self.cycles-pp.mem_cgroup_page_lruvec
1.40 +0.1 1.47 perf-profile.self.cycles-pp.get_page_from_freelist
0.42 +0.3 0.73 ± 2% perf-profile.self.cycles-pp.workingset_eviction
***************************************************************************************************
lkp-hsw-ep4: 72 threads Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz with 256G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/1000/debian-x86_64-2019-05-14.cgz/300s/lkp-hsw-ep4/page_test/reaim/0x43
commit:
f17f33d34b ("mm/lru: add per lruvec lock for memcg")
3145e78472 ("mm/lruvec: add irqsave flags into lruvec struct")
f17f33d34bfab978 3145e78472f7ad746e15180092a
---------------- ---------------------------
       fail:runs  %reproduction    fail:runs
           |             |             |
         1:2           -50%            :4     dmesg.WARNING:at_ip_perf_event_mmap_output/0x
         1:2           112%           3:4     perf-profile.self.cycles-pp.error_entry
         %stddev      %change         %stddev
             \           |                \
1494 -1.9% 1466 reaim.child_systime
139.16 +19.8% 166.67 ± 7% reaim.child_utime
61.31 -1.0% 60.72 reaim.jti
38.28 +1.2% 38.75 reaim.std_dev_percent
6.18 +1.0% 6.24 reaim.std_dev_time
3375615 -1.0% 3342567 reaim.time.involuntary_context_switches
3052 -1.1% 3018 reaim.time.maximum_resident_set_size
19436 -1.9% 19060 reaim.time.system_time
1809 +19.8% 2166 ± 7% reaim.time.user_time
36720 +12.6% 41362 ± 4% meminfo.Shmem
76.00 +14294.3% 10939 ± 86% numa-numastat.node0.other_node
0.00 +0.0 0.00 ± 91% mpstat.cpu.all.soft%
7.77 +1.5 9.30 ± 8% mpstat.cpu.all.usr%
82.00 -2.0% 80.33 vmstat.cpu.sy
11476 -1.4% 11317 vmstat.system.cs
36.05 -10.9% 32.10 ± 6% boot-time.boot
29.89 -13.7% 25.79 ± 8% boot-time.dhcp
2265 -13.0% 1971 ± 6% boot-time.idle
0.81 -9.0% 0.74 ± 9% boot-time.smp_boot
11382 -13.3% 9863 ± 5% numa-vmstat.node0.nr_slab_reclaimable
495.00 +2166.2% 11217 ± 83% numa-vmstat.node0.numa_other
538.00 +311.5% 2213 ± 86% numa-vmstat.node1.nr_shmem
5747 +24.1% 7132 ± 7% numa-vmstat.node1.nr_slab_reclaimable
97564 -38.7% 59821 ± 17% numa-meminfo.node0.AnonHugePages
45518 -13.3% 39453 ± 5% numa-meminfo.node0.KReclaimable
45518 -13.3% 39453 ± 5% numa-meminfo.node0.SReclaimable
78966 +46.5% 115692 ± 9% numa-meminfo.node1.AnonHugePages
22988 +24.1% 28530 ± 7% numa-meminfo.node1.KReclaimable
22988 +24.1% 28530 ± 7% numa-meminfo.node1.SReclaimable
2155 +311.8% 8875 ± 86% numa-meminfo.node1.Shmem
306352 +16.7% 357542 ± 11% cpuidle.C1.time
10526 +15.5% 12160 ± 6% cpuidle.C1.usage
9532913 -81.1% 1800414 ± 70% cpuidle.C1E.time
98734 -82.9% 16909 ± 53% cpuidle.C1E.usage
3096468 -21.5% 2431411 ± 11% cpuidle.C3.usage
8.774e+08 +43.9% 1.262e+09 ± 24% cpuidle.C6.time
1030095 +88.7% 1944254 ± 7% cpuidle.C6.usage
6500 -19.8% 5210 ± 16% cpuidle.POLL.usage
83855 +1.6% 85163 proc-vmstat.nr_active_anon
9181 +12.7% 10346 ± 4% proc-vmstat.nr_shmem
83855 +1.6% 85163 proc-vmstat.nr_zone_active_anon
90516 -6.5% 84611 ± 4% proc-vmstat.numa_hint_faults
99741 -16.9% 82924 ± 6% proc-vmstat.numa_pages_migrated
203375 -4.9% 193476 ± 3% proc-vmstat.numa_pte_updates
7734 +29.1% 9987 ± 8% proc-vmstat.pgactivate
99741 -16.9% 82924 ± 6% proc-vmstat.pgmigrate_success
98468 -83.0% 16696 ± 54% turbostat.C1E
0.04 -0.0 0.01 ±141% turbostat.C1E%
3096255 -21.5% 2431227 ± 11% turbostat.C3
1019774 +89.8% 1935131 ± 7% turbostat.C6
3.73 +1.6 5.36 ± 24% turbostat.C6%
3.35 +26.2% 4.23 ± 9% turbostat.CPU%c1
3.78 -51.3% 1.84 ± 60% turbostat.CPU%c3
1.62 +80.7% 2.93 ± 42% turbostat.CPU%c6
2.21 -78.0% 0.49 ± 31% turbostat.Pkg%pc3
725.00 +14.2% 827.67 ± 6% slabinfo.buffer_head.active_objs
725.00 +14.2% 827.67 ± 6% slabinfo.buffer_head.num_objs
1375 -16.0% 1155 ± 9% slabinfo.dmaengine-unmap-16.active_objs
1375 -16.0% 1155 ± 9% slabinfo.dmaengine-unmap-16.num_objs
3248 -13.4% 2812 ± 5% slabinfo.eventpoll_pwq.active_objs
3248 -13.4% 2812 ± 5% slabinfo.eventpoll_pwq.num_objs
1050 +16.0% 1218 ± 8% slabinfo.kmalloc-rcl-96.active_objs
1050 +16.0% 1218 ± 8% slabinfo.kmalloc-rcl-96.num_objs
12671 -10.5% 11335 slabinfo.proc_inode_cache.active_objs
2976 -10.0% 2677 ± 7% slabinfo.skbuff_head_cache.active_objs
3104 -11.0% 2762 ± 6% slabinfo.skbuff_head_cache.num_objs
29.66 -18.5% 24.15 ± 17% sched_debug.cfs_rq:/.load_avg.avg
10.95 -48.4% 5.65 ± 70% sched_debug.cfs_rq:/.removed.load_avg.avg
45.18 -40.0% 27.11 ± 70% sched_debug.cfs_rq:/.removed.load_avg.stddev
506.45 -48.6% 260.38 ± 70% sched_debug.cfs_rq:/.removed.runnable_sum.avg
2090 -40.3% 1248 ± 70% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
30.60 -10.9% 27.27 ± 2% sched_debug.cfs_rq:/.runnable_load_avg.max
0.20 +100.0% 0.40 sched_debug.cfs_rq:/.runnable_load_avg.min
457.60 -13.1% 397.60 ± 7% sched_debug.cfs_rq:/.util_avg.min
752.29 +17.6% 884.32 ± 7% sched_debug.cfs_rq:/.util_est_enqueued.avg
2848 +19.5% 3405 ± 6% sched_debug.cfs_rq:/.util_est_enqueued.max
0.80 +1175.0% 10.20 ± 49% sched_debug.cfs_rq:/.util_est_enqueued.min
751.27 +18.4% 889.75 ± 7% sched_debug.cfs_rq:/.util_est_enqueued.stddev
8.47 +29.4% 10.96 ± 14% sched_debug.cpu.clock.stddev
8.47 +29.4% 10.96 ± 14% sched_debug.cpu.clock_task.stddev
6256 -46.0% 3376 ± 62% sched_debug.cpu.curr->pid.min
600.00 +44.5% 867.09 ± 21% sched_debug.cpu.curr->pid.stddev
8549 -8.2% 7852 ± 2% sched_debug.cpu.nr_switches.min
46.40 -15.1% 39.40 ± 4% sched_debug.cpu.sched_goidle.min
12782 +14.6% 14652 ± 9% softirqs.CPU14.RCU
11938 +13.6% 13562 ± 6% softirqs.CPU15.RCU
11984 +16.8% 13991 ± 12% softirqs.CPU16.RCU
11812 +15.4% 13632 ± 7% softirqs.CPU17.RCU
12452 +12.9% 14059 ± 5% softirqs.CPU19.RCU
20512 -28.4% 14677 ± 20% softirqs.CPU2.RCU
144654 -13.3% 125437 softirqs.CPU2.TIMER
12586 +15.7% 14564 ± 9% softirqs.CPU21.RCU
12036 +13.2% 13623 ± 7% softirqs.CPU27.RCU
168187 -26.3% 123989 softirqs.CPU28.TIMER
12184 +19.1% 14512 ± 5% softirqs.CPU29.RCU
12427 +25.4% 15584 ± 15% softirqs.CPU3.RCU
12585 +25.6% 15813 ± 13% softirqs.CPU37.RCU
12611 +10.3% 13906 ± 6% softirqs.CPU41.RCU
12813 +11.0% 14223 ± 7% softirqs.CPU5.RCU
12393 +15.5% 14317 ± 6% softirqs.CPU51.RCU
12422 +14.0% 14160 ± 7% softirqs.CPU52.RCU
11917 +13.7% 13547 ± 7% softirqs.CPU54.RCU
11147 +20.9% 13478 ± 6% softirqs.CPU64.RCU
11727 +13.8% 13351 ± 7% softirqs.CPU65.RCU
11931 +12.8% 13459 ± 5% softirqs.CPU66.RCU
11942 +19.1% 14217 ± 9% softirqs.CPU69.RCU
11793 +15.9% 13666 ± 4% softirqs.CPU71.RCU
13203 +16.6% 15392 ± 11% softirqs.CPU8.RCU
2.82 +34.5% 3.80 ± 7% perf-stat.i.MPKI
0.60 +0.2 0.75 ± 6% perf-stat.i.branch-miss-rate%
62745856 +2.2% 64097573 perf-stat.i.branch-misses
11596 -1.7% 11396 perf-stat.i.context-switches
3.13 +2.5% 3.20 perf-stat.i.cpi
483.81 -2.8% 470.48 perf-stat.i.cpu-migrations
0.20 +0.0 0.22 ± 7% perf-stat.i.dTLB-load-miss-rate%
27796217 +9.1% 30315348 ± 6% perf-stat.i.dTLB-load-misses
25763033 -3.2% 24949798 perf-stat.i.iTLB-load-misses
2335 +2.7% 2397 perf-stat.i.instructions-per-iTLB-miss
153927 +4.5% 160907 perf-stat.i.node-load-misses
80.08 -5.1 75.03 ± 3% perf-stat.i.node-store-miss-rate%
375401 -15.1% 318586 perf-stat.i.node-store-misses
0.47 +0.0 0.48 perf-stat.overall.branch-miss-rate%
0.17 +0.0 0.19 ± 7% perf-stat.overall.dTLB-load-miss-rate%
57.22 -0.7 56.52 perf-stat.overall.iTLB-load-miss-rate%
2370 +3.0% 2441 perf-stat.overall.instructions-per-iTLB-miss
80.62 -5.5 75.14 ± 3% perf-stat.overall.node-store-miss-rate%
62589195 +2.1% 63881985 perf-stat.ps.branch-misses
11498 -1.3% 11353 perf-stat.ps.context-switches
481.71 -2.5% 469.53 perf-stat.ps.cpu-migrations
27710605 +9.0% 30208362 ± 6% perf-stat.ps.dTLB-load-misses
25683321 -3.2% 24857831 perf-stat.ps.iTLB-load-misses
153794 +4.2% 160296 perf-stat.ps.node-load-misses
374339 -15.2% 317392 perf-stat.ps.node-store-misses
909.00 -80.3% 178.67 ± 6% interrupts.41:IR-PCI-MSI.1572866-edge.eth0-TxRx-2
2438 -92.7% 177.00 interrupts.43:IR-PCI-MSI.1572868-edge.eth0-TxRx-4
1124 -81.7% 206.00 ± 11% interrupts.45:IR-PCI-MSI.1572870-edge.eth0-TxRx-6
487.00 -58.9% 200.33 ± 11% interrupts.47:IR-PCI-MSI.1572872-edge.eth0-TxRx-8
182.00 +71.4% 312.00 ± 46% interrupts.49:IR-PCI-MSI.1572874-edge.eth0-TxRx-10
225.00 -19.0% 182.33 ± 2% interrupts.50:IR-PCI-MSI.1572875-edge.eth0-TxRx-11
172.00 +343.2% 762.33 ± 81% interrupts.51:IR-PCI-MSI.1572876-edge.eth0-TxRx-12
169.00 +509.7% 1030 ±113% interrupts.52:IR-PCI-MSI.1572877-edge.eth0-TxRx-13
180.00 +30.9% 235.67 ± 11% interrupts.53:IR-PCI-MSI.1572878-edge.eth0-TxRx-14
184.00 -10.0% 165.67 ± 4% interrupts.88:IR-PCI-MSI.1572911-edge.eth0-TxRx-47
177.00 -9.8% 159.67 interrupts.96:IR-PCI-MSI.1572919-edge.eth0-TxRx-55
202928 +1.7% 206470 interrupts.CAL:Function_call_interrupts
2101 -33.6% 1395 ± 7% interrupts.CPU0.RES:Rescheduling_interrupts
2.00 +3.9e+05% 7844 interrupts.CPU1.NMI:Non-maskable_interrupts
2.00 +3.9e+05% 7844 interrupts.CPU1.PMI:Performance_monitoring_interrupts
182.00 +71.4% 312.00 ± 46% interrupts.CPU10.49:IR-PCI-MSI.1572874-edge.eth0-TxRx-10
799.00 +17.6% 939.67 ± 11% interrupts.CPU10.RES:Rescheduling_interrupts
225.00 -19.0% 182.33 ± 2% interrupts.CPU11.50:IR-PCI-MSI.1572875-edge.eth0-TxRx-11
7.00 +93419.0% 6546 ± 28% interrupts.CPU11.NMI:Non-maskable_interrupts
7.00 +93419.0% 6546 ± 28% interrupts.CPU11.PMI:Performance_monitoring_interrupts
722.00 +20.9% 872.67 ± 11% interrupts.CPU11.RES:Rescheduling_interrupts
172.00 +343.2% 762.33 ± 81% interrupts.CPU12.51:IR-PCI-MSI.1572876-edge.eth0-TxRx-12
169.00 +509.7% 1030 ±113% interrupts.CPU13.52:IR-PCI-MSI.1572877-edge.eth0-TxRx-13
180.00 +30.9% 235.67 ± 11% interrupts.CPU14.53:IR-PCI-MSI.1572878-edge.eth0-TxRx-14
3899 +101.0% 7838 interrupts.CPU16.NMI:Non-maskable_interrupts
3899 +101.0% 7838 interrupts.CPU16.PMI:Performance_monitoring_interrupts
1021 -59.1% 417.33 ± 69% interrupts.CPU16.RES:Rescheduling_interrupts
166.00 -100.0% 0.00 interrupts.CPU16.TLB:TLB_shootdowns
913.00 -57.0% 393.00 ± 68% interrupts.CPU17.RES:Rescheduling_interrupts
27.00 +1543.2% 443.67 ±107% interrupts.CPU19.RES:Rescheduling_interrupts
909.00 -80.3% 178.67 ± 6% interrupts.CPU2.41:IR-PCI-MSI.1572866-edge.eth0-TxRx-2
1092 -54.0% 502.33 ± 71% interrupts.CPU2.RES:Rescheduling_interrupts
601.00 -99.9% 0.33 ±141% interrupts.CPU2.TLB:TLB_shootdowns
0.00 +7.1e+103% 71.00 ±138% interrupts.CPU21.TLB:TLB_shootdowns
8.00 +2529.2% 210.33 ±121% interrupts.CPU22.RES:Rescheduling_interrupts
10.00 +7233.3% 733.33 ± 61% interrupts.CPU24.RES:Rescheduling_interrupts
14.00 +7542.9% 1070 ± 91% interrupts.CPU25.RES:Rescheduling_interrupts
3961 -100.0% 1.00 ± 81% interrupts.CPU27.NMI:Non-maskable_interrupts
3961 -100.0% 1.00 ± 81% interrupts.CPU27.PMI:Performance_monitoring_interrupts
170.00 +837.6% 1594 ± 59% interrupts.CPU27.RES:Rescheduling_interrupts
2141 +32.2% 2830 ± 3% interrupts.CPU28.CAL:Function_call_interrupts
13.00 +3984.6% 531.00 ± 68% interrupts.CPU28.RES:Rescheduling_interrupts
7914 -99.9% 11.33 ± 81% interrupts.CPU30.NMI:Non-maskable_interrupts
7914 -99.9% 11.33 ± 81% interrupts.CPU30.PMI:Performance_monitoring_interrupts
9.00 +3585.2% 331.67 ±119% interrupts.CPU30.RES:Rescheduling_interrupts
1200 -73.3% 320.00 ±111% interrupts.CPU31.RES:Rescheduling_interrupts
7886 -100.0% 3.00 ±118% interrupts.CPU32.NMI:Non-maskable_interrupts
7886 -100.0% 3.00 ±118% interrupts.CPU32.PMI:Performance_monitoring_interrupts
7870 -50.3% 3908 ± 81% interrupts.CPU34.NMI:Non-maskable_interrupts
7870 -50.3% 3908 ± 81% interrupts.CPU34.PMI:Performance_monitoring_interrupts
7861 -66.5% 2632 ±139% interrupts.CPU37.NMI:Non-maskable_interrupts
7861 -66.5% 2632 ±139% interrupts.CPU37.PMI:Performance_monitoring_interrupts
993.00 -48.0% 516.67 ± 70% interrupts.CPU38.RES:Rescheduling_interrupts
5.00 +1.3e+05% 6542 ± 28% interrupts.CPU39.NMI:Non-maskable_interrupts
5.00 +1.3e+05% 6542 ± 28% interrupts.CPU39.PMI:Performance_monitoring_interrupts
684.00 -43.4% 387.00 ± 72% interrupts.CPU39.RES:Rescheduling_interrupts
2438 -92.7% 177.00 interrupts.CPU4.43:IR-PCI-MSI.1572868-edge.eth0-TxRx-4
2652 +12.0% 2969 interrupts.CPU4.CAL:Function_call_interrupts
1111 -63.6% 404.33 ± 69% interrupts.CPU40.RES:Rescheduling_interrupts
1436 -57.2% 615.00 ± 68% interrupts.CPU41.RES:Rescheduling_interrupts
858.00 -39.5% 519.33 ± 69% interrupts.CPU42.RES:Rescheduling_interrupts
1233 -52.5% 586.00 ± 83% interrupts.CPU43.RES:Rescheduling_interrupts
7907 -50.5% 3914 ± 81% interrupts.CPU45.NMI:Non-maskable_interrupts
7907 -50.5% 3914 ± 81% interrupts.CPU45.PMI:Performance_monitoring_interrupts
184.00 -10.0% 165.67 ± 4% interrupts.CPU47.88:IR-PCI-MSI.1572911-edge.eth0-TxRx-47
7806 -99.9% 6.00 ±117% interrupts.CPU47.NMI:Non-maskable_interrupts
7806 -99.9% 6.00 ±117% interrupts.CPU47.PMI:Performance_monitoring_interrupts
943.00 -57.1% 404.67 ± 68% interrupts.CPU47.RES:Rescheduling_interrupts
1003 -45.4% 548.00 ± 60% interrupts.CPU48.RES:Rescheduling_interrupts
1190 -52.7% 562.67 ± 70% interrupts.CPU51.RES:Rescheduling_interrupts
1521 -62.3% 573.00 ± 73% interrupts.CPU52.RES:Rescheduling_interrupts
7820 -66.7% 2605 ±141% interrupts.CPU53.NMI:Non-maskable_interrupts
7820 -66.7% 2605 ±141% interrupts.CPU53.PMI:Performance_monitoring_interrupts
177.00 -9.8% 159.67 interrupts.CPU55.96:IR-PCI-MSI.1572919-edge.eth0-TxRx-55
5.00 +78053.3% 3907 ± 81% interrupts.CPU55.NMI:Non-maskable_interrupts
5.00 +78053.3% 3907 ± 81% interrupts.CPU55.PMI:Performance_monitoring_interrupts
8.00 +3287.5% 271.00 ±123% interrupts.CPU55.RES:Rescheduling_interrupts
9.00 +3040.7% 282.67 ±123% interrupts.CPU56.RES:Rescheduling_interrupts
2509 +18.2% 2965 ± 5% interrupts.CPU57.CAL:Function_call_interrupts
5.00 +52993.3% 2654 ±138% interrupts.CPU57.NMI:Non-maskable_interrupts
5.00 +52993.3% 2654 ±138% interrupts.CPU57.PMI:Performance_monitoring_interrupts
18.00 +1816.7% 345.00 ±117% interrupts.CPU57.RES:Rescheduling_interrupts
9.00 +2629.6% 245.67 ±129% interrupts.CPU58.RES:Rescheduling_interrupts
1124 -81.7% 206.00 ± 11% interrupts.CPU6.45:IR-PCI-MSI.1572870-edge.eth0-TxRx-6
2652 +14.1% 3026 ± 3% interrupts.CPU6.CAL:Function_call_interrupts
12.00 +2219.4% 278.33 ±124% interrupts.CPU60.RES:Rescheduling_interrupts
8.00 +19716.7% 1585 ± 79% interrupts.CPU61.RES:Rescheduling_interrupts
1.00 +5.3e+05% 5254 ± 70% interrupts.CPU62.NMI:Non-maskable_interrupts
1.00 +5.3e+05% 5254 ± 70% interrupts.CPU62.PMI:Performance_monitoring_interrupts
14.00 +2102.4% 308.33 ±123% interrupts.CPU62.RES:Rescheduling_interrupts
23.00 +34085.5% 7862 interrupts.CPU63.NMI:Non-maskable_interrupts
23.00 +34085.5% 7862 interrupts.CPU63.PMI:Performance_monitoring_interrupts
10.00 +17923.3% 1802 ± 29% interrupts.CPU63.RES:Rescheduling_interrupts
3931 +99.5% 7841 interrupts.CPU64.NMI:Non-maskable_interrupts
3931 +99.5% 7841 interrupts.CPU64.PMI:Performance_monitoring_interrupts
12.00 +5933.3% 724.00 ± 75% interrupts.CPU64.RES:Rescheduling_interrupts
3042 -22.9% 2345 ± 19% interrupts.CPU65.CAL:Function_call_interrupts
1.00 +6.5e+05% 6531 ± 28% interrupts.CPU66.NMI:Non-maskable_interrupts
1.00 +6.5e+05% 6531 ± 28% interrupts.CPU66.PMI:Performance_monitoring_interrupts
8.00 +5854.2% 476.33 ± 96% interrupts.CPU66.RES:Rescheduling_interrupts
907.00 +211.3% 2823 ± 7% interrupts.CPU67.CAL:Function_call_interrupts
1885 -74.2% 487.00 ±118% interrupts.CPU67.RES:Rescheduling_interrupts
1.00 +6.5e+05% 6522 ± 28% interrupts.CPU68.NMI:Non-maskable_interrupts
1.00 +6.5e+05% 6522 ± 28% interrupts.CPU68.PMI:Performance_monitoring_interrupts
2279 -78.5% 491.00 ±119% interrupts.CPU69.RES:Rescheduling_interrupts
3.00 +88211.1% 2649 ±139% interrupts.CPU70.NMI:Non-maskable_interrupts
3.00 +88211.1% 2649 ±139% interrupts.CPU70.PMI:Performance_monitoring_interrupts
487.00 -58.9% 200.33 ± 11% interrupts.CPU8.47:IR-PCI-MSI.1572872-edge.eth0-TxRx-8
207091 +30.2% 269614 ± 8% interrupts.NMI:Non-maskable_interrupts
207091 +30.2% 269614 ± 8% interrupts.PMI:Performance_monitoring_interrupts
16.77 -0.1 16.66 perf-profile.calltrace.cycles-pp.lru_add_drain_cpu.lru_add_drain.unmap_region.__do_munmap.__x64_sys_brk
16.77 -0.1 16.67 perf-profile.calltrace.cycles-pp.lru_add_drain.unmap_region.__do_munmap.__x64_sys_brk.do_syscall_64
16.75 -0.1 16.65 perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.unmap_region.__do_munmap
16.35 -0.1 16.27 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain
16.39 -0.1 16.32 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.unmap_region
0.93 -0.1 0.88 perf-profile.calltrace.cycles-pp.get_unmapped_area.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.68 -0.0 0.64 ± 2% perf-profile.calltrace.cycles-pp.mem_cgroup_try_charge_delay.do_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
5.70 -0.0 5.66 perf-profile.calltrace.cycles-pp.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
25.39 -0.0 25.36 perf-profile.calltrace.cycles-pp.page_fault
25.24 -0.0 25.21 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
1.81 -0.0 1.78 perf-profile.calltrace.cycles-pp.prep_new_page.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.do_anonymous_page
0.76 +0.0 0.77 perf-profile.calltrace.cycles-pp.perf_iterate_sb.perf_event_mmap.do_brk_flags.__x64_sys_brk.do_syscall_64
1.53 +0.0 1.55 perf-profile.calltrace.cycles-pp.vma_merge.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.90 +0.0 0.92 perf-profile.calltrace.cycles-pp.__pagevec_lru_add_fn.pagevec_lru_move_fn.__lru_cache_add.do_anonymous_page.__handle_mm_fault
1.61 +0.0 1.63 perf-profile.calltrace.cycles-pp.perf_event_mmap.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.17 +0.0 1.19 perf-profile.calltrace.cycles-pp.__vma_adjust.vma_merge.do_brk_flags.__x64_sys_brk.do_syscall_64
23.16 +0.0 23.20 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
1.06 +0.1 1.11 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret
2.31 +0.1 2.36 perf-profile.calltrace.cycles-pp.flush_tlb_func_common.flush_tlb_mm_range.tlb_flush_mmu.tlb_finish_mmu.unmap_region
2.28 +0.1 2.34 perf-profile.calltrace.cycles-pp.native_flush_tlb_one_user.flush_tlb_func_common.flush_tlb_mm_range.tlb_flush_mmu.tlb_finish_mmu
2.36 +0.1 2.43 perf-profile.calltrace.cycles-pp.flush_tlb_mm_range.tlb_flush_mmu.tlb_finish_mmu.unmap_region.__do_munmap
15.10 +0.1 15.18 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.do_anonymous_page.__handle_mm_fault
16.46 +0.1 16.54 perf-profile.calltrace.cycles-pp.__lru_cache_add.do_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
16.34 +0.1 16.42 perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.do_anonymous_page.__handle_mm_fault.handle_mm_fault
54.51 +0.2 54.66 perf-profile.calltrace.cycles-pp.unmap_region.__do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
34.24 +0.2 34.44 perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu.tlb_finish_mmu.unmap_region.__do_munmap
32.27 +0.2 32.47 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu.tlb_finish_mmu.unmap_region
32.18 +0.2 32.39 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu.tlb_finish_mmu
36.84 +0.3 37.10 perf-profile.calltrace.cycles-pp.tlb_finish_mmu.unmap_region.__do_munmap.__x64_sys_brk.do_syscall_64
36.77 +0.3 37.03 perf-profile.calltrace.cycles-pp.tlb_flush_mmu.tlb_finish_mmu.unmap_region.__do_munmap.__x64_sys_brk
16.81 -0.1 16.70 perf-profile.children.cycles-pp.lru_add_drain_cpu
16.81 -0.1 16.70 perf-profile.children.cycles-pp.lru_add_drain
0.89 -0.1 0.83 perf-profile.children.cycles-pp.find_vma
0.48 -0.0 0.43 perf-profile.children.cycles-pp.security_mmap_addr
0.94 -0.0 0.89 perf-profile.children.cycles-pp.get_unmapped_area
5.72 -0.0 5.68 perf-profile.children.cycles-pp.do_brk_flags
0.69 -0.0 0.65 ± 3% perf-profile.children.cycles-pp.mem_cgroup_try_charge_delay
0.48 -0.0 0.44 ± 4% perf-profile.children.cycles-pp.mem_cgroup_try_charge
0.20 -0.0 0.16 ± 5% perf-profile.children.cycles-pp.page_add_new_anon_rmap
0.18 -0.0 0.14 ± 3% perf-profile.children.cycles-pp._raw_spin_lock
25.42 -0.0 25.39 perf-profile.children.cycles-pp.page_fault
0.44 -0.0 0.41 ± 2% perf-profile.children.cycles-pp.apic_timer_interrupt
1.82 -0.0 1.79 perf-profile.children.cycles-pp.prep_new_page
25.25 -0.0 25.22 perf-profile.children.cycles-pp.do_page_fault
0.37 -0.0 0.34 ± 2% perf-profile.children.cycles-pp.__lock_text_start
0.16 -0.0 0.13 ± 3% perf-profile.children.cycles-pp.mem_cgroup_page_lruvec
0.06 -0.0 0.03 ± 70% perf-profile.children.cycles-pp.kmem_cache_alloc
0.22 -0.0 0.19 ± 2% perf-profile.children.cycles-pp.__mod_node_page_state
0.28 -0.0 0.26 perf-profile.children.cycles-pp.try_charge
0.28 -0.0 0.26 ± 3% perf-profile.children.cycles-pp._cond_resched
0.29 -0.0 0.27 perf-profile.children.cycles-pp.cap_vm_enough_memory
0.65 -0.0 0.63 perf-profile.children.cycles-pp.___perf_sw_event
0.32 -0.0 0.30 ± 2% perf-profile.children.cycles-pp.__list_del_entry_valid
0.32 -0.0 0.30 ± 2% perf-profile.children.cycles-pp.hrtimer_interrupt
0.24 -0.0 0.22 ± 3% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.16 -0.0 0.14 ± 5% perf-profile.children.cycles-pp.tick_sched_handle
0.31 -0.0 0.29 ± 4% perf-profile.children.cycles-pp.strlcpy
0.38 -0.0 0.36 perf-profile.children.cycles-pp.smp_apic_timer_interrupt
0.14 -0.0 0.12 ± 3% perf-profile.children.cycles-pp.rcu_all_qs
0.39 -0.0 0.37 ± 2% perf-profile.children.cycles-pp.vmacache_find
0.28 -0.0 0.26 perf-profile.children.cycles-pp.sync_regs
0.17 -0.0 0.15 ± 6% perf-profile.children.cycles-pp.tick_sched_timer
0.15 -0.0 0.13 ± 7% perf-profile.children.cycles-pp.update_process_times
0.16 -0.0 0.14 ± 3% perf-profile.children.cycles-pp.cap_mmap_addr
1.37 -0.0 1.35 perf-profile.children.cycles-pp.native_irq_return_iret
0.07 -0.0 0.06 ± 8% perf-profile.children.cycles-pp.userfaultfd_unmap_complete
0.13 -0.0 0.12 perf-profile.children.cycles-pp.selinux_mmap_addr
0.78 +0.0 0.79 perf-profile.children.cycles-pp.perf_iterate_sb
0.09 +0.0 0.10 perf-profile.children.cycles-pp.cap_capable
0.10 +0.0 0.11 ± 4% perf-profile.children.cycles-pp.free_unref_page_prepare
1.55 +0.0 1.56 perf-profile.children.cycles-pp.vma_merge
0.28 +0.0 0.30 ± 3% perf-profile.children.cycles-pp.__mod_lruvec_state
0.43 +0.0 0.45 perf-profile.children.cycles-pp.free_unref_page_list
1.65 +0.0 1.67 perf-profile.children.cycles-pp.perf_event_mmap
0.17 +0.0 0.19 perf-profile.children.cycles-pp.__count_memcg_events
0.36 +0.0 0.38 perf-profile.children.cycles-pp.mem_cgroup_commit_charge
0.05 +0.0 0.08 ± 6% perf-profile.children.cycles-pp.strlen
0.06 +0.0 0.09 ± 28% perf-profile.children.cycles-pp.mem_cgroup_from_task
1.23 +0.0 1.27 perf-profile.children.cycles-pp.syscall_return_via_sysret
1.29 +0.0 1.33 perf-profile.children.cycles-pp.__vma_adjust
23.20 +0.0 23.24 perf-profile.children.cycles-pp.handle_mm_fault
0.00 +0.1 0.05 perf-profile.children.cycles-pp.page_mapping
0.00 +0.1 0.05 perf-profile.children.cycles-pp.vmacache_update
0.00 +0.1 0.05 perf-profile.children.cycles-pp.vm_normal_page
2.32 +0.1 2.37 perf-profile.children.cycles-pp.flush_tlb_func_common
2.37 +0.1 2.43 perf-profile.children.cycles-pp.flush_tlb_mm_range
2.29 +0.1 2.35 perf-profile.children.cycles-pp.native_flush_tlb_one_user
16.47 +0.1 16.55 perf-profile.children.cycles-pp.__lru_cache_add
54.52 +0.1 54.67 perf-profile.children.cycles-pp.unmap_region
34.48 +0.2 34.69 perf-profile.children.cycles-pp.release_pages
63.81 +0.2 64.04 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
63.64 +0.2 63.87 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
36.79 +0.3 37.05 perf-profile.children.cycles-pp.tlb_flush_mmu
36.85 +0.3 37.12 perf-profile.children.cycles-pp.tlb_finish_mmu
0.45 -0.0 0.40 ± 3% perf-profile.self.cycles-pp.find_vma
0.19 -0.0 0.16 ± 3% perf-profile.self.cycles-pp.cap_vm_enough_memory
0.28 -0.0 0.25 ± 3% perf-profile.self.cycles-pp.try_charge
0.06 -0.0 0.03 ± 70% perf-profile.self.cycles-pp.userfaultfd_unmap_complete
0.12 -0.0 0.09 ± 5% perf-profile.self.cycles-pp.rcu_all_qs
0.14 -0.0 0.12 ± 4% perf-profile.self.cycles-pp.strlcpy
0.22 -0.0 0.20 ± 2% perf-profile.self.cycles-pp.vma_merge
0.21 -0.0 0.19 ± 2% perf-profile.self.cycles-pp.__mod_node_page_state
0.12 -0.0 0.10 perf-profile.self.cycles-pp.selinux_mmap_addr
0.56 -0.0 0.54 perf-profile.self.cycles-pp.___perf_sw_event
0.38 -0.0 0.36 ± 2% perf-profile.self.cycles-pp.vmacache_find
0.14 -0.0 0.12 ± 3% perf-profile.self.cycles-pp.cap_mmap_addr
0.15 -0.0 0.13 ± 3% perf-profile.self.cycles-pp.mem_cgroup_throttle_swaprate
0.15 -0.0 0.13 ± 3% perf-profile.self.cycles-pp.mem_cgroup_page_lruvec
1.37 -0.0 1.35 perf-profile.self.cycles-pp.native_irq_return_iret
0.25 -0.0 0.24 perf-profile.self.cycles-pp.sync_regs
0.13 -0.0 0.12 perf-profile.self.cycles-pp.mem_cgroup_try_charge
0.15 +0.0 0.16 perf-profile.self.cycles-pp.down_write_killable
0.53 +0.0 0.54 perf-profile.self.cycles-pp.__alloc_pages_nodemask
0.11 +0.0 0.12 ± 3% perf-profile.self.cycles-pp.mem_cgroup_commit_charge
0.09 +0.0 0.10 ± 4% perf-profile.self.cycles-pp._cond_resched
0.09 +0.0 0.10 ± 4% perf-profile.self.cycles-pp.free_unref_page_prepare
0.47 +0.0 0.48 perf-profile.self.cycles-pp.perf_event_mmap
0.13 +0.0 0.15 ± 3% perf-profile.self.cycles-pp.alloc_pages_vma
0.55 +0.0 0.57 perf-profile.self.cycles-pp.__handle_mm_fault
0.14 +0.0 0.16 ± 5% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.16 +0.0 0.18 perf-profile.self.cycles-pp.__count_memcg_events
0.15 +0.0 0.17 ± 2% perf-profile.self.cycles-pp.cred_has_capability
0.05 +0.0 0.07 ± 6% perf-profile.self.cycles-pp.strlen
1.22 +0.0 1.26 perf-profile.self.cycles-pp.syscall_return_via_sysret
0.00 +0.1 0.05 perf-profile.self.cycles-pp.page_mapping
2.29 +0.1 2.35 perf-profile.self.cycles-pp.native_flush_tlb_one_user
63.64 +0.2 63.87 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
***************************************************************************************************
lkp-ivb-2ep1: 48 threads Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz with 64G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/1000t/debian-x86_64-2019-05-14.cgz/300s/lkp-ivb-2ep1/page_test/reaim/0x42e
commit:
f17f33d34b ("mm/lru: add per lruvec lock for memcg")
3145e78472 ("mm/lruvec: add irqsave flags into lruvec struct")
f17f33d34bfab978 3145e78472f7ad746e15180092a
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:2 50% 1:4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
1:2 -50% :4 dmesg.WARNING:at_ip___perf_sw_event/0x
1:2 -50% :4 dmesg.WARNING:at_ip_perf_event_mmap_output/0x
3:2 166% 6:4 perf-profile.calltrace.cycles-pp.error_entry.brk
3:2 200% 7:4 perf-profile.children.cycles-pp.error_entry
2:2 159% 6:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
12505 ± 24% -42.7% 7168 ± 53% numa-numastat.node1.other_node
12168 ± 66% -78.8% 2574 ± 24% cpuidle.POLL.time
4032 ± 2% -38.8% 2469 ± 27% cpuidle.POLL.usage
8.09 ± 4% -1.0 7.14 ± 12% turbostat.C6%
5.26 ± 5% -18.8% 4.27 ± 20% turbostat.CPU%c6
14372 ± 5% -60.8% 5632 ± 98% numa-meminfo.node0.Inactive
14212 ± 5% -61.2% 5512 ±100% numa-meminfo.node0.Inactive(anon)
14678 -24.3% 11116 ± 18% numa-meminfo.node0.Mapped
25783 ± 3% -47.3% 13589 ± 61% numa-meminfo.node0.Shmem
22528 -0.8% 22345 proc-vmstat.nr_kernel_stack
7457 -4.8% 7098 ± 3% proc-vmstat.nr_shmem
255669 ± 6% -14.4% 218783 ± 6% proc-vmstat.numa_pte_updates
5670 -12.3% 4970 ± 8% proc-vmstat.pgactivate
344.16 ± 8% -20.8% 272.68 ± 15% sched_debug.cfs_rq:/.exec_clock.stddev
2880 ± 4% +17.9% 3397 ± 6% sched_debug.cfs_rq:/.util_est_enqueued.max
100.25 ± 18% -66.8% 33.25 ± 87% sched_debug.cfs_rq:/.util_est_enqueued.min
7805 ± 7% -39.7% 4703 ± 49% sched_debug.cpu.curr->pid.min
3548 ± 5% -61.2% 1377 ±100% numa-vmstat.node0.nr_inactive_anon
3712 -23.0% 2860 ± 18% numa-vmstat.node0.nr_mapped
6449 ± 3% -47.3% 3396 ± 61% numa-vmstat.node0.nr_shmem
3548 ± 5% -61.2% 1377 ±100% numa-vmstat.node0.nr_zone_inactive_anon
3428 ± 84% +168.2% 9192 ± 42% numa-vmstat.node0.numa_other
1401 ± 4% -7.2% 1300 ± 7% slabinfo.UNIX.active_objs
1401 ± 4% -7.2% 1300 ± 7% slabinfo.UNIX.num_objs
11019 +12.1% 12351 ± 5% slabinfo.kmalloc-512.active_objs
11077 +13.4% 12562 ± 4% slabinfo.kmalloc-512.num_objs
738.50 ± 3% +13.9% 841.00 ± 10% slabinfo.mnt_cache.active_objs
738.50 ± 3% +13.9% 841.00 ± 10% slabinfo.mnt_cache.num_objs
7241 ± 6% +15.6% 8369 ± 4% slabinfo.pid.active_objs
2.33 ± 2% -0.1 2.18 ± 4% perf-stat.i.cache-miss-rate%
54912 ± 2% -4.5% 52425 ± 4% perf-stat.i.cycles-between-cache-misses
22.24 ± 2% -1.0 21.28 ± 3% perf-stat.i.node-store-miss-rate%
494199 +1.4% 501246 perf-stat.i.node-store-misses
0.91 ± 2% +0.1 0.96 ± 2% perf-stat.overall.cache-miss-rate%
38047 ± 2% -5.2% 36060 ± 3% perf-stat.overall.cycles-between-cache-misses
19.05 ± 3% -1.1 17.92 ± 3% perf-stat.overall.node-store-miss-rate%
492484 +1.4% 499536 perf-stat.ps.node-store-misses
25275 ± 7% -14.3% 21655 ± 9% softirqs.CPU0.RCU
25522 ± 4% -14.1% 21912 ± 9% softirqs.CPU1.RCU
23883 ± 4% -11.5% 21143 ± 9% softirqs.CPU10.RCU
24251 ± 6% -13.2% 21056 ± 10% softirqs.CPU11.RCU
24634 ± 6% -11.2% 21866 ± 10% softirqs.CPU16.RCU
24584 -12.7% 21458 ± 9% softirqs.CPU19.RCU
24301 ± 5% -9.5% 21992 ± 10% softirqs.CPU20.RCU
23809 ± 5% -10.9% 21222 ± 11% softirqs.CPU22.RCU
24488 ± 7% -14.3% 20986 ± 10% softirqs.CPU24.RCU
25382 ± 10% -15.7% 21408 ± 12% softirqs.CPU25.RCU
29090 ± 6% -20.0% 23270 ± 14% softirqs.CPU26.RCU
25549 ± 7% -13.8% 22021 ± 9% softirqs.CPU27.RCU
24891 ± 6% -11.6% 22004 ± 9% softirqs.CPU29.RCU
26001 ± 8% -13.1% 22604 ± 9% softirqs.CPU3.RCU
24479 ± 6% -11.8% 21583 ± 9% softirqs.CPU30.RCU
24151 ± 7% -10.2% 21686 ± 11% softirqs.CPU31.RCU
24263 ± 8% -14.0% 20877 ± 9% softirqs.CPU32.RCU
23429 ± 6% -9.2% 21276 ± 10% softirqs.CPU33.RCU
23260 ± 4% -11.1% 20680 ± 10% softirqs.CPU35.RCU
24114 ± 6% -11.0% 21467 ± 11% softirqs.CPU38.RCU
24513 ± 6% -9.5% 22182 ± 11% softirqs.CPU4.RCU
24006 ± 7% -10.9% 21401 ± 10% softirqs.CPU40.RCU
23472 ± 6% -9.5% 21230 ± 10% softirqs.CPU43.RCU
23596 ± 6% -8.8% 21513 ± 10% softirqs.CPU44.RCU
24855 ± 6% -12.4% 21785 ± 10% softirqs.CPU5.RCU
25005 ± 4% -12.7% 21838 ± 10% softirqs.CPU6.RCU
24678 ± 3% -12.0% 21716 ± 10% softirqs.CPU7.RCU
24303 ± 5% -7.5% 22478 ± 7% softirqs.CPU8.RCU
24102 ± 6% -11.3% 21390 ± 10% softirqs.CPU9.RCU
1167517 ± 6% -11.0% 1038997 ± 10% softirqs.RCU
359.50 ± 44% -50.8% 177.00 ± 4% interrupts.39:PCI-MSI.2621445-edge.eth0-TxRx-4
2327 ± 19% -73.2% 624.50 ± 32% interrupts.CPU0.RES:Rescheduling_interrupts
5347 ± 30% -83.7% 869.50 ±173% interrupts.CPU10.NMI:Non-maskable_interrupts
5347 ± 30% -83.7% 869.50 ±173% interrupts.CPU10.PMI:Performance_monitoring_interrupts
7191 ± 2% -99.8% 13.00 ±173% interrupts.CPU13.NMI:Non-maskable_interrupts
7191 ± 2% -99.8% 13.00 ±173% interrupts.CPU13.PMI:Performance_monitoring_interrupts
3201 ± 2% -20.8% 2535 ± 27% interrupts.CPU14.CAL:Function_call_interrupts
7180 ± 2% -99.6% 26.00 ±173% interrupts.CPU14.NMI:Non-maskable_interrupts
7180 ± 2% -99.6% 26.00 ±173% interrupts.CPU14.PMI:Performance_monitoring_interrupts
5466 ± 35% -83.3% 911.75 ±172% interrupts.CPU15.NMI:Non-maskable_interrupts
5466 ± 35% -83.3% 911.75 ±172% interrupts.CPU15.PMI:Performance_monitoring_interrupts
5355 ± 31% -83.1% 906.50 ±173% interrupts.CPU16.NMI:Non-maskable_interrupts
5355 ± 31% -83.1% 906.50 ±173% interrupts.CPU16.PMI:Performance_monitoring_interrupts
1320 ± 3% -29.6% 929.25 ± 57% interrupts.CPU18.RES:Rescheduling_interrupts
5363 ± 30% -99.6% 20.50 ±151% interrupts.CPU20.NMI:Non-maskable_interrupts
5363 ± 30% -99.6% 20.50 ±151% interrupts.CPU20.PMI:Performance_monitoring_interrupts
1735 ± 44% -88.3% 202.25 ±152% interrupts.CPU24.RES:Rescheduling_interrupts
3557 ± 97% -50.7% 1752 ±170% interrupts.CPU25.NMI:Non-maskable_interrupts
3557 ± 97% -50.7% 1752 ±170% interrupts.CPU25.PMI:Performance_monitoring_interrupts
1281 ± 25% -72.4% 353.75 ±164% interrupts.CPU25.RES:Rescheduling_interrupts
5478 ± 35% -67.2% 1798 ± 98% interrupts.CPU26.NMI:Non-maskable_interrupts
5478 ± 35% -67.2% 1798 ± 98% interrupts.CPU26.PMI:Performance_monitoring_interrupts
359.50 ± 44% -50.8% 177.00 ± 4% interrupts.CPU28.39:PCI-MSI.2621445-edge.eth0-TxRx-4
59.00 ± 98% +7561.9% 4520 ± 34% interrupts.CPU37.NMI:Non-maskable_interrupts
59.00 ± 98% +7561.9% 4520 ± 34% interrupts.CPU37.PMI:Performance_monitoring_interrupts
1641 -37.6% 1024 ± 56% interrupts.CPU38.RES:Rescheduling_interrupts
1861 ± 18% -40.3% 1110 ± 57% interrupts.CPU39.RES:Rescheduling_interrupts
3520 ± 99% -100.0% 0.00 interrupts.CPU43.NMI:Non-maskable_interrupts
3520 ± 99% -100.0% 0.00 interrupts.CPU43.PMI:Performance_monitoring_interrupts
2149 ± 9% -51.8% 1035 ± 57% interrupts.CPU43.RES:Rescheduling_interrupts
137.50 ±100% +3846.7% 5426 ± 33% interrupts.CPU44.NMI:Non-maskable_interrupts
137.50 ±100% +3846.7% 5426 ± 33% interrupts.CPU44.PMI:Performance_monitoring_interrupts
1708 -37.4% 1069 ± 58% interrupts.CPU44.RES:Rescheduling_interrupts
148247 -14.4% 126923 ± 10% interrupts.NMI:Non-maskable_interrupts
148247 -14.4% 126923 ± 10% interrupts.PMI:Performance_monitoring_interrupts
0.58 -0.2 0.40 ± 57% perf-profile.calltrace.cycles-pp.__perf_sw_event.__do_page_fault.do_page_fault.page_fault.page_test
5.55 -0.1 5.43 perf-profile.calltrace.cycles-pp.alloc_pages_vma.do_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
5.09 -0.1 4.97 perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.do_anonymous_page.__handle_mm_fault.handle_mm_fault
4.25 -0.1 4.13 perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.do_anonymous_page.__handle_mm_fault
2.96 -0.1 2.87 perf-profile.calltrace.cycles-pp.prep_new_page.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.do_anonymous_page
2.83 -0.1 2.74 perf-profile.calltrace.cycles-pp.clear_page_erms.prep_new_page.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma
1.29 -0.1 1.22 ± 2% perf-profile.calltrace.cycles-pp.get_unmapped_area.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.75 -0.1 0.69 ± 2% perf-profile.calltrace.cycles-pp.mem_cgroup_try_charge_delay.do_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.69 -0.0 0.65 ± 2% perf-profile.calltrace.cycles-pp.security_mmap_addr.get_unmapped_area.do_brk_flags.__x64_sys_brk.do_syscall_64
1.31 +0.1 1.37 perf-profile.calltrace.cycles-pp.__pagevec_lru_add_fn.pagevec_lru_move_fn.__lru_cache_add.do_anonymous_page.__handle_mm_fault
2.27 +0.1 2.40 perf-profile.calltrace.cycles-pp.perf_event_mmap.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.96 ± 5% +0.1 1.09 perf-profile.calltrace.cycles-pp.selinux_vm_enough_memory.security_vm_enough_memory_mm.do_brk_flags.__x64_sys_brk.do_syscall_64
1.40 ± 4% +0.1 1.54 perf-profile.calltrace.cycles-pp.security_vm_enough_memory_mm.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
8.17 +0.2 8.32 perf-profile.calltrace.cycles-pp.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
10.20 +0.2 10.43 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.do_anonymous_page.__handle_mm_fault
10.14 +0.2 10.39 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.do_anonymous_page
12.15 +0.3 12.44 perf-profile.calltrace.cycles-pp.__lru_cache_add.do_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
12.00 +0.3 12.29 perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.do_anonymous_page.__handle_mm_fault.handle_mm_fault
5.62 -0.2 5.47 perf-profile.children.cycles-pp.alloc_pages_vma
5.29 -0.1 5.15 perf-profile.children.cycles-pp.__alloc_pages_nodemask
4.38 -0.1 4.26 perf-profile.children.cycles-pp.get_page_from_freelist
0.91 -0.1 0.82 ± 5% perf-profile.children.cycles-pp.___perf_sw_event
2.84 -0.1 2.75 perf-profile.children.cycles-pp.clear_page_erms
2.97 -0.1 2.88 perf-profile.children.cycles-pp.prep_new_page
1.10 -0.1 1.02 ± 4% perf-profile.children.cycles-pp.__perf_sw_event
0.78 -0.1 0.71 ± 2% perf-profile.children.cycles-pp.mem_cgroup_try_charge_delay
0.29 ± 5% -0.1 0.22 ± 3% perf-profile.children.cycles-pp.mem_cgroup_throttle_swaprate
1.31 -0.1 1.25 ± 2% perf-profile.children.cycles-pp.get_unmapped_area
0.70 -0.0 0.66 ± 2% perf-profile.children.cycles-pp.security_mmap_addr
0.74 ± 2% -0.0 0.70 ± 2% perf-profile.children.cycles-pp.free_unref_page_list
0.41 ± 3% -0.0 0.36 perf-profile.children.cycles-pp.down_write
0.10 ± 5% -0.0 0.06 ± 7% perf-profile.children.cycles-pp.get_vma_policy
0.06 -0.0 0.03 ±100% perf-profile.children.cycles-pp.__x86_indirect_thunk_r12
1.60 -0.0 1.57 perf-profile.children.cycles-pp.native_irq_return_iret
0.21 -0.0 0.18 ± 4% perf-profile.children.cycles-pp.__sbrk
0.33 ± 4% -0.0 0.30 perf-profile.children.cycles-pp.__might_sleep
0.41 ± 2% -0.0 0.39 ± 2% perf-profile.children.cycles-pp.arch_get_unmapped_area_topdown
0.11 -0.0 0.09 ± 7% perf-profile.children.cycles-pp.percpu_counter_add_batch
0.24 ± 4% -0.0 0.22 perf-profile.children.cycles-pp.__tlb_remove_page_size
0.15 ± 3% -0.0 0.14 ± 6% perf-profile.children.cycles-pp.__vm_enough_memory
0.20 -0.0 0.18 ± 2% perf-profile.children.cycles-pp.memcpy_erms
0.08 ± 6% -0.0 0.06 ± 6% perf-profile.children.cycles-pp.__page_set_anon_rmap
0.08 ± 5% -0.0 0.07 ± 5% perf-profile.children.cycles-pp.mpx_unmapped_area_check
0.10 +0.0 0.11 ± 4% perf-profile.children.cycles-pp.unlink_anon_vmas
0.14 ± 3% +0.0 0.17 ± 8% perf-profile.children.cycles-pp.vprintk_emit
0.10 ± 5% +0.0 0.13 ± 3% perf-profile.children.cycles-pp.strlen
0.12 ± 12% +0.0 0.17 ± 10% perf-profile.children.cycles-pp.serial8250_console_write
0.12 ± 8% +0.0 0.16 ± 9% perf-profile.children.cycles-pp.serial8250_console_putchar
0.12 ± 8% +0.0 0.16 ± 11% perf-profile.children.cycles-pp.wait_for_xmitr
0.13 ± 7% +0.0 0.18 ± 9% perf-profile.children.cycles-pp.console_unlock
1.60 +0.1 1.66 perf-profile.children.cycles-pp.__pagevec_lru_add_fn
2.33 +0.1 2.45 perf-profile.children.cycles-pp.perf_event_mmap
0.97 ± 6% +0.1 1.10 ± 2% perf-profile.children.cycles-pp.selinux_vm_enough_memory
1.40 ± 4% +0.1 1.54 perf-profile.children.cycles-pp.security_vm_enough_memory_mm
8.20 +0.2 8.35 perf-profile.children.cycles-pp.do_brk_flags
12.16 +0.3 12.45 perf-profile.children.cycles-pp.__lru_cache_add
24.03 +0.5 24.57 perf-profile.children.cycles-pp.pagevec_lru_move_fn
2.83 -0.1 2.74 perf-profile.self.cycles-pp.clear_page_erms
0.80 -0.1 0.71 ± 6% perf-profile.self.cycles-pp.___perf_sw_event
0.22 ± 6% -0.1 0.17 ± 4% perf-profile.self.cycles-pp.mem_cgroup_throttle_swaprate
0.34 ± 4% -0.0 0.30 ± 4% perf-profile.self.cycles-pp.security_mmap_addr
1.60 -0.0 1.56 perf-profile.self.cycles-pp.native_irq_return_iret
0.88 ± 2% -0.0 0.84 ± 2% perf-profile.self.cycles-pp.__handle_mm_fault
0.14 ± 3% -0.0 0.11 ± 10% perf-profile.self.cycles-pp.__sbrk
0.08 ± 6% -0.0 0.05 ± 58% perf-profile.self.cycles-pp.try_charge
0.23 -0.0 0.20 ± 7% perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
0.21 ± 2% -0.0 0.18 ± 3% perf-profile.self.cycles-pp.alloc_pages_vma
0.17 ± 2% -0.0 0.15 ± 12% perf-profile.self.cycles-pp.fpregs_assert_state_consistent
0.11 -0.0 0.09 ± 5% perf-profile.self.cycles-pp.percpu_counter_add_batch
0.29 ± 3% -0.0 0.27 ± 3% perf-profile.self.cycles-pp.__might_sleep
0.21 ± 4% -0.0 0.19 ± 4% perf-profile.self.cycles-pp.mem_cgroup_try_charge
0.53 -0.0 0.51 perf-profile.self.cycles-pp.__x64_sys_brk
0.21 -0.0 0.19 ± 6% perf-profile.self.cycles-pp.page_fault
0.08 ± 6% -0.0 0.06 ± 11% perf-profile.self.cycles-pp.__page_set_anon_rmap
0.08 ± 5% -0.0 0.07 ± 5% perf-profile.self.cycles-pp.do_page_fault
0.22 ± 4% +0.0 0.24 ± 2% perf-profile.self.cycles-pp.cred_has_capability
0.08 ± 5% +0.0 0.12 ± 3% perf-profile.self.cycles-pp.strlen
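For reference, the %change column in the comparison tables above is the relative difference between the base-commit mean (left column) and the patched-commit mean (right column), and the ± values are the per-commit relative standard deviation across runs. A minimal sketch of the arithmetic, checked against the numa-numastat.node1.other_node row above (the helper name is illustrative, not part of lkp-tests):

```python
def pct_change(base: float, patched: float) -> float:
    """Relative change of the patched-commit mean vs. the base-commit mean, in percent."""
    return (patched - base) / base * 100.0

# Values taken from the comparison table above:
# 12505 ± 24%   -42.7%   7168 ± 53%   numa-numastat.node1.other_node
base, patched = 12505, 7168
print(f"{pct_change(base, patched):+.1f}%")  # → -42.7%
```

The same formula reproduces the headline regressions elsewhere in these reports (e.g. 10976333 → 8116500 ops_per_sec gives -26.1%).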
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
d4f4de5e5e: aim9.dir_rtns_1.ops_per_sec -26.1% regression
by kernel test robot
Greetings,
FYI, we noticed a -26.1% regression of aim9.dir_rtns_1.ops_per_sec due to commit:
commit: d4f4de5e5ef8efde85febb6876cd3c8ab1631999 ("Fix the locking in dcache_readdir() and friends")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: aim9
on test machine: 4 threads Intel(R) Core(TM) i3-3220 CPU @ 3.30GHz with 4G memory
with following parameters:
testtime: 5s
test: all
cpufreq_governor: performance
ucode: 0x21
test-description: Suite IX is the "AIM Independent Resource Benchmark:" the famous synthetic benchmark.
test-url: https://sourceforge.net/projects/aimbench/files/aim-suite9/
In addition to that, the commit also has significant impact on the following tests:
+------------------+------------------------------------------------------------------+
| testcase: change | aim9: aim9.dir_rtns_1.ops_per_sec -26.3% regression |
| test machine | 4 threads Intel(R) Core(TM) i3-3220 CPU @ 3.30GHz with 4G memory |
| test parameters | cpufreq_governor=performance |
| | test=dir_rtns_1 |
| | testtime=300s |
| | ucode=0x21 |
+------------------+------------------------------------------------------------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase/testtime/ucode:
gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2019-05-14.cgz/lkp-ivb-d03/all/aim9/5s/0x21
commit:
mainline-tracking-v5.3-190917T112302Z
d4f4de5e5e ("Fix the locking in dcache_readdir() and friends")
mainline-trackin d4f4de5e5ef8efde85febb6876c
---------------- ---------------------------
%stddev %change %stddev
\ | \
10976333 -26.1% 8116500 aim9.dir_rtns_1.ops_per_sec
916821 +1.6% 931164 aim9.sync_disk_wrt.ops_per_sec
145728 -1.1% 144085 aim9.tcp_test.ops_per_sec
276050 +1.3% 279655 aim9.udp_test.ops_per_sec
2307 ± 59% -53.1% 1083 ± 45% interrupts.CPU3.RES:Rescheduling_interrupts
3476 -2.0% 3406 proc-vmstat.nr_kernel_stack
1.205e+08 ± 47% -73.8% 31527642 ± 73% cpuidle.C3.time
403015 ± 70% -77.9% 89237 ± 65% cpuidle.C3.usage
1085 ± 2% +16.1% 1260 ± 4% slabinfo.kmalloc-96.active_objs
1085 ± 2% +16.1% 1260 ± 4% slabinfo.kmalloc-96.num_objs
176352 ±167% +294.7% 696134 ± 90% softirqs.CPU3.NET_RX
73740 ± 51% +136.7% 174570 ± 52% softirqs.CPU3.RCU
36381 ± 23% +33.7% 48655 ± 13% sched_debug.cfs_rq:/.min_vruntime.min
6.86 ± 59% +85.8% 12.75 ± 37% sched_debug.cfs_rq:/.runnable_load_avg.min
-57832 -85.8% -8235 sched_debug.cfs_rq:/.spread0.min
403011 ± 70% -77.9% 89237 ± 65% turbostat.C3
9.99 ± 47% -7.4 2.61 ± 73% turbostat.C3%
3.64 ±214% +323.7% 15.41 ± 99% turbostat.CPU%c6
39.33 -7.8% 36.25 turbostat.CoreTmp
38.83 -7.3% 36.00 turbostat.PkgTmp
1.57 ±141% +3.2 4.78 ± 15% perf-profile.calltrace.cycles-pp.div_double
0.02 ±149% +0.1 0.08 ± 26% perf-profile.children.cycles-pp.native_sched_clock
0.02 ±149% +0.1 0.08 ± 26% perf-profile.children.cycles-pp.sched_clock
0.02 ±146% +0.1 0.09 ± 27% perf-profile.children.cycles-pp.arch_stack_walk
0.02 ±146% +0.1 0.09 ± 20% perf-profile.children.cycles-pp.swake_up_one
0.02 ±149% +0.1 0.09 ± 35% perf-profile.children.cycles-pp.sched_clock_cpu
0.01 ±223% +0.1 0.09 ± 24% perf-profile.children.cycles-pp.swake_up_locked
0.03 ±152% +0.1 0.10 ± 30% perf-profile.children.cycles-pp.stack_trace_save_tsk
0.01 ±223% +0.1 0.09 ± 34% perf-profile.children.cycles-pp.__fxstat64
0.05 ±114% +0.1 0.14 ± 33% perf-profile.children.cycles-pp.security_task_getsecid
0.03 ±152% +0.1 0.11 ± 32% perf-profile.children.cycles-pp.__account_scheduler_latency
0.05 ±104% +0.1 0.14 ± 27% perf-profile.children.cycles-pp.selinux_inode_free_security
0.05 ±141% +0.1 0.14 ± 39% perf-profile.children.cycles-pp.__x64_sys_chdir
0.05 ±141% +0.1 0.14 ± 39% perf-profile.children.cycles-pp.ksys_chdir
0.05 ±120% +0.1 0.14 ± 29% perf-profile.children.cycles-pp.enqueue_entity
0.01 ±223% +0.1 0.11 ± 36% perf-profile.children.cycles-pp.d_set_d_op
0.05 ± 94% +0.1 0.15 ± 38% perf-profile.children.cycles-pp.ttwu_do_activate
0.05 ± 94% +0.1 0.15 ± 38% perf-profile.children.cycles-pp.activate_task
0.05 ±120% +0.1 0.15 ± 36% perf-profile.children.cycles-pp.enqueue_task_fair
0.00 +0.1 0.13 ± 27% perf-profile.children.cycles-pp.__srcu_read_lock
0.06 ±116% +0.2 0.22 ± 38% perf-profile.children.cycles-pp.d_lookup
0.08 ±103% +0.2 0.24 ± 32% perf-profile.children.cycles-pp.may_open
0.07 ±119% +0.2 0.25 ± 38% perf-profile.children.cycles-pp.inode_doinit_with_dentry
0.06 ±101% +0.2 0.24 ± 33% perf-profile.children.cycles-pp.fsnotify_grab_connector
0.08 ±107% +0.2 0.28 ± 35% perf-profile.children.cycles-pp.fsnotify_destroy_marks
0.10 ±104% +0.2 0.30 ± 27% perf-profile.children.cycles-pp.selinux_inode_init_security
0.11 ±115% +0.2 0.33 ± 39% perf-profile.children.cycles-pp.lockref_put_or_lock
0.12 ±116% +0.2 0.36 ± 29% perf-profile.children.cycles-pp._IO_fgets
0.12 ±102% +0.3 0.38 ± 36% perf-profile.children.cycles-pp.vfprintf
0.21 ±106% +0.3 0.56 ± 29% perf-profile.children.cycles-pp.destroy_inode
0.37 ± 79% +0.5 0.86 ± 44% perf-profile.children.cycles-pp.kthread
0.37 ± 79% +0.5 0.86 ± 45% perf-profile.children.cycles-pp.ret_from_fork
0.27 ±112% +0.5 0.80 ± 29% perf-profile.children.cycles-pp.user_path_at_empty
1.57 ±141% +3.2 4.79 ± 15% perf-profile.children.cycles-pp.div_double
0.03 ±141% +0.1 0.09 ± 37% perf-profile.self.cycles-pp.selinux_inode_free_security
0.01 ±223% +0.1 0.07 ± 31% perf-profile.self.cycles-pp.may_open
0.03 ±142% +0.1 0.11 ± 41% perf-profile.self.cycles-pp.shmem_getattr
0.01 ±223% +0.1 0.10 ± 27% perf-profile.self.cycles-pp.may_create
0.01 ±223% +0.1 0.11 ± 36% perf-profile.self.cycles-pp.d_set_d_op
0.04 ±142% +0.1 0.14 ± 36% perf-profile.self.cycles-pp.inode_doinit_with_dentry
0.01 ±223% +0.1 0.12 ± 29% perf-profile.self.cycles-pp.__d_lookup_done
0.03 ±143% +0.1 0.15 ± 16% perf-profile.self.cycles-pp.getname_flags
0.02 ±144% +0.1 0.15 ± 36% perf-profile.self.cycles-pp.__alloc_fd
0.00 +0.1 0.13 ± 22% perf-profile.self.cycles-pp.__srcu_read_lock
0.07 ±110% +0.1 0.20 ± 26% perf-profile.self.cycles-pp.selinux_inode_init_security
0.09 ±114% +0.2 0.28 ± 36% perf-profile.self.cycles-pp.lockref_put_or_lock
0.11 ±117% +0.2 0.33 ± 28% perf-profile.self.cycles-pp._IO_fgets
0.15 ±110% +0.2 0.40 ± 30% perf-profile.self.cycles-pp.inode_permission
0.12 ±103% +0.2 0.37 ± 36% perf-profile.self.cycles-pp.vfprintf
1.52 ±141% +3.3 4.78 ± 15% perf-profile.self.cycles-pp.div_double
aim9.dir_rtns_1.ops_per_sec
1.2e+07 +-+---------------------------------------------------------------+
|.+.+ +.+.+ +.+.+.+.+.+.+.+.+ +.+.+ +.+.+ +.+.|
1e+07 +-+ : : : : : : : : : : |
| : : : : : : : : : : |
O O : O O O O O O O O O O O O O O O O : O : O : : : |
8e+06 +-+ : : : : :O:O O:O : : : |
| : : : : : : : : : : |
6e+06 +-+ : : : : : : : : : : |
| : : : : : : : : : : |
4e+06 +-+ : : : : : : : : : : |
| : : : : : : : : : : |
| : : : : : : : |
2e+06 +-+ : : : : : : : |
| : : : : : : : |
0 +-+-O-------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-ivb-d03: 4 threads Intel(R) Core(TM) i3-3220 CPU @ 3.30GHz with 4G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase/testtime/ucode:
gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2019-05-14.cgz/lkp-ivb-d03/dir_rtns_1/aim9/300s/0x21
commit:
mainline-tracking-v5.3-190917T112302Z
d4f4de5e5e ("Fix the locking in dcache_readdir() and friends")
mainline-trackin d4f4de5e5ef8efde85febb6876c
---------------- ---------------------------
%stddev %change %stddev
\ | \
10956895 -26.3% 8074108 aim9.dir_rtns_1.ops_per_sec
235.88 ± 3% +8.0% 254.66 aim9.time.system_time
64.12 ± 14% -29.3% 45.34 ± 5% aim9.time.user_time
20.04 ± 3% +1.6 21.67 mpstat.cpu.all.sys%
0.00 +0.1 0.07 ± 33% perf-profile.children.cycles-pp.schedule
0.00 +13.6 13.63 ± 81% perf-profile.children.cycles-pp.scan_positives
395.86 +2.6% 406.00 proc-vmstat.nr_active_file
395.86 +2.6% 406.00 proc-vmstat.nr_zone_active_file
1092 ± 2% +18.0% 1288 ± 3% slabinfo.kmalloc-96.active_objs
1092 ± 2% +18.0% 1288 ± 3% slabinfo.kmalloc-96.num_objs
376.29 ± 96% +1150.0% 4703 ±111% interrupts.CPU3.NMI:Non-maskable_interrupts
376.29 ± 96% +1150.0% 4703 ±111% interrupts.CPU3.PMI:Performance_monitoring_interrupts
318.00 ± 15% -32.2% 215.50 ± 6% interrupts.TLB:TLB_shootdowns
58972 ± 13% -23.9% 44848 ± 9% softirqs.CPU1.RCU
61482 ± 9% -20.1% 49110 ± 3% softirqs.CPU2.RCU
122988 ± 14% +25.8% 154740 ± 8% softirqs.CPU3.TIMER
243766 ± 8% -20.7% 193320 ± 3% softirqs.RCU
20.60 ± 54% +67.9% 34.59 ± 23% perf-stat.i.MPKI
2.08 ± 27% +1.1 3.20 ± 39% perf-stat.i.branch-miss-rate%
21252312 ± 50% -57.6% 9012007 ± 56% perf-stat.i.branch-misses
6.94 ± 57% +5.5 12.42 ± 27% perf-stat.i.cache-miss-rate%
5870887 ± 40% -35.5% 3788398 ± 5% perf-stat.i.cache-references
0.45 ± 40% +0.2 0.69 ± 21% perf-stat.i.dTLB-load-miss-rate%
5.18 ± 63% +5.3 10.45 ± 28% perf-stat.overall.cache-miss-rate%
0.63 ± 4% +19.4% 0.75 ± 15% perf-stat.overall.cpi
21183509 ± 50% -57.5% 8992621 ± 56% perf-stat.ps.branch-misses
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[x86] 16f187a5f3: WARNING:at_arch/x86/mm/ioremap.c:#__ioremap_caller
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 16f187a5f332b9b8f78fcfccc82752797e1d8e07 ("x86: clean up ioremap")
git://git.infradead.org/users/hch/misc.git generic-ioremap
in testcase: ndctl
with following parameters:
bp_memmap: 4G!8G
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
+----------------------------------------------------+------------+------------+
| | faeeef744c | 16f187a5f3 |
+----------------------------------------------------+------------+------------+
| boot_successes | 50 | 0 |
| boot_failures | 0 | 34 |
| WARNING:at_arch/x86/mm/ioremap.c:#__ioremap_caller | 0 | 34 |
| RIP:__ioremap_caller | 0 | 34 |
+----------------------------------------------------+------------+------------+
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <lkp(a)intel.com>
[ 15.391069] WARNING: CPU: 1 PID: 373 at arch/x86/mm/ioremap.c:177 __ioremap_caller+0x2ed/0x310
[ 15.393063] Modules linked in: nfit_test(O+) dax_pmem(O) nfit(O) sr_mod cdrom sg intel_rapl_msr intel_rapl_common ata_generic pata_acpi crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel dax_pmem_compat(O) dax_pmem_core(O) device_dax(O) nd_pmem(O) nd_btt(O) ppdev snd_pcm aesni_intel nd_e820(O) libnvdimm(O) snd_timer crypto_simd bochs_drm ata_piix drm_vram_helper ttm snd cryptd glue_helper drm_kms_helper libata syscopyarea sysfillrect soundcore sysimgblt fb_sys_fops pcspkr drm joydev serio_raw nfit_test_iomap(O) i2c_piix4 parport_pc floppy parport ip_tables
[ 15.403025] CPU: 1 PID: 373 Comm: kworker/u4:3 Tainted: G O 5.3.0-12404-g16f187a5f332b #1
[ 15.404966] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 15.406793] Workqueue: events_unbound async_run_entry_fn
[ 15.408167] RIP: 0010:__ioremap_caller+0x2ed/0x310
[ 15.409483] Code: 09 c1 e9 99 fe ff ff 0f b7 05 71 20 5c 01 48 09 c1 e9 8a fe ff ff e8 c2 26 02 00 48 89 fe 48 c7 c7 28 25 4f b7 e8 61 d5 08 00 <0f> 0b 45 31 ff e9 9a fd ff ff 89 c6 48 c7 c7 88 25 4f b7 45 31 ff
[ 15.413382] RSP: 0018:ffffa6a1c0313b88 EFLAGS: 00010282
[ 15.414798] RAX: 0000000000000032 RBX: ffffa6a1c023d000 RCX: 0000000000000000
[ 15.416440] RDX: 0000000000000000 RSI: ffff8b92ffd17778 RDI: ffff8b92ffd17778
[ 15.418197] RBP: 0000000000001000 R08: 00000000000002b0 R09: 0000000000aaaaaa
[ 15.419846] R10: 000ffffa6a1c023d R11: ffff8b92d8dec6c0 R12: 0000000000001000
[ 15.421491] R13: 0000000000000000 R14: ffffffffc05a050b R15: ffff8b92d4e67e48
[ 15.423185] FS: 0000000000000000(0000) GS:ffff8b92ffd00000(0000) knlGS:0000000000000000
[ 15.424963] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 15.426457] CR2: 00007fd1601f65e0 CR3: 00000001a2c02000 CR4: 00000000000406e0
[ 15.428115] Call Trace:
[ 15.429663] ? nfit_test_request_region+0x25c/0x310 [nfit_test_iomap]
[ 15.431250] devm_nvdimm_memremap+0x10b/0x280 [libnvdimm]
[ 15.432682] nd_region_activate+0x19b/0x330 [libnvdimm]
[ 15.434158] ? _cond_resched+0x19/0x30
[ 15.435344] nd_region_probe+0x48/0x250 [libnvdimm]
[ 15.436677] ? kernfs_add_one+0xe4/0x130
[ 15.437895] nvdimm_bus_probe+0x69/0x190 [libnvdimm]
[ 15.439241] really_probe+0xef/0x430
[ 15.440389] ? driver_allows_async_probing+0x50/0x50
[ 15.441727] driver_probe_device+0x110/0x120
[ 15.443006] ? driver_allows_async_probing+0x50/0x50
[ 15.444347] bus_for_each_drv+0x69/0xb0
[ 15.445517] __device_attach+0xd4/0x160
[ 15.446695] bus_probe_device+0x87/0xa0
[ 15.447852] device_add+0x3f2/0x680
[ 15.448950] ? sched_clock_cpu+0xc/0xc0
[ 15.450145] nd_async_device_register+0xe/0x50 [libnvdimm]
[ 15.451516] async_run_entry_fn+0x39/0x160
[ 15.452693] process_one_work+0x1ae/0x3d0
[ 15.453867] worker_thread+0x3c/0x3b0
[ 15.454979] ? process_one_work+0x3d0/0x3d0
[ 15.456154] kthread+0x11e/0x140
[ 15.457193] ? kthread_park+0xa0/0xa0
[ 15.458340] ret_from_fork+0x35/0x40
[ 15.459431] ---[ end trace 5ea54cb57b6ddf70 ]---
To reproduce:
# build kernel
cd linux
cp config-5.3.0-12404-g16f187a5f332b .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
lkp
[xfs] b54bfac0e4: BUG:kernel_NULL_pointer_dereference, address
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: b54bfac0e4b30beee5f086ef8b81ab49ae5fe138 ("xfs: Use filemap_huge_fault")
git://git.infradead.org/users/willy/linux-dax.git xarray-pagecache
in testcase: ltp
with following parameters:
disk: 1HDD
fs: xfs
test: ltp-aiodio.part2
test-description: The LTP testsuite contains a collection of tools for testing the Linux kernel and related features.
test-url: http://linux-test-project.github.io/
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
+----------------------------------------------------+------------+------------+
| | a47853185d | b54bfac0e4 |
+----------------------------------------------------+------------+------------+
| boot_successes | 0 | 3 |
| boot_failures | 4 | 23 |
| WARNING:at_fs/iomap/buffered-io.c:#iomap_readpages | 4 | 18 |
| RIP:iomap_readpages | 4 | 18 |
| BUG:unable_to_handle_page_fault_for_address:ff | 0 | 1 |
| BUG:soft_lockup-CPU##stuck_for#s | 0 | 7 |
| RIP:__find_get_page | 0 | 3 |
| RIP:xas_load | 0 | 6 |
| Kernel_panic-not_syncing:softlockup:hung_tasks | 0 | 7 |
| BUG:kernel_NULL_pointer_dereference,address | 0 | 4 |
| Oops:#[##] | 0 | 4 |
| RIP:iomap_page_mkwrite | 0 | 4 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 4 |
+----------------------------------------------------+------------+------------+
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
[ 40.131732] INFO: creating /lkp/benchmarks/ltp/output directory
[ 40.139428] INFO: creating /lkp/benchmarks/ltp/results directory
[ 40.153998] Checking for required user/group ids
[ 40.171285] 'nobody' user id and group found.
[ 40.175406] 'bin' user id and group found.
[ 40.180443] 'daemon' user id and group found.
[ 40.184363] Users group found.
[ 40.187619] Sys group found.
[ 40.200650] Required users/groups exist.
[ 40.207315] If some fields are empty or look unusual you may have an old version.
[ 40.213146] Compare to the current minimal requirements in Documentation/Changes.
[ 40.220644] /etc/os-release
[ 40.225051] PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
[ 40.229427] NAME="Debian GNU/Linux"
[ 40.232418] VERSION_ID="9"
[ 40.235723] VERSION="9 (stretch)"
[ 40.239565] ID=debian
[ 40.242920] HOME_URL="https://www.debian.org/"
[ 40.246937] SUPPORT_URL="https://www.debian.org/support"
[ 40.252011] BUG_REPORT_URL="https://bugs.debian.org/"
[ 40.258940] uname:
[ 40.263910] Linux vm-snb-8G-1beec5b16908 5.3.0-11851-gb54bfac0e4b30 #1 SMP Wed Sep 25 20:46:33 CST 2019 x86_64 GNU/Linux
[ 40.270266] /proc/cmdline
[ 41.493061] loop: module loaded
[ 41.514099] LTP: starting ADSP000 (aiodio_sparse)
[ 41.520670] BUG: kernel NULL pointer dereference, address: 0000000000000008
[ 41.522062] #PF: supervisor read access in kernel mode
[ 41.523203] #PF: error_code(0x0000) - not-present page
[ 41.524237] PGD 0 P4D 0
[ 41.524935] Oops: 0000 [#1] SMP PTI
[ 41.525753] CPU: 0 PID: 2408 Comm: aiodio_sparse Not tainted 5.3.0-11851-gb54bfac0e4b30 #1
[ 41.527273] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 41.528740] RIP: 0010:iomap_page_mkwrite+0x40/0x1a0
[ 41.529761] Code: 02 00 00 48 83 ec 08 48 8b 07 48 8b 5f 48 48 c7 c7 70 44 10 a5 48 8b 80 a0 00 00 00 4c 8b 60 20 e8 65 b4 d8 ff e8 d0 36 74 00 <48> 8b 53 08 48 8d 42 ff 83 e2 01 48 0f 44 c3 f0 48 0f ba 28 00 0f
[ 41.533401] RSP: 0000:ffffb1af0047bd58 EFLAGS: 00010246
[ 41.534606] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
[ 41.535976] RDX: 0000000000000000 RSI: 000000000000020e RDI: ffffffffa5104470
[ 41.537365] RBP: ffff8ca897311150 R08: ffff8ca89743c320 R09: 0000000000000000
[ 41.538945] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8ca897311150
[ 41.540543] R13: ffffffffc04f4130 R14: 0000000000000001 R15: ffffb1af0047bdf0
[ 41.541983] FS: 00007f3911f52700(0000) GS:ffff8ca93fc00000(0000) knlGS:0000000000000000
[ 41.543633] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 41.544898] CR2: 0000000000000008 CR3: 000000019a160000 CR4: 00000000000406f0
[ 41.546347] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 41.547869] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 41.549327] Call Trace:
[ 41.550748] ? down_read+0x21/0xb0
[ 41.551757] __xfs_filemap_fault+0x157/0x220 [xfs]
[ 41.552872] ? mmap_region+0x23f/0x660
[ 41.553852] __handle_mm_fault+0x443/0xf60
[ 41.555004] handle_mm_fault+0xdd/0x220
[ 41.556003] __do_page_fault+0x2f1/0x520
[ 41.557012] ? ksys_mmap_pgoff+0x1c1/0x220
[ 41.558045] do_page_fault+0x30/0x120
[ 41.559111] async_page_fault+0x3e/0x50
[ 41.560118] RIP: 0033:0x7f39116003a8
[ 41.561090] Code: c3 48 81 fa 00 08 00 00 77 a8 48 83 fa 40 77 16 f3 0f 7f 07 f3 0f 7f 47 10 f3 0f 7f 44 17 f0 f3 0f 7f 44 17 e0 c3 48 8d 4f 40 <f3> 0f 7f 07 48 83 e1 c0 f3 0f 7f 44 17 f0 f3 0f 7f 47 10 f3 0f 7f
[ 41.564877] RSP: 002b:00007ffd118c92b8 EFLAGS: 00010206
[ 41.566162] RAX: 00007f390b000000 RBX: 0000000006400000 RCX: 00007f390b000040
[ 41.567774] RDX: 0000000006400000 RSI: 00000000000000aa RDI: 00007f390b000000
[ 41.569273] RBP: 0000000006400000 R08: 0000000000000007 R09: 0000000000000000
[ 41.570835] R10: 000000000000034e R11: 00007f3911600300 R12: 0000000000000007
[ 41.572306] R13: 00007f390b000000 R14: 0000000000000001 R15: 00007ffd118ca280
[ 41.573746] Modules linked in: loop xfs libcrc32c dm_mod intel_rapl_msr sr_mod intel_rapl_common cdrom sg crct10dif_pclmul ata_generic pata_acpi crc32_pclmul crc32c_intel ghash_clmulni_intel ppdev bochs_drm drm_vram_helper ttm snd_pcm aesni_intel drm_kms_helper crypto_simd ata_piix snd_timer syscopyarea sysfillrect cryptd glue_helper sysimgblt fb_sys_fops snd libata drm soundcore pcspkr joydev serio_raw i2c_piix4 parport_pc floppy parport ip_tables
[ 41.581237] CR2: 0000000000000008
[ 41.582299] ---[ end trace 46a11312ec67d652 ]---
To reproduce:
# build kernel
cd linux
cp config-5.3.0-11851-gb54bfac0e4b30 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
Thanks,
Rong Chen
[mm] 54e1406623: BUG:unable_to_handle_page_fault_for_address
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 54e140662331938de5434a9a49705b7318c58fbf ("mm: memcg/slab: charge individual slab objects instead of pages")
https://github.com/rgushchin/linux.git new_slab.1
in testcase: boot
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
+----------------------------------------------------+------------+------------+
| | d2c043a61e | 54e1406623 |
+----------------------------------------------------+------------+------------+
| boot_successes | 4 | 0 |
| boot_failures | 4 | 6 |
| BUG:kernel_NULL_pointer_dereference,address | 2 | |
| Oops:#[##] | 2 | 6 |
| RIP:_raw_spin_trylock | 2 | |
| Kernel_panic-not_syncing:Fatal_exception | 2 | 6 |
| INFO:rcu_sched_self-detected_stall_on_CPU | 2 | |
| RIP:queued_spin_lock_slowpath | 2 | |
| BUG:kernel_hang_in_boot-around-mounting-root_stage | 2 | |
| BUG:unable_to_handle_page_fault_for_address | 0 | 6 |
| RIP:atomic_try_cmpxchg | 0 | 6 |
+----------------------------------------------------+------------+------------+
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
[ 11.768604] BUG: unable to handle page fault for address: ffffffff81bf8872
[ 11.770373] #PF: supervisor write access in kernel mode
[ 11.771860] #PF: error_code(0x0003) - permissions violation
[ 11.773320] PGD 260d067 P4D 260d067 PUD 260e063 PMD 1a001e1
[ 11.774790] Oops: 0003 [#1] SMP PTI
[ 11.775941] CPU: 1 PID: 1 Comm: systemd Not tainted 5.3.0-rc7-mm1-00289-g54e1406623319 #1
[ 11.778343] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 11.780903] RIP: 0010:atomic_try_cmpxchg+0x4/0x12
[ 11.782299] Code: 40 10 00 00 00 00 58 5b 5d 41 5c 41 5d c3 31 c0 48 81 ff b0 b8 bf 81 72 0c 31 c0 48 81 ff c9 bc bf 81 0f 92 c0 c3 8b 0e 89 c8 <f0> 0f b1 17 89 c1 0f 94 c0 74 02 89 0e c3 53 ba 01 00 00 00 48 89
[ 11.786832] RSP: 0018:ffffc90000013c30 EFLAGS: 00010246
[ 11.788226] RAX: 0000000000000000 RBX: ffffffff81bf8872 RCX: 0000000000000000
[ 11.789886] RDX: 0000000000000001 RSI: ffffc90000013c3c RDI: ffffffff81bf8872
[ 11.791634] RBP: ffffffff81bf8872 R08: 0000000000000000 R09: ffffc90000013e50
[ 11.793409] R10: ffffc90000013e48 R11: 0000000000000000 R12: ffffc90000013e50
[ 11.795229] R13: ffff88822a6e8020 R14: 0000000000004041 R15: 0000000000001800
[ 11.797025] FS: 00007f3ed3233940(0000) GS:ffff88823fd00000(0000) knlGS:0000000000000000
[ 11.799557] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 11.801194] CR2: ffffffff81bf8872 CR3: 00000001cc538000 CR4: 00000000000406e0
[ 11.803087] Call Trace:
[ 11.804184] do_raw_spin_lock+0x2f/0x5a
[ 11.805422] ? _cond_resched+0x25/0x29
[ 11.806671] fast_dput+0x31/0x82
[ 11.807815] ? _cond_resched+0x25/0x29
[ 11.809048] dput+0x3c/0x14d
[ 11.810132] path_put+0x12/0x1b
[ 11.811301] terminate_walk+0x48/0x68
[ 11.812508] path_lookupat+0x18d/0x1b3
[ 11.813793] ? slab_free_freelist_hook+0x19/0x68
[ 11.815336] filename_lookup+0x8c/0xfc
[ 11.816590] ? ___might_sleep+0x3a/0x126
[ 11.817845] ? _cond_resched+0x25/0x29
[ 11.819166] ? getname_flags+0x29/0x156
[ 11.820397] ? kmem_cache_alloc+0x103/0x19f
[ 11.821685] ? vfs_statx+0x70/0xcc
[ 11.822887] vfs_statx+0x70/0xcc
[ 11.824039] __do_sys_newfstatat+0x31/0x63
[ 11.825365] ? tracer_hardirqs_off+0x1b/0xfb
[ 11.826702] ? entry_SYSCALL_64_after_hwframe+0x3e/0xbe
[ 11.828260] ? trace_hardirqs_off_caller+0x41/0x43
[ 11.829581] ? tracer_hardirqs_on+0x1b/0xf6
[ 11.831051] do_syscall_64+0x57/0x65
[ 11.832258] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 11.833641] RIP: 0033:0x7f3ed1a4da4a
[ 11.834913] Code: 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 83 ff 01 89 f0 48 89 d6 77 1e 48 63 f8 4d 63 d0 48 89 ca b8 06 01 00 00 0f 05 <48> 3d 00 f0 ff ff 77 1a f3 c3 0f 1f 40 00 48 8b 05 11 74 2d 00 64
[ 11.839609] RSP: 002b:00007ffe9e12be48 EFLAGS: 00000246 ORIG_RAX: 0000000000000106
[ 11.842054] RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 00007f3ed1a4da4a
[ 11.843880] RDX: 00007ffe9e12c030 RSI: 00005594416198a5 RDI: 0000000000000003
[ 11.845643] RBP: 0000559441c57011 R08: 0000000000001000 R09: 0000000000080000
[ 11.847503] R10: 0000000000001000 R11: 0000000000000246 R12: 0000000000000001
[ 11.849290] R13: 0000000000000400 R14: 00007ffe9e12be58 R15: 00007f3ed3233740
[ 11.851198] Modules linked in:
[ 11.852325] CR2: ffffffff81bf8872
[ 11.853441] ---[ end trace 4813af85c191fcb5 ]---
To reproduce:
# build kernel
cd linux
cp config-5.3.0-rc7-mm1-00289-g54e1406623319 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen