Re: [LKP] [rcu] kernel BUG at include/linux/pagemap.h:149!
by Frederic Weisbecker
On Fri, Sep 11, 2015 at 10:19:47AM +0800, Boqun Feng wrote:
> Subject: [PATCH 01/27] rcu: Don't disable preemption for Tiny and Tree RCU
> readers
>
> Because preempt_disable() maps to barrier() for non-debug builds,
> it forces the compiler to spill and reload registers. Because Tree
> RCU and Tiny RCU now only appear in CONFIG_PREEMPT=n builds, these
> barrier() instances generate needless extra code for each instance of
> rcu_read_lock() and rcu_read_unlock(). This extra code slows down Tree
> RCU and bloats Tiny RCU.
>
> This commit therefore removes the preempt_disable() and preempt_enable()
> from the non-preemptible implementations of __rcu_read_lock() and
> __rcu_read_unlock(), respectively.
>
> For debug purposes, preempt_disable() and preempt_enable() are still
> kept if CONFIG_PREEMPT_COUNT=y, which makes the detection of sleeping
> inside atomic sections still work in non-preemptible kernels.
>
> Signed-off-by: Boqun Feng <boqun.feng(a)gmail.com>
> Signed-off-by: Paul E. McKenney <paulmck(a)linux.vnet.ibm.com>
> ---
> include/linux/rcupdate.h | 6 ++++--
> include/linux/rcutiny.h | 1 +
> kernel/rcu/tree.c | 9 +++++++++
> 3 files changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index d63bb77..6c3cece 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -297,12 +297,14 @@ void synchronize_rcu(void);
>
> static inline void __rcu_read_lock(void)
> {
> - preempt_disable();
> + if (IS_ENABLED(CONFIG_PREEMPT_COUNT))
> + preempt_disable();
preempt_disable() is a no-op when !CONFIG_PREEMPT_COUNT, right?
Or rather it's just a barrier(), which is implied by rcu_read_lock() anyway.
So perhaps we can get rid of the IS_ENABLED() check?
[lkp] [sched/fair] 98d8fd8126: -20.8% hackbench.throughput
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched/core
commit 98d8fd8126676f7ba6e133e65b2ca4b17989d32c ("sched/fair: Initialize task load and utilization before placing task on rq")
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/nr_threads/mode/ipc:
lkp-ws02/hackbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/1600%/process/pipe
commit:
231678b768da07d19ab5683a39eeb0c250631d02
98d8fd8126676f7ba6e133e65b2ca4b17989d32c
231678b768da07d1 98d8fd8126676f7ba6e133e65b
---------------- --------------------------
%stddev %change %stddev
\ | \
188818 ± 1% -20.8% 149585 ± 1% hackbench.throughput
81712173 ± 4% +211.8% 2.548e+08 ± 1% hackbench.time.involuntary_context_switches
21611286 ± 0% -19.2% 17453366 ± 1% hackbench.time.minor_page_faults
2226 ± 0% +1.3% 2255 ± 0% hackbench.time.percent_of_cpu_this_job_got
12445 ± 0% +2.1% 12704 ± 0% hackbench.time.system_time
2.494e+08 ± 3% +118.5% 5.448e+08 ± 1% hackbench.time.voluntary_context_switches
1097790 ± 0% +50.6% 1653664 ± 1% softirqs.RCU
554877 ± 3% +137.8% 1319318 ± 1% vmstat.system.cs
89017 ± 4% +187.8% 256235 ± 1% vmstat.system.in
1.312e+08 ± 1% -16.0% 1.102e+08 ± 4% numa-numastat.node0.local_node
1.312e+08 ± 1% -16.0% 1.102e+08 ± 4% numa-numastat.node0.numa_hit
1.302e+08 ± 1% -34.9% 84785305 ± 5% numa-numastat.node1.local_node
1.302e+08 ± 1% -34.9% 84785344 ± 5% numa-numastat.node1.numa_hit
302.00 ± 1% -19.2% 244.00 ± 1% time.file_system_outputs
81712173 ± 4% +211.8% 2.548e+08 ± 1% time.involuntary_context_switches
21611286 ± 0% -19.2% 17453366 ± 1% time.minor_page_faults
2.494e+08 ± 3% +118.5% 5.448e+08 ± 1% time.voluntary_context_switches
92.88 ± 0% +1.3% 94.13 ± 0% turbostat.%Busy
2675 ± 0% +1.8% 2723 ± 0% turbostat.Avg_MHz
4.44 ± 1% -24.9% 3.34 ± 2% turbostat.CPU%c1
0.98 ± 2% -32.2% 0.66 ± 3% turbostat.CPU%c3
2.79e+08 ± 4% -25.2% 2.086e+08 ± 6% cpuidle.C1-NHM.time
1.235e+08 ± 4% -28.6% 88251264 ± 7% cpuidle.C1E-NHM.time
243525 ± 4% -21.9% 190252 ± 8% cpuidle.C1E-NHM.usage
1.819e+08 ± 2% -25.8% 1.35e+08 ± 1% cpuidle.C3-NHM.time
260585 ± 1% -20.4% 207474 ± 2% cpuidle.C3-NHM.usage
266207 ± 1% -39.4% 161453 ± 3% cpuidle.C6-NHM.usage
493467 ± 0% +26.5% 624337 ± 0% meminfo.Active
395397 ± 0% +33.0% 525811 ± 0% meminfo.Active(anon)
372719 ± 1% +34.2% 500207 ± 1% meminfo.AnonPages
4543041 ± 1% +37.5% 6248687 ± 1% meminfo.Committed_AS
185265 ± 1% +16.3% 215373 ± 0% meminfo.KernelStack
302233 ± 1% +37.1% 414289 ± 1% meminfo.PageTables
333827 ± 0% +18.6% 396038 ± 0% meminfo.SUnreclaim
380340 ± 0% +16.6% 443518 ± 0% meminfo.Slab
51154 ±143% -100.0% 5.00 ±100% latency_stats.avg.call_rwsem_down_write_failed.copy_process._do_fork.SyS_clone.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 30679 ±100% latency_stats.avg.proc_cgroup_show.proc_single_show.seq_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
7795 ±100% +1304.6% 109497 ± 93% latency_stats.max.blk_execute_rq.scsi_execute.scsi_execute_req_flags.ses_recv_diag.[ses].ses_get_page2_descriptor.[ses].ses_get_power_status.[ses].ses_enclosure_data_process.[ses].ses_intf_add.[ses].class_interface_register.scsi_register_interface.0xffffffffa0006013.do_one_initcall
297190 ±117% -100.0% 23.00 ±100% latency_stats.max.call_rwsem_down_write_failed.copy_process._do_fork.SyS_clone.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 97905 ±109% latency_stats.max.proc_cgroup_show.proc_single_show.seq_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
12901 ±131% -78.9% 2717 ±135% latency_stats.max.wait_on_page_bit.wait_on_page_read.do_read_cache_page.read_cache_page_gfp.btrfs_scan_one_device.[btrfs].btrfs_control_ioctl.[btrfs].do_vfs_ioctl.SyS_ioctl.entry_SYSCALL_64_fastpath
392778 ±128% -100.0% 75.50 ±100% latency_stats.sum.call_rwsem_down_write_failed.copy_process._do_fork.SyS_clone.entry_SYSCALL_64_fastpath
13678 ± 75% -68.1% 4368 ± 67% latency_stats.sum.flush_work.__cancel_work_timer.cancel_delayed_work_sync.disk_block_events.__blkdev_get.blkdev_get.blkdev_get_by_path.btrfs_scan_one_device.[btrfs].btrfs_control_ioctl.[btrfs].do_vfs_ioctl.SyS_ioctl.entry_SYSCALL_64_fastpath
19088 ±101% -100.0% 8.67 ±110% latency_stats.sum.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 139824 ±104% latency_stats.sum.proc_cgroup_show.proc_single_show.seq_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
17538 ± 95% -71.0% 5094 ± 72% latency_stats.sum.wait_on_page_bit.wait_on_page_read.do_read_cache_page.read_cache_page_gfp.btrfs_scan_one_device.[btrfs].btrfs_control_ioctl.[btrfs].do_vfs_ioctl.SyS_ioctl.entry_SYSCALL_64_fastpath
99221 ± 1% +32.5% 131458 ± 0% proc-vmstat.nr_active_anon
93610 ± 1% +33.5% 124954 ± 0% proc-vmstat.nr_anon_pages
11604 ± 1% +16.4% 13503 ± 0% proc-vmstat.nr_kernel_stack
75770 ± 1% +36.6% 103508 ± 1% proc-vmstat.nr_page_table_pages
83664 ± 0% +18.6% 99265 ± 0% proc-vmstat.nr_slab_unreclaimable
2.615e+08 ± 1% -25.4% 1.95e+08 ± 1% proc-vmstat.numa_hit
2.615e+08 ± 1% -25.4% 1.95e+08 ± 1% proc-vmstat.numa_local
2151 ± 9% +33.7% 2875 ± 7% proc-vmstat.pgactivate
53318467 ± 1% -16.3% 44611139 ± 4% proc-vmstat.pgalloc_dma32
2.124e+08 ± 1% -27.6% 1.538e+08 ± 2% proc-vmstat.pgalloc_normal
21951016 ± 0% -17.9% 18028538 ± 1% proc-vmstat.pgfault
2.656e+08 ± 1% -25.3% 1.983e+08 ± 1% proc-vmstat.pgfree
231019 ± 6% +19.0% 274944 ± 3% numa-meminfo.node0.Active
182158 ± 8% +24.4% 226602 ± 4% numa-meminfo.node0.Active(anon)
4049 ± 0% -100.0% 0.00 ± -1% numa-meminfo.node0.AnonHugePages
171061 ± 5% +28.8% 220397 ± 3% numa-meminfo.node0.AnonPages
253455 ± 4% -5.4% 239791 ± 4% numa-meminfo.node0.FilePages
9402 ± 3% -90.3% 915.75 ± 39% numa-meminfo.node0.Inactive(anon)
10253 ± 0% -50.8% 5041 ± 1% numa-meminfo.node0.Mapped
131300 ± 9% +25.1% 164194 ± 7% numa-meminfo.node0.PageTables
20469 ± 54% -65.5% 7058 ±145% numa-meminfo.node0.Shmem
235929 ± 5% +43.0% 337441 ± 1% numa-meminfo.node1.Active
186748 ± 8% +53.8% 287257 ± 1% numa-meminfo.node1.Active(anon)
175187 ± 5% +52.9% 267826 ± 5% numa-meminfo.node1.AnonPages
1249 ± 39% +676.4% 9697 ± 4% numa-meminfo.node1.Inactive(anon)
105601 ± 9% +39.4% 147194 ± 10% numa-meminfo.node1.KernelStack
5032 ± 1% +103.5% 10238 ± 0% numa-meminfo.node1.Mapped
1028697 ± 5% +40.4% 1444560 ± 5% numa-meminfo.node1.MemUsed
147371 ± 7% +62.4% 239296 ± 7% numa-meminfo.node1.PageTables
185026 ± 7% +43.2% 264909 ± 10% numa-meminfo.node1.SUnreclaim
209508 ± 7% +38.5% 290116 ± 9% numa-meminfo.node1.Slab
45770 ± 7% +24.9% 57169 ± 3% numa-vmstat.node0.nr_active_anon
42981 ± 4% +29.4% 55616 ± 3% numa-vmstat.node0.nr_anon_pages
63378 ± 4% -5.4% 59946 ± 4% numa-vmstat.node0.nr_file_pages
2351 ± 3% -90.3% 228.25 ± 38% numa-vmstat.node0.nr_inactive_anon
2589 ± 1% -51.6% 1253 ± 0% numa-vmstat.node0.nr_mapped
32990 ± 8% +25.6% 41423 ± 7% numa-vmstat.node0.nr_page_table_pages
5131 ± 54% -65.6% 1763 ±145% numa-vmstat.node0.nr_shmem
64745848 ± 2% -13.8% 55814423 ± 2% numa-vmstat.node0.numa_hit
64743896 ± 2% -13.9% 55752339 ± 2% numa-vmstat.node0.numa_local
1951 ± 91% +3081.4% 62084 ± 1% numa-vmstat.node0.numa_other
45977 ± 8% +57.0% 72172 ± 1% numa-vmstat.node1.nr_active_anon
43078 ± 7% +56.1% 67261 ± 3% numa-vmstat.node1.nr_anon_pages
313.50 ± 40% +673.4% 2424 ± 4% numa-vmstat.node1.nr_inactive_anon
6558 ± 11% +39.9% 9175 ± 8% numa-vmstat.node1.nr_kernel_stack
1262 ± 1% +102.2% 2552 ± 0% numa-vmstat.node1.nr_mapped
36358 ± 9% +65.2% 60055 ± 5% numa-vmstat.node1.nr_page_table_pages
45984 ± 9% +43.9% 66189 ± 8% numa-vmstat.node1.nr_slab_unreclaimable
64599981 ± 2% -34.3% 42454481 ± 5% numa-vmstat.node1.numa_hit
64534349 ± 2% -34.2% 42449235 ± 5% numa-vmstat.node1.numa_local
65632 ± 2% -92.0% 5245 ± 23% numa-vmstat.node1.numa_other
148962 ± 0% +32.1% 196766 ± 2% slabinfo.anon_vma.active_objs
3066 ± 0% +32.5% 4062 ± 1% slabinfo.anon_vma.active_slabs
156402 ± 0% +32.5% 207216 ± 1% slabinfo.anon_vma.num_objs
3066 ± 0% +32.5% 4062 ± 1% slabinfo.anon_vma.num_slabs
15321 ± 0% +14.6% 17563 ± 1% slabinfo.files_cache.active_objs
16470 ± 0% +15.1% 18958 ± 1% slabinfo.files_cache.num_objs
8808 ± 0% +17.6% 10359 ± 1% slabinfo.kmalloc-1024.active_objs
9268 ± 0% +16.1% 10758 ± 0% slabinfo.kmalloc-1024.num_objs
22656 ± 1% +9.9% 24899 ± 2% slabinfo.kmalloc-128.num_objs
31867 ± 0% +11.6% 35548 ± 0% slabinfo.kmalloc-192.active_objs
32775 ± 0% +11.4% 36522 ± 0% slabinfo.kmalloc-192.num_objs
15221 ± 0% +23.1% 18731 ± 0% slabinfo.kmalloc-256.active_objs
16380 ± 0% +19.4% 19557 ± 0% slabinfo.kmalloc-256.num_objs
308147 ± 0% +33.0% 409879 ± 2% slabinfo.kmalloc-64.active_objs
6591 ± 1% +17.9% 7770 ± 1% slabinfo.kmalloc-64.active_slabs
421883 ± 1% +17.9% 497347 ± 1% slabinfo.kmalloc-64.num_objs
6591 ± 1% +17.9% 7770 ± 1% slabinfo.kmalloc-64.num_slabs
482.75 ± 11% +39.7% 674.50 ± 7% slabinfo.kmem_cache_node.active_objs
495.50 ± 10% +38.7% 687.25 ± 7% slabinfo.kmem_cache_node.num_objs
9328 ± 0% +29.1% 12045 ± 2% slabinfo.mm_struct.active_objs
612.00 ± 0% +28.6% 787.00 ± 1% slabinfo.mm_struct.active_slabs
10411 ± 0% +28.6% 13390 ± 1% slabinfo.mm_struct.num_objs
612.00 ± 0% +28.6% 787.00 ± 1% slabinfo.mm_struct.num_slabs
12765 ± 1% +15.7% 14765 ± 1% slabinfo.sighand_cache.active_objs
861.75 ± 1% +18.4% 1020 ± 0% slabinfo.sighand_cache.active_slabs
12933 ± 1% +18.4% 15308 ± 0% slabinfo.sighand_cache.num_objs
861.75 ± 1% +18.4% 1020 ± 0% slabinfo.sighand_cache.num_slabs
14455 ± 1% +11.8% 16167 ± 1% slabinfo.signal_cache.active_objs
14698 ± 1% +14.2% 16779 ± 1% slabinfo.signal_cache.num_objs
11628 ± 1% +16.6% 13563 ± 1% slabinfo.task_struct.active_objs
3899 ± 1% +18.5% 4620 ± 1% slabinfo.task_struct.active_slabs
11699 ± 1% +18.5% 13861 ± 1% slabinfo.task_struct.num_objs
3899 ± 1% +18.5% 4620 ± 1% slabinfo.task_struct.num_slabs
224907 ± 0% +34.2% 301780 ± 2% slabinfo.vm_area_struct.active_objs
5290 ± 0% +34.9% 7135 ± 2% slabinfo.vm_area_struct.active_slabs
232815 ± 0% +34.8% 313951 ± 2% slabinfo.vm_area_struct.num_objs
5290 ± 0% +34.9% 7135 ± 2% slabinfo.vm_area_struct.num_slabs
0.30 ± 89% +2528.3% 7.88 ± 13% perf-profile.cpu-cycles.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.enqueue_task.activate_task
0.02 ± 74% +17735.0% 2.97 ± 19% perf-profile.cpu-cycles.__kernel_text_address.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency
0.02 ±-5000% +21150.0% 4.25 ± 44% perf-profile.cpu-cycles.__schedule.schedule.pipe_wait.pipe_read.__vfs_read
0.00 ± -1% +Inf% 0.64 ± 58% perf-profile.cpu-cycles.__schedule.schedule.prepare_exit_to_usermode.syscall_return_slowpath.int_ret_from_sys_call
7.24 ± 19% +141.5% 17.49 ± 14% perf-profile.cpu-cycles.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
1.36 ± 84% +926.2% 13.92 ± 16% perf-profile.cpu-cycles.__wake_up_common.__wake_up_sync_key.pipe_write.__vfs_write.vfs_write
1.46 ±107% +981.3% 15.76 ± 17% perf-profile.cpu-cycles.__wake_up_sync_key.pipe_write.__vfs_write.vfs_write.sys_write
0.03 ±-3333% +3200.0% 0.99 ± 28% perf-profile.cpu-cycles.activate_task.ttwu_do_activate.sched_ttwu_pending.scheduler_ipi.smp_reschedule_interrupt
0.64 ± 86% +1399.6% 9.65 ± 13% perf-profile.cpu-cycles.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function.autoremove_wake_function
1.28 ± 86% +966.4% 13.62 ± 16% perf-profile.cpu-cycles.autoremove_wake_function.__wake_up_common.__wake_up_sync_key.pipe_write.__vfs_write
1.26 ± 86% +968.6% 13.50 ± 16% perf-profile.cpu-cycles.default_wake_function.autoremove_wake_function.__wake_up_common.__wake_up_sync_key.pipe_write
0.01 ± 0% +3200.0% 0.33 ±140% perf-profile.cpu-cycles.do_wait.sys_wait4.entry_SYSCALL_64_fastpath
0.16 ± 96% +4396.9% 7.20 ± 12% perf-profile.cpu-cycles.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
0.48 ± 86% +1877.1% 9.49 ± 14% perf-profile.cpu-cycles.enqueue_entity.enqueue_task_fair.enqueue_task.activate_task.ttwu_do_activate
0.03 ±-3333% +3175.0% 0.98 ± 28% perf-profile.cpu-cycles.enqueue_task.activate_task.ttwu_do_activate.sched_ttwu_pending.scheduler_ipi
0.64 ± 86% +1396.9% 9.63 ± 13% perf-profile.cpu-cycles.enqueue_task.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function
0.02 ±-5000% +4162.5% 0.85 ± 27% perf-profile.cpu-cycles.enqueue_task_fair.enqueue_task.activate_task.ttwu_do_activate.sched_ttwu_pending
0.59 ± 86% +1473.7% 9.29 ± 13% perf-profile.cpu-cycles.enqueue_task_fair.enqueue_task.activate_task.ttwu_do_activate.try_to_wake_up
111.99 ± 2% -23.5% 85.65 ± 7% perf-profile.cpu-cycles.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 0.77 ± 51% perf-profile.cpu-cycles.int_ret_from_sys_call
3.68 ± 31% +309.3% 15.06 ± 17% perf-profile.cpu-cycles.pipe_read.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
0.03 ±-3333% +19350.0% 5.83 ± 38% perf-profile.cpu-cycles.pipe_wait.pipe_read.__vfs_read.vfs_read.sys_read
10.52 ± 23% +109.9% 22.09 ± 18% perf-profile.cpu-cycles.pipe_write.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 0.68 ± 56% perf-profile.cpu-cycles.prepare_exit_to_usermode.syscall_return_slowpath.int_ret_from_sys_call
0.10 ± 96% +6080.0% 6.18 ± 13% perf-profile.cpu-cycles.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity
0.16 ± 97% +4329.6% 7.24 ± 13% perf-profile.cpu-cycles.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.enqueue_task
0.02 ±-5000% +22300.0% 4.48 ± 43% perf-profile.cpu-cycles.schedule.pipe_wait.pipe_read.__vfs_read.vfs_read
0.00 ± -1% +Inf% 0.62 ± 62% perf-profile.cpu-cycles.schedule.prepare_exit_to_usermode.syscall_return_slowpath.int_ret_from_sys_call
41.93 ± 3% -21.8% 32.80 ± 4% perf-profile.cpu-cycles.sys_read.entry_SYSCALL_64_fastpath
0.01 ± 0% +3225.0% 0.33 ±140% perf-profile.cpu-cycles.sys_wait4.entry_SYSCALL_64_fastpath
65.72 ± 3% -25.2% 49.18 ± 9% perf-profile.cpu-cycles.sys_write.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 0.72 ± 52% perf-profile.cpu-cycles.syscall_return_slowpath.int_ret_from_sys_call
1.28 ± 86% +961.1% 13.58 ± 16% perf-profile.cpu-cycles.try_to_wake_up.default_wake_function.autoremove_wake_function.__wake_up_common.__wake_up_sync_key
0.04 ±-2500% +2512.5% 1.04 ± 28% perf-profile.cpu-cycles.ttwu_do_activate.constprop.85.sched_ttwu_pending.scheduler_ipi.smp_reschedule_interrupt.reschedule_interrupt
0.70 ± 88% +1343.2% 10.10 ± 13% perf-profile.cpu-cycles.ttwu_do_activate.constprop.85.try_to_wake_up.default_wake_function.autoremove_wake_function.__wake_up_common
34.35 ± 3% -16.2% 28.77 ± 6% perf-profile.cpu-cycles.vfs_read.sys_read.entry_SYSCALL_64_fastpath
58.47 ± 4% -26.7% 42.85 ± 9% perf-profile.cpu-cycles.vfs_write.sys_write.entry_SYSCALL_64_fastpath
0.01 ± 70% +4475.0% 0.30 ±139% perf-profile.cpu-cycles.wait_consider_task.do_wait.sys_wait4.entry_SYSCALL_64_fastpath
121.25 ± 40% -43.9% 68.00 ± 4% sched_debug.cfs_rq[0]:/.load_avg
7372993 ± 2% +59.5% 11756998 ± 6% sched_debug.cfs_rq[0]:/.min_vruntime
2526 ± 4% -36.8% 1596 ± 10% sched_debug.cfs_rq[0]:/.tg_load_avg
96.00 ± 8% -29.2% 68.00 ± 4% sched_debug.cfs_rq[0]:/.tg_load_avg_contrib
91.75 ± 23% -40.1% 55.00 ± 12% sched_debug.cfs_rq[10]:/.load_avg
8761646 ± 6% +202.7% 26523962 ± 11% sched_debug.cfs_rq[10]:/.min_vruntime
8.00 ± 56% +1043.8% 91.50 ± 95% sched_debug.cfs_rq[10]:/.nr_spread_over
1375813 ± 32% +967.5% 14686694 ± 19% sched_debug.cfs_rq[10]:/.spread0
2218 ± 8% -22.1% 1729 ± 9% sched_debug.cfs_rq[10]:/.tg_load_avg
92.25 ± 23% -40.1% 55.25 ± 12% sched_debug.cfs_rq[10]:/.tg_load_avg_contrib
83.00 ± 8% -28.6% 59.25 ± 26% sched_debug.cfs_rq[11]:/.load_avg
8281705 ± 5% +221.0% 26587558 ± 12% sched_debug.cfs_rq[11]:/.min_vruntime
12.75 ± 76% +488.2% 75.00 ± 57% sched_debug.cfs_rq[11]:/.nr_spread_over
895534 ± 36% +1546.8% 14747704 ± 20% sched_debug.cfs_rq[11]:/.spread0
2181 ± 7% -24.0% 1657 ± 7% sched_debug.cfs_rq[11]:/.tg_load_avg
83.00 ± 8% -28.6% 59.25 ± 26% sched_debug.cfs_rq[11]:/.tg_load_avg_contrib
7597123 ± 4% +64.9% 12525605 ± 7% sched_debug.cfs_rq[12]:/.min_vruntime
2144 ± 8% -18.9% 1739 ± 8% sched_debug.cfs_rq[12]:/.tg_load_avg
7974727 ± 1% +76.7% 14092249 ± 8% sched_debug.cfs_rq[13]:/.min_vruntime
587700 ± 33% +282.6% 2248438 ± 24% sched_debug.cfs_rq[13]:/.spread0
2161 ± 8% -18.0% 1771 ± 6% sched_debug.cfs_rq[13]:/.tg_load_avg
9111334 ± 6% +49.2% 13594919 ± 7% sched_debug.cfs_rq[14]:/.min_vruntime
2155 ± 10% -17.9% 1768 ± 7% sched_debug.cfs_rq[14]:/.tg_load_avg
91.00 ± 11% -28.3% 65.25 ± 24% sched_debug.cfs_rq[15]:/.load_avg
7755071 ± 5% +80.3% 13985714 ± 7% sched_debug.cfs_rq[15]:/.min_vruntime
9.75 ± 15% +520.5% 60.50 ±128% sched_debug.cfs_rq[15]:/.nr_spread_over
367395 ± 78% +481.9% 2137747 ± 16% sched_debug.cfs_rq[15]:/.spread0
2175 ± 12% -22.1% 1694 ± 7% sched_debug.cfs_rq[15]:/.tg_load_avg
91.00 ± 11% -28.3% 65.25 ± 24% sched_debug.cfs_rq[15]:/.tg_load_avg_contrib
7790185 ± 3% +81.0% 14103194 ± 6% sched_debug.cfs_rq[16]:/.min_vruntime
402239 ± 43% +460.5% 2254357 ± 12% sched_debug.cfs_rq[16]:/.spread0
2196 ± 9% -22.8% 1694 ± 6% sched_debug.cfs_rq[16]:/.tg_load_avg
8545892 ± 6% +64.3% 14041885 ± 7% sched_debug.cfs_rq[17]:/.min_vruntime
1157569 ± 38% +89.4% 2192052 ± 15% sched_debug.cfs_rq[17]:/.spread0
2168 ± 9% -21.2% 1709 ± 3% sched_debug.cfs_rq[17]:/.tg_load_avg
8083785 ± 3% +194.3% 23786635 ± 12% sched_debug.cfs_rq[18]:/.min_vruntime
7.25 ± 69% +286.2% 28.00 ± 51% sched_debug.cfs_rq[18]:/.nr_spread_over
695207 ± 68% +1616.3% 11932029 ± 23% sched_debug.cfs_rq[18]:/.spread0
2134 ± 10% -18.9% 1731 ± 5% sched_debug.cfs_rq[18]:/.tg_load_avg
123.00 ± 23% -50.8% 60.50 ± 18% sched_debug.cfs_rq[19]:/.load_avg
8843043 ± 4% +212.8% 27657482 ± 14% sched_debug.cfs_rq[19]:/.min_vruntime
11.50 ± 72% +1156.5% 144.50 ± 89% sched_debug.cfs_rq[19]:/.nr_spread_over
1454237 ± 28% +986.6% 15802087 ± 22% sched_debug.cfs_rq[19]:/.spread0
2121 ± 10% -18.7% 1724 ± 7% sched_debug.cfs_rq[19]:/.tg_load_avg
121.00 ± 21% -50.0% 60.50 ± 18% sched_debug.cfs_rq[19]:/.tg_load_avg_contrib
101.50 ± 14% -37.2% 63.75 ± 8% sched_debug.cfs_rq[1]:/.load_avg
8066420 ± 2% +67.1% 13476103 ± 7% sched_debug.cfs_rq[1]:/.min_vruntime
689384 ± 53% +147.3% 1704860 ± 22% sched_debug.cfs_rq[1]:/.spread0
2514 ± 4% -39.0% 1533 ± 6% sched_debug.cfs_rq[1]:/.tg_load_avg
101.75 ± 14% -37.3% 63.75 ± 8% sched_debug.cfs_rq[1]:/.tg_load_avg_contrib
91.00 ± 8% -23.1% 70.00 ± 10% sched_debug.cfs_rq[20]:/.load_avg
9057239 ± 7% +200.9% 27252092 ± 13% sched_debug.cfs_rq[20]:/.min_vruntime
1668188 ± 44% +822.9% 15396001 ± 23% sched_debug.cfs_rq[20]:/.spread0
2153 ± 9% -20.7% 1707 ± 5% sched_debug.cfs_rq[20]:/.tg_load_avg
91.00 ± 8% -24.2% 69.00 ± 11% sched_debug.cfs_rq[20]:/.tg_load_avg_contrib
8505030 ± 8% +224.1% 27566380 ± 13% sched_debug.cfs_rq[21]:/.min_vruntime
15.50 ± 51% +575.8% 104.75 ± 42% sched_debug.cfs_rq[21]:/.nr_spread_over
1115371 ± 70% +1308.4% 15708882 ± 21% sched_debug.cfs_rq[21]:/.spread0
2129 ± 9% -19.2% 1720 ± 5% sched_debug.cfs_rq[21]:/.tg_load_avg
8352879 ± 2% +232.1% 27739662 ± 13% sched_debug.cfs_rq[22]:/.min_vruntime
13.00 ± 45% +1930.8% 264.00 ± 38% sched_debug.cfs_rq[22]:/.nr_spread_over
962764 ± 8% +1549.5% 15880365 ± 22% sched_debug.cfs_rq[22]:/.spread0
2119 ± 8% -14.5% 1811 ± 10% sched_debug.cfs_rq[22]:/.tg_load_avg
8119642 ± 4% +241.9% 27759824 ± 13% sched_debug.cfs_rq[23]:/.min_vruntime
729257 ± 37% +2080.1% 15898645 ± 21% sched_debug.cfs_rq[23]:/.spread0
2087 ± 7% -16.0% 1753 ± 3% sched_debug.cfs_rq[23]:/.tg_load_avg
101.75 ± 19% -31.7% 69.50 ± 10% sched_debug.cfs_rq[2]:/.load_avg
9427522 ± 14% +44.3% 13605129 ± 7% sched_debug.cfs_rq[2]:/.min_vruntime
2441 ± 7% -35.2% 1583 ± 10% sched_debug.cfs_rq[2]:/.tg_load_avg
102.50 ± 19% -32.2% 69.50 ± 10% sched_debug.cfs_rq[2]:/.tg_load_avg_contrib
7664612 ± 6% +76.4% 13520491 ± 4% sched_debug.cfs_rq[3]:/.min_vruntime
283759 ±142% +509.0% 1728055 ± 23% sched_debug.cfs_rq[3]:/.spread0
2355 ± 8% -32.8% 1582 ± 12% sched_debug.cfs_rq[3]:/.tg_load_avg
118.75 ± 20% -33.7% 78.75 ± 12% sched_debug.cfs_rq[4]:/.load_avg
7770292 ± 5% +73.0% 13442540 ± 6% sched_debug.cfs_rq[4]:/.min_vruntime
388453 ±139% +322.2% 1640216 ± 19% sched_debug.cfs_rq[4]:/.spread0
2286 ± 8% -29.9% 1603 ± 10% sched_debug.cfs_rq[4]:/.tg_load_avg
119.00 ± 20% -33.2% 79.50 ± 12% sched_debug.cfs_rq[4]:/.tg_load_avg_contrib
41.00 ± 12% +72.0% 70.50 ± 58% sched_debug.cfs_rq[5]:/.load
8361817 ± 5% +59.9% 13374083 ± 7% sched_debug.cfs_rq[5]:/.min_vruntime
2265 ± 8% -29.0% 1608 ± 10% sched_debug.cfs_rq[5]:/.tg_load_avg
8064101 ± 5% +170.9% 21848536 ± 12% sched_debug.cfs_rq[6]:/.min_vruntime
12.25 ± 48% +81.6% 22.25 ± 28% sched_debug.cfs_rq[6]:/.nr_spread_over
680647 ± 89% +1373.8% 10031232 ± 26% sched_debug.cfs_rq[6]:/.spread0
2298 ± 8% -29.7% 1615 ± 8% sched_debug.cfs_rq[6]:/.tg_load_avg
94.25 ± 16% -38.2% 58.25 ± 19% sched_debug.cfs_rq[7]:/.load_avg
8303387 ± 6% +218.5% 26442227 ± 14% sched_debug.cfs_rq[7]:/.min_vruntime
40.25 ± 9% -25.5% 30.00 ± 17% sched_debug.cfs_rq[7]:/.runnable_load_avg
919200 ± 58% +1490.1% 14616571 ± 24% sched_debug.cfs_rq[7]:/.spread0
2277 ± 7% -28.1% 1638 ± 12% sched_debug.cfs_rq[7]:/.tg_load_avg
94.50 ± 16% -38.4% 58.25 ± 19% sched_debug.cfs_rq[7]:/.tg_load_avg_contrib
93.50 ± 19% -38.2% 57.75 ± 18% sched_debug.cfs_rq[8]:/.load_avg
8657132 ± 6% +206.7% 26552197 ± 12% sched_debug.cfs_rq[8]:/.min_vruntime
10.00 ± 49% +2720.0% 282.00 ± 56% sched_debug.cfs_rq[8]:/.nr_spread_over
1272282 ± 40% +1057.2% 14722281 ± 21% sched_debug.cfs_rq[8]:/.spread0
2256 ± 8% -25.2% 1688 ± 9% sched_debug.cfs_rq[8]:/.tg_load_avg
88.25 ± 18% -33.7% 58.50 ± 18% sched_debug.cfs_rq[8]:/.tg_load_avg_contrib
89.25 ± 10% -43.4% 50.50 ± 18% sched_debug.cfs_rq[9]:/.load_avg
8573840 ± 11% +212.1% 26757495 ± 13% sched_debug.cfs_rq[9]:/.min_vruntime
13.00 ± 70% +909.6% 131.25 ± 46% sched_debug.cfs_rq[9]:/.nr_spread_over
1188401 ± 86% +1155.7% 14923175 ± 23% sched_debug.cfs_rq[9]:/.spread0
2235 ± 7% -27.0% 1630 ± 9% sched_debug.cfs_rq[9]:/.tg_load_avg
89.25 ± 10% -43.4% 50.50 ± 18% sched_debug.cfs_rq[9]:/.tg_load_avg_contrib
13660 ± 26% +25.6% 17164 ± 7% sched_debug.cpu#0.curr->pid
25.75 ± 28% +564.1% 171.00 ± 31% sched_debug.cpu#0.nr_running
6234824 ± 3% +79.9% 11214928 ± 11% sched_debug.cpu#0.nr_switches
92.25 ± 41% +101.4% 185.75 ± 37% sched_debug.cpu#0.nr_uninterruptible
10264120 ± 2% +48.8% 15270454 ± 8% sched_debug.cpu#0.sched_count
49574 ± 5% -12.4% 43430 ± 6% sched_debug.cpu#0.sched_goidle
5147188 ± 4% +79.7% 9249436 ± 12% sched_debug.cpu#0.ttwu_count
2269312 ± 2% +62.5% 3688685 ± 10% sched_debug.cpu#0.ttwu_local
23.25 ± 28% +589.2% 160.25 ± 36% sched_debug.cpu#1.nr_running
6569750 ± 2% +83.6% 12058832 ± 9% sched_debug.cpu#1.nr_switches
6570052 ± 2% +83.6% 12059572 ± 9% sched_debug.cpu#1.sched_count
4992425 ± 1% +109.0% 10435243 ± 3% sched_debug.cpu#1.ttwu_count
2463897 ± 1% +74.8% 4307284 ± 9% sched_debug.cpu#1.ttwu_local
13.00 ± 44% +303.8% 52.50 ± 42% sched_debug.cpu#10.nr_running
6572956 ± 2% +196.2% 19469907 ± 4% sched_debug.cpu#10.nr_switches
6573272 ± 2% +196.2% 19471457 ± 4% sched_debug.cpu#10.sched_count
5113245 ± 2% +125.5% 11531340 ± 2% sched_debug.cpu#10.ttwu_count
2449382 ± 2% +146.9% 6046615 ± 4% sched_debug.cpu#10.ttwu_local
500000 ± 0% +14.1% 570712 ± 8% sched_debug.cpu#11.max_idle_balance_cost
15.00 ± 46% +246.7% 52.00 ± 42% sched_debug.cpu#11.nr_running
6631320 ± 2% +189.1% 19172684 ± 4% sched_debug.cpu#11.nr_switches
6631668 ± 2% +189.1% 19174234 ± 4% sched_debug.cpu#11.sched_count
5054950 ± 2% +120.6% 11152494 ± 4% sched_debug.cpu#11.ttwu_count
2405487 ± 2% +145.7% 5910910 ± 3% sched_debug.cpu#11.ttwu_local
12.00 ± 59% +791.7% 107.00 ± 75% sched_debug.cpu#12.nr_running
6356857 ± 4% +95.9% 12451675 ± 8% sched_debug.cpu#12.nr_switches
134.25 ± 46% +58.7% 213.00 ± 22% sched_debug.cpu#12.nr_uninterruptible
6357220 ± 4% +95.9% 12452542 ± 8% sched_debug.cpu#12.sched_count
46934 ± 6% -16.9% 38993 ± 10% sched_debug.cpu#12.sched_goidle
5089230 ± 6% +99.1% 10134621 ± 4% sched_debug.cpu#12.ttwu_count
2416053 ± 2% +79.9% 4346652 ± 6% sched_debug.cpu#12.ttwu_local
6657066 ± 5% +86.1% 12387203 ± 9% sched_debug.cpu#13.nr_switches
94.50 ± 71% +109.8% 198.25 ± 19% sched_debug.cpu#13.nr_uninterruptible
6657360 ± 5% +86.1% 12387844 ± 9% sched_debug.cpu#13.sched_count
5089824 ± 1% +103.3% 10347591 ± 6% sched_debug.cpu#13.ttwu_count
2613812 ± 2% +77.8% 4646155 ± 10% sched_debug.cpu#13.ttwu_local
14.25 ± 64% +761.4% 122.75 ± 73% sched_debug.cpu#14.nr_running
7217227 ± 7% +73.5% 12520898 ± 7% sched_debug.cpu#14.nr_switches
-109.00 ±-154% -226.6% 138.00 ± 34% sched_debug.cpu#14.nr_uninterruptible
7217548 ± 7% +73.5% 12521622 ± 7% sched_debug.cpu#14.sched_count
4933024 ± 2% +99.8% 9853790 ± 5% sched_debug.cpu#14.ttwu_count
2627711 ± 3% +76.7% 4643465 ± 5% sched_debug.cpu#14.ttwu_local
11.50 ± 88% +995.7% 126.00 ± 75% sched_debug.cpu#15.nr_running
6705165 ± 4% +91.4% 12831218 ± 9% sched_debug.cpu#15.nr_switches
41.50 ± 82% +256.0% 147.75 ± 27% sched_debug.cpu#15.nr_uninterruptible
6705518 ± 4% +91.4% 12831891 ± 9% sched_debug.cpu#15.sched_count
5124902 ± 2% +102.0% 10351785 ± 4% sched_debug.cpu#15.ttwu_count
2537246 ± 2% +84.4% 4679721 ± 9% sched_debug.cpu#15.ttwu_local
59.75 ± 72% +116.7% 129.50 ± 63% sched_debug.cpu#16.load
12.00 ± 91% +991.7% 131.00 ± 75% sched_debug.cpu#16.nr_running
6807914 ± 3% +88.7% 12847644 ± 6% sched_debug.cpu#16.nr_switches
35.75 ±243% +416.8% 184.75 ± 28% sched_debug.cpu#16.nr_uninterruptible
6808195 ± 3% +88.7% 12848273 ± 6% sched_debug.cpu#16.sched_count
4965300 ± 5% +109.5% 10400978 ± 2% sched_debug.cpu#16.ttwu_count
2587259 ± 3% +84.3% 4769137 ± 4% sched_debug.cpu#16.ttwu_local
7343797 ± 4% +71.0% 12556479 ± 9% sched_debug.cpu#17.nr_switches
-21.50 ±-291% -637.2% 115.50 ± 9% sched_debug.cpu#17.nr_uninterruptible
7344102 ± 4% +71.0% 12557075 ± 9% sched_debug.cpu#17.sched_count
48302 ± 10% -21.2% 38075 ± 9% sched_debug.cpu#17.sched_goidle
4860214 ± 2% +105.2% 9973186 ± 3% sched_debug.cpu#17.ttwu_count
2631813 ± 1% +77.3% 4667413 ± 6% sched_debug.cpu#17.ttwu_local
10.50 ± 70% +361.9% 48.50 ± 34% sched_debug.cpu#18.nr_running
6423142 ± 0% +193.8% 18871804 ± 6% sched_debug.cpu#18.nr_switches
6423521 ± 0% +193.8% 18873844 ± 6% sched_debug.cpu#18.sched_count
4996106 ± 2% +103.7% 10174733 ± 5% sched_debug.cpu#18.ttwu_count
2472857 ± 2% +120.8% 5460849 ± 4% sched_debug.cpu#18.ttwu_local
13.00 ± 54% +267.3% 47.75 ± 47% sched_debug.cpu#19.nr_running
6685332 ± 4% +198.9% 19980393 ± 5% sched_debug.cpu#19.nr_switches
-66.25 ±-66% +202.6% -200.50 ± -2% sched_debug.cpu#19.nr_uninterruptible
6685659 ± 4% +198.9% 19981845 ± 5% sched_debug.cpu#19.sched_count
4916266 ± 4% +151.1% 12346570 ± 5% sched_debug.cpu#19.ttwu_count
2554700 ± 4% +163.4% 6729723 ± 4% sched_debug.cpu#19.ttwu_local
13552 ± 10% +39.4% 18891 ± 15% sched_debug.cpu#2.curr->pid
17.25 ± 67% +784.1% 152.50 ± 42% sched_debug.cpu#2.nr_running
7014128 ± 5% +72.9% 12125114 ± 8% sched_debug.cpu#2.nr_switches
7014454 ± 5% +72.9% 12125842 ± 8% sched_debug.cpu#2.sched_count
4929757 ± 3% +102.3% 9971509 ± 2% sched_debug.cpu#2.ttwu_count
2473376 ± 3% +75.9% 4350629 ± 7% sched_debug.cpu#2.ttwu_local
9.50 ± 58% +365.8% 44.25 ± 48% sched_debug.cpu#20.nr_running
7094564 ± 7% +180.5% 19900502 ± 5% sched_debug.cpu#20.nr_switches
-22.50 ±-193% +866.7% -217.50 ±-35% sched_debug.cpu#20.nr_uninterruptible
7094941 ± 7% +180.5% 19901947 ± 5% sched_debug.cpu#20.sched_count
4847005 ± 2% +150.6% 12148790 ± 4% sched_debug.cpu#20.ttwu_count
2596984 ± 4% +162.1% 6806600 ± 5% sched_debug.cpu#20.ttwu_local
8.50 ± 50% +400.0% 42.50 ± 52% sched_debug.cpu#21.nr_running
6734635 ± 6% +197.1% 20005787 ± 4% sched_debug.cpu#21.nr_switches
6734978 ± 6% +197.1% 20007174 ± 4% sched_debug.cpu#21.sched_count
4954934 ± 2% +152.5% 12510106 ± 7% sched_debug.cpu#21.ttwu_count
2548363 ± 3% +169.5% 6867282 ± 4% sched_debug.cpu#21.ttwu_local
10.00 ± 53% +365.0% 46.50 ± 40% sched_debug.cpu#22.nr_running
6793937 ± 1% +192.2% 19850213 ± 4% sched_debug.cpu#22.nr_switches
6794279 ± 1% +192.2% 19851667 ± 4% sched_debug.cpu#22.sched_count
4999277 ± 2% +147.2% 12359089 ± 5% sched_debug.cpu#22.ttwu_count
2575092 ± 1% +159.1% 6671652 ± 5% sched_debug.cpu#22.ttwu_local
9.50 ± 47% +355.3% 43.25 ± 39% sched_debug.cpu#23.nr_running
6760476 ± 3% +194.8% 19928574 ± 4% sched_debug.cpu#23.nr_switches
6760836 ± 3% +194.8% 19929942 ± 4% sched_debug.cpu#23.sched_count
5057550 ± 0% +142.4% 12258087 ± 4% sched_debug.cpu#23.ttwu_count
2590172 ± 1% +159.7% 6726524 ± 4% sched_debug.cpu#23.ttwu_local
17.00 ± 59% +764.7% 147.00 ± 46% sched_debug.cpu#3.nr_running
6553148 ± 3% +89.1% 12389631 ± 9% sched_debug.cpu#3.nr_switches
-2.50 ±-3542% -7430.0% 183.25 ± 22% sched_debug.cpu#3.nr_uninterruptible
6553515 ± 3% +89.1% 12390332 ± 9% sched_debug.cpu#3.sched_count
5061548 ± 3% +105.1% 10380529 ± 5% sched_debug.cpu#3.ttwu_count
2374084 ± 3% +82.0% 4321429 ± 9% sched_debug.cpu#3.ttwu_local
822869 ± 10% -12.4% 720748 ± 11% sched_debug.cpu#4.avg_idle
14.50 ± 66% +886.2% 143.00 ± 52% sched_debug.cpu#4.nr_running
6627944 ± 4% +85.8% 12313003 ± 8% sched_debug.cpu#4.nr_switches
6628260 ± 4% +85.8% 12313670 ± 8% sched_debug.cpu#4.sched_count
5029009 ± 4% +105.9% 10353607 ± 4% sched_debug.cpu#4.ttwu_count
2417262 ± 3% +79.5% 4339802 ± 7% sched_debug.cpu#4.ttwu_local
41.25 ± 13% +72.1% 71.00 ± 59% sched_debug.cpu#5.load
17.50 ± 68% +691.4% 138.50 ± 47% sched_debug.cpu#5.nr_running
6997533 ± 3% +73.8% 12164617 ± 8% sched_debug.cpu#5.nr_switches
78.00 ± 48% +184.9% 222.25 ± 21% sched_debug.cpu#5.nr_uninterruptible
6997845 ± 3% +73.8% 12165273 ± 8% sched_debug.cpu#5.sched_count
4969036 ± 2% +94.9% 9684195 ± 1% sched_debug.cpu#5.ttwu_count
2483310 ± 3% +71.0% 4247364 ± 5% sched_debug.cpu#5.ttwu_local
831260 ± 12% -17.7% 683789 ± 9% sched_debug.cpu#6.avg_idle
16.75 ± 36% +334.3% 72.75 ± 45% sched_debug.cpu#6.nr_running
6163396 ± 2% +169.8% 16626723 ± 4% sched_debug.cpu#6.nr_switches
54.50 ±162% -311.5% -115.25 ±-34% sched_debug.cpu#6.nr_uninterruptible
6163891 ± 2% +169.8% 16629352 ± 4% sched_debug.cpu#6.sched_count
5128410 ± 3% +77.0% 9074888 ± 4% sched_debug.cpu#6.ttwu_count
2309853 ± 3% +87.4% 4328580 ± 2% sched_debug.cpu#6.ttwu_local
40.25 ± 9% -26.1% 29.75 ± 18% sched_debug.cpu#7.cpu_load[0]
40.25 ± 9% -28.6% 28.75 ± 22% sched_debug.cpu#7.cpu_load[1]
40.00 ± 8% -28.8% 28.50 ± 24% sched_debug.cpu#7.cpu_load[2]
39.75 ± 8% -28.9% 28.25 ± 25% sched_debug.cpu#7.cpu_load[3]
39.75 ± 8% -28.3% 28.50 ± 24% sched_debug.cpu#7.cpu_load[4]
19.25 ± 41% +231.2% 63.75 ± 48% sched_debug.cpu#7.nr_running
6566332 ± 5% +186.9% 18836315 ± 5% sched_debug.cpu#7.nr_switches
15.50 ±158% -1111.3% -156.75 ±-42% sched_debug.cpu#7.nr_uninterruptible
6566642 ± 5% +186.9% 18837866 ± 5% sched_debug.cpu#7.sched_count
5014516 ± 5% +127.1% 11386703 ± 3% sched_debug.cpu#7.ttwu_count
2467246 ± 2% +138.0% 5872007 ± 3% sched_debug.cpu#7.ttwu_local
6802328 ± 5% +181.3% 19136656 ± 4% sched_debug.cpu#8.nr_switches
-76.25 ±-113% +200.0% -228.75 ±-38% sched_debug.cpu#8.nr_uninterruptible
6802722 ± 5% +181.3% 19138285 ± 4% sched_debug.cpu#8.sched_count
4959323 ± 2% +128.0% 11305581 ± 3% sched_debug.cpu#8.ttwu_count
2437353 ± 1% +146.6% 6011685 ± 3% sched_debug.cpu#8.ttwu_local
500000 ± 0% +14.7% 573462 ± 10% sched_debug.cpu#9.max_idle_balance_cost
10.75 ± 56% +409.3% 54.75 ± 57% sched_debug.cpu#9.nr_running
6544270 ± 6% +188.6% 18884484 ± 4% sched_debug.cpu#9.nr_switches
55.50 ±103% -369.8% -149.75 ±-45% sched_debug.cpu#9.nr_uninterruptible
6544607 ± 6% +188.6% 18885996 ± 4% sched_debug.cpu#9.sched_count
5030467 ± 4% +122.3% 11181045 ± 5% sched_debug.cpu#9.ttwu_count
2417027 ± 4% +138.4% 5762968 ± 4% sched_debug.cpu#9.ttwu_local
0.40 ±172% -99.8% 0.00 ± 85% sched_debug.rt_rq[16]:/.rt_time
0.00 ± 49% +73814.5% 0.80 ±100% sched_debug.rt_rq[9]:/.rt_time
lkp-ws02: Westmere-EP
Memory: 16G
hackbench.time.involuntary_context_switches
3e+08 ++------------------------O---------------------------------------+
O O O O O |
2.5e+08 ++ O O O O O
| O O O O |
| O O O O |
2e+08 ++ O |
| O |
1.5e+08 ++ |
| |
1e+08 ++ |
| .*...*..*..*...*..* |
*..*...*..*..*...*.. .*.. ..*..*. |
5e+07 ++ *. *. |
| |
0 ++----------------------------------------------------------------+
vmstat.system.in
300000 ++-----------------------------------------------------------------+
| O O O |
O O O O O |
250000 ++ O O O O
| O |
| O O O O O O |
200000 ++ O O |
| |
150000 ++ |
| |
| |
100000 ++ |
| .*... .*..*...*..*..*...* |
*..*...*..*. *.. .*... .*...*. |
50000 ++------------------*------*---------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
5 years, 3 months
[lkp] [drm/i915] 437b15b801: [drm:gen8_irq_handler [i915]] *ERROR* The master control interrupt lied (SDE)!
by kernel test robot
FYI, we noticed the below changes on
git://anongit.freedesktop.org/drm-intel for-linux-next
commit 437b15b8017e0d946453c10794b0c5d4591cf180 ("drm/i915: use pch backlight override on hsw too")
<4>[ 25.650730] ------------[ cut here ]------------
<4>[ 25.650752] WARNING: CPU: 2 PID: 80 at drivers/gpu/drm/i915/intel_display.c:9234 hsw_enable_pc8+0x60b/0x740 [i915]()
<4>[ 25.650753] CPU PWM1 enabled
<4>[ 25.650773] Modules linked in: rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver snd_hda_codec_hdmi x86_pkg_temp_thermal coretemp kvm_intel snd_hda_intel kvm crct10dif_pclmul snd_hda_codec snd_hda_core crc32_pclmul snd_hwdep crc32c_intel i915 ghash_clmulni_intel snd_pcm drm_kms_helper snd_timer syscopyarea aesni_intel sysfillrect sdhci_acpi sysimgblt lrw fb_sys_fops gf128mul glue_helper ablk_helper cryptd ppdev drm shpchp snd microcode serio_raw pcspkr soundcore i2c_i801 winbond_cir rc_core i2c_hid sdhci video dw_dmac dw_dmac_core mmc_core parport_pc parport i2c_designware_platform i2c_designware_core acpi_pad spi_pxa2xx_platform
<4>[ 25.650775] CPU: 2 PID: 80 Comm: kworker/u16:1 Not tainted 4.2.0-rc8-01085-g437b15b #1
<4>[ 25.650776] Hardware name: Intel Corporation Broadwell Client platform/WhiteTip Mountain 1, BIOS BDW-E1R1.86C.0120.R00.1504020241 04/02/2015
<4>[ 25.650779] Workqueue: events_unbound async_run_entry_fn
<4>[ 25.650781] ffffffffa0433070 ffff880032b63be8 ffffffff8189e2e9 ffffffff81cf4238
<4>[ 25.650783] ffff880032b63c38 ffff880032b63c28 ffffffff8107348a 00000000fffbca4d
<4>[ 25.650784] ffff88006bc10000 ffff88006d184b70 ffff88006d184b80 ffff88006d184800
<4>[ 25.650784] Call Trace:
<4>[ 25.650788] [<ffffffff8189e2e9>] dump_stack+0x4c/0x65
<4>[ 25.650791] [<ffffffff8107348a>] warn_slowpath_common+0x8a/0xc0
<4>[ 25.650792] [<ffffffff81073506>] warn_slowpath_fmt+0x46/0x50
<4>[ 25.650808] [<ffffffffa03d64eb>] hsw_enable_pc8+0x60b/0x740 [i915]
<4>[ 25.650814] [<ffffffffa035a3cb>] intel_suspend_complete+0x65b/0x6e0 [i915]
<4>[ 25.650819] [<ffffffffa035a472>] i915_drm_suspend_late+0x22/0x80 [i915]
<4>[ 25.650825] [<ffffffffa035a5c0>] ? i915_pm_poweroff_late+0x30/0x30 [i915]
<4>[ 25.650831] [<ffffffffa035a5e9>] i915_pm_suspend_late+0x29/0x30 [i915]
<4>[ 25.650833] [<ffffffff81551c1c>] dpm_run_callback+0x4c/0x120
<4>[ 25.650835] [<ffffffff815524c9>] __device_suspend_late+0xa9/0x180
<4>[ 25.650837] [<ffffffff815525bf>] async_suspend_late+0x1f/0xa0
<4>[ 25.650838] [<ffffffff810944ca>] async_run_entry_fn+0x4a/0x140
<4>[ 25.650841] [<ffffffff8108b6e7>] process_one_work+0x157/0x420
<4>[ 25.650843] [<ffffffff8108c1c9>] worker_thread+0x69/0x4a0
<4>[ 25.650844] [<ffffffff8108c160>] ? rescuer_thread+0x380/0x380
<4>[ 25.650846] [<ffffffff8108c160>] ? rescuer_thread+0x380/0x380
<4>[ 25.650847] [<ffffffff8109197f>] kthread+0xef/0x110
<4>[ 25.650849] [<ffffffff81091890>] ? kthread_create_on_node+0x180/0x180
<4>[ 25.650851] [<ffffffff818a621f>] ret_from_fork+0x3f/0x70
<4>[ 25.650852] [<ffffffff81091890>] ? kthread_create_on_node+0x180/0x180
<4>[ 25.650853] ---[ end trace 597b97aad370829b ]---
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Ying Huang
[lkp] [nfsd] 4aac1bf05b: -2.9% fsmark.files_per_sec
by kernel test robot
FYI, we noticed the below changes on
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/iterations/nr_threads/disk/fs/fs2/filesize/test_size/sync_method/nr_directories/nr_files_per_directory:
lkp-ne04/fsmark/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/1x/32t/1HDD/xfs/nfsv4/5K/400M/fsyncBeforeClose/16d/256fpd
commit:
cd2d35ff27c4fda9ba73b0aa84313e8e20ce4d2c
4aac1bf05b053a201a4b392dd9a684fb2b7e6103
cd2d35ff27c4fda9 4aac1bf05b053a201a4b392dd9
---------------- --------------------------
%stddev %change %stddev
\ | \
14415356 ± 0% +2.6% 14788625 ± 1% fsmark.app_overhead
441.60 ± 0% -2.9% 428.80 ± 0% fsmark.files_per_sec
185.78 ± 0% +2.9% 191.26 ± 0% fsmark.time.elapsed_time
185.78 ± 0% +2.9% 191.26 ± 0% fsmark.time.elapsed_time.max
97472 ± 0% -2.8% 94713 ± 0% fsmark.time.involuntary_context_switches
3077117 ± 95% +251.2% 10805440 ±112% latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
12999 ± 0% +32.9% 17276 ± 0% proc-vmstat.nr_slab_unreclaimable
64568 ± 4% -14.8% 55032 ± 0% softirqs.RCU
51999 ± 0% +32.9% 69111 ± 0% meminfo.SUnreclaim
159615 ± 0% +13.5% 181115 ± 0% meminfo.Slab
3.75 ± 0% +3.3% 3.88 ± 1% turbostat.%Busy
77.25 ± 0% +6.5% 82.25 ± 0% turbostat.Avg_MHz
30813025 ± 2% -14.5% 26338527 ± 9% cpuidle.C1E-NHM.time
164180 ± 0% -28.9% 116758 ± 7% cpuidle.C1E-NHM.usage
1738 ± 2% -81.2% 326.75 ± 4% cpuidle.POLL.usage
29979 ± 2% +44.3% 43273 ± 4% numa-meminfo.node0.SUnreclaim
94889 ± 0% +19.8% 113668 ± 2% numa-meminfo.node0.Slab
22033 ± 3% +17.3% 25835 ± 7% numa-meminfo.node1.SUnreclaim
7404 ± 1% -2.7% 7206 ± 0% vmstat.io.bo
27121 ± 0% -4.8% 25817 ± 0% vmstat.system.cs
3025 ± 0% -13.5% 2615 ± 0% vmstat.system.in
50126 ± 1% +11.5% 55893 ± 1% numa-vmstat.node0.nr_dirtied
7494 ± 2% +44.3% 10818 ± 4% numa-vmstat.node0.nr_slab_unreclaimable
50088 ± 1% +11.6% 55900 ± 1% numa-vmstat.node0.nr_written
5507 ± 3% +17.3% 6458 ± 7% numa-vmstat.node1.nr_slab_unreclaimable
7164 ± 2% +275.2% 26885 ± 0% slabinfo.kmalloc-16.active_objs
7164 ± 2% +275.3% 26885 ± 0% slabinfo.kmalloc-16.num_objs
7367 ± 1% +787.7% 65401 ± 0% slabinfo.kmalloc-192.active_objs
179.00 ± 1% +771.8% 1560 ± 0% slabinfo.kmalloc-192.active_slabs
7537 ± 1% +770.0% 65572 ± 0% slabinfo.kmalloc-192.num_objs
179.00 ± 1% +771.8% 1560 ± 0% slabinfo.kmalloc-192.num_slabs
3631 ± 7% +522.3% 22600 ± 0% slabinfo.kmalloc-256.active_objs
145.50 ± 4% +398.1% 724.75 ± 0% slabinfo.kmalloc-256.active_slabs
4667 ± 4% +397.3% 23210 ± 0% slabinfo.kmalloc-256.num_objs
145.50 ± 4% +398.1% 724.75 ± 0% slabinfo.kmalloc-256.num_slabs
17448 ± 2% +75.6% 30643 ± 0% slabinfo.kmalloc-32.active_objs
137.50 ± 2% +76.5% 242.75 ± 0% slabinfo.kmalloc-32.active_slabs
17651 ± 2% +76.4% 31139 ± 0% slabinfo.kmalloc-32.num_objs
137.50 ± 2% +76.5% 242.75 ± 0% slabinfo.kmalloc-32.num_slabs
2387 ± 3% -10.7% 2132 ± 8% slabinfo.kmalloc-512.active_objs
491.25 ± 3% +33.9% 658.00 ± 11% slabinfo.numa_policy.active_objs
491.25 ± 3% +33.9% 658.00 ± 11% slabinfo.numa_policy.num_objs
2128 ± 9% +59.3% 3391 ± 34% sched_debug.cfs_rq[10]:/.exec_clock
18088 ± 17% +47.0% 26582 ± 29% sched_debug.cfs_rq[10]:/.min_vruntime
4326 ± 11% -22.2% 3368 ± 18% sched_debug.cfs_rq[5]:/.exec_clock
1459 ± 1% -10.8% 1302 ± 3% sched_debug.cpu#0.nr_uninterruptible
122217 ± 7% -18.6% 99447 ± 2% sched_debug.cpu#1.nr_switches
122732 ± 8% -18.5% 99972 ± 2% sched_debug.cpu#1.sched_count
45603 ± 10% -20.1% 36442 ± 2% sched_debug.cpu#1.sched_goidle
27004 ± 3% -18.9% 21895 ± 5% sched_debug.cpu#1.ttwu_local
15469 ± 5% +17.2% 18132 ± 6% sched_debug.cpu#10.nr_load_updates
78564 ± 8% +26.6% 99492 ± 5% sched_debug.cpu#10.nr_switches
78605 ± 8% +26.7% 99557 ± 4% sched_debug.cpu#10.sched_count
27470 ± 9% +24.7% 34268 ± 7% sched_debug.cpu#10.sched_goidle
38215 ± 1% +37.4% 52499 ± 13% sched_debug.cpu#10.ttwu_count
14816 ± 5% +22.8% 18196 ± 2% sched_debug.cpu#10.ttwu_local
19690 ± 21% -29.9% 13802 ± 15% sched_debug.cpu#11.nr_switches
54.25 ± 2% -47.5% 28.50 ± 25% sched_debug.cpu#11.nr_uninterruptible
19721 ± 21% -29.9% 13828 ± 15% sched_debug.cpu#11.sched_count
14545 ± 2% +15.4% 16779 ± 4% sched_debug.cpu#12.nr_load_updates
72087 ± 11% +27.9% 92204 ± 7% sched_debug.cpu#12.nr_switches
72126 ± 11% +28.1% 92422 ± 7% sched_debug.cpu#12.sched_count
25418 ± 13% +24.4% 31626 ± 7% sched_debug.cpu#12.sched_goidle
33399 ± 15% +38.5% 46255 ± 13% sched_debug.cpu#12.ttwu_count
51.25 ± 10% -39.0% 31.25 ± 21% sched_debug.cpu#13.nr_uninterruptible
2593 ± 11% -21.8% 2028 ± 10% sched_debug.cpu#13.ttwu_local
71266 ± 3% +20.1% 85620 ± 5% sched_debug.cpu#14.nr_switches
71306 ± 3% +20.4% 85827 ± 5% sched_debug.cpu#14.sched_count
24634 ± 3% +18.8% 29259 ± 4% sched_debug.cpu#14.sched_goidle
34625 ± 11% +19.9% 41506 ± 11% sched_debug.cpu#14.ttwu_count
13866 ± 3% +20.6% 16726 ± 5% sched_debug.cpu#14.ttwu_local
12683 ± 4% -14.7% 10817 ± 2% sched_debug.cpu#15.nr_load_updates
49.75 ± 6% -46.2% 26.75 ± 28% sched_debug.cpu#15.nr_uninterruptible
3374 ± 12% -28.1% 2427 ± 18% sched_debug.cpu#15.ttwu_local
186563 ± 5% -12.1% 163975 ± 4% sched_debug.cpu#2.nr_switches
-1324 ± -2% -16.0% -1111 ± -1% sched_debug.cpu#2.nr_uninterruptible
187499 ± 5% -11.2% 166447 ± 4% sched_debug.cpu#2.sched_count
67465 ± 7% -13.6% 58308 ± 6% sched_debug.cpu#2.sched_goidle
36525 ± 4% -14.6% 31193 ± 1% sched_debug.cpu#2.ttwu_local
23697 ± 5% -13.2% 20572 ± 9% sched_debug.cpu#3.nr_load_updates
128070 ± 1% -22.9% 98687 ± 5% sched_debug.cpu#3.nr_switches
129859 ± 2% -23.5% 99357 ± 4% sched_debug.cpu#3.sched_count
48833 ± 1% -23.7% 37243 ± 6% sched_debug.cpu#3.sched_goidle
61622 ± 3% -24.2% 46694 ± 5% sched_debug.cpu#3.ttwu_count
27510 ± 7% -20.6% 21840 ± 8% sched_debug.cpu#3.ttwu_local
81675 ± 7% -13.6% 70536 ± 1% sched_debug.cpu#4.ttwu_count
34076 ± 3% -12.9% 29683 ± 1% sched_debug.cpu#4.ttwu_local
124470 ± 4% -14.1% 106865 ± 8% sched_debug.cpu#5.sched_count
62502 ± 3% -20.8% 49519 ± 9% sched_debug.cpu#5.ttwu_count
26562 ± 0% -17.7% 21853 ± 10% sched_debug.cpu#5.ttwu_local
181661 ± 10% -15.1% 154229 ± 6% sched_debug.cpu#6.nr_switches
181937 ± 10% -13.5% 157379 ± 6% sched_debug.cpu#6.sched_count
66672 ± 14% -16.6% 55632 ± 9% sched_debug.cpu#6.sched_goidle
78296 ± 2% -10.2% 70346 ± 6% sched_debug.cpu#6.ttwu_count
33536 ± 1% -14.4% 28696 ± 1% sched_debug.cpu#6.ttwu_local
131463 ± 6% -17.0% 109140 ± 4% sched_debug.cpu#7.nr_switches
-32.25 ±-58% -100.8% 0.25 ±9467% sched_debug.cpu#7.nr_uninterruptible
133606 ± 7% -17.2% 110671 ± 4% sched_debug.cpu#7.sched_count
50986 ± 7% -16.6% 42525 ± 6% sched_debug.cpu#7.sched_goidle
61388 ± 2% -19.8% 49213 ± 5% sched_debug.cpu#7.ttwu_count
26637 ± 2% -21.8% 20837 ± 3% sched_debug.cpu#7.ttwu_local
12312 ± 3% +9.4% 13474 ± 4% sched_debug.cpu#8.nr_load_updates
53.50 ± 6% -44.9% 29.50 ± 27% sched_debug.cpu#9.nr_uninterruptible
2724 ± 15% -23.7% 2078 ± 26% sched_debug.cpu#9.ttwu_local
lkp-ne04: Nehalem-EP
Memory: 12G
cpuidle.POLL.usage
1800 ++----------*-----------------------------*-----*-----*--------*--*--+
*..*..*..*. .*..*..*..*..*..*..*..*. *. *. *..*. *
1600 ++ *. |
1400 ++ |
| |
1200 ++ |
| |
1000 ++ |
| |
800 ++ |
600 ++ |
| |
400 O+ |
| O O O O O O O O O O O O O O O O O O O O O O |
200 ++-------------------------------------------------------------------+
cpuidle.C1E-NHM.usage
190000 ++-----------------------------------------------------*-----------+
| : : |
180000 ++ : : |
170000 ++ .*. : : |
| .*.. .*.. .*. .*.. .*..*..*..*. * *..*..*..*
160000 *+ *. *. *..*..*. *..*..*. |
150000 ++ |
| |
140000 ++ |
130000 ++ O |
| |
120000 ++ O O O O |
110000 ++ O O O O O O O O O |
O O O O O O O O O |
100000 ++-----------------------------------------------------------------+
fsmark.files_per_sec
446 ++--------------------------------------------------------------------+
444 ++ *.. |
| .. |
442 *+.*..*..*..*..*..*.. *..*..*..*...*..*..*..*..*..*..* *..*..*..*
440 ++ .. |
438 ++ * |
436 ++ |
| |
434 ++ |
432 ++ |
430 ++ |
428 ++ O O O O O O O O O O O O O O O O O O |
| |
426 O+ O O O O |
424 ++--------------------------------------------------------------------+
fsmark.time.elapsed_time
193 ++--------------------------------------------------------------------+
O O O |
192 ++ O O O O O O O O O O |
191 ++ O O O O O O O O |
| O O |
190 ++ |
189 ++ |
| |
188 ++ |
187 ++ |
| .*.. |
186 *+. .*..*..*..*..*. *.. .*.. ..*.. .*.. .*.. .*.. *..*..*..*
185 ++ *. *. *. *. *. *. .. |
| * |
184 ++--------------------------------------------------------------------+
fsmark.time.elapsed_time.max
193 ++--------------------------------------------------------------------+
O O O |
192 ++ O O O O O O O O O O |
191 ++ O O O O O O O O |
| O O |
190 ++ |
189 ++ |
| |
188 ++ |
187 ++ |
| .*.. |
186 *+. .*..*..*..*..*. *.. .*.. ..*.. .*.. .*.. .*.. *..*..*..*
185 ++ *. *. *. *. *. *. .. |
| * |
184 ++--------------------------------------------------------------------+
fsmark.time.involuntary_context_switches
98500 ++------------------------------------------------------------------+
98000 ++ .* |
| *.. .*. + *.. .*..*..* *
97500 ++. *..*..*. + .. *..*.*..*..*..*..*..*..*. + +|
97000 *+ *..* + + |
| * |
96500 ++ |
96000 ++ |
95500 ++ |
| |
95000 ++ O O O O O O O |
94500 ++ O O O O O O O O O |
O O O O O |
94000 ++ O O |
93500 ++------------------------------------------------------------------+
vmstat.system.in
3100 ++-------------------------------------------------------------------+
3050 *+. .*.. .*.. .*..*.. .*..*.. |
| .*..*.. .*..*. *. *. *.. .*..*. .*..*..*..*
3000 ++ *. *. *. *. |
2950 ++ |
2900 ++ |
2850 ++ |
| |
2800 ++ |
2750 ++ |
2700 ++ |
2650 ++ O O |
| O O O O O O O O O O O |
2600 O+ O O O O O O O O O |
2550 ++-------------------------------------------------------------------+
numa-vmstat.node0.nr_slab_unreclaimable
12000 ++------------------------------------------------------------------+
| O |
11000 ++ O O O O O O |
O O O O O |
| O O O O O O O O |
10000 ++ O O O |
| |
9000 ++ |
| |
8000 ++ *.. |
| .*..*.. + .*.. .*
| .*.. .*.. .*.*. + *..*..*. *..*. |
7000 *+.*. .*. .*..*. * |
| *..*. *..*. |
6000 ++------------------------------------------------------------------+
numa-vmstat.node0.nr_dirtied
58000 ++------------------------------------------------------------------+
57000 ++ O |
O O O O |
56000 ++ O O O O O O |
55000 ++ O O O O O O O O O O |
| O O |
54000 ++ |
53000 ++ |
52000 ++ |
| *.. * |
51000 +++ .*.. .*.. .. + .*.. *..|
50000 ++ *. *..*..*.. .* * + .*.. *. *.. .. *
* *.. .*..*. *. .. * |
49000 ++ *. * |
48000 ++------------------------------------------------------------------+
numa-vmstat.node0.nr_written
58000 ++------------------------------------------------------------------+
57000 ++ O |
O O O O O |
56000 ++ O O O O O O |
55000 ++ O O O O O O O O O |
| O O |
54000 ++ |
53000 ++ |
52000 ++ |
| * |
51000 ++ *.. .*.. .*.. .. + * .*.. *..|
50000 ++. *. *..*..*.. .* * + .. + *. *.. .. *
* *.. .*..*. * + .. * |
49000 ++ *. * |
48000 ++------------------------------------------------------------------+
numa-meminfo.node0.SUnreclaim
50000 ++------------------------------------------------------------------+
| |
| O |
45000 ++ O O O O O O |
O O O O O |
| O O O O O O O O |
40000 ++ O O O |
| |
35000 ++ |
| |
| *.. *.. |
30000 ++ .. *.. .. *.. .*.. .*
*..*..*.. .*.. .*..*.* * *..*. *..*. |
| *..*..*. .*. |
25000 ++-------------------*--*-------------------------------------------+
proc-vmstat.nr_slab_unreclaimable
17500 ++-O--O-----O--O-------------------O--O--------O-----O-----O-----O--+
17000 O+ O O O O O O O O O O O O |
| |
16500 ++ |
16000 ++ |
| |
15500 ++ |
15000 ++ |
14500 ++ |
| |
14000 ++ |
13500 ++ |
| |
13000 *+.*..*..*..*..*..*..*..*..*..*..*.*..*..*..*..*..*..*..*..*..*..*..*
12500 ++------------------------------------------------------------------+
meminfo.Slab
185000 ++-----------------------------------------------------------------+
| O O O O O |
180000 O+ O O O O O O O O O O O O O O O O O |
| |
| |
175000 ++ |
| |
170000 ++ |
| |
165000 ++ |
| |
| |
160000 *+.*..*..*..*..*.*..*.. .*..*..*..*..*.. .*.. .*..*..*..*..*..*
| *..*. *. * |
155000 ++-----------------------------------------------------------------+
meminfo.SUnreclaim
70000 ++-O--O-----O--O-------------------O--O--------O-----O-----O-----O--+
68000 O+ O O O O O O O O O O O O |
| |
66000 ++ |
64000 ++ |
| |
62000 ++ |
60000 ++ |
58000 ++ |
| |
56000 ++ |
54000 ++ |
| |
52000 *+.*..*..*..*..*..*..*..*..*..*..*.*..*..*..*..*..*..*..*..*..*..*..*
50000 ++------------------------------------------------------------------+
slabinfo.kmalloc-256.active_objs
25000 ++------------------------------------------------------------------+
| O O O O O O O O O O O O O O O O O O O O O |
O O |
20000 ++ |
| |
| |
15000 ++ |
| |
10000 ++ |
| |
| |
5000 ++ |
*..*..*..*..*..*..*..*..*..*..*..*.*..*..*..*..*..*..*..*..*..*..*..*
| |
0 ++------------------------------------------------------------------+
slabinfo.kmalloc-256.num_objs
24000 ++-O--O-----O--O--------O-----O--O-O--O--O--O--O--O--O-----O-----O--+
22000 O+ O O O O O O |
| |
20000 ++ |
18000 ++ |
| |
16000 ++ |
14000 ++ |
12000 ++ |
| |
10000 ++ |
8000 ++ |
| |
6000 ++ .*.. .*..*..*.. .*.. .*..|
4000 *+-*--*--*--*--*--*--*--*--*--*--*-*-----*-----------*--*-----*-----*
slabinfo.kmalloc-256.active_slabs
800 ++--------------------------------------------------------------------+
O O O O O O O O O O O O O O O O O O O |
700 ++ O O O O |
| |
600 ++ |
| |
500 ++ |
| |
400 ++ |
| |
300 ++ |
| |
200 ++ |
*..*..*..*..*..*..*..*..*..*..*..*...*..*..*..*..*..*..*..*..*..*..*..*
100 ++--------------------------------------------------------------------+
slabinfo.kmalloc-256.num_slabs
800 ++--------------------------------------------------------------------+
O O O O O O O O O O O O O O O O O O O |
700 ++ O O O O |
| |
600 ++ |
| |
500 ++ |
| |
400 ++ |
| |
300 ++ |
| |
200 ++ |
*..*..*..*..*..*..*..*..*..*..*..*...*..*..*..*..*..*..*..*..*..*..*..*
100 ++--------------------------------------------------------------------+
slabinfo.kmalloc-192.active_objs
70000 ++------------------------------------------------------------------+
O O O O O O O O O O O O O O O O O O O O O O O |
60000 ++ |
| |
50000 ++ |
| |
40000 ++ |
| |
30000 ++ |
| |
20000 ++ |
| |
10000 *+. .*.. .*..|
| *..*..*..*..*..*..*..*..*..*..*.*..*..*..*..*. *..*..*..*. *
0 ++------------------------------------------------------------------+
slabinfo.kmalloc-192.num_objs
70000 ++------------------------------------------------------------------+
O O O O O O O O O O O O O O O O O O O O O O O |
60000 ++ |
| |
50000 ++ |
| |
40000 ++ |
| |
30000 ++ |
| |
20000 ++ |
| |
10000 *+. .*.. .*..*..*. .*.. .*.. .*.. .*.. .*..*
| *. *..*..*..*..*..*. *. *. *. *..*. *. |
0 ++------------------------------------------------------------------+
slabinfo.kmalloc-192.active_slabs
1600 O+-O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--+
| |
1400 ++ |
1200 ++ |
| |
1000 ++ |
| |
800 ++ |
| |
600 ++ |
400 ++ |
| |
200 *+.*..*..*..*..*..*..*..*..*..*..*..*..*..*..*..*..*..*..*..*..*..*..*
| |
0 ++-------------------------------------------------------------------+
slabinfo.kmalloc-192.num_slabs
1600 O+-O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--+
| |
1400 ++ |
1200 ++ |
| |
1000 ++ |
| |
800 ++ |
| |
600 ++ |
400 ++ |
| |
200 *+.*..*..*..*..*..*..*..*..*..*..*..*..*..*..*..*..*..*..*..*..*..*..*
| |
0 ++-------------------------------------------------------------------+
slabinfo.kmalloc-32.active_objs
55000 ++------------------------------------------------------------------+
| O |
50000 O+ O O O |
45000 ++ |
| |
40000 ++ |
| |
35000 ++ |
| O O O O O O O O O |
30000 ++ O O O O O O O O O |
25000 ++ |
| |
20000 ++ |
*..*..*..*.. .*..*..*..*..*..*..*.*..*..*..*.. .*..*..*..*..*..*..*
15000 ++----------*----------------------------------*--------------------+
slabinfo.kmalloc-32.num_objs
55000 ++------------------------------------------------------------------+
O O O O |
50000 ++ O |
45000 ++ |
| |
40000 ++ |
| |
35000 ++ |
| O O O O O O O O O O O O O O O O |
30000 ++ O O |
25000 ++ |
| |
20000 ++ |
*..*..*..*.. .*..*..*..*..*..*..*.*..*..*..*.. .*..*..*..*..*..*..*
15000 ++----------*----------------------------------*--------------------+
slabinfo.kmalloc-32.active_slabs
400 O+-O--O--O--O---------------------------------------------------------+
| |
350 ++ |
| |
| |
300 ++ |
| |
250 ++ O O O O O O O O O O O O O O O O |
| O O |
200 ++ |
| |
| |
150 *+.*..*..*.. .*..*..*.. .*..*..*...*..*.. .*.. .*..*.. .*..*..*..*
| *. *. *. *. *. |
100 ++--------------------------------------------------------------------+
slabinfo.kmalloc-32.num_slabs
400 O+-O--O--O--O---------------------------------------------------------+
| |
350 ++ |
| |
| |
300 ++ |
| |
250 ++ O O O O O O O O O O O O O O O O |
| O O |
200 ++ |
| |
| |
150 *+.*..*..*.. .*..*..*.. .*..*..*...*..*.. .*.. .*..*.. .*..*..*..*
| *. *. *. *. *. |
100 ++--------------------------------------------------------------------+
kmsg.usb_usb7:can_t_set_config___error
1 ++--------------------------------------------------------------------*
| |
| :|
0.8 ++ :|
| :|
| :|
0.6 ++ : |
| : |
0.4 ++ : |
| : |
| : |
0.2 ++ : |
| : |
| : |
0 *+--*---*---*---*----*---*---*---*---*---*---*---*----*---*---*---*---+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
[lkp] [sched/fair] 482eaa50ff: INFO: suspicious RCU usage. ]
by kernel test robot
FYI, we noticed the below changes on
git://internal_merge_and_test_tree revert-482eaa50ff81046e1e9f95af94176953d0743ec9-482eaa50ff81046e1e9f95af94176953d0743ec9
commit 482eaa50ff81046e1e9f95af94176953d0743ec9 ("sched/fair: Skip wake_affine() for core siblings")
+-------------------------------------------------------+----------+------------+
| | v4.3-rc3 | 482eaa50ff |
+-------------------------------------------------------+----------+------------+
| boot_successes | 54 | 0 |
| boot_failures | 10 | 13 |
| IP-Config:Auto-configuration_of_network_failed | 10 | |
| INFO:suspicious_RCU_usage | 0 | 13 |
| BUG:scheduling_while_atomic | 0 | 13 |
| INFO:lockdep_is_turned_off | 0 | 13 |
| kernel_BUG_at_kernel/sched/core.c | 0 | 13 |
| invalid_opcode:#[##]SMP_DEBUG_PAGEALLOC | 0 | 13 |
| EIP_is_at__sched_setscheduler | 0 | 13 |
| Kernel_panic-not_syncing:Fatal_exception_in_interrupt | 0 | 13 |
| backtrace:spawn_ksoftirqd | 0 | 13 |
| backtrace:kernel_init_freeable | 0 | 13 |
| backtrace:schedule | 0 | 13 |
+-------------------------------------------------------+----------+------------+
[ 0.074005] Failed to access perfctr msr (MSR c2 is 0)
[ 0.075108]
[ 0.075427] ===============================
[ 0.076000] [ INFO: suspicious RCU usage. ]
[ 0.076000] 4.3.0-rc3-00001-g482eaa5 #282 Not tainted
[ 0.076000] -------------------------------
[ 0.076000] kernel/sched/fair.c:4796 suspicious rcu_dereference_check() usage!
[ 0.076000]
[ 0.076000] other info that might help us debug this:
[ 0.076000]
[ 0.076000]
[ 0.076000] rcu_scheduler_active = 1, debug_locks = 0
[ 0.076000] 3 locks held by swapper/0/1:
[ 0.076000] #0: (cpu_hotplug.lock){.+.+.+}, at: [<c104408a>] get_online_cpus+0x27/0x62
[ 0.076000] #1: (smpboot_threads_lock){+.+.+.}, at: [<c105ae50>] smpboot_register_percpu_thread_cpumask+0x24/0xa1
[ 0.076000] #2: (&p->pi_lock){......}, at: [<c105e0e8>] try_to_wake_up+0x1d/0x19c
[ 0.076000]
[ 0.076000] stack backtrace:
[ 0.076000] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.3.0-rc3-00001-g482eaa5 #282
[ 0.076000] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 0.076000] 00000000 00000000 d3081da8 c1219245 00000001 d3081dc4 c1070a55 c1aeb4d8
[ 0.076000] d3078000 00000000 00000000 00000000 d3081e1c c106444c 00000001 00000010
[ 0.076000] 480d0069 00000000 d307f198 00000046 00000046 d3081e18 c107064c 00000002
[ 0.076000] Call Trace:
[ 0.076000] [<c1219245>] dump_stack+0x48/0x60
[ 0.076000] [<c1070a55>] lockdep_rcu_suspicious+0xd4/0xdd
[ 0.076000] [<c106444c>] select_task_rq_fair+0x2d8/0x64d
[ 0.076000] [<c107064c>] ? lock_acquire+0x72/0x7d
[ 0.076000] [<c106d2e8>] ? __lock_is_held+0x2e/0x44
[ 0.076000] [<c105d71f>] select_task_rq+0x3c/0x8f
[ 0.076000] [<c105e19f>] try_to_wake_up+0xd4/0x19c
[ 0.076000] [<c105e290>] wake_up_process+0x29/0x2c
[ 0.076000] [<c105848c>] kthread_create_on_node+0x95/0x104
[ 0.076000] [<c10585a6>] kthread_create_on_cpu+0x14/0x44
[ 0.076000] [<c105a9f3>] ? cpumask_next+0x26/0x26
[ 0.076000] [<c105acbb>] __smpboot_create_thread+0x4e/0xb0
[ 0.076000] [<c105ae70>] smpboot_register_percpu_thread_cpumask+0x44/0xa1
[ 0.076000] [<c1d4cc3b>] ? cpu_hotplug_pm_sync_init+0x11/0x11
[ 0.076000] [<c1d4cc58>] spawn_ksoftirqd+0x1d/0x27
[ 0.076000] [<c1d39bcc>] do_one_initcall+0xd0/0x14e
[ 0.076000] [<c1076213>] ? vprintk_default+0x12/0x14
[ 0.076000] [<c10a6426>] ? printk+0x12/0x14
[ 0.076000] [<c100dbba>] ? print_cpu_info+0x8e/0xab
[ 0.076000] [<c100dbd0>] ? print_cpu_info+0xa4/0xab
[ 0.076000] [<c1d47377>] ? native_smp_prepare_cpus+0x223/0x25e
[ 0.076000] [<c1d39ca7>] kernel_init_freeable+0x5d/0x172
[ 0.076000] [<c17aaaf8>] kernel_init+0x8/0xb5
[ 0.076000] [<c17b29c1>] ret_from_kernel_thread+0x21/0x30
[ 0.076000] [<c17aaaf0>] ? rest_init+0x116/0x116
[ 0.076008] BUG: scheduling while atomic: swapper/0/1/0x00000000
[ 0.077003] INFO: lockdep is turned off.
[ 0.078006] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.3.0-rc3-00001-g482eaa5 #282
[ 0.079004] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 0.080004] 00000000 00000000 d3081db0 c1219245 d3078000 d3081dbc c105c084 00000000
[ 0.082003] d3081de0 c17aedc2 00000000 d307f188 d3081e84 d3078000 d3082000 d3081e70
[ 0.083805] d3078000 d3081dec c17af235 7fffffff d3081e24 c17b1d4c 00000000 00000000
[ 0.085648] Call Trace:
[ 0.086008] [<c1219245>] dump_stack+0x48/0x60
[ 0.087008] [<c105c084>] __schedule_bug+0x52/0x63
[ 0.088007] [<c17aedc2>] __schedule+0x5a/0x436
[ 0.089006] [<c17af235>] schedule+0x64/0x78
[ 0.090007] [<c17b1d4c>] schedule_timeout+0x15/0xb1
[ 0.091006] [<c17af938>] ? __wait_for_common+0xd4/0x105
[ 0.092007] [<c106ea1e>] ? trace_hardirqs_on+0xb/0xd
[ 0.093006] [<c17b2357>] ? _raw_spin_unlock_irq+0x27/0x31
[ 0.094006] [<c17af93f>] __wait_for_common+0xdb/0x105
[ 0.095007] [<c17b1d37>] ? console_conditional_schedule+0x24/0x24
[ 0.096007] [<c105e974>] ? get_parent_ip+0x31/0x31
[ 0.097007] [<c17afa10>] wait_for_completion_killable+0x17/0x2c
[ 0.098007] [<c1058493>] kthread_create_on_node+0x9c/0x104
[ 0.099006] [<c10585a6>] kthread_create_on_cpu+0x14/0x44
[ 0.100005] [<c105a9f3>] ? cpumask_next+0x26/0x26
[ 0.101005] [<c105acbb>] __smpboot_create_thread+0x4e/0xb0
[ 0.102005] [<c105ae70>] smpboot_register_percpu_thread_cpumask+0x44/0xa1
[ 0.103007] [<c1d4cc3b>] ? cpu_hotplug_pm_sync_init+0x11/0x11
[ 0.104005] [<c1d4cc58>] spawn_ksoftirqd+0x1d/0x27
[ 0.105006] [<c1d39bcc>] do_one_initcall+0xd0/0x14e
[ 0.106006] [<c1076213>] ? vprintk_default+0x12/0x14
[ 0.107010] [<c10a6426>] ? printk+0x12/0x14
[ 0.107916] [<c100dbba>] ? print_cpu_info+0x8e/0xab
[ 0.108006] [<c100dbd0>] ? print_cpu_info+0xa4/0xab
[ 0.109007] [<c1d47377>] ? native_smp_prepare_cpus+0x223/0x25e
[ 0.110006] [<c1d39ca7>] kernel_init_freeable+0x5d/0x172
[ 0.111008] [<c17aaaf8>] kernel_init+0x8/0xb5
[ 0.112007] [<c17b29c1>] ret_from_kernel_thread+0x21/0x30
[ 0.113006] [<c17aaaf0>] ? rest_init+0x116/0x116
[ 0.114092] BUG: scheduling while atomic: kthreadd/3/0x00000000
[ 0.115003] INFO: lockdep is turned off.
[ 0.116006] CPU: 0 PID: 3 Comm: kthreadd Tainted: G W 4.3.0-rc3-00001-g482eaa5 #282
[ 0.117005] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 0.118005] 00000000 00000000 d3093ef4 c1219245 d3078dc0 d3093f00 c105c084 00000000
[ 0.120293] d3093f24 c17aedc2 00000282 d3093f1c c17b2325 d3078dc0 d3094000 d3058c00
[ 0.122072] c105a9f3 d3093f30 c17af235 d3059dd0 d3093fac c105830a d3093f74 00000000
[ 0.124069] Call Trace:
[ 0.124610] [<c1219245>] dump_stack+0x48/0x60
[ 0.125007] [<c105c084>] __schedule_bug+0x52/0x63
[ 0.126008] [<c17aedc2>] __schedule+0x5a/0x436
[ 0.127007] [<c17b2325>] ? _raw_spin_unlock_irqrestore+0x3f/0x4a
[ 0.128006] [<c105a9f3>] ? cpumask_next+0x26/0x26
[ 0.129006] [<c17af235>] schedule+0x64/0x78
[ 0.130006] [<c105830a>] kthread+0x87/0xa5
[ 0.131007] [<c107064c>] ? lock_acquire+0x72/0x7d
[ 0.132007] [<c17b29c1>] ret_from_kernel_thread+0x21/0x30
[ 0.133007] [<c1058283>] ? __kthread_parkme+0x83/0x83
[ 0.134020] ------------[ cut here ]------------
[ 0.135000] kernel BUG at kernel/sched/core.c:3740!
[ 0.135000] invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC
[ 0.135000] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G W 4.3.0-rc3-00001-g482eaa5 #282
[ 0.135000] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 0.135000] task: d3078000 ti: d3080000 task.ti: d3080000
[ 0.135000] EIP: 0060:[<c105ec63>] EFLAGS: 00010206 CPU: 0
[ 0.135000] EIP is at __sched_setscheduler+0x39/0x6b8
[ 0.135000] EAX: 7fffffff EBX: 00000000 ECX: 00000000 EDX: 00000063
[ 0.135000] ESI: d3078dc0 EDI: d3081e18 EBP: d3081e0c ESP: d3081dc4
[ 0.135000] DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068
[ 0.135000] CR0: 8005003b CR2: ffffffff CR3: 01dd3000 CR4: 000006b0
[ 0.135000] Stack:
[ 0.135000] 00000000 00000000 d3081e84 00000086 d3081ddc 00000063 d3081e18 c1070700
[ 0.135000] 00000001 00000001 00010000 d30790ac 00000030 00000000 00000086 d3078dc0
[ 0.135000] c17bcf28 d3081e48 d3081e54 c105f344 00000000 00000000 00000000 00000000
[ 0.135000] Call Trace:
[ 0.135000] [<c1070700>] ? lock_release+0xa9/0x29a
[ 0.135000] [<c105f344>] _sched_setscheduler+0x62/0x6a
[ 0.135000] [<c105f533>] sched_setscheduler_nocheck+0xa/0xc
[ 0.135000] [<c10584dd>] kthread_create_on_node+0xe6/0x104
[ 0.135000] [<c10585a6>] kthread_create_on_cpu+0x14/0x44
[ 0.135000] [<c105a9f3>] ? cpumask_next+0x26/0x26
[ 0.135000] [<c105acbb>] __smpboot_create_thread+0x4e/0xb0
[ 0.135000] [<c105ae70>] smpboot_register_percpu_thread_cpumask+0x44/0xa1
[ 0.135000] [<c1d4cc3b>] ? cpu_hotplug_pm_sync_init+0x11/0x11
[ 0.135000] [<c1d4cc58>] spawn_ksoftirqd+0x1d/0x27
[ 0.135000] [<c1d39bcc>] do_one_initcall+0xd0/0x14e
[ 0.135000] [<c1076213>] ? vprintk_default+0x12/0x14
[ 0.135000] [<c10a6426>] ? printk+0x12/0x14
[ 0.135000] [<c100dbba>] ? print_cpu_info+0x8e/0xab
[ 0.135000] [<c100dbd0>] ? print_cpu_info+0xa4/0xab
[ 0.135000] [<c1d47377>] ? native_smp_prepare_cpus+0x223/0x25e
[ 0.135000] [<c1d39ca7>] kernel_init_freeable+0x5d/0x172
[ 0.135000] [<c17aaaf8>] kernel_init+0x8/0xb5
[ 0.135000] [<c17b29c1>] ret_from_kernel_thread+0x21/0x30
[ 0.135000] [<c17aaaf0>] ? rest_init+0x116/0x116
[ 0.135000] Code: 42 04 88 4d d4 c7 45 cc ff ff ff ff 83 f8 06 74 0b ba 63 00 00 00 2b 57 14 89 55 cc 89 c3 64 a1 04 28 dc c1 a9 00 ff 1f 00 74 3d <0f> 0b 8b 8e 3c 01 00 00 39 cb 0f 84 27 02 00 00 80 7d d4 00 0f
[ 0.135000] EIP: [<c105ec63>] __sched_setscheduler+0x39/0x6b8 SS:ESP 0068:d3081dc4
[ 0.135012] ---[ end trace d248a7baa3fff262 ]---
[ 0.136006] Kernel panic - not syncing: Fatal exception in interrupt
Thanks,
Ying Huang
[lkp] [arch/x86] 2657eee793: BUG: kernel boot hang
by kernel test robot
FYI, we noticed the below changes on
git://internal_merge_and_test_tree revert-a580b73412da93a2194037e54342980f2452520d-2657eee793e8b13334860e7953d5aa6e49227521
commit 2657eee793e8b13334860e7953d5aa6e49227521 ("arch/x86: enable task isolation functionality")
+------------------------------------------------+------------+------------+
| | 5f7bb45a98 | 2657eee793 |
+------------------------------------------------+------------+------------+
| boot_successes | 15 | 0 |
| boot_failures | 4 | 15 |
| IP-Config:Auto-configuration_of_network_failed | 4 | |
| BUG:kernel_boot_hang | 0 | 15 |
+------------------------------------------------+------------+------------+
[ 14.953363] debug: unmapping init [mem 0x41bd5000-0x41c9bfff]
[ 14.953952] Write protecting the kernel text: 8216k
[ 14.954381] Write protecting the kernel read-only data: 3068k
[ 14.954849] NX-protecting the kernel data: 6120k
Elapsed time: 750
BUG: kernel boot hang
qemu-system-i386 -enable-kvm -cpu Haswell,+smep,+smap -kernel /pkg/linux/i386-randconfig-s0-201539/gcc-4.9/2657eee793e8b13334860e7953d5aa6e49227521/vmlinuz-4.3.0-rc3-00007-g2657eee -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-vp-quantal-i386-30/bisect_boot-1-quantal-core-i386.cgz-i386-randconfig-s0-201539-2657eee793e8b13334860e7953d5aa6e49227521-20150929-62209-qa4zpx-0.yaml ARCH=i386 kconfig=i386-randconfig-s0-201539 branch=linux-review/Chris-Metcalf/support-task_isolated-mode-for-nohz_full commit=2657eee793e8b13334860e7953d5aa6e49227521 BOOT_IMAGE=/pkg/linux/i386-randconfig-s0-201539/gcc-4.9/2657eee793e8b13334860e7953d5aa6e49227521/vmlinuz-4.3.0-rc3-00007-g2657eee max_uptime=600 RESULT_ROOT=/result/boot/1/vm-vp-quantal-i386/quantal-core-i386.cgz/i386-randconfig-s0-201539/gcc-4.9/2657eee793e8b13334860e7953d5aa6e49227521/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-vp-quantal-i386-30::dhcp drbd.minor_count=8' -initrd /fs/sde1/initrd-vm-vp-quantal-i386-30 -m 360 -smp 1 -device e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -pidfile /dev/shm/kboot/pid-vm-vp-quantal-i386-30 -serial file:/dev/shm/kboot/serial-vm-vp-quantal-i386-30 -daemonize -display none -monitor null
Thanks,
Ying Huang
[lkp] [ACPI] 73a092e801:
by kernel test robot
FYI, we noticed the below changes on
git://linux-arm.org/linux-skn acpi_lpi
commit 73a092e801e9938496b71acc9434fb33a9d65d34 ("ACPI: tables: simplify acpi_parse_entries")
[ 0.000000] ACPI BIOS Warning (bug): Invalid length for FADT/Pm1aControlBlock: 32, using default 16 (20150818/tbfadt-704)
...
[ 0.000000] ACPI: [APIC:0x05] Invalid zero length
[ 0.000000] ACPI: Error parsing LAPIC address override entry
[ 0.000000] ACPI: Invalid BIOS MADT, disabling ACPI
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Ying Huang
[sched] 8787dbe5ef: BUG: scheduling while atomic: swapper/0/1/0x00200001
by Fengguang Wu
FYI, we noticed the below changes on
commit 8787dbe5ef7e61385140c771fc6fe8f1689f11fa ("sched: Simplify preempt_count tests")
+--------------------------------------------------+----------+------------+
| | v4.3-rc3 | 8787dbe5ef |
+--------------------------------------------------+----------+------------+
| boot_successes | 54 | 0 |
| boot_failures | 10 | 34 |
| IP-Config:Auto-configuration_of_network_failed | 10 | |
| BUG:scheduling_while_atomic | 0 | 34 |
| backtrace:spawn_ksoftirqd | 0 | 34 |
| backtrace:kernel_init_freeable | 0 | 34 |
| backtrace:kasprintf | 0 | 34 |
| backtrace:create_worker | 0 | 34 |
| backtrace:init_workqueues | 0 | 34 |
| backtrace:alloc_workqueue_attrs | 0 | 34 |
| backtrace:__alloc_workqueue_key | 0 | 34 |
| backtrace:kthread_create_on_node | 0 | 34 |
| backtrace:rcu_spawn_all_nocb_kthreads | 0 | 34 |
| backtrace:smpboot_register_percpu_thread_cpumask | 0 | 32 |
| backtrace:cpu_stop_init | 0 | 32 |
| backtrace:watchdog_enable_all_cpus | 0 | 34 |
| backtrace:lockup_detector_init | 0 | 34 |
| backtrace:fork_idle | 0 | 2 |
| backtrace:idle_threads_init | 0 | 2 |
| backtrace:smp_init | 0 | 33 |
| backtrace:cpu_up | 0 | 2 |
| backtrace:set_mtrr | 0 | 33 |
| backtrace:mtrr_aps_init | 0 | 33 |
| backtrace:native_smp_cpus_done | 0 | 33 |
| backtrace:page_alloc_init_late | 0 | 33 |
| backtrace:devtmpfs_init | 0 | 33 |
| backtrace:driver_init | 0 | 33 |
| backtrace:_cond_resched | 0 | 7 |
| backtrace:pm_init | 0 | 33 |
| backtrace:kobject_create_and_add | 0 | 33 |
| backtrace:cgroup_wq_init | 0 | 33 |
| backtrace:perf_workqueue_init | 0 | 33 |
| backtrace:cpuidle_register_governor | 0 | 2 |
| backtrace:init_ladder | 0 | 2 |
| backtrace:blk_dev_init | 0 | 32 |
| backtrace:genhd_device_init | 0 | 32 |
| backtrace:acpi_os_initialize1 | 0 | 32 |
| backtrace:acpi_init | 0 | 32 |
| backtrace:md_init | 0 | 27 |
| backtrace:hpet_cpuhp_notify | 0 | 1 |
| backtrace:hpet_late_init | 0 | 1 |
| backtrace:clocksource_done_booting | 0 | 23 |
| backtrace:acpi_get_devices | 0 | 1 |
| backtrace:pnpacpi_init | 0 | 1 |
| backtrace:device_create | 0 | 22 |
| backtrace:chr_dev_init | 0 | 22 |
| backtrace:vfs_write | 0 | 16 |
| backtrace:SyS_write | 0 | 16 |
| backtrace:populate_rootfs | 0 | 17 |
| backtrace:do_sys_open | 0 | 15 |
| backtrace:SyS_open | 0 | 15 |
| backtrace:SYSC_symlinkat | 0 | 12 |
| backtrace:SyS_symlink | 0 | 12 |
| backtrace:kset_create_and_add | 0 | 15 |
| backtrace:devices_init | 0 | 17 |
| backtrace:hung_task_init | 0 | 31 |
| backtrace:default_bdi_init | 0 | 31 |
| backtrace:kmem_cache_alloc | 0 | 30 |
| backtrace:bdi_init | 0 | 30 |
| backtrace:crypto_wq_init | 0 | 30 |
| backtrace:bioset_create | 0 | 30 |
| backtrace:init_bio | 0 | 30 |
| backtrace:sched_init_smp | 0 | 31 |
| backtrace:register_sched_domain_sysctl | 0 | 31 |
| backtrace:wait_for_completion | 0 | 31 |
| backtrace:kern_mount_data | 0 | 2 |
| backtrace:shmem_init | 0 | 2 |
| backtrace:_do_fork | 0 | 5 |
| backtrace:kthreadd | 0 | 5 |
| backtrace:misc_register | 0 | 28 |
| backtrace:vga_arb_device_init | 0 | 28 |
| backtrace:tifm_init | 0 | 28 |
| backtrace:bus_register | 0 | 26 |
| backtrace:edac_init | 0 | 24 |
| backtrace:mmc_init | 0 | 24 |
| backtrace:devfreq_init | 0 | 24 |
| backtrace:fscache_init | 0 | 22 |
| backtrace:cachefiles_init | 0 | 21 |
| backtrace:tty_init | 0 | 15 |
| backtrace:device_create_with_groups | 0 | 15 |
| backtrace:user_path_create | 0 | 16 |
| backtrace:SyS_mkdirat | 0 | 16 |
| backtrace:SyS_mkdir | 0 | 16 |
| backtrace:SYSC_fchownat | 0 | 7 |
| backtrace:SyS_lchown | 0 | 6 |
| backtrace:classes_init | 0 | 1 |
| backtrace:vfs_lstat | 0 | 13 |
| backtrace:SyS_newlstat | 0 | 13 |
| backtrace:SyS_mknodat | 0 | 4 |
| backtrace:SyS_mknod | 0 | 4 |
| backtrace:device_register | 0 | 4 |
| backtrace:platform_bus_init | 0 | 6 |
| backtrace:unshare_nsproxy_namespaces | 0 | 3 |
| backtrace:SyS_unshare | 0 | 3 |
| backtrace:devtmpfsd | 0 | 3 |
| backtrace:subsys_system_register | 0 | 1 |
| backtrace:memory_dev_init | 0 | 1 |
| backtrace:user_path_at_empty | 0 | 2 |
| backtrace:SyS_fchmodat | 0 | 3 |
| backtrace:SyS_chmod | 0 | 3 |
| backtrace:pcpu_balance_workfn | 0 | 1 |
| backtrace:do_sys_ftruncate | 0 | 3 |
| backtrace:SyS_ftruncate | 0 | 3 |
| backtrace:mnt_want_write_file | 0 | 2 |
| backtrace:SyS_fchown | 0 | 3 |
| backtrace:chmod_common | 0 | 2 |
| backtrace:SyS_fchmod | 0 | 2 |
| backtrace:chown_common | 0 | 1 |
| backtrace:do_mount | 0 | 1 |
| backtrace:SyS_mount | 0 | 1 |
| backtrace:SyS_chown | 0 | 2 |
| backtrace:firmware_init | 0 | 2 |
| backtrace:vfs_mknod | 0 | 1 |
+--------------------------------------------------+----------+------------+
[ 0.831646] TSC deadline timer enabled
[ 0.832230] smpboot: CPU0: Intel Core Processor (Haswell) (family: 0x6, model: 0x3c, stepping: 0x1)
[ 0.835984] Performance Events: unsupported p6 CPU model 60 no PMU driver, software events only.
[ 0.837205] BUG: scheduling while atomic: swapper/0/1/0x00200001
[ 0.837964] 2 locks held by swapper/0/1:
[ 0.838444] #0: (cpu_hotplug.lock){.+.+.+}, at: [<c104408a>] get_online_cpus+0x27/0x62
[ 0.839534] #1: (smpboot_threads_lock){+.+.+.}, at: [<c105ae50>] smpboot_register_percpu_thread_cpumask+0x24/0xa1
[ 0.840902] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.3.0-rc3-00001-g8787dbe #364
[ 0.857965] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 0.859204] 00000000 00000000 d4463de0 c1219245 d4478000 d4463dec c105c084 00000000
[ 0.860238] d4463e10 c17aedc2 c1060ee7 00000000 d4463e70 d4478000 d4464000 d4463e70
[ 0.861270] d4463e9c d4463e1c c17af1b9 7fffffff d4463e24 c17af2d6 d4463e58 c17af88a
[ 0.862299] Call Trace:
[ 0.862597] [<c1219245>] dump_stack+0x48/0x60
[ 0.863120] [<c105c084>] __schedule_bug+0x52/0x63
[ 0.863689] [<c17aedc2>] __schedule+0x5a/0x436
[ 0.864225] [<c1060ee7>] ? ___might_sleep+0xaa/0x197
[ 0.864834] [<c17af1b9>] preempt_schedule_common+0x1b/0x33
[ 0.865502] [<c17af2d6>] _cond_resched+0x15/0x1c
[ 0.882163] [<c17af88a>] __wait_for_common+0x26/0x105
[ 0.882809] [<c17b2325>] ? _raw_spin_unlock_irqrestore+0x3f/0x4a
[ 0.900558] [<c17b1d37>] ? console_conditional_schedule+0x24/0x24
[ 0.901448] [<c105e25d>] ? try_to_wake_up+0x192/0x19c
[ 0.902201] [<c17afa10>] wait_for_completion_killable+0x17/0x2c
[ 0.903058] [<c1058493>] kthread_create_on_node+0x9c/0x104
[ 0.903867] [<c10585a6>] kthread_create_on_cpu+0x14/0x44
[ 0.904632] [<c105a9f3>] ? cpumask_next+0x26/0x26
[ 0.905236] [<c105acbb>] __smpboot_create_thread+0x4e/0xb0
[ 0.905934] [<c105ae70>] smpboot_register_percpu_thread_cpumask+0x44/0xa1
[ 0.924920] [<c1d4cc3b>] ? cpu_hotplug_pm_sync_init+0x11/0x11
[ 0.925605] [<c1d4cc58>] spawn_ksoftirqd+0x1d/0x27
[ 0.926195] [<c1d39bcc>] do_one_initcall+0xd0/0x14e
[ 0.926778] [<c10761f8>] ? vprintk_default+0x12/0x14
[ 0.927385] [<c10a640b>] ? printk+0x12/0x14
[ 0.927905] [<c100dbba>] ? print_cpu_info+0x8e/0xab
[ 0.928496] [<c100dbd0>] ? print_cpu_info+0xa4/0xab
[ 0.929084] [<c1d47377>] ? native_smp_prepare_cpus+0x223/0x25e
[ 0.929794] [<c1d39ca7>] kernel_init_freeable+0x5d/0x172
[ 0.930424] [<c17aaaf8>] kernel_init+0x8/0xb5
[ 0.930960] [<c17b29c1>] ret_from_kernel_thread+0x21/0x30
[ 0.931600] [<c17aaaf0>] ? rest_init+0x116/0x116
[ 0.954388] BUG: scheduling while atomic: swapper/0/1/0x00200001
[ 0.955157] 2 locks held by swapper/0/1:
[ 0.955650] #0: (cpu_hotplug.lock){.+.+.+}, at: [<c104408a>] get_online_cpus+0x27/0x62
[ 0.956723] #1: (smpboot_threads_lock){+.+.+.}, at: [<c105ae50>] smpboot_register_percpu_thread_cpumask+0x24/0xa1
[ 0.958173] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G W 4.3.0-rc3-00001-g8787dbe #364
[ 0.959432] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 0.960700] 00000000 00000000 d4463e58 c1219245 d4478000 d4463e64 c105c084 00000000
[ 0.985862] d4463e88 c17aedc2 c1060ee7 00000000 d4489f48 d4478000 d4464000 d4489f48
[ 0.986895] d445cc00 d4463e94 c17af1b9 7fffffff d4463e9c c17af2d6 d4463ed0 c17af88a
[ 0.987927] Call Trace:
[ 0.988233] [<c1219245>] dump_stack+0x48/0x60
[ 0.988756] [<c105c084>] __schedule_bug+0x52/0x63
[ 0.989331] [<c17aedc2>] __schedule+0x5a/0x436
[ 0.989877] [<c1060ee7>] ? ___might_sleep+0xaa/0x197
[ 0.990475] [<c17af1b9>] preempt_schedule_common+0x1b/0x33
[ 0.991126] [<c17af2d6>] _cond_resched+0x15/0x1c
[ 0.991682] [<c17af88a>] __wait_for_common+0x26/0x105
[ 1.007929] [<c17b2325>] ? _raw_spin_unlock_irqrestore+0x3f/0x4a
[ 1.008676] [<c17b1d37>] ? console_conditional_schedule+0x24/0x24
[ 1.009446] [<c105e25d>] ? try_to_wake_up+0x192/0x19c
[ 1.010086] [<c17af97d>] wait_for_completion+0x14/0x17
[ 1.010742] [<c105858d>] kthread_park+0x3f/0x44
[ 1.011314] [<c10585cd>] kthread_create_on_cpu+0x3b/0x44
[ 1.011988] [<c105acbb>] __smpboot_create_thread+0x4e/0xb0
[ 1.012666] [<c105ae70>] smpboot_register_percpu_thread_cpumask+0x44/0xa1
[ 1.024690] [<c1d4cc3b>] ? cpu_hotplug_pm_sync_init+0x11/0x11
[ 1.025526] [<c1d4cc58>] spawn_ksoftirqd+0x1d/0x27
[ 1.030249] [<c1d39bcc>] do_one_initcall+0xd0/0x14e
[ 1.030887] [<c10761f8>] ? vprintk_default+0x12/0x14
[ 1.031482] [<c10a640b>] ? printk+0x12/0x14
[ 1.031985] [<c100dbba>] ? print_cpu_info+0x8e/0xab
[ 1.032563] [<c100dbd0>] ? print_cpu_info+0xa4/0xab
[ 1.033145] [<c1d47377>] ? native_smp_prepare_cpus+0x223/0x25e
[ 1.037969] [<c1d39ca7>] kernel_init_freeable+0x5d/0x172
[ 1.038750] [<c17aaaf8>] kernel_init+0x8/0xb5
[ 1.039413] [<c17b29c1>] ret_from_kernel_thread+0x21/0x30
[ 1.040200] [<c17aaaf0>] ? rest_init+0x116/0x116
[ 1.040913] BUG: scheduling while atomic: swapper/0/1/0x00200001
[ 1.043374] no locks held by swapper/0/1.
[ 1.043893] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G W 4.3.0-rc3-00001-g8787dbe #364
[ 1.045035] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 1.050426] 00000000 00000000 d4463e8c c1219245 d4478000 d4463e98 c105c084 00000000
[ 1.051523] d4463ebc c17aedc2 c1060ee7 00000000 d4401c80 d4478000 d4464000 d4401c80
[ 1.052603] d4463f38 d4463ec8 c17af1b9 00000010 d4463ed0 c17af2d6 d4463ee0 c10cc3a7
[ 1.053818] Call Trace:
[ 1.054170] [<c1219245>] dump_stack+0x48/0x60
[ 1.054805] [<c105c084>] __schedule_bug+0x52/0x63
[ 1.055475] [<c17aedc2>] __schedule+0x5a/0x436
[ 1.056121] [<c1060ee7>] ? ___might_sleep+0xaa/0x197
[ 1.056825] [<c17af1b9>] preempt_schedule_common+0x1b/0x33
[ 1.065690] [<c17af2d6>] _cond_resched+0x15/0x1c
[ 1.066251] [<c10cc3a7>] slab_pre_alloc_hook+0x31/0x37
[ 1.066873] [<c10d0635>] __kmalloc_track_caller+0x4b/0xef
[ 1.067506] [<c1223992>] ? kasprintf+0x11/0x13
[ 1.068049] [<c1223962>] kvasprintf+0x27/0x46
[ 1.068565] [<c1d4d167>] ? wq_sysfs_init+0x24/0x24
[ 1.069144] [<c1223992>] kasprintf+0x11/0x13
[ 1.073726] [<c1d39b1e>] do_one_initcall+0x22/0x14e
[ 1.074316] [<c1d4d167>] ? wq_sysfs_init+0x24/0x24
[ 1.074902] [<c10761f8>] ? vprintk_default+0x12/0x14
[ 1.075491] [<c10a640b>] ? printk+0x12/0x14
[ 1.076007] [<c100dbba>] ? print_cpu_info+0x8e/0xab
[ 1.076584] [<c100dbd0>] ? print_cpu_info+0xa4/0xab
[ 1.077177] [<c1d47377>] ? native_smp_prepare_cpus+0x223/0x25e
[ 1.081944] [<c1d39ca7>] kernel_init_freeable+0x5d/0x172
[ 1.082573] [<c17aaaf8>] kernel_init+0x8/0xb5
[ 1.083108] [<c17b29c1>] ret_from_kernel_thread+0x21/0x30
[ 1.083745] [<c17aaaf0>] ? rest_init+0x116/0x116
[ 1.084520] BUG: scheduling while atomic: swapper/0/1/0x00200001
[ 1.085269] no locks held by swapper/0/1.
[ 1.089847] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G W 4.3.0-rc3-00001-g8787dbe #364
[ 1.090965] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 1.092228] 00000000 00000000 d4463ddc c1219245 d4478000 d4463de8 c105c084 00000000
[ 1.093489] d4463e0c c17aedc2 c1060ee7 00000000 d4463e6c d4478000 d4464000 d4463e6c
[ 1.099534] d4463e98 d4463e18 c17af1b9 7fffffff d4463e20 c17af2d6 d4463e54 c17af88a
[ 1.100544] Call Trace:
[ 1.100851] [<c1219245>] dump_stack+0x48/0x60
[ 1.101364] [<c105c084>] __schedule_bug+0x52/0x63
[ 1.101925] [<c17aedc2>] __schedule+0x5a/0x436
[ 1.102445] [<c1060ee7>] ? ___might_sleep+0xaa/0x197
[ 1.111144] [<c17af1b9>] preempt_schedule_common+0x1b/0x33
[ 1.111945] [<c17af2d6>] _cond_resched+0x15/0x1c
[ 1.112648] [<c17af88a>] __wait_for_common+0x26/0x105
[ 1.113399] [<c17b2325>] ? _raw_spin_unlock_irqrestore+0x3f/0x4a
[ 1.114293] [<c17b1d37>] ? console_conditional_schedule+0x24/0x24
[ 1.130691] [<c105e25d>] ? try_to_wake_up+0x192/0x19c
[ 1.131437] [<c17afa10>] wait_for_completion_killable+0x17/0x2c
[ 1.132309] [<c1058493>] kthread_create_on_node+0x9c/0x104
[ 1.133104] [<c1053106>] create_worker+0xa5/0x121
[ 1.133803] [<c1053e2d>] ? process_scheduled_works+0x21/0x21
[ 1.134613] [<c12271e8>] ? find_next_bit+0xa/0xd
[ 1.135314] [<c1d4d2d4>] init_workqueues+0x16d/0x303
[ 1.136045] [<c1d4d167>] ? wq_sysfs_init+0x24/0x24
[ 1.136758] [<c1d39bcc>] do_one_initcall+0xd0/0x14e
[ 1.137477] [<c10761f8>] ? vprintk_default+0x12/0x14
[ 1.138234] [<c10a640b>] ? printk+0x12/0x14
[ 1.153892] [<c100dbba>] ? print_cpu_info+0x8e/0xab
[ 1.154504] [<c100dbd0>] ? print_cpu_info+0xa4/0xab
[ 1.155138] [<c1d47377>] ? native_smp_prepare_cpus+0x223/0x25e
[ 1.155864] [<c1d39ca7>] kernel_init_freeable+0x5d/0x172
[ 1.156530] [<c17aaaf8>] kernel_init+0x8/0xb5
[ 1.157082] [<c17b29c1>] ret_from_kernel_thread+0x21/0x30
[ 1.157800] [<c17aaaf0>] ? rest_init+0x116/0x116
[ 1.158602] BUG: scheduling while atomic: swapper/0/1/0x00200001
[ 1.159485] no locks held by swapper/0/1.
[ 1.160065] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G W 4.3.0-rc3-00001-g8787dbe #364
[ 1.183377] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 1.184642] 00000000 00000000 d4463e2c c1219245 d4478000 d4463e38 c105c084 00000000
[ 1.185886] d4463e5c c17aedc2 c1060ee7 00000000 d4406b40 d4478000 d4464000 d4406b40
[ 1.186968] d4e3f0c8 d4463e68 c17af1b9 00008010 d4463e70 c17af2d6 d4463e80 c10cc3a7
[ 1.188046] Call Trace:
[ 1.188354] [<c1219245>] dump_stack+0x48/0x60
[ 1.188911] [<c105c084>] __schedule_bug+0x52/0x63
[ 1.189497] [<c17aedc2>] __schedule+0x5a/0x436
[ 1.212129] [<c1060ee7>] ? ___might_sleep+0xaa/0x197
[ 1.212757] [<c17af1b9>] preempt_schedule_common+0x1b/0x33
[ 1.213454] [<c17af2d6>] _cond_resched+0x15/0x1c
[ 1.214114] [<c10cc3a7>] slab_pre_alloc_hook+0x31/0x37
[ 1.214862] [<c121a436>] ? ida_pre_get+0x2b/0x93
[ 1.215513] [<c10cef28>] kmem_cache_alloc+0x17/0xbb
[ 1.216223] [<c121a436>] ? ida_pre_get+0x2b/0x93
[ 1.216882] [<c121a436>] ida_pre_get+0x2b/0x93
[ 1.217522] [<c121a709>] ida_simple_get+0x34/0x90
[ 1.218142] [<c1053083>] create_worker+0x22/0x121
[ 1.218712] [<c12271e8>] ? find_next_bit+0xa/0xd
[ 1.250333] [<c1d4d2d4>] init_workqueues+0x16d/0x303
[ 1.250928] [<c1d4d167>] ? wq_sysfs_init+0x24/0x24
[ 1.251502] [<c1d39bcc>] do_one_initcall+0xd0/0x14e
[ 1.252085] [<c10761f8>] ? vprintk_default+0x12/0x14
[ 1.252674] [<c10a640b>] ? printk+0x12/0x14
[ 1.253177] [<c100dbba>] ? print_cpu_info+0x8e/0xab
[ 1.253753] [<c100dbd0>] ? print_cpu_info+0xa4/0xab
[ 1.254332] [<c1d47377>] ? native_smp_prepare_cpus+0x223/0x25e
[ 1.255026] [<c1d39ca7>] kernel_init_freeable+0x5d/0x172
[ 1.255649] [<c17aaaf8>] kernel_init+0x8/0xb5
[ 1.256178] [<c17b29c1>] ret_from_kernel_thread+0x21/0x30
[ 1.272053] [<c17aaaf0>] ? rest_init+0x116/0x116
[ 1.272701] BUG: scheduling while atomic: swapper/0/1/0x00200001
[ 1.273423] no locks held by swapper/0/1.
[ 1.282074] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G W 4.3.0-rc3-00001-g8787dbe #364
[ 1.283264] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 1.284348] 00000000 00000000 d4463ddc c1219245 d4478000 d4463de8 c105c084 00000000
[ 1.285430] d4463e0c c17aedc2 c1060ee7 00000000 d4463e6c d4478000 d4464000 d4463e6c
[ 1.305973] d4463e98 d4463e18 c17af1b9 7fffffff d4463e20 c17af2d6 d4463e54 c17af88a
[ 1.307232] Call Trace:
[ 1.307608] [<c1219245>] dump_stack+0x48/0x60
[ 1.308252] [<c105c084>] __schedule_bug+0x52/0x63
[ 1.308953] [<c17aedc2>] __schedule+0x5a/0x436
[ 1.309510] [<c1060ee7>] ? ___might_sleep+0xaa/0x197
[ 1.310140] [<c17af1b9>] preempt_schedule_common+0x1b/0x33
[ 1.313683] [<c17af2d6>] _cond_resched+0x15/0x1c
[ 1.314271] [<c17af88a>] __wait_for_common+0x26/0x105
[ 1.325859] [<c17b2325>] ? _raw_spin_unlock_irqrestore+0x3f/0x4a
[ 1.326568] [<c17b1d37>] ? console_conditional_schedule+0x24/0x24
[ 1.327298] [<c105e25d>] ? try_to_wake_up+0x192/0x19c
[ 1.327915] [<c17afa10>] wait_for_completion_killable+0x17/0x2c
[ 1.328611] [<c1058493>] kthread_create_on_node+0x9c/0x104
[ 1.329257] [<c1053106>] create_worker+0xa5/0x121
[ 1.329825] [<c1053e2d>] ? process_scheduled_works+0x21/0x21
[ 1.330485] [<c1227100>] ? __ctzsi2+0x3/0x9
Thanks,
Fengguang Wu
[lkp] [net] 192132b9a0: -17.5% netperf.Throughput_tps
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 192132b9a034d87566294be0fba5f8f75c2cf16b ("net: Add support for VRFs to inetpeer cache")
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/runtime/nr_threads/cluster/test:
lkp-sbx04/netperf/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/300s/200%/cs-localhost/TCP_CRR
commit:
5345c2e12d41f815c1009c9dee72f3d5fcfd4282
192132b9a034d87566294be0fba5f8f75c2cf16b
5345c2e12d41f815 192132b9a034d87566294be0fb
---------------- --------------------------
%stddev %change %stddev
\ | \
2841 ± 2% -17.5% 2344 ± 1% netperf.Throughput_tps
1.095e+08 ± 2% -17.3% 90493497 ± 1% netperf.time.involuntary_context_switches
1.093e+08 ± 2% -17.5% 90186076 ± 1% netperf.time.minor_page_faults
4612 ± 0% +9.8% 5062 ± 1% netperf.time.percent_of_cpu_this_job_got
13149 ± 0% +11.7% 14686 ± 1% netperf.time.system_time
943.49 ± 3% -17.1% 781.88 ± 1% netperf.time.user_time
1.091e+08 ± 2% -17.5% 90055371 ± 1% netperf.time.voluntary_context_switches
4.367e+08 ± 2% -17.5% 3.604e+08 ± 1% softirqs.NET_RX
320.02 ± 0% -2.2% 312.95 ± 0% turbostat.CorWatt
375.96 ± 0% -1.9% 368.88 ± 0% turbostat.PkgWatt
1428128 ± 2% -17.3% 1180769 ± 1% vmstat.system.cs
68804 ± 0% +1.2% 69635 ± 0% vmstat.system.in
23930 ±148% -97.8% 522.50 ± 21% numa-meminfo.node0.Shmem
1358 ±141% +301.8% 5457 ± 19% numa-meminfo.node2.AnonHugePages
118910 ± 3% +40.5% 167013 ± 25% numa-meminfo.node2.FilePages
2519 ±143% +279.0% 9548 ± 9% numa-meminfo.node2.Inactive(anon)
2604 ±140% +1850.5% 50791 ± 82% numa-meminfo.node2.Shmem
1.095e+08 ± 2% -17.3% 90493497 ± 1% time.involuntary_context_switches
1.093e+08 ± 2% -17.5% 90186076 ± 1% time.minor_page_faults
4612 ± 0% +9.8% 5062 ± 1% time.percent_of_cpu_this_job_got
13149 ± 0% +11.7% 14686 ± 1% time.system_time
943.49 ± 3% -17.1% 781.88 ± 1% time.user_time
1.091e+08 ± 2% -17.5% 90055371 ± 1% time.voluntary_context_switches
0.00 ± -1% +Inf% 12974495 ±167% latency_stats.avg.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
1.092e+08 ± 2% -17.5% 90102386 ± 1% latency_stats.hits.inet_csk_accept.inet_accept.SYSC_accept4.SyS_accept.entry_SYSCALL_64_fastpath
2.183e+08 ± 2% -17.5% 1.801e+08 ± 1% latency_stats.hits.sk_wait_data.tcp_recvmsg.inet_recvmsg.sock_recvmsg.SYSC_recvfrom.SyS_recvfrom.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 13170498 ±164% latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
13924 ± 43% +112.0% 29515 ± 95% latency_stats.sum.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.unmap_region.do_munmap.mmap_region.do_mmap_pgoff.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
1.794e+10 ± 2% +16.6% 2.092e+10 ± 2% latency_stats.sum.inet_csk_accept.inet_accept.SYSC_accept4.SyS_accept.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 13298537 ±162% latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
1.281e+10 ± 0% -5.8% 1.207e+10 ± 0% latency_stats.sum.sk_wait_data.tcp_recvmsg.inet_recvmsg.sock_recvmsg.SYSC_recvfrom.SyS_recvfrom.entry_SYSCALL_64_fastpath
1.126e+08 ± 2% -17.4% 93008884 ± 1% proc-vmstat.numa_hit
1.126e+08 ± 2% -17.4% 93008682 ± 1% proc-vmstat.numa_local
3937 ± 4% -11.5% 3483 ± 7% proc-vmstat.numa_pages_migrated
6363343 ± 2% -18.4% 5191906 ± 2% proc-vmstat.pgalloc_dma32
1.077e+08 ± 2% -17.2% 89162443 ± 1% proc-vmstat.pgalloc_normal
1.101e+08 ± 2% -17.3% 91044527 ± 1% proc-vmstat.pgfault
1.14e+08 ± 2% -17.3% 94305253 ± 1% proc-vmstat.pgfree
3937 ± 4% -11.5% 3483 ± 7% proc-vmstat.pgmigrate_success
28042466 ± 2% -17.5% 23130696 ± 2% numa-numastat.node0.local_node
28042466 ± 2% -17.5% 23130763 ± 2% numa-numastat.node0.numa_hit
0.00 ± 40% +Inf% 67.50 ± 8% numa-numastat.node0.other_node
28130811 ± 3% -15.6% 23732122 ± 1% numa-numastat.node1.local_node
28130861 ± 3% -15.6% 23732168 ± 1% numa-numastat.node1.numa_hit
28109208 ± 2% -18.4% 22948672 ± 2% numa-numastat.node2.local_node
28109647 ± 2% -18.4% 22948710 ± 2% numa-numastat.node2.numa_hit
28283999 ± 2% -18.0% 23200094 ± 1% numa-numastat.node3.local_node
28284539 ± 2% -18.0% 23200610 ± 1% numa-numastat.node3.numa_hit
89720 ± 2% -10.5% 80327 ± 1% slabinfo.Acpi-State.active_objs
1766 ± 2% -10.4% 1581 ± 1% slabinfo.Acpi-State.active_slabs
90097 ± 2% -10.4% 80684 ± 1% slabinfo.Acpi-State.num_objs
1766 ± 2% -10.4% 1581 ± 1% slabinfo.Acpi-State.num_slabs
1165 ± 4% +23.6% 1440 ± 9% slabinfo.blkdev_requests.active_objs
1165 ± 4% +23.6% 1440 ± 9% slabinfo.blkdev_requests.num_objs
45272 ± 5% -27.6% 32776 ± 3% slabinfo.kmalloc-256.active_objs
792.50 ± 5% -29.8% 556.50 ± 4% slabinfo.kmalloc-256.active_slabs
50753 ± 5% -29.7% 35654 ± 4% slabinfo.kmalloc-256.num_objs
792.50 ± 5% -29.8% 556.50 ± 4% slabinfo.kmalloc-256.num_slabs
78268 ± 3% -11.2% 69534 ± 1% slabinfo.kmalloc-64.active_objs
1289 ± 2% -13.1% 1120 ± 1% slabinfo.kmalloc-64.active_slabs
82539 ± 2% -13.1% 71749 ± 1% slabinfo.kmalloc-64.num_objs
1289 ± 2% -13.1% 1120 ± 1% slabinfo.kmalloc-64.num_slabs
152.50 ± 37% -72.0% 42.67 ± 85% numa-vmstat.node0.nr_dirtied
5982 ±148% -97.8% 130.00 ± 21% numa-vmstat.node0.nr_shmem
148.75 ± 37% -71.5% 42.33 ± 86% numa-vmstat.node0.nr_written
14143937 ± 2% -17.2% 11706912 ± 2% numa-vmstat.node0.numa_hit
14109627 ± 2% -17.3% 11671407 ± 2% numa-vmstat.node0.numa_local
32.00 ±119% +244.5% 110.25 ± 75% numa-vmstat.node1.nr_dirtied
30.67 ±121% +252.2% 108.00 ± 75% numa-vmstat.node1.nr_written
14248863 ± 3% -15.3% 12069600 ± 1% numa-vmstat.node1.numa_hit
14238070 ± 3% -15.3% 12059120 ± 1% numa-vmstat.node1.numa_local
29727 ± 3% +40.5% 41752 ± 25% numa-vmstat.node2.nr_file_pages
629.25 ±144% +279.3% 2386 ± 9% numa-vmstat.node2.nr_inactive_anon
650.50 ±140% +1851.9% 12697 ± 82% numa-vmstat.node2.nr_shmem
14166575 ± 2% -18.0% 11610237 ± 2% numa-vmstat.node2.numa_hit
14124116 ± 2% -18.1% 11569737 ± 2% numa-vmstat.node2.numa_local
14255093 ± 2% -17.7% 11737181 ± 1% numa-vmstat.node3.numa_hit
14214429 ± 2% -17.7% 11695545 ± 1% numa-vmstat.node3.numa_local
4.22 ± 7% -27.1% 3.07 ± 6% perf-profile.cpu-cycles.SYSC_accept4.sys_accept.entry_SYSCALL_64_fastpath
2.12 ± 3% -24.3% 1.60 ± 4% perf-profile.cpu-cycles.SYSC_bind.sys_bind.entry_SYSCALL_64_fastpath
25.95 ± 1% +21.2% 31.45 ± 3% perf-profile.cpu-cycles.SYSC_connect.sys_connect.entry_SYSCALL_64_fastpath
10.69 ± 2% -20.7% 8.48 ± 2% perf-profile.cpu-cycles.SYSC_recvfrom.sys_recvfrom.entry_SYSCALL_64_fastpath
23.42 ± 6% -37.4% 14.65 ± 9% perf-profile.cpu-cycles.SYSC_sendto.sys_sendto.entry_SYSCALL_64_fastpath
15.88 ± 4% +67.7% 26.64 ± 5% perf-profile.cpu-cycles.____fput.task_work_run.do_notify_resume.int_signal
21.77 ± 12% -47.6% 11.40 ± 15% perf-profile.cpu-cycles.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.enqueue_task.activate_task
3.14 ± 14% -23.2% 2.41 ± 4% perf-profile.cpu-cycles.__dentry_kill.dput.__fput.____fput.task_work_run
2.52 ± 6% -59.8% 1.01 ± 36% perf-profile.cpu-cycles.__destroy_inode.destroy_inode.evict.iput.__dentry_kill
3.01 ± 2% -31.9% 2.05 ± 3% perf-profile.cpu-cycles.__dev_queue_xmit.dev_queue_xmit_sk.ip_finish_output2.ip_finish_output.ip_output
12.36 ± 6% +12.4% 13.89 ± 1% perf-profile.cpu-cycles.__do_softirq.do_softirq_own_stack.do_softirq.__local_bh_enable_ip._raw_spin_unlock_bh
29.10 ± 4% -13.3% 25.23 ± 2% perf-profile.cpu-cycles.__do_softirq.do_softirq_own_stack.do_softirq.__local_bh_enable_ip.ip_finish_output2
2.16 ± 16% -43.8% 1.21 ± 3% perf-profile.cpu-cycles.__do_softirq.do_softirq_own_stack.do_softirq.__local_bh_enable_ip.tcp_prequeue_process
15.70 ± 4% +68.6% 26.47 ± 4% perf-profile.cpu-cycles.__fput.____fput.task_work_run.do_notify_resume.int_signal
0.67 ± 17% +69.9% 1.14 ± 5% perf-profile.cpu-cycles.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
25.34 ± 1% +22.3% 30.99 ± 3% perf-profile.cpu-cycles.__inet_stream_connect.inet_stream_connect.SYSC_connect.sys_connect.entry_SYSCALL_64_fastpath
3.18 ± 2% -20.8% 2.52 ± 3% perf-profile.cpu-cycles.__kernel_text_address.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency
12.44 ± 6% +12.0% 13.94 ± 1% perf-profile.cpu-cycles.__local_bh_enable_ip._raw_spin_unlock_bh.release_sock.__inet_stream_connect.inet_stream_connect
29.46 ± 4% -13.3% 25.54 ± 2% perf-profile.cpu-cycles.__local_bh_enable_ip.ip_finish_output2.ip_finish_output.ip_output.ip_local_out_sk
2.40 ± 15% -47.1% 1.27 ± 4% perf-profile.cpu-cycles.__local_bh_enable_ip.tcp_prequeue_process.tcp_recvmsg.inet_recvmsg.sock_recvmsg
1.49 ± 3% -15.6% 1.26 ± 1% perf-profile.cpu-cycles.__schedule.schedule.schedule_timeout.sk_wait_data.tcp_recvmsg
1.49 ± 4% -12.4% 1.31 ± 1% perf-profile.cpu-cycles.__sock_create.sys_socket.entry_SYSCALL_64_fastpath
21.55 ± 7% -38.8% 13.18 ± 10% perf-profile.cpu-cycles.__tcp_push_pending_frames.tcp_push.tcp_sendmsg.inet_sendmsg.sock_sendmsg
7.71 ± 3% +56.6% 12.09 ± 4% perf-profile.cpu-cycles.__tcp_push_pending_frames.tcp_send_fin.tcp_close.inet_release.sock_release
2.66 ± 2% -21.8% 2.08 ± 6% perf-profile.cpu-cycles.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
8.51 ± 10% -44.9% 4.70 ± 13% perf-profile.cpu-cycles.__wake_up_common.__wake_up_sync_key.sock_def_readable.tcp_child_process.tcp_v4_do_rcv
16.79 ± 10% -43.8% 9.44 ± 12% perf-profile.cpu-cycles.__wake_up_common.__wake_up_sync_key.tcp_prequeue.tcp_v4_rcv.ip_local_deliver_finish
8.60 ± 11% -44.8% 4.75 ± 12% perf-profile.cpu-cycles.__wake_up_sync_key.sock_def_readable.tcp_child_process.tcp_v4_do_rcv.tcp_v4_rcv
16.96 ± 10% -43.6% 9.56 ± 12% perf-profile.cpu-cycles.__wake_up_sync_key.tcp_prequeue.tcp_v4_rcv.ip_local_deliver_finish.ip_local_deliver
0.82 ± 24% -42.1% 0.47 ± 6% perf-profile.cpu-cycles._raw_spin_lock.inode_doinit_with_dentry.selinux_d_instantiate.security_d_instantiate.d_instantiate
1.74 ± 20% -52.2% 0.83 ± 6% perf-profile.cpu-cycles._raw_spin_lock.selinux_inode_free_security.security_inode_free.__destroy_inode.destroy_inode
0.00 ± -1% +Inf% 6.25 ± 17% perf-profile.cpu-cycles._raw_spin_lock_bh.tcp_get_metrics.tcp_init_metrics.tcp_finish_connect.tcp_rcv_state_process
0.00 ± -1% +Inf% 5.69 ± 12% perf-profile.cpu-cycles._raw_spin_lock_bh.tcp_get_metrics.tcp_init_metrics.tcp_rcv_state_process.tcp_child_process
0.00 ± -1% +Inf% 8.21 ± 12% perf-profile.cpu-cycles._raw_spin_lock_bh.tcp_get_metrics.tcp_update_metrics.tcp_rcv_state_process.tcp_v4_do_rcv
0.00 ± -1% +Inf% 5.65 ± 13% perf-profile.cpu-cycles._raw_spin_lock_bh.tcp_get_metrics.tcp_update_metrics.tcp_time_wait.tcp_fin
14.89 ± 19% -60.2% 5.92 ± 26% perf-profile.cpu-cycles._raw_spin_lock_irqsave.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.enqueue_task
12.46 ± 6% +11.8% 13.92 ± 1% perf-profile.cpu-cycles._raw_spin_unlock_bh.release_sock.__inet_stream_connect.inet_stream_connect.SYSC_connect
23.55 ± 11% -45.6% 12.82 ± 13% perf-profile.cpu-cycles.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function.autoremove_wake_function
8.49 ± 11% -45.0% 4.67 ± 13% perf-profile.cpu-cycles.autoremove_wake_function.__wake_up_common.__wake_up_sync_key.sock_def_readable.tcp_child_process
16.75 ± 10% -43.9% 9.40 ± 12% perf-profile.cpu-cycles.autoremove_wake_function.__wake_up_common.__wake_up_sync_key.tcp_prequeue.tcp_v4_rcv
1.82 ± 1% -23.9% 1.39 ± 1% perf-profile.cpu-cycles.copy_page_to_iter.generic_file_read_iter.__vfs_read.vfs_read.sys_read
1.29 ± 7% -42.3% 0.75 ± 26% perf-profile.cpu-cycles.d_instantiate.sock_alloc_file.SYSC_accept4.sys_accept.entry_SYSCALL_64_fastpath
8.47 ± 11% -44.9% 4.67 ± 12% perf-profile.cpu-cycles.default_wake_function.autoremove_wake_function.__wake_up_common.__wake_up_sync_key.sock_def_readable
16.71 ± 10% -44.0% 9.36 ± 12% perf-profile.cpu-cycles.default_wake_function.autoremove_wake_function.__wake_up_common.__wake_up_sync_key.tcp_prequeue
1.12 ± 5% -28.6% 0.80 ± 11% perf-profile.cpu-cycles.dequeue_task.deactivate_task.__schedule.schedule.schedule_timeout
2.62 ± 5% -45.4% 1.43 ± 25% perf-profile.cpu-cycles.destroy_inode.evict.iput.__dentry_kill.dput
1.44 ± 3% -16.1% 1.21 ± 2% perf-profile.cpu-cycles.dev_hard_start_xmit.__dev_queue_xmit.dev_queue_xmit_sk.ip_finish_output2.ip_finish_output
3.15 ± 3% -30.5% 2.18 ± 4% perf-profile.cpu-cycles.dev_queue_xmit_sk.ip_finish_output2.ip_finish_output.ip_output.ip_local_out_sk
2.07 ± 7% -32.2% 1.40 ± 3% perf-profile.cpu-cycles.do_filp_open.do_sys_open.sys_open.entry_SYSCALL_64_fastpath
1.65 ± 3% -17.7% 1.36 ± 2% perf-profile.cpu-cycles.do_mmap_pgoff.vm_mmap_pgoff.sys_mmap_pgoff.sys_mmap.entry_SYSCALL_64_fastpath
2.42 ± 2% -25.6% 1.80 ± 5% perf-profile.cpu-cycles.do_munmap.vm_munmap.sys_munmap.entry_SYSCALL_64_fastpath
16.19 ± 4% +65.9% 26.85 ± 4% perf-profile.cpu-cycles.do_notify_resume.int_signal
12.40 ± 6% +12.3% 13.92 ± 1% perf-profile.cpu-cycles.do_softirq.part.13.__local_bh_enable_ip._raw_spin_unlock_bh.release_sock.__inet_stream_connect
29.32 ± 4% -13.3% 25.43 ± 2% perf-profile.cpu-cycles.do_softirq.part.13.__local_bh_enable_ip.ip_finish_output2.ip_finish_output.ip_output
2.36 ± 15% -47.0% 1.25 ± 3% perf-profile.cpu-cycles.do_softirq.part.13.__local_bh_enable_ip.tcp_prequeue_process.tcp_recvmsg.inet_recvmsg
12.38 ± 6% +12.4% 13.91 ± 1% perf-profile.cpu-cycles.do_softirq_own_stack.do_softirq.__local_bh_enable_ip._raw_spin_unlock_bh.release_sock
29.23 ± 4% -13.3% 25.34 ± 2% perf-profile.cpu-cycles.do_softirq_own_stack.do_softirq.__local_bh_enable_ip.ip_finish_output2.ip_finish_output
2.27 ± 16% -45.7% 1.23 ± 3% perf-profile.cpu-cycles.do_softirq_own_stack.do_softirq.__local_bh_enable_ip.tcp_prequeue_process.tcp_recvmsg
2.44 ± 2% -30.5% 1.70 ± 6% perf-profile.cpu-cycles.do_sys_open.sys_open.entry_SYSCALL_64_fastpath
3.60 ± 15% -31.8% 2.46 ± 1% perf-profile.cpu-cycles.dput.__fput.____fput.task_work_run.do_notify_resume
5.97 ± 3% -24.1% 4.53 ± 4% perf-profile.cpu-cycles.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
22.90 ± 12% -46.4% 12.28 ± 14% perf-profile.cpu-cycles.enqueue_entity.enqueue_task_fair.enqueue_task.activate_task.ttwu_do_activate
23.53 ± 11% -45.6% 12.80 ± 13% perf-profile.cpu-cycles.enqueue_task.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function
23.30 ± 11% -46.0% 12.59 ± 13% perf-profile.cpu-cycles.enqueue_task_fair.enqueue_task.activate_task.ttwu_do_activate.try_to_wake_up
81.30 ± 1% -12.6% 71.09 ± 1% perf-profile.cpu-cycles.entry_SYSCALL_64_fastpath
2.79 ± 9% -26.7% 2.05 ± 13% perf-profile.cpu-cycles.evict.iput.__dentry_kill.dput.__fput
2.63 ± 1% -24.7% 1.98 ± 5% perf-profile.cpu-cycles.generic_file_read_iter.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
1.25 ± 1% -11.6% 1.10 ± 6% perf-profile.cpu-cycles.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.copy_page_to_iter
1.61 ± 4% -17.0% 1.34 ± 2% perf-profile.cpu-cycles.inet_accept.SYSC_accept4.sys_accept.entry_SYSCALL_64_fastpath
1.08 ± 4% -27.3% 0.79 ± 5% perf-profile.cpu-cycles.inet_bind.SYSC_bind.sys_bind.entry_SYSCALL_64_fastpath
1.42 ± 4% -9.7% 1.28 ± 1% perf-profile.cpu-cycles.inet_csk_accept.inet_accept.SYSC_accept4.sys_accept.entry_SYSCALL_64_fastpath
10.13 ± 2% -20.2% 8.09 ± 3% perf-profile.cpu-cycles.inet_recvmsg.sock_recvmsg.SYSC_recvfrom.sys_recvfrom.entry_SYSCALL_64_fastpath
10.96 ± 2% +112.9% 23.33 ± 6% perf-profile.cpu-cycles.inet_release.sock_release.sock_close.__fput.____fput
23.08 ± 6% -37.9% 14.34 ± 9% perf-profile.cpu-cycles.inet_sendmsg.sock_sendmsg.SYSC_sendto.sys_sendto.entry_SYSCALL_64_fastpath
25.43 ± 1% +22.3% 31.09 ± 3% perf-profile.cpu-cycles.inet_stream_connect.SYSC_connect.sys_connect.entry_SYSCALL_64_fastpath
1.18 ± 13% -70.0% 0.35 ± 45% perf-profile.cpu-cycles.inode_doinit_with_dentry.selinux_d_instantiate.security_d_instantiate.d_instantiate.sock_alloc_file
16.22 ± 4% +65.8% 26.88 ± 4% perf-profile.cpu-cycles.int_signal
32.31 ± 4% -13.9% 27.82 ± 2% perf-profile.cpu-cycles.ip_finish_output.ip_output.ip_local_out_sk.ip_queue_xmit.tcp_transmit_skb
32.07 ± 3% -13.9% 27.61 ± 2% perf-profile.cpu-cycles.ip_finish_output2.ip_finish_output.ip_output.ip_local_out_sk.ip_queue_xmit
5.52 ± 2% -19.1% 4.46 ± 2% perf-profile.cpu-cycles.ip_local_out_sk.ip_queue_xmit.tcp_transmit_skb.tcp_connect.tcp_v4_connect
26.33 ± 5% -12.7% 22.97 ± 2% perf-profile.cpu-cycles.ip_local_out_sk.ip_queue_xmit.tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames
5.46 ± 2% -19.2% 4.41 ± 2% perf-profile.cpu-cycles.ip_output.ip_local_out_sk.ip_queue_xmit.tcp_transmit_skb.tcp_connect
1.52 ± 11% -28.4% 1.09 ± 4% perf-profile.cpu-cycles.ip_output.ip_local_out_sk.ip_queue_xmit.tcp_transmit_skb.tcp_send_ack
26.05 ± 5% -12.6% 22.77 ± 2% perf-profile.cpu-cycles.ip_output.ip_local_out_sk.ip_queue_xmit.tcp_transmit_skb.tcp_write_xmit
6.14 ± 2% -19.2% 4.96 ± 2% perf-profile.cpu-cycles.ip_queue_xmit.tcp_transmit_skb.tcp_connect.tcp_v4_connect.__inet_stream_connect
1.29 ± 10% -17.7% 1.06 ± 2% perf-profile.cpu-cycles.ip_queue_xmit.tcp_transmit_skb.tcp_send_ack.__tcp_ack_snd_check.tcp_rcv_established
20.46 ± 7% -40.0% 12.29 ± 10% perf-profile.cpu-cycles.ip_queue_xmit.tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames.tcp_push
6.16 ± 3% +76.8% 10.88 ± 5% perf-profile.cpu-cycles.ip_queue_xmit.tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames.tcp_send_fin
2.96 ± 13% -23.7% 2.25 ± 7% perf-profile.cpu-cycles.iput.__dentry_kill.dput.__fput.____fput
1.23 ± 5% -25.3% 0.92 ± 9% perf-profile.cpu-cycles.is_module_text_address.__kernel_text_address.print_context_stack.dump_trace.save_stack_trace_tsk
1.18 ± 3% -23.6% 0.90 ± 4% perf-profile.cpu-cycles.loopback_xmit.dev_hard_start_xmit.__dev_queue_xmit.dev_queue_xmit_sk.ip_finish_output2
1.40 ± 23% -79.7% 0.28 ± 17% perf-profile.cpu-cycles.native_queued_spin_lock_slowpath._raw_spin_lock.selinux_inode_free_security.security_inode_free.__destroy_inode
0.00 ± -1% +Inf% 6.06 ± 17% perf-profile.cpu-cycles.native_queued_spin_lock_slowpath._raw_spin_lock_bh.tcp_get_metrics.tcp_init_metrics.tcp_finish_connect
0.00 ± -1% +Inf% 5.52 ± 12% perf-profile.cpu-cycles.native_queued_spin_lock_slowpath._raw_spin_lock_bh.tcp_get_metrics.tcp_init_metrics.tcp_rcv_state_process
0.00 ± -1% +Inf% 7.96 ± 12% perf-profile.cpu-cycles.native_queued_spin_lock_slowpath._raw_spin_lock_bh.tcp_get_metrics.tcp_update_metrics.tcp_rcv_state_process
0.00 ± -1% +Inf% 5.49 ± 14% perf-profile.cpu-cycles.native_queued_spin_lock_slowpath._raw_spin_lock_bh.tcp_get_metrics.tcp_update_metrics.tcp_time_wait
14.34 ± 20% -61.6% 5.50 ± 28% perf-profile.cpu-cycles.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
1.91 ± 7% -29.3% 1.35 ± 3% perf-profile.cpu-cycles.path_openat.do_filp_open.do_sys_open.sys_open.entry_SYSCALL_64_fastpath
5.42 ± 4% -24.0% 4.12 ± 2% perf-profile.cpu-cycles.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity
15.36 ± 4% +49.3% 22.93 ± 5% perf-profile.cpu-cycles.release_sock.__inet_stream_connect.inet_stream_connect.SYSC_connect.sys_connect
1.47 ± 4% +572.7% 9.86 ± 9% perf-profile.cpu-cycles.release_sock.tcp_close.inet_release.sock_release.sock_close
5.99 ± 3% -23.9% 4.56 ± 4% perf-profile.cpu-cycles.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.enqueue_task
0.94 ± 3% -21.5% 0.74 ± 3% perf-profile.cpu-cycles.schedule.schedule_timeout.inet_csk_accept.inet_accept.SYSC_accept4
1.55 ± 4% -17.0% 1.28 ± 1% perf-profile.cpu-cycles.schedule.schedule_timeout.sk_wait_data.tcp_recvmsg.inet_recvmsg
0.96 ± 3% -22.1% 0.75 ± 3% perf-profile.cpu-cycles.schedule_timeout.inet_csk_accept.inet_accept.SYSC_accept4.sys_accept
1.58 ± 3% -18.3% 1.29 ± 1% perf-profile.cpu-cycles.schedule_timeout.sk_wait_data.tcp_recvmsg.inet_recvmsg.sock_recvmsg
1.25 ± 7% -57.6% 0.53 ± 40% perf-profile.cpu-cycles.security_d_instantiate.d_instantiate.sock_alloc_file.SYSC_accept4.sys_accept
2.41 ± 9% -73.2% 0.65 ± 47% perf-profile.cpu-cycles.security_inode_free.__destroy_inode.destroy_inode.evict.iput
1.01 ± 8% -36.1% 0.64 ± 6% perf-profile.cpu-cycles.security_sock_rcv_skb.sk_filter.tcp_v4_rcv.ip_local_deliver_finish.ip_local_deliver
1.21 ± 2% -20.3% 0.96 ± 1% perf-profile.cpu-cycles.security_socket_bind.SYSC_bind.sys_bind.entry_SYSCALL_64_fastpath
1.24 ± 8% -58.7% 0.51 ± 42% perf-profile.cpu-cycles.selinux_d_instantiate.security_d_instantiate.d_instantiate.sock_alloc_file.SYSC_accept4
2.38 ± 9% -74.9% 0.60 ± 47% perf-profile.cpu-cycles.selinux_inode_free_security.security_inode_free.__destroy_inode.destroy_inode.evict
1.19 ± 3% -23.6% 0.91 ± 2% perf-profile.cpu-cycles.selinux_socket_bind.security_socket_bind.SYSC_bind.sys_bind.entry_SYSCALL_64_fastpath
1.14 ± 4% -19.7% 0.92 ± 3% perf-profile.cpu-cycles.sk_filter.tcp_v4_rcv.ip_local_deliver_finish.ip_local_deliver.ip_rcv_finish
2.45 ± 2% -12.9% 2.14 ± 5% perf-profile.cpu-cycles.sk_wait_data.tcp_recvmsg.inet_recvmsg.sock_recvmsg.SYSC_recvfrom
1.48 ± 11% -18.1% 1.21 ± 3% perf-profile.cpu-cycles.sock_alloc_file.SYSC_accept4.sys_accept.entry_SYSCALL_64_fastpath
11.04 ± 2% +112.7% 23.48 ± 6% perf-profile.cpu-cycles.sock_close.__fput.____fput.task_work_run.do_notify_resume
8.65 ± 10% -44.5% 4.80 ± 12% perf-profile.cpu-cycles.sock_def_readable.tcp_child_process.tcp_v4_do_rcv.tcp_v4_rcv.ip_local_deliver_finish
10.42 ± 2% -20.3% 8.30 ± 3% perf-profile.cpu-cycles.sock_recvmsg.SYSC_recvfrom.sys_recvfrom.entry_SYSCALL_64_fastpath
11.02 ± 2% +113.0% 23.48 ± 6% perf-profile.cpu-cycles.sock_release.sock_close.__fput.____fput.task_work_run
23.26 ± 6% -37.7% 14.49 ± 9% perf-profile.cpu-cycles.sock_sendmsg.SYSC_sendto.sys_sendto.entry_SYSCALL_64_fastpath
4.31 ± 7% -26.6% 3.17 ± 6% perf-profile.cpu-cycles.sys_accept.entry_SYSCALL_64_fastpath
2.16 ± 3% -25.0% 1.62 ± 4% perf-profile.cpu-cycles.sys_bind.entry_SYSCALL_64_fastpath
26.02 ± 1% +21.0% 31.48 ± 3% perf-profile.cpu-cycles.sys_connect.entry_SYSCALL_64_fastpath
2.09 ± 4% -27.8% 1.51 ± 3% perf-profile.cpu-cycles.sys_mmap.entry_SYSCALL_64_fastpath
2.07 ± 4% -27.4% 1.50 ± 3% perf-profile.cpu-cycles.sys_mmap_pgoff.sys_mmap.entry_SYSCALL_64_fastpath
2.53 ± 2% -21.6% 1.98 ± 5% perf-profile.cpu-cycles.sys_munmap.entry_SYSCALL_64_fastpath
2.49 ± 3% -28.9% 1.77 ± 7% perf-profile.cpu-cycles.sys_open.entry_SYSCALL_64_fastpath
2.96 ± 3% -20.2% 2.37 ± 2% perf-profile.cpu-cycles.sys_read.entry_SYSCALL_64_fastpath
10.74 ± 2% -20.7% 8.52 ± 2% perf-profile.cpu-cycles.sys_recvfrom.entry_SYSCALL_64_fastpath
23.48 ± 6% -37.4% 14.69 ± 9% perf-profile.cpu-cycles.sys_sendto.entry_SYSCALL_64_fastpath
2.43 ± 2% -23.8% 1.85 ± 6% perf-profile.cpu-cycles.sys_socket.entry_SYSCALL_64_fastpath
16.06 ± 4% +66.8% 26.79 ± 5% perf-profile.cpu-cycles.task_work_run.do_notify_resume.int_signal
1.94 ± 5% -24.0% 1.48 ± 2% perf-profile.cpu-cycles.tcp_check_req.tcp_v4_do_rcv.tcp_v4_rcv.ip_local_deliver_finish.ip_local_deliver
9.42 ± 9% +21.5% 11.45 ± 1% perf-profile.cpu-cycles.tcp_child_process.tcp_v4_do_rcv.tcp_v4_rcv.ip_local_deliver_finish.ip_local_deliver
10.82 ± 2% +114.2% 23.18 ± 6% perf-profile.cpu-cycles.tcp_close.inet_release.sock_release.sock_close.__fput
2.44 ± 2% -24.6% 1.84 ± 4% perf-profile.cpu-cycles.tcp_conn_request.tcp_v4_conn_request.tcp_rcv_state_process.tcp_v4_do_rcv.tcp_v4_rcv
7.43 ± 2% -19.2% 6.00 ± 2% perf-profile.cpu-cycles.tcp_connect.tcp_v4_connect.__inet_stream_connect.inet_stream_connect.SYSC_connect
1.16 ± 6% -35.6% 0.74 ± 8% perf-profile.cpu-cycles.tcp_create_openreq_child.tcp_v4_syn_recv_sock.tcp_check_req.tcp_v4_do_rcv.tcp_v4_rcv
2.52 ± 4% +215.6% 7.96 ± 9% perf-profile.cpu-cycles.tcp_data_queue.tcp_rcv_state_process.tcp_v4_do_rcv.tcp_v4_rcv.ip_local_deliver_finish
1.14 ± 7% -26.3% 0.84 ± 3% perf-profile.cpu-cycles.tcp_done.tcp_time_wait.tcp_fin.tcp_data_queue.tcp_rcv_state_process
2.41 ± 3% +224.9% 7.82 ± 9% perf-profile.cpu-cycles.tcp_fin.tcp_data_queue.tcp_rcv_state_process.tcp_v4_do_rcv.tcp_v4_rcv
1.10 ± 2% +565.6% 7.30 ± 15% perf-profile.cpu-cycles.tcp_finish_connect.tcp_rcv_state_process.tcp_v4_do_rcv.release_sock.__inet_stream_connect
0.11 ± 19% +5669.6% 6.63 ± 17% perf-profile.cpu-cycles.tcp_get_metrics.tcp_init_metrics.tcp_finish_connect.tcp_rcv_state_process.tcp_v4_do_rcv
0.10 ± 15% +5890.0% 5.99 ± 11% perf-profile.cpu-cycles.tcp_get_metrics.tcp_init_metrics.tcp_rcv_state_process.tcp_child_process.tcp_v4_do_rcv
0.10 ± 15% +8738.5% 8.62 ± 11% perf-profile.cpu-cycles.tcp_get_metrics.tcp_update_metrics.tcp_rcv_state_process.tcp_v4_do_rcv.release_sock
0.11 ± 18% +5345.5% 5.99 ± 12% perf-profile.cpu-cycles.tcp_get_metrics.tcp_update_metrics.tcp_time_wait.tcp_fin.tcp_data_queue
0.03 ± 35% +25137.5% 6.73 ± 16% perf-profile.cpu-cycles.tcp_init_metrics.tcp_finish_connect.tcp_rcv_state_process.tcp_v4_do_rcv.release_sock
0.02 ± 53% +25935.7% 6.08 ± 11% perf-profile.cpu-cycles.tcp_init_metrics.tcp_rcv_state_process.tcp_child_process.tcp_v4_do_rcv.tcp_v4_rcv
17.50 ± 9% -43.1% 9.95 ± 12% perf-profile.cpu-cycles.tcp_prequeue.tcp_v4_rcv.ip_local_deliver_finish.ip_local_deliver.ip_rcv_finish
7.24 ± 3% -19.9% 5.80 ± 3% perf-profile.cpu-cycles.tcp_prequeue_process.tcp_recvmsg.inet_recvmsg.sock_recvmsg.SYSC_recvfrom
21.65 ± 7% -39.0% 13.21 ± 10% perf-profile.cpu-cycles.tcp_push.tcp_sendmsg.inet_sendmsg.sock_sendmsg.SYSC_sendto
3.75 ± 3% -30.0% 2.62 ± 2% perf-profile.cpu-cycles.tcp_rcv_established.tcp_v4_do_rcv.tcp_prequeue_process.tcp_recvmsg.inet_recvmsg
0.49 ± 8% +1221.9% 6.48 ± 10% perf-profile.cpu-cycles.tcp_rcv_state_process.tcp_child_process.tcp_v4_do_rcv.tcp_v4_rcv.ip_local_deliver_finish
2.71 ± 2% +226.3% 8.84 ± 12% perf-profile.cpu-cycles.tcp_rcv_state_process.tcp_v4_do_rcv.release_sock.__inet_stream_connect.inet_stream_connect
1.37 ± 4% +610.4% 9.75 ± 9% perf-profile.cpu-cycles.tcp_rcv_state_process.tcp_v4_do_rcv.release_sock.tcp_close.inet_release
6.49 ± 3% +71.7% 11.15 ± 5% perf-profile.cpu-cycles.tcp_rcv_state_process.tcp_v4_do_rcv.tcp_v4_rcv.ip_local_deliver_finish.ip_local_deliver
10.02 ± 3% -20.1% 8.01 ± 3% perf-profile.cpu-cycles.tcp_recvmsg.inet_recvmsg.sock_recvmsg.SYSC_recvfrom.sys_recvfrom
2.40 ± 1% -14.1% 2.06 ± 3% perf-profile.cpu-cycles.tcp_send_ack.__tcp_ack_snd_check.tcp_rcv_established.tcp_v4_do_rcv.tcp_prequeue_process
8.41 ± 2% +47.8% 12.42 ± 4% perf-profile.cpu-cycles.tcp_send_fin.tcp_close.inet_release.sock_release.sock_close
22.87 ± 6% -38.2% 14.14 ± 9% perf-profile.cpu-cycles.tcp_sendmsg.inet_sendmsg.sock_sendmsg.SYSC_sendto.sys_sendto
1.43 ± 3% +400.0% 7.14 ± 10% perf-profile.cpu-cycles.tcp_time_wait.tcp_fin.tcp_data_queue.tcp_rcv_state_process.tcp_v4_do_rcv
6.51 ± 2% -19.0% 5.27 ± 2% perf-profile.cpu-cycles.tcp_transmit_skb.tcp_connect.tcp_v4_connect.__inet_stream_connect.inet_stream_connect
1.71 ± 4% -19.6% 1.37 ± 2% perf-profile.cpu-cycles.tcp_transmit_skb.tcp_send_ack.__tcp_ack_snd_check.tcp_rcv_established.tcp_v4_do_rcv
20.86 ± 7% -39.4% 12.65 ± 10% perf-profile.cpu-cycles.tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames.tcp_push.tcp_sendmsg
6.46 ± 2% +72.5% 11.13 ± 5% perf-profile.cpu-cycles.tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames.tcp_send_fin.tcp_close
0.06 ± 48% +14437.5% 8.72 ± 11% perf-profile.cpu-cycles.tcp_update_metrics.tcp_rcv_state_process.tcp_v4_do_rcv.release_sock.tcp_close
0.07 ± 57% +8066.7% 6.12 ± 12% perf-profile.cpu-cycles.tcp_update_metrics.tcp_time_wait.tcp_fin.tcp_data_queue.tcp_rcv_state_process
2.53 ± 1% -22.5% 1.96 ± 3% perf-profile.cpu-cycles.tcp_v4_conn_request.tcp_rcv_state_process.tcp_v4_do_rcv.tcp_v4_rcv.ip_local_deliver_finish
8.76 ± 2% -19.5% 7.05 ± 2% perf-profile.cpu-cycles.tcp_v4_connect.__inet_stream_connect.inet_stream_connect.SYSC_connect.sys_connect
2.81 ± 2% +216.7% 8.91 ± 12% perf-profile.cpu-cycles.tcp_v4_do_rcv.release_sock.__inet_stream_connect.inet_stream_connect.SYSC_connect
1.40 ± 4% +602.0% 9.79 ± 9% perf-profile.cpu-cycles.tcp_v4_do_rcv.release_sock.tcp_close.inet_release.sock_release
3.92 ± 2% -64.3% 1.40 ± 18% perf-profile.cpu-cycles.tcp_v4_do_rcv.tcp_prequeue_process.tcp_recvmsg.inet_recvmsg.sock_recvmsg
19.08 ± 3% +31.2% 25.04 ± 2% perf-profile.cpu-cycles.tcp_v4_do_rcv.tcp_v4_rcv.ip_local_deliver_finish.ip_local_deliver.ip_rcv_finish
1.24 ± 1% -13.7% 1.07 ± 5% perf-profile.cpu-cycles.tcp_v4_send_synack.tcp_conn_request.tcp_v4_conn_request.tcp_rcv_state_process.tcp_v4_do_rcv
1.48 ± 2% -11.1% 1.32 ± 1% perf-profile.cpu-cycles.tcp_v4_syn_recv_sock.tcp_check_req.tcp_v4_do_rcv.tcp_v4_rcv.ip_local_deliver_finish
21.47 ± 7% -39.0% 13.11 ± 9% perf-profile.cpu-cycles.tcp_write_xmit.__tcp_push_pending_frames.tcp_push.tcp_sendmsg.inet_sendmsg
7.65 ± 2% +57.3% 12.04 ± 5% perf-profile.cpu-cycles.tcp_write_xmit.__tcp_push_pending_frames.tcp_send_fin.tcp_close.inet_release
0.43 ± 20% +135.6% 1.03 ± 7% perf-profile.cpu-cycles.tick_sched_handle.isra.17.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt
0.51 ± 18% +107.4% 1.06 ± 7% perf-profile.cpu-cycles.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt
25.09 ± 10% -44.3% 13.97 ± 12% perf-profile.cpu-cycles.try_to_wake_up.default_wake_function.autoremove_wake_function.__wake_up_common.__wake_up_sync_key
23.97 ± 11% -45.4% 13.10 ± 13% perf-profile.cpu-cycles.ttwu_do_activate.constprop.83.try_to_wake_up.default_wake_function.autoremove_wake_function.__wake_up_common
0.39 ± 24% +153.9% 0.98 ± 8% perf-profile.cpu-cycles.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
2.88 ± 3% -19.8% 2.31 ± 4% perf-profile.cpu-cycles.vfs_read.sys_read.entry_SYSCALL_64_fastpath
2.01 ± 4% -26.9% 1.47 ± 2% perf-profile.cpu-cycles.vm_mmap_pgoff.sys_mmap_pgoff.sys_mmap.entry_SYSCALL_64_fastpath
2.50 ± 2% -24.4% 1.89 ± 5% perf-profile.cpu-cycles.vm_munmap.sys_munmap.entry_SYSCALL_64_fastpath
175.75 ± 26% -67.3% 57.50 ± 62% sched_debug.cfs_rq[0]:/.blocked_load_avg
195.75 ± 24% -61.9% 74.50 ± 45% sched_debug.cfs_rq[0]:/.tg_load_contrib
1267483 ± 58% +241.8% 4331730 ± 32% sched_debug.cfs_rq[10]:/.MIN_vruntime
106.50 ± 67% +167.1% 284.50 ± 43% sched_debug.cfs_rq[10]:/.blocked_load_avg
20.25 ± 21% +95.1% 39.50 ± 9% sched_debug.cfs_rq[10]:/.load
1267483 ± 58% +241.8% 4331730 ± 32% sched_debug.cfs_rq[10]:/.max_vruntime
7064228 ± 4% +24.8% 8816370 ± 6% sched_debug.cfs_rq[10]:/.min_vruntime
124.00 ± 57% +150.2% 310.25 ± 39% sched_debug.cfs_rq[10]:/.tg_load_contrib
7354915 ± 10% +30.2% 9574721 ± 15% sched_debug.cfs_rq[11]:/.min_vruntime
562.00 ± 14% +29.7% 729.00 ± 9% sched_debug.cfs_rq[11]:/.utilization_load_avg
328.50 ± 84% -74.7% 83.00 ±120% sched_debug.cfs_rq[17]:/.blocked_load_avg
348.75 ± 79% -70.8% 102.00 ± 94% sched_debug.cfs_rq[17]:/.tg_load_contrib
550.25 ± 7% +17.9% 648.50 ± 9% sched_debug.cfs_rq[20]:/.utilization_load_avg
3859894 ± 33% -84.1% 613604 ±133% sched_debug.cfs_rq[21]:/.MIN_vruntime
3859894 ± 33% -84.1% 613604 ±133% sched_debug.cfs_rq[21]:/.max_vruntime
3393438 ± 42% -100.0% 0.00 ± 0% sched_debug.cfs_rq[23]:/.MIN_vruntime
3393438 ± 42% -100.0% 0.00 ± 0% sched_debug.cfs_rq[23]:/.max_vruntime
6969709 ± 6% +9.1% 7603127 ± 4% sched_debug.cfs_rq[23]:/.min_vruntime
20.25 ± 18% -38.3% 12.50 ± 4% sched_debug.cfs_rq[23]:/.runnable_load_avg
21.50 ± 19% +445.3% 117.25 ± 68% sched_debug.cfs_rq[27]:/.load
16.50 ± 10% +31.8% 21.75 ± 5% sched_debug.cfs_rq[27]:/.runnable_load_avg
784.00 ± 91% -87.7% 96.75 ± 92% sched_debug.cfs_rq[28]:/.blocked_load_avg
800.25 ± 89% -85.6% 115.25 ± 75% sched_debug.cfs_rq[28]:/.tg_load_contrib
6864371 ± 11% +28.0% 8788136 ± 10% sched_debug.cfs_rq[2]:/.min_vruntime
-160064 ±-318% -624.8% 839954 ± 61% sched_debug.cfs_rq[2]:/.spread0
91.50 ± 28% -62.0% 34.75 ± 78% sched_debug.cfs_rq[30]:/.nr_spread_over
488.75 ± 18% +29.1% 631.00 ± 10% sched_debug.cfs_rq[30]:/.utilization_load_avg
84.00 ± 36% -54.2% 38.50 ± 60% sched_debug.cfs_rq[31]:/.nr_spread_over
6916568 ± 11% +13.9% 7879054 ± 6% sched_debug.cfs_rq[32]:/.min_vruntime
116.50 ±125% -82.6% 20.25 ± 42% sched_debug.cfs_rq[34]:/.load
6871979 ± 12% +28.6% 8834078 ± 11% sched_debug.cfs_rq[34]:/.min_vruntime
23.25 ± 11% -37.6% 14.50 ± 24% sched_debug.cfs_rq[34]:/.runnable_load_avg
-153189 ±-326% -677.7% 885021 ± 64% sched_debug.cfs_rq[34]:/.spread0
7.75 ± 27% +377.4% 37.00 ± 57% sched_debug.cfs_rq[35]:/.blocked_load_avg
6977299 ± 10% +21.0% 8439329 ± 15% sched_debug.cfs_rq[35]:/.min_vruntime
6848733 ± 11% +22.6% 8396951 ± 10% sched_debug.cfs_rq[37]:/.min_vruntime
-176509 ±-214% -353.7% 447796 ±108% sched_debug.cfs_rq[37]:/.spread0
6783146 ± 8% +25.7% 8525859 ± 14% sched_debug.cfs_rq[38]:/.min_vruntime
511.50 ± 9% +37.9% 705.25 ± 3% sched_debug.cfs_rq[38]:/.utilization_load_avg
6750617 ± 4% +24.5% 8403331 ± 11% sched_debug.cfs_rq[39]:/.min_vruntime
6957525 ± 8% +20.8% 8406635 ± 15% sched_debug.cfs_rq[3]:/.min_vruntime
7280418 ± 2% +22.5% 8920591 ± 14% sched_debug.cfs_rq[40]:/.min_vruntime
7111998 ± 5% +25.1% 8894040 ± 7% sched_debug.cfs_rq[42]:/.min_vruntime
7354642 ± 10% +30.2% 9579368 ± 15% sched_debug.cfs_rq[43]:/.min_vruntime
48.50 ±103% +123.7% 108.50 ± 60% sched_debug.cfs_rq[43]:/.nr_spread_over
7446762 ± 7% +21.6% 9051878 ± 17% sched_debug.cfs_rq[44]:/.min_vruntime
435.50 ± 22% +43.4% 624.50 ± 12% sched_debug.cfs_rq[46]:/.utilization_load_avg
489.00 ± 7% +34.3% 656.50 ± 8% sched_debug.cfs_rq[48]:/.utilization_load_avg
21.75 ± 51% +55.2% 33.75 ± 14% sched_debug.cfs_rq[49]:/.nr_spread_over
282.00 ± 81% -83.5% 46.50 ±117% sched_debug.cfs_rq[4]:/.blocked_load_avg
302.25 ± 75% -79.2% 62.75 ± 88% sched_debug.cfs_rq[4]:/.tg_load_contrib
568.50 ± 19% +24.7% 709.00 ± 1% sched_debug.cfs_rq[4]:/.utilization_load_avg
47.75 ± 69% +703.1% 383.50 ± 78% sched_debug.cfs_rq[50]:/.blocked_load_avg
62.75 ± 52% +541.4% 402.50 ± 75% sched_debug.cfs_rq[50]:/.tg_load_contrib
516.00 ± 13% +34.1% 691.75 ± 12% sched_debug.cfs_rq[50]:/.utilization_load_avg
6918551 ± 5% +9.8% 7594242 ± 5% sched_debug.cfs_rq[55]:/.min_vruntime
70.00 ± 45% -49.6% 35.25 ± 67% sched_debug.cfs_rq[58]:/.nr_spread_over
518.25 ± 22% +33.3% 691.00 ± 6% sched_debug.cfs_rq[58]:/.utilization_load_avg
482.00 ± 15% +30.7% 630.00 ± 5% sched_debug.cfs_rq[59]:/.utilization_load_avg
6893442 ± 10% +22.3% 8433961 ± 9% sched_debug.cfs_rq[5]:/.min_vruntime
561.50 ± 5% +18.1% 663.00 ± 9% sched_debug.cfs_rq[5]:/.utilization_load_avg
2118027 ± 62% -82.9% 361211 ±109% sched_debug.cfs_rq[61]:/.MIN_vruntime
2118027 ± 62% -82.9% 361211 ±109% sched_debug.cfs_rq[61]:/.max_vruntime
3804794 ± 35% -60.4% 1508156 ± 71% sched_debug.cfs_rq[62]:/.MIN_vruntime
3804794 ± 35% -60.4% 1508156 ± 71% sched_debug.cfs_rq[62]:/.max_vruntime
3234850 ± 51% -64.2% 1157307 ±145% sched_debug.cfs_rq[6]:/.MIN_vruntime
3234850 ± 51% -64.2% 1157307 ±145% sched_debug.cfs_rq[6]:/.max_vruntime
6791201 ± 8% +26.7% 8601638 ± 15% sched_debug.cfs_rq[6]:/.min_vruntime
6657096 ± 4% +26.2% 8398917 ± 11% sched_debug.cfs_rq[7]:/.min_vruntime
558.75 ± 11% +29.8% 725.00 ± 11% sched_debug.cfs_rq[7]:/.utilization_load_avg
7286102 ± 1% +23.1% 8965833 ± 15% sched_debug.cfs_rq[8]:/.min_vruntime
3383122 ± 2% -17.9% 2775877 ± 1% sched_debug.cpu#0.nr_switches
3392389 ± 2% -17.9% 2785232 ± 1% sched_debug.cpu#0.sched_count
3380345 ± 2% -19.0% 2737980 ± 1% sched_debug.cpu#0.ttwu_count
3364390 ± 2% -18.8% 2733371 ± 1% sched_debug.cpu#0.ttwu_local
34.25 ± 59% -56.9% 14.75 ± 12% sched_debug.cpu#1.cpu_load[0]
28.50 ± 42% -48.2% 14.75 ± 10% sched_debug.cpu#1.cpu_load[1]
26.75 ± 36% -43.9% 15.00 ± 12% sched_debug.cpu#1.cpu_load[2]
25.25 ± 34% -40.6% 15.00 ± 9% sched_debug.cpu#1.cpu_load[3]
22.50 ± 34% -44.4% 12.50 ± 8% sched_debug.cpu#1.cpu_load[4]
3369428 ± 2% -16.2% 2824862 ± 1% sched_debug.cpu#1.nr_switches
3371094 ± 2% -16.1% 2827260 ± 1% sched_debug.cpu#1.sched_count
3389319 ± 2% -17.1% 2810445 ± 1% sched_debug.cpu#1.ttwu_count
3353366 ± 2% -16.3% 2807598 ± 1% sched_debug.cpu#1.ttwu_local
3335017 ± 2% -15.4% 2820801 ± 1% sched_debug.cpu#10.nr_switches
3335956 ± 2% -15.4% 2821090 ± 1% sched_debug.cpu#10.sched_count
3336430 ± 2% -15.9% 2806740 ± 1% sched_debug.cpu#10.ttwu_count
3318441 ± 2% -15.5% 2804699 ± 1% sched_debug.cpu#10.ttwu_local
3370289 ± 3% -16.3% 2821155 ± 2% sched_debug.cpu#11.nr_switches
3371290 ± 3% -16.3% 2821629 ± 2% sched_debug.cpu#11.sched_count
3361232 ± 3% -15.8% 2829002 ± 2% sched_debug.cpu#11.ttwu_count
3358106 ± 3% -16.4% 2807327 ± 2% sched_debug.cpu#11.ttwu_local
16.50 ± 13% -25.8% 12.25 ± 15% sched_debug.cpu#12.cpu_load[4]
3398141 ± 3% -17.9% 2791418 ± 2% sched_debug.cpu#12.nr_switches
3400097 ± 3% -17.9% 2793004 ± 2% sched_debug.cpu#12.sched_count
1646 ± 32% +66.4% 2739 ± 24% sched_debug.cpu#12.sched_goidle
3395185 ± 3% -17.7% 2792849 ± 3% sched_debug.cpu#12.ttwu_count
3392814 ± 3% -18.5% 2766292 ± 2% sched_debug.cpu#12.ttwu_local
3384418 ± 4% -16.9% 2813855 ± 1% sched_debug.cpu#13.nr_switches
3385991 ± 4% -16.8% 2817516 ± 1% sched_debug.cpu#13.sched_count
3385348 ± 3% -17.2% 2804182 ± 2% sched_debug.cpu#13.ttwu_count
3369621 ± 4% -16.9% 2800765 ± 2% sched_debug.cpu#13.ttwu_local
3377257 ± 3% -16.1% 2833788 ± 2% sched_debug.cpu#14.nr_switches
3379864 ± 3% -16.1% 2836414 ± 2% sched_debug.cpu#14.sched_count
3360760 ± 3% -15.9% 2826049 ± 2% sched_debug.cpu#14.ttwu_count
3357666 ± 3% -15.9% 2823748 ± 2% sched_debug.cpu#14.ttwu_local
3380508 ± 3% -16.2% 2833752 ± 1% sched_debug.cpu#15.nr_switches
3381560 ± 3% -16.2% 2834059 ± 1% sched_debug.cpu#15.sched_count
3381759 ± 3% -16.1% 2838452 ± 2% sched_debug.cpu#15.ttwu_count
3368631 ± 3% -16.4% 2814682 ± 1% sched_debug.cpu#15.ttwu_local
3389967 ± 2% -17.6% 2791917 ± 1% sched_debug.cpu#16.nr_switches
3392333 ± 2% -17.6% 2794492 ± 1% sched_debug.cpu#16.sched_count
3395971 ± 2% -18.1% 2780908 ± 1% sched_debug.cpu#16.ttwu_count
3380594 ± 2% -18.0% 2770502 ± 1% sched_debug.cpu#16.ttwu_local
3379066 ± 2% -17.0% 2804619 ± 2% sched_debug.cpu#17.nr_switches
3379518 ± 2% -17.0% 2806250 ± 2% sched_debug.cpu#17.sched_count
3369568 ± 3% -17.2% 2790359 ± 2% sched_debug.cpu#17.ttwu_count
3366929 ± 3% -17.2% 2788295 ± 2% sched_debug.cpu#17.ttwu_local
3364431 ± 2% -17.1% 2787730 ± 1% sched_debug.cpu#18.nr_switches
3364899 ± 2% -17.1% 2788705 ± 1% sched_debug.cpu#18.sched_count
3358511 ± 2% -17.3% 2776889 ± 1% sched_debug.cpu#18.ttwu_count
3348289 ± 2% -17.6% 2759313 ± 1% sched_debug.cpu#18.ttwu_local
16.75 ± 10% -17.9% 13.75 ± 12% sched_debug.cpu#19.cpu_load[4]
3389035 ± 2% -17.5% 2794874 ± 1% sched_debug.cpu#19.nr_switches
3390379 ± 2% -17.4% 2800760 ± 1% sched_debug.cpu#19.sched_count
3382434 ± 2% -17.7% 2785348 ± 1% sched_debug.cpu#19.ttwu_count
3379051 ± 2% -17.7% 2781538 ± 1% sched_debug.cpu#19.ttwu_local
23.00 ± 21% -38.0% 14.25 ± 7% sched_debug.cpu#2.cpu_load[3]
3360803 ± 2% -16.7% 2800049 ± 1% sched_debug.cpu#2.nr_switches
3361823 ± 2% -16.5% 2805748 ± 1% sched_debug.cpu#2.sched_count
3343040 ± 2% -16.6% 2789182 ± 1% sched_debug.cpu#2.ttwu_count
3338401 ± 2% -16.5% 2787168 ± 1% sched_debug.cpu#2.ttwu_local
3341149 ± 2% -16.8% 2779014 ± 2% sched_debug.cpu#20.nr_switches
3341677 ± 2% -16.8% 2780361 ± 2% sched_debug.cpu#20.sched_count
3317813 ± 1% -16.5% 2771814 ± 2% sched_debug.cpu#20.ttwu_count
3313754 ± 1% -16.9% 2752947 ± 3% sched_debug.cpu#20.ttwu_local
3393487 ± 2% -17.6% 2795372 ± 1% sched_debug.cpu#21.nr_switches
3400093 ± 2% -17.7% 2796868 ± 1% sched_debug.cpu#21.sched_count
3378310 ± 2% -17.5% 2787801 ± 2% sched_debug.cpu#21.ttwu_count
3376315 ± 2% -17.7% 2780178 ± 2% sched_debug.cpu#21.ttwu_local
16.75 ± 2% -14.9% 14.25 ± 7% sched_debug.cpu#22.cpu_load[4]
3387714 ± 2% -17.2% 2803908 ± 2% sched_debug.cpu#22.nr_switches
3388108 ± 2% -17.2% 2805212 ± 2% sched_debug.cpu#22.sched_count
3384725 ± 2% -17.5% 2792197 ± 2% sched_debug.cpu#22.ttwu_count
3382935 ± 2% -17.6% 2788179 ± 2% sched_debug.cpu#22.ttwu_local
21.25 ± 13% -24.7% 16.00 ± 13% sched_debug.cpu#23.cpu_load[3]
18.00 ± 13% -23.6% 13.75 ± 9% sched_debug.cpu#23.cpu_load[4]
3401316 ± 2% -17.5% 2805389 ± 2% sched_debug.cpu#23.nr_switches
3405993 ± 2% -17.6% 2807165 ± 2% sched_debug.cpu#23.sched_count
3390254 ± 2% -17.0% 2814478 ± 1% sched_debug.cpu#23.ttwu_count
3386402 ± 2% -17.5% 2794862 ± 2% sched_debug.cpu#23.ttwu_local
3426699 ± 2% -18.1% 2807177 ± 2% sched_debug.cpu#24.nr_switches
3427534 ± 2% -18.1% 2807540 ± 2% sched_debug.cpu#24.sched_count
3419707 ± 2% -17.2% 2830470 ± 2% sched_debug.cpu#24.ttwu_count
3417657 ± 2% -18.3% 2793422 ± 2% sched_debug.cpu#24.ttwu_local
17.75 ± 7% -15.5% 15.00 ± 4% sched_debug.cpu#25.cpu_load[3]
15.50 ± 9% -16.1% 13.00 ± 5% sched_debug.cpu#25.cpu_load[4]
3390931 ± 3% -16.7% 2825710 ± 2% sched_debug.cpu#25.nr_switches
3394428 ± 3% -16.7% 2826362 ± 2% sched_debug.cpu#25.sched_count
3386135 ± 3% -17.1% 2806667 ± 2% sched_debug.cpu#25.ttwu_count
3374191 ± 3% -17.0% 2800759 ± 2% sched_debug.cpu#25.ttwu_local
3382304 ± 3% -16.8% 2815129 ± 1% sched_debug.cpu#26.nr_switches
3382617 ± 3% -16.8% 2815612 ± 1% sched_debug.cpu#26.sched_count
3391284 ± 2% -17.4% 2800669 ± 2% sched_debug.cpu#26.ttwu_count
3368626 ± 3% -17.0% 2797457 ± 1% sched_debug.cpu#26.ttwu_local
21.75 ± 13% +448.3% 119.25 ±114% sched_debug.cpu#27.load
3377532 ± 2% -17.0% 2802024 ± 1% sched_debug.cpu#27.nr_switches
3377791 ± 2% -17.0% 2803686 ± 1% sched_debug.cpu#27.sched_count
913.75 ± 15% +96.1% 1792 ± 34% sched_debug.cpu#27.sched_goidle
3378183 ± 3% -17.9% 2772945 ± 1% sched_debug.cpu#27.ttwu_count
3361335 ± 3% -17.6% 2769863 ± 1% sched_debug.cpu#27.ttwu_local
126632 ± 46% +52.9% 193604 ± 4% sched_debug.cpu#28.avg_idle
3391259 ± 3% -17.1% 2813047 ± 1% sched_debug.cpu#28.nr_switches
3391775 ± 3% -17.0% 2815887 ± 1% sched_debug.cpu#28.sched_count
3379019 ± 3% -16.9% 2808674 ± 1% sched_debug.cpu#28.ttwu_count
3377041 ± 3% -16.9% 2806132 ± 1% sched_debug.cpu#28.ttwu_local
3374213 ± 3% -16.6% 2812863 ± 1% sched_debug.cpu#29.nr_switches
3374449 ± 3% -16.6% 2815602 ± 1% sched_debug.cpu#29.sched_count
1120 ± 28% +92.2% 2153 ± 33% sched_debug.cpu#29.sched_goidle
3368967 ± 3% -17.4% 2783822 ± 0% sched_debug.cpu#29.ttwu_count
3360964 ± 3% -17.4% 2777215 ± 0% sched_debug.cpu#29.ttwu_local
3380214 ± 2% -17.8% 2777895 ± 1% sched_debug.cpu#3.nr_switches
3380943 ± 2% -17.8% 2779026 ± 1% sched_debug.cpu#3.sched_count
3371958 ± 2% -17.7% 2773613 ± 1% sched_debug.cpu#3.ttwu_count
3368641 ± 2% -17.8% 2770455 ± 1% sched_debug.cpu#3.ttwu_local
3399413 ± 2% -17.2% 2813130 ± 1% sched_debug.cpu#30.nr_switches
3399925 ± 2% -17.3% 2813415 ± 1% sched_debug.cpu#30.sched_count
3388030 ± 2% -17.2% 2805998 ± 1% sched_debug.cpu#30.ttwu_count
3384874 ± 2% -17.2% 2803761 ± 1% sched_debug.cpu#30.ttwu_local
3424347 ± 2% -17.9% 2809733 ± 1% sched_debug.cpu#31.nr_switches
3424975 ± 2% -18.0% 2809973 ± 1% sched_debug.cpu#31.sched_count
3413707 ± 2% -17.9% 2802556 ± 1% sched_debug.cpu#31.ttwu_count
3411722 ± 2% -17.9% 2800372 ± 1% sched_debug.cpu#31.ttwu_local
161244 ± 31% +141.0% 388617 ± 75% sched_debug.cpu#32.avg_idle
3340294 ± 4% -16.8% 2779491 ± 1% sched_debug.cpu#32.nr_switches
3340509 ± 4% -16.8% 2779888 ± 1% sched_debug.cpu#32.sched_count
3336888 ± 3% -16.4% 2788296 ± 2% sched_debug.cpu#32.ttwu_count
3327022 ± 4% -17.1% 2756567 ± 1% sched_debug.cpu#32.ttwu_local
3369761 ± 2% -18.2% 2756152 ± 2% sched_debug.cpu#33.nr_switches
3370085 ± 2% -18.2% 2757175 ± 2% sched_debug.cpu#33.sched_count
3353138 ± 2% -17.9% 2754433 ± 1% sched_debug.cpu#33.ttwu_count
3350768 ± 2% -18.5% 2731749 ± 2% sched_debug.cpu#33.ttwu_local
3353463 ± 2% -17.1% 2778380 ± 1% sched_debug.cpu#34.nr_switches
3353783 ± 2% -17.1% 2778950 ± 1% sched_debug.cpu#34.sched_count
3338913 ± 2% -17.1% 2769303 ± 1% sched_debug.cpu#34.ttwu_count
3336154 ± 2% -17.0% 2767598 ± 1% sched_debug.cpu#34.ttwu_local
19.75 ± 6% -13.9% 17.00 ± 7% sched_debug.cpu#35.cpu_load[1]
19.75 ± 4% -17.7% 16.25 ± 6% sched_debug.cpu#35.cpu_load[2]
19.75 ± 4% -17.7% 16.25 ± 5% sched_debug.cpu#35.cpu_load[3]
17.25 ± 11% -20.3% 13.75 ± 10% sched_debug.cpu#35.cpu_load[4]
3357618 ± 2% -17.1% 2784568 ± 1% sched_debug.cpu#35.nr_switches
3357860 ± 2% -17.0% 2785577 ± 1% sched_debug.cpu#35.sched_count
3350978 ± 2% -17.1% 2779124 ± 1% sched_debug.cpu#35.ttwu_count
3348950 ± 2% -17.1% 2775164 ± 1% sched_debug.cpu#35.ttwu_local
33.50 ± 8% -29.9% 23.50 ± 24% sched_debug.cpu#36.load
2.00 ± 0% -50.0% 1.00 ± 0% sched_debug.cpu#36.nr_running
3378181 ± 2% -17.8% 2776642 ± 2% sched_debug.cpu#36.nr_switches
3378466 ± 2% -17.8% 2777079 ± 2% sched_debug.cpu#36.sched_count
3374695 ± 2% -17.8% 2774072 ± 2% sched_debug.cpu#36.ttwu_count
3370536 ± 2% -18.0% 2763776 ± 2% sched_debug.cpu#36.ttwu_local
17.50 ± 13% -22.9% 13.50 ± 12% sched_debug.cpu#37.cpu_load[1]
3340460 ± 3% -16.8% 2779658 ± 2% sched_debug.cpu#37.nr_switches
3340683 ± 3% -16.8% 2779914 ± 2% sched_debug.cpu#37.sched_count
3341001 ± 2% -17.4% 2760062 ± 3% sched_debug.cpu#37.ttwu_count
3324668 ± 3% -17.1% 2756964 ± 2% sched_debug.cpu#37.ttwu_local
18.75 ± 7% -21.3% 14.75 ± 12% sched_debug.cpu#38.cpu_load[3]
17.50 ± 12% -27.1% 12.75 ± 3% sched_debug.cpu#38.cpu_load[4]
3401218 ± 2% -18.2% 2782005 ± 1% sched_debug.cpu#38.nr_switches
3401410 ± 2% -18.2% 2782445 ± 1% sched_debug.cpu#38.sched_count
3400794 ± 2% -18.1% 2784512 ± 2% sched_debug.cpu#38.ttwu_count
3399813 ± 2% -18.6% 2768586 ± 1% sched_debug.cpu#38.ttwu_local
17.50 ± 11% -22.9% 13.50 ± 15% sched_debug.cpu#39.cpu_load[4]
3392132 ± 2% -17.0% 2815024 ± 2% sched_debug.cpu#39.nr_switches
3392712 ± 2% -17.0% 2815476 ± 2% sched_debug.cpu#39.sched_count
3389117 ± 2% -16.9% 2816247 ± 1% sched_debug.cpu#39.ttwu_count
3387435 ± 2% -17.3% 2802276 ± 2% sched_debug.cpu#39.ttwu_local
165738 ± 10% +22.5% 203085 ± 10% sched_debug.cpu#4.avg_idle
20.00 ± 15% -26.2% 14.75 ± 5% sched_debug.cpu#4.cpu_load[2]
20.00 ± 15% -27.5% 14.50 ± 5% sched_debug.cpu#4.cpu_load[3]
17.75 ± 21% -29.6% 12.50 ± 6% sched_debug.cpu#4.cpu_load[4]
3373163 ± 2% -17.8% 2772508 ± 2% sched_debug.cpu#4.nr_switches
3373753 ± 2% -17.8% 2773251 ± 2% sched_debug.cpu#4.sched_count
3362050 ± 2% -17.9% 2761172 ± 2% sched_debug.cpu#4.ttwu_count
3356987 ± 2% -17.9% 2755498 ± 2% sched_debug.cpu#4.ttwu_local
18.25 ± 9% -17.8% 15.00 ± 0% sched_debug.cpu#40.cpu_load[3]
15.75 ± 12% -22.2% 12.25 ± 3% sched_debug.cpu#40.cpu_load[4]
3387417 ± 3% -16.6% 2826272 ± 1% sched_debug.cpu#40.nr_switches
3388123 ± 3% -16.6% 2826849 ± 1% sched_debug.cpu#40.sched_count
3379250 ± 3% -16.0% 2837310 ± 2% sched_debug.cpu#40.ttwu_count
3375881 ± 3% -16.5% 2817881 ± 1% sched_debug.cpu#40.ttwu_local
3339240 ± 2% -16.2% 2796813 ± 0% sched_debug.cpu#41.nr_switches
4.75 ± 40% -100.0% 0.00 ± 2% sched_debug.cpu#41.nr_uninterruptible
3339755 ± 2% -16.2% 2797077 ± 0% sched_debug.cpu#41.sched_count
3340575 ± 2% -17.0% 2773377 ± 0% sched_debug.cpu#41.ttwu_count
3320958 ± 2% -16.5% 2771410 ± 0% sched_debug.cpu#41.ttwu_local
3383562 ± 2% -16.9% 2811241 ± 1% sched_debug.cpu#42.nr_switches
3383989 ± 2% -16.9% 2811641 ± 1% sched_debug.cpu#42.sched_count
3375932 ± 2% -16.9% 2803836 ± 1% sched_debug.cpu#42.ttwu_count
3373666 ± 2% -16.9% 2801995 ± 1% sched_debug.cpu#42.ttwu_local
1465 ± 2% +19.7% 1753 ± 12% sched_debug.cpu#43.curr->pid
3369174 ± 3% -16.5% 2814479 ± 2% sched_debug.cpu#43.nr_switches
3369600 ± 3% -16.5% 2815000 ± 2% sched_debug.cpu#43.sched_count
3381844 ± 3% -16.8% 2813877 ± 2% sched_debug.cpu#43.ttwu_count
3363319 ± 3% -16.6% 2805345 ± 2% sched_debug.cpu#43.ttwu_local
3373569 ± 3% -16.8% 2805446 ± 2% sched_debug.cpu#44.nr_switches
3374319 ± 3% -16.8% 2806109 ± 2% sched_debug.cpu#44.sched_count
3366509 ± 3% -16.6% 2808499 ± 2% sched_debug.cpu#44.ttwu_count
3364864 ± 3% -17.1% 2788871 ± 2% sched_debug.cpu#44.ttwu_local
19.00 ± 13% -25.0% 14.25 ± 12% sched_debug.cpu#45.cpu_load[0]
3386942 ± 3% -16.9% 2816202 ± 2% sched_debug.cpu#45.nr_switches
3387927 ± 3% -16.8% 2817175 ± 2% sched_debug.cpu#45.sched_count
3379784 ± 4% -17.1% 2802342 ± 2% sched_debug.cpu#45.ttwu_count
3374320 ± 4% -17.0% 2799191 ± 2% sched_debug.cpu#45.ttwu_local
17.50 ± 6% -18.6% 14.25 ± 13% sched_debug.cpu#46.cpu_load[1]
14.50 ± 7% -19.0% 11.75 ± 12% sched_debug.cpu#46.cpu_load[4]
3362941 ± 3% -15.9% 2828995 ± 2% sched_debug.cpu#46.nr_switches
3363152 ± 3% -15.9% 2829614 ± 2% sched_debug.cpu#46.sched_count
3365155 ± 3% -16.0% 2825366 ± 2% sched_debug.cpu#46.ttwu_count
3352628 ± 3% -15.8% 2823128 ± 2% sched_debug.cpu#46.ttwu_local
182936 ± 1% +8.1% 197759 ± 4% sched_debug.cpu#47.avg_idle
3411346 ± 3% -17.3% 2820818 ± 1% sched_debug.cpu#47.nr_switches
3412191 ± 3% -17.3% 2821388 ± 1% sched_debug.cpu#47.sched_count
3405411 ± 3% -17.2% 2819712 ± 1% sched_debug.cpu#47.ttwu_count
3403059 ± 3% -17.3% 2815610 ± 1% sched_debug.cpu#47.ttwu_local
3384910 ± 2% -17.8% 2782329 ± 2% sched_debug.cpu#48.nr_switches
3385314 ± 2% -17.8% 2782524 ± 2% sched_debug.cpu#48.sched_count
3372909 ± 2% -17.2% 2792501 ± 2% sched_debug.cpu#48.ttwu_count
3371515 ± 2% -18.1% 2760527 ± 3% sched_debug.cpu#48.ttwu_local
20.00 ± 10% -25.0% 15.00 ± 16% sched_debug.cpu#49.cpu_load[3]
19.75 ± 16% -32.9% 13.25 ± 16% sched_debug.cpu#49.cpu_load[4]
3361041 ± 1% -16.7% 2799376 ± 1% sched_debug.cpu#49.nr_switches
3361389 ± 2% -16.7% 2799954 ± 1% sched_debug.cpu#49.sched_count
803.25 ± 66% +81.8% 1460 ± 12% sched_debug.cpu#49.sched_goidle
3381487 ± 2% -17.5% 2790957 ± 2% sched_debug.cpu#49.ttwu_count
3353421 ± 1% -16.8% 2788906 ± 2% sched_debug.cpu#49.ttwu_local
15.75 ± 9% +50.8% 23.75 ± 35% sched_debug.cpu#5.cpu_load[0]
3391672 ± 2% -17.9% 2785157 ± 2% sched_debug.cpu#5.nr_switches
3391868 ± 2% -17.9% 2786330 ± 2% sched_debug.cpu#5.sched_count
3383157 ± 2% -18.0% 2774308 ± 2% sched_debug.cpu#5.ttwu_count
3378359 ± 2% -18.0% 2771534 ± 2% sched_debug.cpu#5.ttwu_local
3378831 ± 2% -18.2% 2764907 ± 1% sched_debug.cpu#50.nr_switches
3379167 ± 2% -18.2% 2765068 ± 1% sched_debug.cpu#50.sched_count
3381931 ± 2% -18.4% 2759976 ± 2% sched_debug.cpu#50.ttwu_count
3372100 ± 2% -19.0% 2732913 ± 2% sched_debug.cpu#50.ttwu_local
3391742 ± 2% -17.8% 2787873 ± 2% sched_debug.cpu#51.nr_switches
3393991 ± 3% -17.9% 2788131 ± 2% sched_debug.cpu#51.sched_count
3388915 ± 2% -18.0% 2779415 ± 2% sched_debug.cpu#51.ttwu_count
3386448 ± 2% -18.0% 2776396 ± 2% sched_debug.cpu#51.ttwu_local
3353919 ± 2% -17.8% 2755852 ± 2% sched_debug.cpu#52.nr_switches
3354723 ± 2% -17.8% 2756388 ± 2% sched_debug.cpu#52.sched_count
3341951 ± 2% -18.3% 2731052 ± 3% sched_debug.cpu#52.ttwu_count
3337845 ± 2% -18.5% 2720797 ± 3% sched_debug.cpu#52.ttwu_local
3373578 ± 3% -17.1% 2796911 ± 2% sched_debug.cpu#53.nr_switches
3375925 ± 3% -17.1% 2797166 ± 2% sched_debug.cpu#53.sched_count
3383459 ± 2% -17.5% 2790891 ± 2% sched_debug.cpu#53.ttwu_count
3366616 ± 3% -17.3% 2784981 ± 2% sched_debug.cpu#53.ttwu_local
3390257 ± 3% -17.9% 2782148 ± 2% sched_debug.cpu#54.nr_switches
3391465 ± 3% -18.0% 2782549 ± 2% sched_debug.cpu#54.sched_count
3386304 ± 3% -16.9% 2812809 ± 1% sched_debug.cpu#54.ttwu_count
3384612 ± 3% -18.3% 2766815 ± 3% sched_debug.cpu#54.ttwu_local
22.25 ± 17% -23.6% 17.00 ± 7% sched_debug.cpu#55.cpu_load[1]
21.00 ± 16% -21.4% 16.50 ± 3% sched_debug.cpu#55.cpu_load[2]
19.75 ± 13% -20.3% 15.75 ± 5% sched_debug.cpu#55.cpu_load[3]
3374233 ± 1% -16.8% 2807090 ± 1% sched_debug.cpu#55.nr_switches
3374808 ± 1% -16.8% 2807431 ± 1% sched_debug.cpu#55.sched_count
3366364 ± 1% -17.2% 2786327 ± 1% sched_debug.cpu#55.ttwu_count
3362765 ± 1% -17.3% 2780406 ± 1% sched_debug.cpu#55.ttwu_local
3411822 ± 2% -17.6% 2811122 ± 2% sched_debug.cpu#56.nr_switches
3412183 ± 2% -17.6% 2811594 ± 2% sched_debug.cpu#56.sched_count
3407786 ± 2% -17.6% 2809656 ± 2% sched_debug.cpu#56.ttwu_count
3406197 ± 2% -17.7% 2804844 ± 2% sched_debug.cpu#56.ttwu_local
3397172 ± 3% -17.2% 2811744 ± 2% sched_debug.cpu#57.nr_switches
3397605 ± 3% -17.2% 2812005 ± 2% sched_debug.cpu#57.sched_count
3384695 ± 3% -16.6% 2822416 ± 2% sched_debug.cpu#57.ttwu_count
3382809 ± 3% -17.3% 2798177 ± 2% sched_debug.cpu#57.ttwu_local
188353 ± 1% +77.7% 334775 ± 61% sched_debug.cpu#58.avg_idle
3368337 ± 3% -16.7% 2805233 ± 2% sched_debug.cpu#58.nr_switches
3368535 ± 3% -16.7% 2805585 ± 2% sched_debug.cpu#58.sched_count
3360154 ± 4% -16.7% 2799389 ± 2% sched_debug.cpu#58.ttwu_count
3350965 ± 4% -16.5% 2796475 ± 1% sched_debug.cpu#58.ttwu_local
3386769 ± 3% -17.8% 2784174 ± 1% sched_debug.cpu#59.nr_switches
3387003 ± 3% -17.8% 2784602 ± 1% sched_debug.cpu#59.sched_count
3374107 ± 3% -17.4% 2786119 ± 1% sched_debug.cpu#59.ttwu_count
3371740 ± 3% -17.9% 2767788 ± 1% sched_debug.cpu#59.ttwu_local
3404679 ± 2% -17.7% 2803402 ± 2% sched_debug.cpu#6.nr_switches
3404946 ± 2% -17.6% 2804520 ± 2% sched_debug.cpu#6.sched_count
3396778 ± 2% -17.7% 2794779 ± 2% sched_debug.cpu#6.ttwu_count
3395316 ± 2% -17.8% 2789570 ± 2% sched_debug.cpu#6.ttwu_local
20.50 ± 8% -24.4% 15.50 ± 5% sched_debug.cpu#60.cpu_load[0]
3378881 ± 3% -17.1% 2799854 ± 1% sched_debug.cpu#60.nr_switches
3379401 ± 3% -17.1% 2800303 ± 1% sched_debug.cpu#60.sched_count
3370998 ± 3% -17.4% 2784997 ± 2% sched_debug.cpu#60.ttwu_count
3369479 ± 3% -17.4% 2783285 ± 2% sched_debug.cpu#60.ttwu_local
3389402 ± 2% -18.2% 2771097 ± 1% sched_debug.cpu#61.nr_switches
2.75 ±104% -72.7% 0.75 ±110% sched_debug.cpu#61.nr_uninterruptible
3389770 ± 2% -18.2% 2771881 ± 1% sched_debug.cpu#61.sched_count
448.25 ± 43% +362.6% 2073 ± 18% sched_debug.cpu#61.sched_goidle
3382036 ± 2% -18.4% 2761058 ± 3% sched_debug.cpu#61.ttwu_count
3374996 ± 2% -18.8% 2738991 ± 2% sched_debug.cpu#61.ttwu_local
3401719 ± 2% -17.5% 2806660 ± 1% sched_debug.cpu#62.nr_switches
3401992 ± 2% -17.5% 2807250 ± 1% sched_debug.cpu#62.sched_count
3409442 ± 3% -17.8% 2801102 ± 1% sched_debug.cpu#62.ttwu_count
3396668 ± 2% -17.6% 2799344 ± 1% sched_debug.cpu#62.ttwu_local
188622 ± 1% +82.6% 344360 ± 61% sched_debug.cpu#63.avg_idle
3403259 ± 1% -17.7% 2800949 ± 2% sched_debug.cpu#63.nr_switches
3403635 ± 1% -17.7% 2801160 ± 2% sched_debug.cpu#63.sched_count
1502 ± 36% -60.9% 587.25 ± 38% sched_debug.cpu#63.sched_goidle
3407423 ± 1% -18.1% 2792346 ± 2% sched_debug.cpu#63.ttwu_count
3394668 ± 1% -17.8% 2791028 ± 2% sched_debug.cpu#63.ttwu_local
3395141 ± 2% -17.1% 2815960 ± 1% sched_debug.cpu#7.nr_switches
3395368 ± 2% -17.0% 2817099 ± 1% sched_debug.cpu#7.sched_count
3390104 ± 2% -16.8% 2819988 ± 1% sched_debug.cpu#7.ttwu_count
3388220 ± 2% -17.2% 2806057 ± 2% sched_debug.cpu#7.ttwu_local
20.75 ± 26% -32.5% 14.00 ± 8% sched_debug.cpu#8.cpu_load[3]
19.00 ± 21% -35.5% 12.25 ± 12% sched_debug.cpu#8.cpu_load[4]
3412535 ± 3% -16.9% 2834133 ± 2% sched_debug.cpu#8.nr_switches
3414440 ± 3% -17.0% 2834516 ± 2% sched_debug.cpu#8.sched_count
3400639 ± 3% -16.8% 2828023 ± 2% sched_debug.cpu#8.ttwu_count
3397501 ± 3% -16.9% 2822112 ± 1% sched_debug.cpu#8.ttwu_local
21.75 ± 30% -32.2% 14.75 ± 10% sched_debug.cpu#9.cpu_load[1]
20.50 ± 22% -26.8% 15.00 ± 12% sched_debug.cpu#9.cpu_load[2]
3361173 ± 2% -15.8% 2828970 ± 2% sched_debug.cpu#9.nr_switches
3363065 ± 2% -15.9% 2829633 ± 2% sched_debug.cpu#9.sched_count
3352680 ± 3% -15.8% 2822914 ± 1% sched_debug.cpu#9.ttwu_count
3339358 ± 2% -15.6% 2819319 ± 2% sched_debug.cpu#9.ttwu_local
lkp-sbx04: Sandy Bridge-EX
Memory: 64G
perf-profile.cpu-cycles.sys_connect.entry_SYSCALL_64_fastpath
33 ++---------------------------------------------------------------------+
| O O |
32 ++ O |
31 ++ O |
| O O |
30 O+ O O O O O |
| O O O O O O O |
29 ++ O O O O |
| |
28 ++ |
27 ++ *. .*.* |
| *.*.*. + *. .* + .*
26 ++ * *.*.*.*.*. .*. .*.. .*.*. .*.* *.*. .*.*.*. .* |
* *.*.* * * * * * |
25 ++---------------------------------------------------------------------+
perf-profile.cpu-cycles.SYSC_connect.sys_connect.entry_SYSCALL_64_fastpath
33 ++---------------------------------------------------------------------+
| O |
32 ++ O O |
31 ++ O |
| O O |
30 O+ O O O O O |
| O O O O O O |
29 ++ O O O O O |
| |
28 ++ |
27 ++ *. .*. |
| *.*.*. + *. * *. .*
26 ++ * *.*.*.*.*. * *.. .*.*. .*. + *.*. .*.*.* * |
* *. + + + * * * * + + |
25 ++------------------------*-*---*----------------------------------*---+
perf-profile.cpu-cycles.sock_release.sock_close.__fput.____fput.task_work_run
26 ++-----------------------------------------O---------------------------+
| |
24 ++ O O O |
22 ++ O |
| O O |
20 ++ O |
| |
18 ++ |
| O |
16 O+O O O O |
14 ++ O O O O O O O O O |
| |
12 ++ .*. |
*. .*.*. .*.*.*.*.*.* *.*.*.*.*..*.*.*.*.*.*.*. .*.*.*.*.*.*.*.*.|
10 ++*-*-----*----------------------------------------*-*-----------------*
perf-profile.cpu-cycles.inet_release.sock_release.sock_close.__fput.____fput
26 ++---------------------------------------------------------------------+
| O |
24 ++ O O |
22 ++ O O |
| O O |
20 ++ O |
| |
18 ++ |
| O |
16 O+O O O O |
14 ++ O O O O O O O O O |
| |
12 ++ |
*. .*.*. .*.*.*.*.*.*.*.*.*.*.*.*..*.*.*.*.*.*.*. .*.*.*.*.*.*.*.*.|
10 ++*-*-----*----------------------------------------*-*-----------------*
perf-profile.cpu-cycles.tcp_close.inet_release.sock_release.sock_close.__fput
26 ++---------------------------------------------------------------------+
| O |
24 ++ O O O |
22 ++ O |
| O O O |
20 ++ |
18 ++ |
| |
16 O+O O O |
14 ++ O O O O O O O O O O O |
| |
12 *+ .*.*.*. .*. |
10 ++*.*.*.*.*.*.*.*.*.*.* * *..*.*.*.*.*.*.*. .*.*.*.*.*.*.*.*.*.*
| * |
8 ++---------------------------------------------------------------------+
netperf.time.system_time
15000 ++------------------------------------------------------------------+
14800 ++ O O |
| O O |
14600 ++ O |
14400 ++ O O |
| O |
14200 ++ |
14000 ++ |
13800 O+O O O O |
| O O O OO O O O |
13600 ++ O O |
13400 ++ |
| |
13200 *+*.*.*.*.*.*.*.**.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.**.*.*.*.*.*.*.*.*
13000 ++------------------------------------------------------------------+
netperf.time.percent_of_cpu_this_job_got
5200 ++-------------------------------------------------------------------+
| O |
5100 ++ O |
| O O O |
5000 ++ O O |
| O |
4900 ++ |
| O |
4800 O+O O O O O O O |
| O O O O O O |
4700 ++ |
*. .*. .*. |
4600 ++*.*.*.*.*.*.*.*.*.*.*.* *.*.*.**.*.*.* *.*.*.*.*.*.*.*.*.*.*.*.*
| |
4500 ++-------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
[lkp] [drm] bcfe0c0954: WARNING: CPU: 2 PID: 163 at drivers/gpu/drm/drm_drv.c:570 drm_dev_alloc+0x257/0x320 [drm]()
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit bcfe0c0954f3336c44993e5ce444e09ad6087637 ("drm: WARN_ON if a modeset driver uses legacy suspend/resume helpers")
<4>[ 79.367388] ------------[ cut here ]------------
<4>[ 79.372114] WARNING: CPU: 2 PID: 163 at drivers/gpu/drm/drm_drv.c:570 drm_dev_alloc+0x257/0x320 [drm]()
<4>[ 79.383475] Modules linked in: x86_pkg_temp_thermal coretemp eeepc_wmi kvm_intel asus_wmi kvm sparse_keymap rfkill ppdev crct10dif_pclmul crc32_pclmul crc32c_intel i915(+) snd_hda_intel snd_hda_codec
<6>[ 79.401550] snd_hda_codec_realtek hdaudioC0D0: autoconfig for ALC892: line_outs=4 (0x14/0x15/0x16/0x17/0x0) type:line
<6>[ 79.401552] snd_hda_codec_realtek hdaudioC0D0: speaker_outs=0 (0x0/0x0/0x0/0x0/0x0)
<6>[ 79.401553] snd_hda_codec_realtek hdaudioC0D0: hp_outs=1 (0x1b/0x0/0x0/0x0/0x0)
<6>[ 79.401554] snd_hda_codec_realtek hdaudioC0D0: mono: mono_out=0x0
<6>[ 79.401555] snd_hda_codec_realtek hdaudioC0D0: dig-out=0x11/0x1e
<6>[ 79.401556] snd_hda_codec_realtek hdaudioC0D0: inputs:
<6>[ 79.401558] snd_hda_codec_realtek hdaudioC0D0: Front Mic=0x19
<6>[ 79.401560] snd_hda_codec_realtek hdaudioC0D0: Rear Mic=0x18
<6>[ 79.401561] snd_hda_codec_realtek hdaudioC0D0: Line=0x1a
<4>[ 79.465004] snd_hda_core
<6>[ 79.466327] input: HDA Intel PCH Front Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input8
<6>[ 79.466387] input: HDA Intel PCH Rear Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input9
<6>[ 79.466438] input: HDA Intel PCH Line as /devices/pci0000:00/0000:00:1b.0/sound/card0/input10
<6>[ 79.466486] input: HDA Intel PCH Line Out Front as /devices/pci0000:00/0000:00:1b.0/sound/card0/input11
<6>[ 79.466535] input: HDA Intel PCH Line Out Surround as /devices/pci0000:00/0000:00:1b.0/sound/card0/input12
<6>[ 79.466583] input: HDA Intel PCH Line Out CLFE as /devices/pci0000:00/0000:00:1b.0/sound/card0/input13
<6>[ 79.466631] input: HDA Intel PCH Line Out Side as /devices/pci0000:00/0000:00:1b.0/sound/card0/input14
<6>[ 79.466679] input: HDA Intel PCH Front Headphone as /devices/pci0000:00/0000:00:1b.0/sound/card0/input15
<6>[ 79.466728] input: HDA Intel PCH HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:1b.0/sound/card0/input16
<4>[ 79.551884] ghash_clmulni_intel snd_hwdep pata_via aesni_intel lrw snd_pcm gf128mul ata_piix drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops glue_helper ablk_helper cryptd snd_timer i2c_i801 microcode snd pcspkr libata serio_raw soundcore drm shpchp wmi parport_pc parport tpm_infineon video
<4>[ 79.551885] CPU: 2 PID: 163 Comm: systemd-udevd Not tainted 4.2.0-rc8-01410-gbcfe0c0 #1
<4>[ 79.551886] Hardware name: System manufacturer System Product Name/P8H67-M PRO, BIOS 1002 04/01/2011
<4>[ 79.551887] ffffffffa009d370 ffff8801bdb179e8 ffffffff8189e2e9 0000000000000001
<4>[ 79.551888] 0000000000000000 ffff8801bdb17a28 ffffffff8107348a ffff8801bdb17a38
<4>[ 79.551889] ffff8801bd583000 0000000000000000 ffffffffa03bb100 ffffffffa03bb100
<4>[ 79.551890] Call Trace:
<4>[ 79.551895] [<ffffffff8189e2e9>] dump_stack+0x4c/0x65
<4>[ 79.551898] [<ffffffff8107348a>] warn_slowpath_common+0x8a/0xc0
<4>[ 79.551900] [<ffffffff8107357a>] warn_slowpath_null+0x1a/0x20
<4>[ 79.551908] [<ffffffffa006f647>] drm_dev_alloc+0x257/0x320 [drm]
<4>[ 79.551915] [<ffffffffa0071ddb>] drm_get_pci_dev+0x3b/0x1e0 [drm]
<4>[ 79.551932] [<ffffffffa02d0234>] i915_pci_probe+0x34/0x50 [i915]
<4>[ 79.551934] [<ffffffff81442925>] local_pci_probe+0x45/0xa0
<4>[ 79.551936] [<ffffffff81443bae>] ? pci_match_device+0xfe/0x120
<4>[ 79.551937] [<ffffffff81443cd7>] pci_device_probe+0xc7/0x120
<4>[ 79.551939] [<ffffffff81545dd6>] driver_probe_device+0x1f6/0x460
<4>[ 79.551940] [<ffffffff815460d0>] __driver_attach+0x90/0xa0
<4>[ 79.551941] [<ffffffff81546040>] ? driver_probe_device+0x460/0x460
<4>[ 79.551942] [<ffffffff81543ad4>] bus_for_each_dev+0x64/0xa0
<4>[ 79.551943] [<ffffffff815457ae>] driver_attach+0x1e/0x20
<4>[ 79.551944] [<ffffffff81545321>] bus_add_driver+0x1f1/0x290
<4>[ 79.551945] [<ffffffff81546ab0>] driver_register+0x60/0xe0
<4>[ 79.551946] [<ffffffff814421bc>] __pci_register_driver+0x4c/0x50
<4>[ 79.551951] [<ffffffffa0072060>] drm_pci_init+0xe0/0x110 [drm]
<4>[ 79.551952] [<ffffffffa03ea000>] ? 0xffffffffa03ea000
<4>[ 79.551964] [<ffffffffa03ea0a7>] i915_init+0xa7/0xaf [i915]
<4>[ 79.551966] [<ffffffff81002123>] do_one_initcall+0xb3/0x1d0
<4>[ 79.551968] [<ffffffff811bf8c0>] ? kmem_cache_alloc_trace+0x1d0/0x220
<4>[ 79.551970] [<ffffffff8189afea>] ? do_init_module+0x28/0x1ea
<4>[ 79.551971] [<ffffffff8189b023>] do_init_module+0x61/0x1ea
<4>[ 79.551973] [<ffffffff810f8bdc>] load_module+0x213c/0x2580
<4>[ 79.551974] [<ffffffff810f4cf0>] ? __symbol_put+0x40/0x40
<4>[ 79.551976] [<ffffffff810f9220>] SyS_finit_module+0x80/0xb0
<4>[ 79.551978] [<ffffffff818a5e2e>] entry_SYSCALL_64_fastpath+0x12/0x71
<4>[ 79.551979] ---[ end trace af4d150ad5fde0f9 ]---
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Ying Huang