WARNING: CPU: 0 PID: 61 at kernel/sched/core.c:7312 __might_sleep()
by Fengguang Wu
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git sched/wait
commit 245747099820df3007f60128b1264fef9d2a69d2
Author: Peter Zijlstra <peterz@infradead.org>
AuthorDate: Wed Sep 24 10:18:55 2014 +0200
Commit: Peter Zijlstra <peterz@infradead.org>
CommitDate: Mon Oct 27 10:42:51 2014 +0100
sched: Debug nested sleeps
Validate we call might_sleep() with TASK_RUNNING, which catches places
where we nest blocking primitives, eg. mutex usage in a wait loop.
Since all blocking is arranged through task_struct::state, nesting
this will cause the inner primitive to set TASK_RUNNING and the outer
will thus not block.
Another observed problem is calling a blocking function from
schedule()->sched_submit_work()->blk_schedule_flush_plug() which will
then destroy the task state for the actual __schedule() call that
comes after it.
Cc: torvalds@linux-foundation.org
Cc: tglx@linutronix.de
Cc: ilya.dryomov@inktank.com
Cc: umgwanakikbuti@gmail.com
Cc: mingo@kernel.org
Cc: oleg@redhat.com
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20140924082242.591637616@infradead.org
===================================================
PARENT COMMIT NOT CLEAN. LOOK OUT FOR WRONG BISECT!
===================================================
120 /kernel/i386-randconfig-r2-1027/592ed717ef33150f6888c333c28021283cc9aabc
To bisect errors in parent:
/c/kernel-tests/queue-reproduce /kernel/i386-randconfig-r2-1027/592ed717ef33150f6888c333c28021283cc9aabc/dmesg-quantal-kbuild-20:20141027231410:i386-randconfig-r2-1027:3.18.0-rc2-00036-g592ed71:139 BUG: kernel test crashed
The dmesg for the parent commit is attached, too, to help confirm whether this failure is just noise.
+---------------------------------------------------+------------+------------+------------+
| | 592ed717ef | 2457470998 | 2d55520314 |
+---------------------------------------------------+------------+------------+------------+
| boot_successes | 1080 | 267 | 110 |
| boot_failures | 120 | 33 | 21 |
| BUG:kernel_test_crashed | 110 | 30 | 16 |
| WARNING:at_kernel/locking/lockdep.c:check_flags() | 10 | 0 | 3 |
| backtrace:might_fault | 2 | | |
| backtrace:SyS_perf_event_open | 3 | 0 | 1 |
| backtrace:mutex_lock_nested | 1 | | |
| WARNING:at_kernel/sched/core.c:__might_sleep() | 0 | 3 | 2 |
| backtrace:cleanup_net | 0 | 3 | 2 |
| backtrace:register_perf_hw_breakpoint | 0 | 0 | 1 |
| backtrace:hw_breakpoint_event_init | 0 | 0 | 1 |
| backtrace:perf_init_event | 0 | 0 | 1 |
| backtrace:perf_event_alloc | 0 | 0 | 1 |
+---------------------------------------------------+------------+------------+------------+
[ 122.133640] Fix your initscripts?
[ 122.133905] trinity-c0 (23733) uses deprecated remap_file_pages() syscall. See Documentation/vm/remap_file_pages.txt.
[ 122.247299] ------------[ cut here ]------------
[ 122.247328] WARNING: CPU: 0 PID: 61 at kernel/sched/core.c:7312 __might_sleep+0x50/0x249()
[ 122.247334] do not call blocking ops when !TASK_RUNNING; state=2 set at [<c106ffd9>] prepare_to_wait+0x3c/0x5f
[ 122.247339] Modules linked in:
[ 122.247349] CPU: 0 PID: 61 Comm: kworker/u2:1 Not tainted 3.18.0-rc2-00037-g24574709 #136
[ 122.247350] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[ 122.247368] Workqueue: netns cleanup_net
[ 122.247377] c1071d83 d2b83dd8 d2b83dac c15887b1 d2b83dc8 c104c4c6 00001c90 c1068ebf
[ 122.247383] 00000000 c17b67e3 0000026d d2b83de0 c104c508 00000009 d2b83dd8 c17b5d4b
[ 122.247388] d2b83df4 d2b83e0c c1068ebf c17b5cec 00001c90 c17b5d4b 00000002 c106ffd9
[ 122.247389] Call Trace:
[ 122.247393] [<c1071d83>] ? down_trylock+0x23/0x2c
[ 122.247402] [<c15887b1>] dump_stack+0x16/0x18
[ 122.247413] [<c104c4c6>] warn_slowpath_common+0x66/0x7d
[ 122.247416] [<c1068ebf>] ? __might_sleep+0x50/0x249
[ 122.247419] [<c104c508>] warn_slowpath_fmt+0x2b/0x2f
[ 122.247422] [<c1068ebf>] __might_sleep+0x50/0x249
[ 122.247424] [<c106ffd9>] ? prepare_to_wait+0x3c/0x5f
[ 122.247426] [<c106ffd9>] ? prepare_to_wait+0x3c/0x5f
[ 122.247432] [<c158c364>] mutex_lock_nested+0x23/0x347
[ 122.247436] [<c1075105>] ? trace_hardirqs_on+0xb/0xd
[ 122.247439] [<c158eb0c>] ? _raw_spin_unlock_irqrestore+0x66/0x78
[ 122.247445] [<c1570e10>] rtnl_lock+0x14/0x16
[ 122.247449] [<c156516b>] default_device_exit_batch+0x54/0xf3
[ 122.247452] [<c1570e1f>] ? rtnl_unlock+0xd/0xf
[ 122.247454] [<c1070233>] ? __wake_up_sync+0x12/0x12
[ 122.247461] [<c155e35d>] ops_exit_list+0x20/0x40
[ 122.247464] [<c155ec96>] cleanup_net+0xbe/0x140
[ 122.247473] [<c105ffe4>] process_one_work+0x29e/0x643
[ 122.247479] [<c1061215>] worker_thread+0x23a/0x311
[ 122.247482] [<c1060fdb>] ? rescuer_thread+0x204/0x204
[ 122.247486] [<c10648cc>] kthread+0xbe/0xc3
[ 122.247490] [<c158f4c0>] ret_from_kernel_thread+0x20/0x30
[ 122.247492] [<c106480e>] ? kthread_stop+0x364/0x364
[ 122.247495] ---[ end trace 2073c37ae3c8b3b4 ]---
[ 157.390879] Unregister pv shared memory for cpu 0
git bisect start 2d55520314eb5603b855ac1b994705dc6a352d9e 522e980064c24d3dd9859e9375e17417496567cf --
git bisect good c3f9b6ec744e12ff09677c4c0cb3164ad5b62702 # 19:25 300+ 36 Merge branch 'sched/core'
git bisect good 344c57c17c7f857f9c92317e0d5cbb5c59f8d6e0 # 19:49 300+ 62 Merge branch 'perf/urgent'
git bisect good 54de76b06a8098c11f15857a57e23c6e630a34b6 # 20:19 300+ 66 Merge branch 'perf/core'
git bisect good 126b6dbcbedb5c0defe5c39e0310feed061569bf # 20:51 300+ 50 exit: Deal with nested sleeps
git bisect good 8641f9cba8ce5f3bfc5da47861180617cbfc6e7f # 22:02 300+ 68 module: Fix nested sleep
git bisect bad 245747099820df3007f60128b1264fef9d2a69d2 # 22:25 142- 18 sched: Debug nested sleeps
git bisect good 592ed717ef33150f6888c333c28021283cc9aabc # 22:59 300+ 27 net: Clean up sk_wait_event() vs might_sleep()
# first bad commit: [245747099820df3007f60128b1264fef9d2a69d2] sched: Debug nested sleeps
git bisect good 592ed717ef33150f6888c333c28021283cc9aabc # 00:15 900+ 120 net: Clean up sk_wait_event() vs might_sleep()
git bisect bad 2d55520314eb5603b855ac1b994705dc6a352d9e # 00:19 0- 21 Merge branch 'sched/wait'
git bisect good cac7f2429872d3733dc3f9915857b1691da2eb2f # 01:33 900+ 66 Linux 3.18-rc2
git bisect good 7a891e6323e963f3301e44bdeee734028e34d390 # 02:26 900+ 93 Add linux-next specific files for 20141027
This script may reproduce the error.
----------------------------------------------------------------------------
#!/bin/bash
kernel=$1
initrd=yocto-minimal-i386.cgz
wget --no-clobber https://github.com/fengguang/reproduce-kernel-bug/raw/master/initrd/$initrd
kvm=(
qemu-system-x86_64
-cpu kvm64
-enable-kvm
-kernel $kernel
-initrd $initrd
-m 320
-smp 1
-net nic,vlan=1,model=e1000
-net user,vlan=1
-boot order=nc
-no-reboot
-watchdog i6300esb
-rtc base=localtime
-serial stdio
-display none
-monitor null
)
append=(
hung_task_panic=1
earlyprintk=ttyS0,115200
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
console=ttyS0,115200
console=tty0
vga=normal
root=/dev/ram0
rw
drbd.minor_count=8
)
"${kvm[@]}" --append "${append[*]}"
----------------------------------------------------------------------------
Thanks,
Fengguang
_______________________________________________
LKP mailing list
LKP@linux.intel.com
[rfcomm_run] WARNING: CPU: 0 PID: 95 at kernel/sched/core.c:7312 __might_sleep()
by Fengguang Wu
Hi Peter,
FYI, this bug seems still there on v3.18-rc2.
git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git sched/wait
commit 245747099820df3007f60128b1264fef9d2a69d2
Author: Peter Zijlstra <peterz@infradead.org>
AuthorDate: Wed Sep 24 10:18:55 2014 +0200
Commit: Peter Zijlstra <peterz@infradead.org>
CommitDate: Mon Oct 27 10:42:51 2014 +0100
sched: Debug nested sleeps
Validate we call might_sleep() with TASK_RUNNING, which catches places
where we nest blocking primitives, eg. mutex usage in a wait loop.
Since all blocking is arranged through task_struct::state, nesting
this will cause the inner primitive to set TASK_RUNNING and the outer
will thus not block.
Another observed problem is calling a blocking function from
schedule()->sched_submit_work()->blk_schedule_flush_plug() which will
then destroy the task state for the actual __schedule() call that
comes after it.
Cc: torvalds@linux-foundation.org
Cc: tglx@linutronix.de
Cc: ilya.dryomov@inktank.com
Cc: umgwanakikbuti@gmail.com
Cc: mingo@kernel.org
Cc: oleg@redhat.com
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20140924082242.591637616@infradead.org
+------------------------------------------------+------------+------------+------------+
| | 592ed717ef | 2457470998 | 2d55520314 |
+------------------------------------------------+------------+------------+------------+
| boot_successes | 60 | 0 | 0 |
| boot_failures | 0 | 20 | 11 |
| WARNING:at_kernel/sched/core.c:__might_sleep() | 0 | 20 | 11 |
| BUG:kernel_boot_hang | 0 | 20 | 11 |
| backtrace:rfcomm_run | 0 | 20 | 11 |
+------------------------------------------------+------------+------------+------------+
[ 23.006121] Bluetooth: BNEP socket layer initialized
[ 23.008365] ------------[ cut here ]------------
[ 23.009632] WARNING: CPU: 0 PID: 95 at kernel/sched/core.c:7312 __might_sleep+0x6b/0x425()
[ 23.029611] do not call blocking ops when !TASK_RUNNING; state=1 set at [<7a3b715f>] rfcomm_run+0x1e9/0x20ed
[ 23.032456] CPU: 0 PID: 95 Comm: krfcommd Not tainted 3.18.0-rc2-00037-g24574709 #30
[ 23.043505] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 23.045977] 00000000 8b0c9e5c 8b0c9e30 7a512a6d 8b0c9e4c 7904f132 00001c90 79088434
[ 23.048104] 7aad3d8e 0000026d 00000000 8b0c9e64 7904f206 00000009 8b0c9e5c 7aad3598
[ 23.058746] 8b0c9e78 8b0c9e90 79088434 7aad33f4 00001c90 7aad3598 00000001 7a3b715f
[ 23.069520] Call Trace:
[ 23.070302] [<7a512a6d>] dump_stack+0x40/0x5e
[ 23.071541] [<7904f132>] warn_slowpath_common+0x9d/0xde
[ 23.072987] [<79088434>] ? __might_sleep+0x6b/0x425
[ 23.083568] [<7904f206>] warn_slowpath_fmt+0x42/0x54
[ 23.085113] [<79088434>] __might_sleep+0x6b/0x425
[ 23.086584] [<7a3b715f>] ? rfcomm_run+0x1e9/0x20ed
[ 23.096868] [<7a3b715f>] ? rfcomm_run+0x1e9/0x20ed
[ 23.098376] [<7a52586b>] mutex_lock_nested+0x2c/0x612
[ 23.099836] [<790b980a>] ? init_timer_key+0x49/0x6b
[ 23.109733] [<7a3b4e20>] ? rfcomm_session_add+0x63/0xd2
[ 23.119542] [<7a3b71c6>] rfcomm_run+0x250/0x20ed
[ 23.120965] [<7a524184>] ? __schedule+0x75c/0xa6c
[ 23.122378] [<7a3b6f76>] ? rfcomm_check_accept+0x125/0x125
[ 23.132533] [<7907a78f>] kthread+0x148/0x15b
[ 23.133892] [<7a52cb80>] ret_from_kernel_thread+0x20/0x30
[ 23.135543] [<7907a647>] ? __kthread_unpark+0x97/0x97
[ 23.145851] ---[ end trace 62efeb57726492df ]---
git bisect start 2d55520314eb5603b855ac1b994705dc6a352d9e 522e980064c24d3dd9859e9375e17417496567cf --
git bisect good c3f9b6ec744e12ff09677c4c0cb3164ad5b62702 # 18:32 20+ 0 Merge branch 'sched/core'
git bisect good 344c57c17c7f857f9c92317e0d5cbb5c59f8d6e0 # 18:45 20+ 0 Merge branch 'perf/urgent'
git bisect good 54de76b06a8098c11f15857a57e23c6e630a34b6 # 18:51 20+ 0 Merge branch 'perf/core'
git bisect good 126b6dbcbedb5c0defe5c39e0310feed061569bf # 19:05 20+ 0 exit: Deal with nested sleeps
git bisect good 8641f9cba8ce5f3bfc5da47861180617cbfc6e7f # 19:14 20+ 0 module: Fix nested sleep
git bisect bad 245747099820df3007f60128b1264fef9d2a69d2 # 19:19 0- 6 sched: Debug nested sleeps
git bisect good 592ed717ef33150f6888c333c28021283cc9aabc # 19:26 20+ 0 net: Clean up sk_wait_event() vs might_sleep()
# first bad commit: [245747099820df3007f60128b1264fef9d2a69d2] sched: Debug nested sleeps
git bisect good 592ed717ef33150f6888c333c28021283cc9aabc # 19:31 60+ 0 net: Clean up sk_wait_event() vs might_sleep()
git bisect bad 2d55520314eb5603b855ac1b994705dc6a352d9e # 19:31 0- 11 Merge branch 'sched/wait'
git bisect good cac7f2429872d3733dc3f9915857b1691da2eb2f # 19:39 60+ 0 Linux 3.18-rc2
git bisect good 7a891e6323e963f3301e44bdeee734028e34d390 # 20:20 60+ 0 Add linux-next specific files for 20141027
This script may reproduce the error.
----------------------------------------------------------------------------
#!/bin/bash
kernel=$1
initrd=quantal-core-i386.cgz
wget --no-clobber https://github.com/fengguang/reproduce-kernel-bug/raw/master/initrd/$initrd
kvm=(
qemu-system-x86_64
-cpu kvm64
-enable-kvm
-kernel $kernel
-initrd $initrd
-m 320
-smp 2
-net nic,vlan=1,model=e1000
-net user,vlan=1
-boot order=nc
-no-reboot
-watchdog i6300esb
-rtc base=localtime
-serial stdio
-display none
-monitor null
)
append=(
hung_task_panic=1
earlyprintk=ttyS0,115200
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
console=ttyS0,115200
console=tty0
vga=normal
root=/dev/ram0
rw
drbd.minor_count=8
)
"${kvm[@]}" --append "${append[*]}"
----------------------------------------------------------------------------
Thanks,
Fengguang
[futex] 76835b0ebf8: -12.1% will-it-scale.per_process_ops
by kernel test robot
FYI, we noticed the below changes on
commit 76835b0ebf8a7fe85beb03c75121419a7dec52f0 ("futex: Ensure get_futex_key_refs() always implies a barrier")
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751 testbox/testcase/testparams
---------------- -------------------------- ---------------------------
%stddev %change %stddev
\ | \
9204866 ± 0% -10.2% 8266680 ± 0% lkp-wsx01/will-it-scale/performance-futex3
11271283 ± 0% -12.1% 9911001 ± 0% nhm4/will-it-scale/performance-futex3
10185806 -11.1% 9051578 GEO-MEAN will-it-scale.per_process_ops
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
9211097 ± 0% -10.2% 8268370 ± 0% lkp-wsx01/will-it-scale/performance-futex3
11324367 ± 0% -12.0% 9969101 ± 0% nhm4/will-it-scale/performance-futex3
10213219 -11.1% 9078999 GEO-MEAN will-it-scale.per_thread_ops
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
0.63 ± 0% +6.3% 0.66 ± 0% lkp-wsx01/will-it-scale/performance-futex3
0.66 ± 0% +9.6% 0.72 ± 0% nhm4/will-it-scale/performance-futex3
0.64 +7.9% 0.69 GEO-MEAN will-it-scale.scalability
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
2.01 ± 1% +365.3% 9.35 ± 1% lkp-wsx01/will-it-scale/performance-futex3
1.67 ± 1% +509.7% 10.17 ± 3% nhm4/will-it-scale/performance-futex3
1.83 +432.6% 9.75 GEO-MEAN perf-profile.cpu-cycles.get_futex_key_refs.isra.11.futex_wake.do_futex.sys_futex.system_call_fastpath
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
1 ± 34% +200.0% 4 ± 17% lkp-wsx01/will-it-scale/performance-futex3
1 +200.0% 4 GEO-MEAN sched_debug.cpu#63.nr_uninterruptible
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
161 ± 27% +159.0% 418 ± 29% lkp-wsx01/will-it-scale/performance-futex3
161 +159.0% 418 GEO-MEAN sched_debug.cpu#48.sched_goidle
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
395 ± 23% +146.2% 972 ± 27% lkp-wsx01/will-it-scale/performance-futex3
395 +146.2% 972 GEO-MEAN sched_debug.cpu#48.nr_switches
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
404 ± 23% +143.3% 984 ± 27% lkp-wsx01/will-it-scale/performance-futex3
404 +143.3% 984 GEO-MEAN sched_debug.cpu#48.sched_count
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
434 ± 32% -68.0% 139 ± 20% lkp-wsx01/will-it-scale/performance-futex3
434 -68.0% 138 GEO-MEAN sched_debug.cpu#61.ttwu_local
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
5335 ± 48% -54.4% 2433 ± 43% lkp-wsx01/will-it-scale/performance-futex3
5335 -54.4% 2433 GEO-MEAN sched_debug.cpu#20.sched_goidle
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
498 ± 23% -54.4% 227 ± 25% lkp-wsx01/will-it-scale/performance-futex3
498 -54.4% 227 GEO-MEAN sched_debug.cpu#55.sched_goidle
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
576 ± 26% -46.8% 306 ± 34% lkp-wsx01/will-it-scale/performance-futex3
576 -46.8% 306 GEO-MEAN sched_debug.cpu#55.ttwu_count
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
0.47 ± 7% +119.7% 1.02 ± 5% nhm4/will-it-scale/performance-futex3
0.47 +119.7% 1.02 GEO-MEAN perf-profile.cpu-cycles.ret_from_sys_call.syscall
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
11216 ± 46% -49.6% 5648 ± 38% lkp-wsx01/will-it-scale/performance-futex3
11216 -49.6% 5648 GEO-MEAN sched_debug.cpu#20.nr_switches
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
11237 ± 46% -49.5% 5669 ± 38% lkp-wsx01/will-it-scale/performance-futex3
11237 -49.5% 5669 GEO-MEAN sched_debug.cpu#20.sched_count
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
1232 ± 27% -52.8% 581 ± 21% lkp-wsx01/will-it-scale/performance-futex3
1232 -52.8% 581 GEO-MEAN sched_debug.cpu#55.nr_switches
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
155 ± 36% +182.5% 438 ± 40% lkp-wsx01/will-it-scale/performance-futex3
155 +182.5% 438 GEO-MEAN sched_debug.cpu#48.ttwu_count
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
1245 ± 27% -52.5% 591 ± 21% lkp-wsx01/will-it-scale/performance-futex3
1245 -52.5% 591 GEO-MEAN sched_debug.cpu#55.sched_count
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
1461 ± 31% -46.9% 776 ± 23% lkp-wsx01/will-it-scale/performance-futex3
1461 -46.9% 776 GEO-MEAN sched_debug.cpu#42.ttwu_local
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
856 ± 34% -50.2% 426 ± 31% lkp-wsx01/will-it-scale/performance-futex3
855 -50.2% 426 GEO-MEAN sched_debug.cpu#58.ttwu_count
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
926 ± 19% -41.0% 546 ± 24% lkp-wsx01/will-it-scale/performance-futex3
926 -41.0% 546 GEO-MEAN sched_debug.cpu#61.ttwu_count
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
1593 ± 17% -45.1% 875 ± 20% lkp-wsx01/will-it-scale/performance-futex3
1593 -45.1% 875 GEO-MEAN sched_debug.cpu#61.nr_switches
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
3667 ± 25% -38.3% 2263 ± 18% lkp-wsx01/will-it-scale/performance-futex3
3667 -38.3% 2262 GEO-MEAN sched_debug.cpu#42.nr_switches
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
1604 ± 17% -44.8% 885 ± 20% lkp-wsx01/will-it-scale/performance-futex3
1604 -44.8% 885 GEO-MEAN sched_debug.cpu#61.sched_count
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
3680 ± 25% -38.1% 2279 ± 18% lkp-wsx01/will-it-scale/performance-futex3
3680 -38.1% 2279 GEO-MEAN sched_debug.cpu#42.sched_count
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
402 ± 22% +45.4% 585 ± 29% lkp-wsx01/will-it-scale/performance-futex3
402 +45.4% 585 GEO-MEAN sched_debug.cpu#70.sched_goidle
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
153 ± 13% -35.3% 99 ± 22% nhm4/will-it-scale/performance-futex3
153 -35.3% 99 GEO-MEAN sched_debug.cpu#6.load
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
595 ± 21% +108.8% 1242 ± 40% lkp-wsx01/will-it-scale/performance-futex3
595 +108.8% 1242 GEO-MEAN sched_debug.cpu#70.ttwu_count
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
1127 ± 31% -42.3% 651 ± 8% nhm4/will-it-scale/performance-futex3
1127 -42.3% 650 GEO-MEAN sched_debug.cfs_rq[3]:/.blocked_load_avg
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
16.48 ± 1% +56.0% 25.71 ± 2% nhm4/will-it-scale/performance-futex3
16.48 +56.0% 25.71 GEO-MEAN perf-profile.cpu-cycles.futex_wake.do_futex.sys_futex.system_call_fastpath.syscall
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
575 ± 24% -39.7% 346 ± 21% lkp-wsx01/will-it-scale/performance-futex3
575 -39.7% 346 GEO-MEAN sched_debug.cpu#61.sched_goidle
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
534 ± 11% +63.0% 871 ± 35% lkp-wsx01/will-it-scale/performance-futex3
534 +63.0% 871 GEO-MEAN sched_debug.cpu#45.nr_switches
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
545 ± 11% +62.0% 883 ± 35% lkp-wsx01/will-it-scale/performance-futex3
545 +62.0% 882 GEO-MEAN sched_debug.cpu#45.sched_count
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
10 ± 26% -44.0% 5 ± 18% lkp-wsx01/will-it-scale/performance-futex3
10 -44.0% 5 GEO-MEAN sched_debug.cpu#72.cpu_load[0]
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
827 ± 26% +55.5% 1286 ± 23% nhm4/will-it-scale/performance-futex3
826 +55.5% 1285 GEO-MEAN sched_debug.cfs_rq[4]:/.blocked_load_avg
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
1245 ± 28% -38.8% 762 ± 8% nhm4/will-it-scale/performance-futex3
1245 -38.8% 762 GEO-MEAN sched_debug.cfs_rq[3]:/.tg_load_contrib
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
969 ± 22% +48.4% 1438 ± 20% nhm4/will-it-scale/performance-futex3
969 +48.4% 1438 GEO-MEAN sched_debug.cfs_rq[4]:/.tg_load_contrib
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
133 ± 11% -28.7% 95 ± 22% nhm4/will-it-scale/performance-futex3
133 -28.7% 95 GEO-MEAN sched_debug.cfs_rq[6]:/.load
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
799 ± 19% -34.8% 521 ± 26% nhm4/will-it-scale/performance-futex3
799 -34.8% 521 GEO-MEAN sched_debug.cfs_rq[2]:/.blocked_load_avg
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
906 ± 17% -30.8% 627 ± 21% nhm4/will-it-scale/performance-futex3
906 -30.8% 627 GEO-MEAN sched_debug.cfs_rq[2]:/.tg_load_contrib
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
70 ± 10% +21.7% 85 ± 9% nhm4/will-it-scale/performance-futex3
70 +21.7% 85 GEO-MEAN sched_debug.cpu#2.cpu_load[3]
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
3149 ± 11% -15.0% 2677 ± 10% lkp-wsx01/will-it-scale/performance-futex3
3149 -15.0% 2676 GEO-MEAN sched_debug.cpu#71.curr->pid
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
68 ± 12% +23.5% 84 ± 9% nhm4/will-it-scale/performance-futex3
68 +23.5% 83 GEO-MEAN sched_debug.cpu#2.cpu_load[4]
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
214345 ± 13% +31.5% 281773 ± 13% nhm4/will-it-scale/performance-futex3
214345 +31.5% 281773 GEO-MEAN sched_debug.cfs_rq[2]:/.min_vruntime
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
38234 ± 5% -17.6% 31518 ± 1% nhm4/will-it-scale/performance-futex3
38234 -17.6% 31518 GEO-MEAN softirqs.RCU
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
37071 ± 15% +36.8% 50702 ± 15% nhm4/will-it-scale/performance-futex3
37071 +36.8% 50702 GEO-MEAN sched_debug.cfs_rq[2]:/.exec_clock
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
324067 ± 8% -18.2% 265084 ± 14% nhm4/will-it-scale/performance-futex3
324067 -18.2% 265084 GEO-MEAN sched_debug.cfs_rq[6]:/.min_vruntime
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
103 ± 5% -21.8% 81 ± 9% nhm4/will-it-scale/performance-futex3
103 -21.8% 81 GEO-MEAN sched_debug.cpu#6.cpu_load[2]
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
114 ± 4% -21.6% 89 ± 12% nhm4/will-it-scale/performance-futex3
114 -21.6% 89 GEO-MEAN sched_debug.cpu#6.cpu_load[1]
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
58381 ± 10% -23.2% 44812 ± 18% nhm4/will-it-scale/performance-futex3
58381 -23.2% 44812 GEO-MEAN sched_debug.cfs_rq[6]:/.exec_clock
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
82318 ± 11% -20.7% 65237 ± 13% nhm4/will-it-scale/performance-futex3
82318 -20.7% 65237 GEO-MEAN sched_debug.cpu#1.nr_load_updates
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
95 ± 7% -19.9% 76 ± 8% nhm4/will-it-scale/performance-futex3
95 -19.9% 76 GEO-MEAN sched_debug.cpu#6.cpu_load[4]
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
7.76 ± 0% +15.3% 8.95 ± 0% lkp-wsx01/will-it-scale/performance-futex3
6.55 ± 3% +22.3% 8.01 ± 1% nhm4/will-it-scale/performance-futex3
7.13 +18.8% 8.47 GEO-MEAN perf-profile.cpu-cycles.get_futex_key.futex_wake.do_futex.sys_futex.system_call_fastpath
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
98 ± 7% -20.5% 78 ± 8% nhm4/will-it-scale/performance-futex3
98 -20.5% 78 GEO-MEAN sched_debug.cpu#6.cpu_load[3]
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
1.22 ± 4% -15.5% 1.03 ± 9% nhm4/will-it-scale/performance-futex3
1.22 -15.5% 1.03 GEO-MEAN perf-profile.cpu-cycles.drop_futex_key_refs.isra.12.do_futex.sys_futex.system_call_fastpath.syscall
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
3.76 ± 0% -13.5% 3.25 ± 0% lkp-wsx01/will-it-scale/performance-futex3
3.23 ± 2% -15.0% 2.75 ± 2% nhm4/will-it-scale/performance-futex3
3.49 -14.3% 2.99 GEO-MEAN perf-profile.cpu-cycles.sysret_check.syscall
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
3053 ± 8% -18.2% 2499 ± 13% nhm4/will-it-scale/performance-futex3
3053 -18.2% 2499 GEO-MEAN sched_debug.cpu#6.curr->pid
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
11.31 ± 0% -11.2% 10.05 ± 0% lkp-wsx01/will-it-scale/performance-futex3
11.27 ± 1% -16.4% 9.43 ± 1% nhm4/will-it-scale/performance-futex3
11.29 -13.8% 9.73 GEO-MEAN perf-profile.cpu-cycles.system_call.syscall
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
18.21 ± 1% -13.6% 15.74 ± 2% nhm4/will-it-scale/performance-futex3
18.21 -13.6% 15.74 GEO-MEAN perf-profile.cpu-cycles.hash_futex.do_futex.sys_futex.system_call_fastpath.syscall
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
12.56 ± 0% -12.2% 11.03 ± 1% lkp-wsx01/will-it-scale/performance-futex3
12.54 ± 1% -14.4% 10.73 ± 3% nhm4/will-it-scale/performance-futex3
12.55 -13.3% 10.88 GEO-MEAN perf-profile.cpu-cycles.system_call_after_swapgs.syscall
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
21140 ± 9% +17.5% 24849 ± 8% nhm4/will-it-scale/performance-futex3
21140 +17.5% 24849 GEO-MEAN sched_debug.cfs_rq[2]:/.avg->runnable_avg_sum
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
4.52 ± 2% -11.5% 4.00 ± 1% lkp-wsx01/will-it-scale/performance-futex3
42.44 ± 0% +13.8% 48.30 ± 2% nhm4/will-it-scale/performance-futex3
13.85 +0.4% 13.90 GEO-MEAN perf-profile.cpu-cycles.do_futex.sys_futex.system_call_fastpath.syscall
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
1.21 ± 1% -11.4% 1.08 ± 1% lkp-wsx01/will-it-scale/performance-futex3
1.21 -11.4% 1.08 GEO-MEAN perf-profile.cpu-cycles.drop_futex_key_refs.isra.12.futex_wake.do_futex.sys_futex.system_call_fastpath
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
50.65 ± 0% +11.9% 56.69 ± 2% nhm4/will-it-scale/performance-futex3
50.65 +11.9% 56.69 GEO-MEAN perf-profile.cpu-cycles.system_call_fastpath.syscall
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
20.50 ± 0% -10.9% 18.25 ± 0% lkp-wsx01/will-it-scale/performance-futex3
20.50 -10.9% 18.25 GEO-MEAN perf-profile.cpu-cycles.syscall
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
27368 ± 7% -13.6% 23638 ± 8% nhm4/will-it-scale/performance-futex3
27368 -13.6% 23638 GEO-MEAN sched_debug.cfs_rq[6]:/.avg->runnable_avg_sum
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
4.92 ± 0% -11.5% 4.36 ± 1% lkp-wsx01/will-it-scale/performance-futex3
48.57 ± 0% +11.3% 54.04 ± 2% nhm4/will-it-scale/performance-futex3
15.46 -0.7% 15.34 GEO-MEAN perf-profile.cpu-cycles.sys_futex.system_call_fastpath.syscall
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
596 ± 7% -13.6% 515 ± 8% nhm4/will-it-scale/performance-futex3
596 -13.6% 515 GEO-MEAN sched_debug.cfs_rq[6]:/.tg_runnable_contrib
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
3007 ± 6% -11.8% 2654 ± 13% lkp-wsx01/will-it-scale/performance-futex3
3007 -11.8% 2653 GEO-MEAN sched_debug.cpu#38.curr->pid
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
39113 ± 2% +10.7% 43292 ± 4% nhm4/will-it-scale/performance-futex3
39113 +10.7% 43292 GEO-MEAN cpuidle.C6-NHM.usage
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
1401 ± 0% -10.1% 1260 ± 0% lkp-wsx01/will-it-scale/performance-futex3
199 ± 0% -21.5% 156 ± 2% nhm4/will-it-scale/performance-futex3
528 -16.0% 443 GEO-MEAN time.user_time
0429fbc0bdc297d6 76835b0ebf8a7fe85beb03c751
---------------- --------------------------
3901 ± 0% +3.6% 4041 ± 0% lkp-wsx01/will-it-scale/performance-futex3
522 ± 0% +8.2% 565 ± 0% nhm4/will-it-scale/performance-futex3
1428 +5.9% 1511 GEO-MEAN time.system_time
lkp-wsx01: Westmere-EX
Memory: 128G
nhm4: Nehalem
Memory: 4G
will-it-scale.scalability
0.73 ++----------------------------------------O--------------------------+
O O O O O O O O O O O O O
0.72 ++ |
0.71 ++ |
| |
0.7 ++ |
| |
0.69 ++ |
| |
0.68 ++ |
0.67 ++ |
| |
0.66 *+...*.....*....*.... ..*.... ..*.....*....* |
| *.....*.. *.. |
0.65 ++-------------------------------------------------------------------+
will-it-scale.per_process_ops
1.14e+07 ++---------------------------------------------------------------+
*....*....*....*.... ..*....*....*....*....*....* |
1.12e+07 ++ *.. |
| |
1.1e+07 ++ |
1.08e+07 ++ |
| |
1.06e+07 ++ |
| |
1.04e+07 ++ |
1.02e+07 ++ |
| |
1e+07 ++ |
O O O O O O O O O O O O O O
9.8e+06 ++---------------------------------------------------------------+
will-it-scale.per_thread_ops
1.14e+07 ++---------------------------------------------------------------+
*....*....*....*....*....*....*....*....*....*....* |
1.12e+07 ++ |
| |
1.1e+07 ++ |
1.08e+07 ++ |
| |
1.06e+07 ++ |
| |
1.04e+07 ++ |
1.02e+07 ++ |
| |
1e+07 O+ O O O O O O O O O O
| O O O |
9.8e+06 ++---------------------------------------------------------------+
[Three further ASCII plots follow in the original mail; their metric titles were lost in extraction. In each, the bisect-bad (O) samples sit clearly above the bisect-good (*) samples.]
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
apt-get install ruby ruby-oj
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/setup-local job.yaml # the job file attached in this email
bin/run-local job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Fengguang
_______________________________________________
LKP mailing list
LKP(a)linux.intel.com
[sched] [ INFO: suspicious RCU usage. ]
by Fengguang Wu
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git sched/urgent
commit f6a2b544517d33f6a1e428567bda96fd859ce1c9
Author: Kirill Tkhai <ktkhai(a)parallels.com>
AuthorDate: Mon Oct 27 14:18:25 2014 +0400
Commit: Peter Zijlstra <peterz(a)infradead.org>
CommitDate: Mon Oct 27 13:23:31 2014 +0100
sched: Fix race between task_group and sched_task_group
The race may happen when somebody is changing the task_group of a forking task.
After dup_task_struct() the child's cgroup is the same as the parent's (the
task struct is simply memory-copied), and its cfs_rq and rt_rq are the same
as the parent's too.
But if the parent changes its task_group before cgroup_post_fork() is called,
this is not reflected in the child: the child's cfs_rq and rt_rq remain
the same, while its task_group changes in cgroup_post_fork().
To fix this we introduce a fork() method, which calls sched_move_task() directly.
This function updates sched_task_group appropriately; its logic also has no
problem with freshly created tasks, so nothing special needs to be introduced
and we can just use it.
Possibly, this resolves Burke Libbey's problem: https://lkml.org/lkml/2014/10/24/456
Signed-off-by: Kirill Tkhai <ktkhai(a)parallels.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
Link: http://lkml.kernel.org/r/1414405105.19914.169.camel@tkhai
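The race described in the commit message can be sketched as a toy Python model (not kernel code; names are illustrative: "cgroup" stands for the task's cgroup membership, "sched_group" for task_struct::sched_task_group, and the fork()/sched_move_task() functions only mimic the ordering of the real calls):

```python
# Toy model of the fork()/task_group race and its fix.  Not kernel code.
class Task:
    def __init__(self, cgroup):
        self.cgroup = cgroup       # what cgroup_post_fork() updates
        self.sched_group = cgroup  # what the scheduler actually uses

def sched_move_task(task):
    # Re-derives the scheduler-side group from the task's cgroup,
    # as the real sched_move_task() does.
    task.sched_group = task.cgroup

def fork(parent, with_fix):
    # dup_task_struct(): child starts as a bitwise copy of the parent
    child = Task(parent.cgroup)
    child.sched_group = parent.sched_group

    # Race window: parent is moved to another group before
    # cgroup_post_fork() runs for the child.
    parent.cgroup = "B"
    sched_move_task(parent)

    # cgroup_post_fork(): the child's cgroup follows the parent's new
    # group, but the scheduler-side state is left stale...
    child.cgroup = parent.cgroup
    if with_fix:
        # ...unless a fork() method calls sched_move_task() directly,
        # which is what the patch adds.
        sched_move_task(child)
    return child

buggy = fork(Task("A"), with_fix=False)
fixed = fork(Task("A"), with_fix=True)
```

Without the fix the child ends up in cgroup "B" while the scheduler still accounts it to group "A"; with the fix both agree.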
+---------------------------+-----------+------------+------------+
| | v3.18-rc2 | f6a2b54451 | 8e5859c73b |
+---------------------------+-----------+------------+------------+
| boot_successes | 200 | 0 | 0 |
| boot_failures | 1 | 20 | 11 |
| BUG:kernel_boot_hang | 1 | | |
| INFO:suspicious_RCU_usage | 0 | 20 | 11 |
| backtrace:do_fork | 0 | 20 | 11 |
+---------------------------+-----------+------------+------------+
[ 0.060187]
[ 0.060500] ===============================
[ 0.060500] ===============================
[ 0.061307] [ INFO: suspicious RCU usage. ]
[ 0.061307] [ INFO: suspicious RCU usage. ]
[ 0.062109] 3.18.0-rc2-gf6a2b54 #404 Not tainted
[ 0.062109] 3.18.0-rc2-gf6a2b54 #404 Not tainted
[ 0.063049] -------------------------------
git bisect start 8e5859c73b9f45602222441a23eba899bb24c82e 522e980064c24d3dd9859e9375e17417496567cf --
git bisect bad 259820751fd17a4b49098429c68c2c0adfd1c9ed # 21:11 0- 2 Merge branch 'sched/core'
git bisect bad 8c64bf8de891aa51e916b51a3f7992321ec19b63 # 21:15 0- 1 Merge branch 'sched/urgent'
git bisect bad fd457f2bafdc4e367d1814ac395035a35980783f # 21:27 0- 7 sched/fair: Care divide error in update_task_scan_period()
git bisect bad fb5b330f079e243dcc831e9a3d65b9b9fbbed7f8 # 21:36 0- 1 sched/deadline: don't replenish from a !SCHED_DEADLINE entity
git bisect bad f6a2b544517d33f6a1e428567bda96fd859ce1c9 # 21:41 0- 2 sched: Fix race between task_group and sched_task_group
# first bad commit: [f6a2b544517d33f6a1e428567bda96fd859ce1c9] sched: Fix race between task_group and sched_task_group
git bisect good cac7f2429872d3733dc3f9915857b1691da2eb2f # 21:58 60+ 1 Linux 3.18-rc2
git bisect bad 8e5859c73b9f45602222441a23eba899bb24c82e # 22:00 0- 11 Merge branch 'sched/wait'
git bisect good cac7f2429872d3733dc3f9915857b1691da2eb2f # 22:05 60+ 1 Linux 3.18-rc2
git bisect good 7a891e6323e963f3301e44bdeee734028e34d390 # 22:14 60+ 0 Add linux-next specific files for 20141027
This script may reproduce the error.
----------------------------------------------------------------------------
#!/bin/bash
kernel=$1
initrd=quantal-core-x86_64.cgz
wget --no-clobber https://github.com/fengguang/reproduce-kernel-bug/raw/master/initrd/$initrd
kvm=(
qemu-system-x86_64
-cpu kvm64
-enable-kvm
-kernel $kernel
-initrd $initrd
-m 320
-smp 2
-net nic,vlan=1,model=e1000
-net user,vlan=1
-boot order=nc
-no-reboot
-watchdog i6300esb
-rtc base=localtime
-serial stdio
-display none
-monitor null
)
append=(
hung_task_panic=1
earlyprintk=ttyS0,115200
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
console=ttyS0,115200
console=tty0
vga=normal
root=/dev/ram0
rw
drbd.minor_count=8
)
"${kvm[@]}" --append "${append[*]}"
----------------------------------------------------------------------------
Thanks,
Fengguang
[vfs] WARNING: CPU: 3 PID: 2339 at mm/truncate.c:758 pagecache_isize_extended+0xdd/0x120()
by Fengguang Wu
Hi Jan,
Your patch gives a warning on the xfs code path. :)
git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4.git dev
commit be330474e2d0533a7a6185e567f3654fec096dbd ("vfs: fix data corruption when blocksize < pagesize for mmaped data")
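The problem that patch targets can be sketched with a toy Python model (not the kernel implementation; "tail_page_writable" stands in for the tail page being mapped read-write, "allocated" for on-disk block allocation, and the sizes are illustrative): when blocksize < pagesize, extending i_size can expose blocks within an already-mapped tail page that were never allocated, so stores through the mapping are silently lost unless the page is write-protected so the next store faults into the filesystem.

```python
# Toy model of the blocksize < pagesize corruption and its fix.
PAGE, BLOCK = 4096, 1024

class File:
    def __init__(self, size):
        self.size = size
        # blocks backing the initial file contents
        self.allocated = set(range((size + BLOCK - 1) // BLOCK))
        self.tail_page_writable = True  # tail page mmapped read-write

    def extend(self, new_size, with_fix):
        self.size = new_size
        if with_fix:
            # pagecache_isize_extended(): write-protect the tail page so
            # the next mmap store takes a write fault
            self.tail_page_writable = False

    def mmap_store(self, offset):
        if not self.tail_page_writable:
            # write fault: the filesystem allocates the backing block
            self.allocated.add(offset // BLOCK)
            self.tail_page_writable = True
        # the store survives writeback only if its block is allocated
        return offset // BLOCK in self.allocated

buggy, fixed = File(1024), File(1024)
buggy.extend(3072, with_fix=False)  # size grows within the tail page
fixed.extend(3072, with_fix=True)
```

Without the write-protect step, a store at offset 2048 lands on a block that was never allocated and the data is lost; with it, the store faults first and the block gets allocated.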
testbox/testcase/testparams: bay/fileio/performance-600s-100%-1HDD-xfs-64G-1024f-seqrewr-sync
f6e63f90809946d4 be330474e2d0533a7a6185e567
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
:1 100% 5:5 kmsg.WARNING:at_mm/truncate.c:pagecache_isize_extended()
:1 100% 5:5 dmesg.WARNING:at_mm/truncate.c:pagecache_isize_extended()
%stddev %change %stddev
\ | \
4272 ± 29% +224.1% 13848 ± 20% sched_debug.cfs_rq[2]:/.spread0
5 ± 25% +84.0% 9 ± 18% sched_debug.cpu#1.cpu_load[1]
3 ± 22% +88.9% 6 ± 21% sched_debug.cpu#1.cpu_load[2]
6 ± 23% +53.1% 9 ± 16% sched_debug.cpu#1.cpu_load[0]
15780 ± 5% +50.5% 23751 ± 8% sched_debug.cfs_rq[3]:/.exec_clock
20615 ± 6% +52.2% 31371 ± 15% sched_debug.cfs_rq[3]:/.min_vruntime
193118 ± 16% +33.9% 258650 ± 15% sched_debug.cpu#1.ttwu_local
277966 ± 10% -26.1% 205411 ± 13% sched_debug.cpu#2.ttwu_local
24040 ± 6% +38.2% 33230 ± 11% sched_debug.cfs_rq[2]:/.min_vruntime
98 ± 15% +57.8% 154 ± 23% sched_debug.cfs_rq[1]:/.blocked_load_avg
23851 ± 0% +33.9% 31941 ± 0% proc-vmstat.nr_free_pages
95193 ± 0% +34.0% 127532 ± 0% meminfo.MemFree
96061 ± 0% +33.4% 128133 ± 0% vmstat.memory.free
19715 ± 3% +27.7% 25185 ± 5% sched_debug.cfs_rq[2]:/.exec_clock
690451 ± 9% -18.2% 564810 ± 6% sched_debug.cpu#2.ttwu_count
640 ± 12% +27.3% 815 ± 9% slabinfo.proc_inode_cache.active_objs
704 ± 6% +21.3% 854 ± 9% slabinfo.proc_inode_cache.num_objs
52372 ± 6% +18.8% 62197 ± 4% sched_debug.cpu#3.nr_load_updates
32780 ± 0% +18.7% 38924 ± 4% meminfo.DirectMap4k
1252 ± 2% +20.8% 1512 ± 9% slabinfo.kmalloc-128.num_objs
1252 ± 2% +20.8% 1512 ± 9% slabinfo.kmalloc-128.active_objs
89552 ± 0% +9.1% 97715 ± 1% softirqs.TIMER
51602 ± 2% +10.4% 56947 ± 7% sched_debug.cpu#1.nr_load_updates
81.08 ± 1% +14.8% 93.05 ± 1% time.system_time
8413 ± 1% -9.1% 7649 ± 6% time.involuntary_context_switches
14 ± 0% +10.0% 15 ± 3% time.percent_of_cpu_this_job_got
14850 ± 0% -8.1% 13644 ± 11% vmstat.system.cs
7715 ± 0% -7.7% 7118 ± 11% vmstat.system.in
603 ± 0% +2.4% 617 ± 0% time.elapsed_time
141 ± 0% -2.2% 138 ± 0% iostat.sda.avgqu-sz
<5>[ 25.956576] XFS (sda1): Mounting V4 Filesystem
<6>[ 26.194468] XFS (sda1): Ending clean mount
<4>[ 27.258450] ------------[ cut here ]------------
<4>[ 27.258789] WARNING: CPU: 3 PID: 2339 at mm/truncate.c:758 pagecache_isize_extended+0xdd/0x120()
<4>[ 27.259443] Modules linked in: ipmi_watchdog ipmi_msghandler btrfs xor raid6_pq sg sr_mod cdrom sd_mod firewire_ohci firewire_core crc_itu_t snd_hda_codec_realtek pcspkr snd_hda_codec_generic ahci libahci libata snd_hda_intel i2c_i801 snd_hda_controller parport_pc parport snd_hda_codec snd_hwdep snd_pcm snd_timer shpchp snd x38_edac edac_core soundcore acpi_cpufreq
<4>[ 27.262734] CPU: 3 PID: 2339 Comm: fallocate Not tainted 3.17.0-gda9a9f1 #1
<4>[ 27.263153] Hardware name: / , BIOS VVRBLI9J.86A.2891.2007.0511.1144 05/11/2007
<4>[ 27.263780] 0000000000000009 ffff88007a43fd88 ffffffff81859ea6 0000000000000000
<4>[ 27.264492] ffff88007a43fdc0 ffffffff8106ef0d 0000000000001000 ffff88005b6f05a8
<4>[ 27.265199] 0000000000000000 ffff88005b6f05a8 0000000004000000 ffff88007a43fdd0
<4>[ 27.265906] Call Trace:
<4>[ 27.266165] [<ffffffff81859ea6>] dump_stack+0x4d/0x66
<4>[ 27.266511] [<ffffffff8106ef0d>] warn_slowpath_common+0x7d/0xa0
<4>[ 27.266900] [<ffffffff8106efea>] warn_slowpath_null+0x1a/0x20
<4>[ 27.267286] [<ffffffff8117020d>] pagecache_isize_extended+0xdd/0x120
<4>[ 27.267690] [<ffffffff811712b7>] truncate_setsize+0x27/0x40
<4>[ 27.268068] [<ffffffff8133eab7>] xfs_setattr_size+0x157/0x3a0
<4>[ 27.268442] [<ffffffff8134c827>] ? xfs_trans_commit+0x157/0x250
<4>[ 27.268821] [<ffffffff813336df>] xfs_file_fallocate+0x2df/0x300
<4>[ 27.269215] [<ffffffff811dbb09>] ? __sb_start_write+0x49/0xf0
<4>[ 27.269596] [<ffffffff813923b4>] ? selinux_file_permission+0xc4/0x120
<4>[ 27.270009] [<ffffffff811d7563>] do_fallocate+0x123/0x1b0
<4>[ 27.270380] [<ffffffff811d7633>] SyS_fallocate+0x43/0x70
<4>[ 27.270738] [<ffffffff81862c69>] system_call_fastpath+0x16/0x1b
<4>[ 27.271119] ---[ end trace 6a3b1350ad399610 ]---
<4>[ 27.274498] ------------[ cut here ]------------
Thanks,
Fengguang
[AHCI] 18dcf433f3d: -3.3% fileio.requests_per_sec
by kernel test robot
FYI, we noticed the below changes on
commit 18dcf433f3ded61eb140a55e7048ec2fef79e723 ("AHCI: Optimize single IRQ interrupt processing")
227dfb4dbf109596 18dcf433f3ded61eb140a55e70 testbox/testcase/testparams
---------------- -------------------------- ---------------------------
%stddev %change %stddev
\ | \
61.60 ± 1% -3.3% 59.56 ± 0% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
61.60 -3.3% 59.56 GEO-MEAN fileio.requests_per_sec
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
436798 ± 28% -73.4% 116066 ± 33% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
436798 -73.4% 116066 GEO-MEAN sched_debug.cpu#2.ttwu_local
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
29791 ± 4% -72.7% 8123 ± 6% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
29791 -72.7% 8123 GEO-MEAN sched_debug.cfs_rq[2]:/.exec_clock
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
26505 ± 3% -71.1% 7657 ± 6% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
26505 -71.1% 7657 GEO-MEAN sched_debug.cfs_rq[3]:/.exec_clock
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
211746 ± 47% +122.7% 471499 ± 23% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
211746 +122.7% 471499 GEO-MEAN sched_debug.cpu#3.ttwu_count
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
25 ± 38% +206.3% 77 ± 12% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
25 +206.3% 77 GEO-MEAN sched_debug.cfs_rq[0]:/.tg_runnable_contrib
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
395023 ± 21% +179.3% 1103475 ± 9% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
395023 +179.3% 1103475 GEO-MEAN sched_debug.cpu#0.sched_goidle
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
26678 ± 6% +185.9% 76281 ± 0% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
26678 +185.9% 76281 GEO-MEAN sched_debug.cfs_rq[0]:/.exec_clock
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
206903 ± 45% +105.1% 424444 ± 26% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
206903 +105.1% 424444 GEO-MEAN sched_debug.cpu#3.sched_goidle
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
814594 ± 21% +173.2% 2225796 ± 9% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
814594 +173.2% 2225795 GEO-MEAN sched_debug.cpu#0.nr_switches
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
815199 ± 21% +173.2% 2227078 ± 9% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
815198 +173.2% 2227078 GEO-MEAN sched_debug.cpu#0.sched_count
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
1213 ± 36% +195.3% 3583 ± 12% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
1213 +195.3% 3583 GEO-MEAN sched_debug.cfs_rq[0]:/.avg->runnable_avg_sum
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
29624 ± 6% +176.5% 81923 ± 1% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
29624 +176.5% 81923 GEO-MEAN sched_debug.cfs_rq[0]:/.min_vruntime
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
193739 ± 46% +99.9% 387345 ± 15% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
193739 +99.9% 387345 GEO-MEAN sched_debug.cpu#1.sched_goidle
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
482611 ± 24% -60.5% 190666 ± 19% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
482611 -60.5% 190666 GEO-MEAN sched_debug.cpu#2.sched_goidle
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
31073 ± 4% -63.6% 11312 ± 2% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
31073 -63.6% 11312 GEO-MEAN sched_debug.cfs_rq[3]:/.min_vruntime
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
12 ± 33% -68.3% 4 ± 27% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
12 -68.3% 4 GEO-MEAN sched_debug.cpu#3.cpu_load[4]
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
440281 ± 42% +97.4% 869268 ± 25% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
440281 +97.4% 869268 GEO-MEAN sched_debug.cpu#3.nr_switches
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
440898 ± 42% +97.2% 869665 ± 25% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
440898 +97.2% 869665 GEO-MEAN sched_debug.cpu#3.sched_count
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
33967 ± 4% -63.3% 12450 ± 3% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
33967 -63.3% 12450 GEO-MEAN sched_debug.cfs_rq[2]:/.min_vruntime
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
413955 ± 43% +93.7% 801767 ± 14% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
413955 +93.7% 801767 GEO-MEAN sched_debug.cpu#1.nr_switches
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
414509 ± 43% +93.5% 802162 ± 14% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
414509 +93.5% 802162 GEO-MEAN sched_debug.cpu#1.sched_count
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
991210 ± 24% -57.1% 425186 ± 17% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
991210 -57.1% 425186 GEO-MEAN sched_debug.cpu#2.nr_switches
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
991832 ± 24% -57.1% 425587 ± 17% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
991831 -57.1% 425587 GEO-MEAN sched_debug.cpu#2.sched_count
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
437 ± 14% -59.6% 176 ± 27% bay/dd-write/performance-1HDD-cfq-xfs-10dd
539 ± 7% -50.6% 266 ± 7% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
485 -55.3% 217 GEO-MEAN slabinfo.xfs_buf.active_objs
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
446 ± 14% -57.7% 188 ± 30% bay/dd-write/performance-1HDD-cfq-xfs-10dd
548 ± 6% -43.9% 307 ± 4% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
494 -51.3% 240 GEO-MEAN slabinfo.xfs_buf.num_objs
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
216837 ± 39% +85.6% 402550 ± 14% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
216837 +85.6% 402550 GEO-MEAN sched_debug.cpu#1.ttwu_count
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
496141 ± 24% +117.8% 1080683 ± 2% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
496141 +117.8% 1080683 GEO-MEAN sched_debug.cpu#2.ttwu_count
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
23697 ± 7% -51.9% 11404 ± 2% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
23697 -51.9% 11404 GEO-MEAN sched_debug.cfs_rq[1]:/.min_vruntime
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
88762 ± 5% +94.9% 173015 ± 2% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
88762 +94.9% 173015 GEO-MEAN sched_debug.cpu#0.nr_load_updates
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
21233 ± 8% -49.3% 10767 ± 5% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
21233 -49.3% 10767 GEO-MEAN sched_debug.cfs_rq[1]:/.exec_clock
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
14 ± 29% -51.4% 7 ± 15% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
14 -51.4% 6 GEO-MEAN sched_debug.cpu#3.cpu_load[3]
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
44 ± 13% -45.2% 24 ± 7% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
44 -45.2% 24 GEO-MEAN sched_debug.cfs_rq[3]:/.tg_runnable_contrib
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
5.00 ± 16% -31.1% 3.45 ± 28% bay/dd-write/performance-1HDD-cfq-xfs-10dd
5.00 -31.1% 3.45 GEO-MEAN perf-profile.cpu-cycles.__writeback_inodes_wb.wb_writeback.bdi_writeback_workfn.process_one_work.worker_thread
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
5.00 ± 16% -31.6% 3.42 ± 28% bay/dd-write/performance-1HDD-cfq-xfs-10dd
5.00 -31.6% 3.42 GEO-MEAN perf-profile.cpu-cycles.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback.bdi_writeback_workfn.process_one_work
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
4.99 ± 16% -31.6% 3.41 ± 28% bay/dd-write/performance-1HDD-cfq-xfs-10dd
4.99 -31.6% 3.41 GEO-MEAN perf-profile.cpu-cycles.write_cache_pages.generic_writepages.xfs_vm_writepages.do_writepages.__writeback_single_inode
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
4.99 ± 16% -31.6% 3.41 ± 28% bay/dd-write/performance-1HDD-cfq-xfs-10dd
4.99 -31.6% 3.41 GEO-MEAN perf-profile.cpu-cycles.xfs_vm_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes.__writeback_inodes_wb
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
5.00 ± 16% -31.1% 3.45 ± 28% bay/dd-write/performance-1HDD-cfq-xfs-10dd
5.00 -31.1% 3.45 GEO-MEAN perf-profile.cpu-cycles.wb_writeback.bdi_writeback_workfn.process_one_work.worker_thread.kthread
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
4.99 ± 16% -31.6% 3.41 ± 28% bay/dd-write/performance-1HDD-cfq-xfs-10dd
4.99 -31.6% 3.41 GEO-MEAN perf-profile.cpu-cycles.do_writepages.__writeback_single_inode.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
4.98 ± 16% -31.4% 3.41 ± 28% bay/dd-write/performance-1HDD-cfq-xfs-10dd
4.98 -31.4% 3.41 GEO-MEAN perf-profile.cpu-cycles.__writepage.write_cache_pages.generic_writepages.xfs_vm_writepages.do_writepages
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
5.00 ± 16% -31.7% 3.41 ± 28% bay/dd-write/performance-1HDD-cfq-xfs-10dd
5.00 -31.7% 3.41 GEO-MEAN perf-profile.cpu-cycles.__writeback_single_inode.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback.bdi_writeback_workfn
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
4.98 ± 16% -31.4% 3.41 ± 28% bay/dd-write/performance-1HDD-cfq-xfs-10dd
4.98 -31.4% 3.41 GEO-MEAN perf-profile.cpu-cycles.xfs_vm_writepage.__writepage.write_cache_pages.generic_writepages.xfs_vm_writepages
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
4.99 ± 16% -31.6% 3.41 ± 28% bay/dd-write/performance-1HDD-cfq-xfs-10dd
4.99 -31.6% 3.41 GEO-MEAN perf-profile.cpu-cycles.generic_writepages.xfs_vm_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
5.00 ± 16% -31.1% 3.45 ± 28% bay/dd-write/performance-1HDD-cfq-xfs-10dd
5.00 -31.1% 3.45 GEO-MEAN perf-profile.cpu-cycles.bdi_writeback_workfn.process_one_work.worker_thread.kthread.ret_from_fork
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
4.02 ± 17% -37.3% 2.52 ± 32% bay/dd-write/performance-1HDD-cfq-xfs-10dd
4.02 -37.3% 2.52 GEO-MEAN perf-profile.cpu-cycles.xfs_cluster_write.xfs_vm_writepage.__writepage.write_cache_pages.generic_writepages
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
2071 ± 12% -43.9% 1162 ± 6% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
2071 -43.9% 1162 GEO-MEAN sched_debug.cfs_rq[3]:/.avg->runnable_avg_sum
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
13 ± 48% -42.6% 7 ± 18% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
13 -42.6% 7 GEO-MEAN sched_debug.cpu#2.cpu_load[3]
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
3.20 ± 19% -37.6% 1.99 ± 25% bay/dd-write/performance-1HDD-cfq-xfs-10dd
3.20 -37.6% 1.99 GEO-MEAN perf-profile.cpu-cycles.xfs_convert_page.isra.11.xfs_cluster_write.xfs_vm_writepage.__writepage.write_cache_pages
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
9 ± 16% -43.7% 5 ± 49% bay/dd-write/performance-1HDD-cfq-xfs-10dd
9 -43.8% 5 GEO-MEAN sched_debug.cfs_rq[0]:/.runnable_load_avg
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
627 ± 12% +55.9% 978 ± 8% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
627 +55.9% 978 GEO-MEAN slabinfo.btrfs_delayed_tree_ref.num_objs
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
627 ± 12% +54.9% 972 ± 8% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
627 +54.9% 972 GEO-MEAN slabinfo.btrfs_delayed_tree_ref.active_objs
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
2552 ± 8% -35.9% 1635 ± 10% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
2552 -35.9% 1635 GEO-MEAN slabinfo.kmalloc-96.active_objs
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
2577 ± 7% -35.0% 1676 ± 9% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
2577 -35.0% 1676 GEO-MEAN slabinfo.kmalloc-96.num_objs
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
650 ± 5% -29.7% 457 ± 7% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
649 -29.7% 457 GEO-MEAN slabinfo.blkdev_requests.active_objs
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
1.17 ± 13% +24.6% 1.46 ± 8% bay/dd-write/performance-1HDD-cfq-xfs-10dd
1.17 +24.6% 1.46 GEO-MEAN perf-profile.cpu-cycles.try_to_release_page.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_zone
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
664 ± 4% -26.8% 486 ± 7% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
664 -26.8% 486 GEO-MEAN slabinfo.blkdev_requests.num_objs
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
1.40 ± 4% -23.4% 1.07 ± 9% bay/dd-write/performance-1HDD-cfq-xfs-10dd
1.40 -23.4% 1.07 GEO-MEAN perf-profile.cpu-cycles.default_idle.arch_cpu_idle.cpu_startup_entry.rest_init.start_kernel
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
1.40 ± 4% -22.8% 1.08 ± 10% bay/dd-write/performance-1HDD-cfq-xfs-10dd
1.40 -22.8% 1.08 GEO-MEAN perf-profile.cpu-cycles.arch_cpu_idle.cpu_startup_entry.rest_init.start_kernel.x86_64_start_reservations
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
100710 ± 5% +28.7% 129619 ± 0% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
100710 +28.7% 129619 GEO-MEAN sched_debug.cpu#2.nr_load_updates
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
2671 ± 10% +19.1% 3181 ± 6% bay/dd-write/performance-1HDD-cfq-xfs-10dd
2671 +19.1% 3181 GEO-MEAN slabinfo.anon_vma.num_objs
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
86107 ± 5% -21.2% 67836 ± 8% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
86107 -21.2% 67835 GEO-MEAN sched_debug.cpu#3.nr_load_updates
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
1.57 ± 5% -15.2% 1.33 ± 9% bay/dd-write/performance-1HDD-cfq-xfs-10dd
1.57 -15.2% 1.33 GEO-MEAN perf-profile.cpu-cycles.rest_init.start_kernel.x86_64_start_reservations.x86_64_start_kernel
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
1.56 ± 5% -14.5% 1.33 ± 9% bay/dd-write/performance-1HDD-cfq-xfs-10dd
1.56 -14.5% 1.33 GEO-MEAN perf-profile.cpu-cycles.cpu_startup_entry.rest_init.start_kernel.x86_64_start_reservations.x86_64_start_kernel
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
1.57 ± 5% -15.2% 1.33 ± 9% bay/dd-write/performance-1HDD-cfq-xfs-10dd
1.57 -15.2% 1.33 GEO-MEAN perf-profile.cpu-cycles.x86_64_start_reservations.x86_64_start_kernel
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
1.57 ± 5% -15.2% 1.33 ± 9% bay/dd-write/performance-1HDD-cfq-xfs-10dd
1.57 -15.2% 1.33 GEO-MEAN perf-profile.cpu-cycles.x86_64_start_kernel
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
1.57 ± 5% -15.2% 1.33 ± 9% bay/dd-write/performance-1HDD-cfq-xfs-10dd
1.57 -15.2% 1.33 GEO-MEAN perf-profile.cpu-cycles.start_kernel.x86_64_start_reservations.x86_64_start_kernel
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
50 ± 32% -34.0% 33 ± 5% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
49 -34.0% 33 GEO-MEAN sched_debug.cfs_rq[2]:/.tg_runnable_contrib
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
815734 ± 1% -16.3% 682841 ± 2% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
815734 -16.3% 682841 GEO-MEAN sched_debug.cpu#0.avg_idle
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
2341 ± 31% -33.3% 1560 ± 4% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
2340 -33.3% 1560 GEO-MEAN sched_debug.cfs_rq[2]:/.avg->runnable_avg_sum
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
77032 ± 7% -15.7% 64945 ± 4% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
77032 -15.7% 64945 GEO-MEAN sched_debug.cpu#1.nr_load_updates
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
718 ± 5% +18.6% 852 ± 3% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
718 +18.6% 852 GEO-MEAN slabinfo.btrfs_trans_handle.active_objs
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
723 ± 5% +18.8% 859 ± 3% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
723 +18.8% 859 GEO-MEAN slabinfo.btrfs_trans_handle.num_objs
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
4889 ± 5% -13.4% 4233 ± 5% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
4889 -13.4% 4233 GEO-MEAN sched_debug.cpu#3.curr->pid
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
4606 ± 2% +14.4% 5268 ± 3% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
4606 +14.4% 5268 GEO-MEAN sched_debug.cpu#2.curr->pid
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
63213 ± 0% +13.2% 71544 ± 0% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
63213 +13.2% 71544 GEO-MEAN softirqs.SCHED
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
166 ± 4% +10.1% 183 ± 4% bay/dd-write/performance-1HDD-cfq-xfs-10dd
166 +10.1% 183 GEO-MEAN uptime.idle
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
418402 ± 6% +250.9% 1468345 ± 0% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
418402 +250.9% 1468345 GEO-MEAN time.voluntary_context_switches
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
1434 ± 0% +56.3% 2241 ± 1% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
1433 +56.3% 2241 GEO-MEAN vmstat.system.in
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
5144 ± 0% +6.8% 5493 ± 0% bay/dd-write/performance-1HDD-cfq-xfs-10dd
2610 ± 0% +65.1% 4310 ± 1% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
3664 +32.8% 4865 GEO-MEAN vmstat.system.cs
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
10375 ± 3% -7.7% 9576 ± 3% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
10374 -7.7% 9576 GEO-MEAN time.involuntary_context_switches
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
3210548 ± 0% +6.7% 3426526 ± 0% bay/dd-write/performance-1HDD-cfq-xfs-10dd
3210548 +6.7% 3426526 GEO-MEAN perf-stat.context-switches
227dfb4dbf109596 18dcf433f3ded61eb140a55e70
---------------- --------------------------
706310 ± 1% -3.2% 683678 ± 0% bay/fileio/performance-600s-100%-1HDD-btrfs-64G-1024f-rndrw-sync
706310 -3.2% 683678 GEO-MEAN time.file_system_inputs
bay: Pentium D
Memory: 2G
time.voluntary_context_switches
1.6e+06 ++----------------------------------------------------------------+
| O O O O O O O O O OO O O O O |
1.4e+06 O+O O OO O |
| |
1.2e+06 ++ |
| |
1e+06 ++ |
| |
800000 ++ |
| |
600000 ++ |
| .*. |
400000 *+*.*.**.*.*.*.*.**.*.*.*.*.**.*.*.*.*.**.*.*.*.*.**.*.*.* **.*.*
| |
200000 ++----------------------------------------------------------------+
softirqs.SCHED
78000 ++------------------------------------------------------------------+
| O |
76000 ++ O |
74000 ++ |
| O O O O O O * |
72000 ++O O O O O O O O O O O |
70000 O+ O :: |
| : : |
68000 ++ : : |
66000 ++ : : |
| : : |
64000 *+ : :.*.**. .*.*. *.*. .*.|
62000 ++ *.*.* .*.*.*. .*.*.*. *. .*.* * *.*.*.* *.* * *
|: + * * * *.* |
60000 ++*-----------------------------------------------------------------+
vmstat.system.in
2600 ++-------------------------------------------------------------------+
2400 ++ O |
| O O O O O O O O O O |
2200 O+O O O O O O O OO |
2000 ++ |
| |
1800 ++ |
1600 ++ |
1400 *+ *.*.*.*.**.*.*.*.*.*.*.*.*.*.*.**.*.*.*.*.*.*.*.*.*.*.**.*.*.*.*.*
| : |
1200 ++ : |
1000 ++ : |
| : |
800 ++* |
600 ++-------------------------------------------------------------------+
vmstat.system.cs
5000 ++-------------------------------------------------------------------+
| |
4500 ++ O O O OO O O O O O OO O |
4000 O+O O O O O O O |
| |
3500 ++ |
| |
3000 ++ |
| *.*.*. .*. .*.*.*.*.*. .*. *.*.*.*. .*. .*.*.*.*.**.*.*.*.*.*
2500 *+ : *.** * * *.* * * |
2000 ++ : |
|: : |
1500 ++: |
| * |
1000 ++-------------------------------------------------------------------+
sched_debug.cfs_rq[3]:/.exec_clock
35000 ++------------------------------------------------------------------+
| |
30000 ++ * |
| .* .*.* *. .* : + .*. *.*.*. .* .* |
|.*.*.* :.* + + * *. : * *.* * *.* + .*.**.* + .*
25000 *+ * *.* * + + * * |
| * |
20000 ++ |
| |
15000 ++ |
O O O |
| O |
10000 ++ O O O O O O |
| O OO O O O O O O O O |
5000 ++------------------------------------------------------------------+
sched_debug.cfs_rq[3]:/.min_vruntime
40000 ++------------------------------------------------------------------+
| * |
35000 ++ :: * |
| .*.* .* : : .*. :+ .* |
| .* * + .*.* *. : : *.* * *.*. .* + .*.**.*.*. .*
30000 ++*.*.* :+ *.* * *. + * * * |
* * * |
25000 ++ |
| |
20000 ++ |
| |
O |
15000 ++O O O O |
| O O O O O OO O O O O |
10000 ++----O-OO--------------------------O-O-----------------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
apt-get install ruby ruby-oj
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/setup-local job.yaml # the job file attached in this email
bin/run-local job.yaml
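The reproduce steps above can be collected into a small dry-run script. This is only a sketch: the `run` and `reproduce` helper names are mine, not part of lkp-tests, and by default it just prints each command (set DRY_RUN=0 on a disposable Debian-style test box to actually execute them).

```shell
#!/bin/sh
# Hypothetical wrapper around the reproduce steps from the report.
# DRY_RUN=1 (the default) only echoes each command; DRY_RUN=0 runs them.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "+ $*"   # show what would run
    else
        "$@"          # really run it
    fi
}

reproduce() {
    run apt-get install ruby ruby-oj
    run git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
    run cd lkp-tests
    run bin/setup-local job.yaml   # job.yaml is the job file attached in the email
    run bin/run-local job.yaml
}

reproduce
```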
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Fengguang
_______________________________________________
LKP mailing list
LKP(a)linux.intel.com
[rcu] BUG: kernel_boot_hang mode:0x10d0
by kernel test robot
FYI, we noticed the below changes on
commit eea203fea3484598280a07fe503e025e886297fb ("rcu: Use pr_alert/pr_cont for printing logs")
+------------------------------------------------------------------+------------+------------+
| | 188c1e896c | eea203fea3 |
+------------------------------------------------------------------+------------+------------+
| boot_successes | 0 | 2 |
| boot_failures | 15 | 13 |
| page_allocation_failure:order:,mode:x10d0 | 15 | 13 |
| Kernel_panic-not_syncing:Out_of_memory_and_no_killable_processes | 15 | |
| backtrace:ring_buffer_consumer_thread | 15 | 13 |
| backtrace:rcu_torture_stats | 15 | |
| BUG:kernel_boot_hang | 0 | 13 |
+------------------------------------------------------------------+------------+------------+
Thanks,
Fengguang
[ACPI / processor] f3ca4164529: -48.3% will-it-scale.per_process_ops
by kernel test robot
FYI, we noticed the below changes on
commit f3ca4164529b875374c410193bbbac0ee960895f ("ACPI / processor: Rework processor throttling with work_on_cpu()")
v3.14-rc4 f3ca4164529b875374c410193b testbox/testcase/testparams
---------------- -------------------------- ---------------------------
%stddev %change %stddev
\ | \
624516 ± 0% -48.3% 322914 ± 0% brickland1/will-it-scale/powersave-pwrite1
624516 -48.3% 322914 GEO-MEAN will-it-scale.per_process_ops
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
576025 ± 0% -47.9% 299956 ± 0% brickland1/will-it-scale/powersave-pwrite1
576024 -47.9% 299955 GEO-MEAN will-it-scale.per_thread_ops
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
0.46 ± 0% +6.5% 0.48 ± 2% brickland1/will-it-scale/powersave-pwrite1
0.46 +6.5% 0.48 GEO-MEAN will-it-scale.scalability
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
176818 ± 3% -84.7% 26974 ± 7% brickland1/will-it-scale/powersave-pwrite1
176818 -84.7% 26973 GEO-MEAN cpuidle.C6-IVB.usage
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
7.139e+09 ± 1% -78.5% 1.533e+09 ± 5% brickland1/will-it-scale/powersave-pwrite1
7.139e+09 -78.5% 1.533e+09 GEO-MEAN cpuidle.C6-IVB.time
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
32 ± 34% -77.2% 7 ± 44% brickland1/will-it-scale/powersave-pwrite1
32 -77.2% 7 GEO-MEAN sched_debug.cfs_rq[33]:/.blocked_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
84 ± 27% -71.6% 24 ± 39% brickland1/will-it-scale/powersave-pwrite1
84 -71.6% 24 GEO-MEAN proc-vmstat.nr_dirtied
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
976 ± 43% +148.7% 2427 ± 37% brickland1/will-it-scale/powersave-pwrite1
976 +148.7% 2427 GEO-MEAN sched_debug.cpu#20.sched_goidle
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
41 ± 28% +185.6% 118 ± 19% brickland1/will-it-scale/powersave-pwrite1
41 +185.6% 118 GEO-MEAN sched_debug.cpu#76.nr_uninterruptible
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
2144 ± 38% +134.5% 5027 ± 36% brickland1/will-it-scale/powersave-pwrite1
2144 +134.5% 5027 GEO-MEAN sched_debug.cpu#20.nr_switches
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
37 ± 18% -60.8% 14 ± 44% brickland1/will-it-scale/powersave-pwrite1
37 -60.8% 14 GEO-MEAN sched_debug.cfs_rq[91]:/.load
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
37 ± 18% -60.8% 14 ± 44% brickland1/will-it-scale/powersave-pwrite1
37 -60.8% 14 GEO-MEAN sched_debug.cpu#91.load
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
35 ± 46% +235.4% 117 ± 35% brickland1/will-it-scale/powersave-pwrite1
34 +235.4% 117 GEO-MEAN sched_debug.cfs_rq[92]:/.tg_load_contrib
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
54 ± 9% -57.0% 23 ± 43% brickland1/will-it-scale/powersave-pwrite1
54 -57.0% 23 GEO-MEAN sched_debug.cpu#91.cpu_load[1]
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
37 ± 29% -64.5% 13 ± 16% brickland1/will-it-scale/powersave-pwrite1
37 -64.5% 13 GEO-MEAN sched_debug.cfs_rq[33]:/.tg_load_contrib
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1785 ± 46% +75.8% 3139 ± 28% brickland1/will-it-scale/powersave-pwrite1
1785 +75.8% 3139 GEO-MEAN sched_debug.cpu#41.nr_switches
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1799 ± 23% -38.6% 1104 ± 38% brickland1/will-it-scale/powersave-pwrite1
1799 -38.6% 1104 GEO-MEAN sched_debug.cpu#60.nr_switches
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
46 ± 9% -54.9% 21 ± 24% brickland1/will-it-scale/powersave-pwrite1
46 -54.9% 21 GEO-MEAN sched_debug.cpu#91.cpu_load[2]
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
2138 ± 34% +108.7% 4462 ± 33% brickland1/will-it-scale/powersave-pwrite1
2138 +108.7% 4462 GEO-MEAN sched_debug.cpu#43.nr_switches
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
344 ± 36% +73.2% 596 ± 18% brickland1/will-it-scale/powersave-pwrite1
344 +73.2% 596 GEO-MEAN sched_debug.cfs_rq[91]:/.blocked_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
104 ± 26% -47.6% 54 ± 1% brickland1/will-it-scale/powersave-pwrite1
104 -47.6% 54 GEO-MEAN sched_debug.cpu#0.cpu_load[4]
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1803 ± 23% -37.9% 1119 ± 37% brickland1/will-it-scale/powersave-pwrite1
1803 -37.9% 1119 GEO-MEAN sched_debug.cpu#60.sched_count
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
680 ± 38% +104.0% 1387 ± 37% brickland1/will-it-scale/powersave-pwrite1
679 +104.0% 1387 GEO-MEAN sched_debug.cpu#40.sched_goidle
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
96 ± 26% -47.6% 50 ± 0% brickland1/will-it-scale/powersave-pwrite1
96 -47.6% 50 GEO-MEAN sched_debug.cpu#0.cpu_load[3]
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
382 ± 34% +60.2% 612 ± 16% brickland1/will-it-scale/powersave-pwrite1
382 +60.2% 612 GEO-MEAN sched_debug.cfs_rq[91]:/.tg_load_contrib
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
881 ± 38% +135.9% 2079 ± 37% brickland1/will-it-scale/powersave-pwrite1
881 +135.9% 2079 GEO-MEAN sched_debug.cpu#43.ttwu_count
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
63 ± 24% -40.5% 37 ± 5% brickland1/will-it-scale/powersave-pwrite1
63 -40.5% 37 GEO-MEAN sched_debug.cfs_rq[0]:/.runnable_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
69 ± 34% -46.7% 37 ± 7% brickland1/will-it-scale/powersave-pwrite1
69 -46.7% 37 GEO-MEAN sched_debug.cfs_rq[0]:/.load
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
63 ± 24% -40.5% 37 ± 5% brickland1/will-it-scale/powersave-pwrite1
63 -40.5% 37 GEO-MEAN sched_debug.cpu#0.load
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
85 ± 26% -46.4% 45 ± 2% brickland1/will-it-scale/powersave-pwrite1
85 -46.4% 45 GEO-MEAN sched_debug.cpu#0.cpu_load[2]
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
41 ± 12% -50.2% 20 ± 15% brickland1/will-it-scale/powersave-pwrite1
41 -50.2% 20 GEO-MEAN sched_debug.cpu#91.cpu_load[3]
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
63 ± 24% -37.0% 40 ± 11% brickland1/will-it-scale/powersave-pwrite1
63 -37.0% 40 GEO-MEAN sched_debug.cfs_rq[0]:/.tg_load_contrib
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
374 ± 28% +81.1% 678 ± 45% brickland1/will-it-scale/powersave-pwrite1
374 +81.1% 678 GEO-MEAN sched_debug.cpu#12.sched_goidle
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
2.26 ± 7% +96.5% 4.44 ± 3% brickland1/will-it-scale/powersave-pwrite1
2.26 +96.5% 4.44 GEO-MEAN perf-profile.cpu-cycles.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__run_hrtimer
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
2.40 ± 8% +95.7% 4.69 ± 3% brickland1/will-it-scale/powersave-pwrite1
2.40 +95.7% 4.69 GEO-MEAN perf-profile.cpu-cycles.update_process_times.tick_sched_handle.tick_sched_timer.__run_hrtimer.hrtimer_interrupt
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
2.41 ± 8% +95.7% 4.71 ± 3% brickland1/will-it-scale/powersave-pwrite1
2.41 +95.7% 4.71 GEO-MEAN perf-profile.cpu-cycles.tick_sched_handle.isra.17.tick_sched_timer.__run_hrtimer.hrtimer_interrupt.local_apic_timer_interrupt
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
25 ± 23% -29.9% 17 ± 29% brickland1/will-it-scale/powersave-pwrite1
25 -29.9% 17 GEO-MEAN sched_debug.cfs_rq[17]:/.load
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
75 ± 28% -41.9% 43 ± 5% brickland1/will-it-scale/powersave-pwrite1
74 -41.9% 43 GEO-MEAN sched_debug.cpu#0.cpu_load[1]
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
130 ± 18% -46.8% 69 ± 13% brickland1/will-it-scale/powersave-pwrite1
129 -46.8% 69 GEO-MEAN proc-vmstat.nr_written
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
898 ± 23% +68.6% 1513 ± 40% brickland1/will-it-scale/powersave-pwrite1
898 +68.6% 1513 GEO-MEAN sched_debug.cpu#12.nr_switches
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
2177 ± 25% +102.9% 4418 ± 47% brickland1/will-it-scale/powersave-pwrite1
2177 +102.9% 4418 GEO-MEAN sched_debug.cpu#46.nr_switches
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
66 ± 29% -35.3% 43 ± 13% brickland1/will-it-scale/powersave-pwrite1
66 -35.3% 43 GEO-MEAN sched_debug.cpu#0.cpu_load[0]
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1.12 ± 0% -43.8% 0.63 ± 0% brickland1/will-it-scale/powersave-pwrite1
1.12 -43.8% 0.63 GEO-MEAN turbostat.GHz
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
167 ± 28% +67.5% 280 ± 3% brickland1/will-it-scale/powersave-pwrite1
167 +67.5% 279 GEO-MEAN slabinfo.nfs_write_data.num_objs
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
167 ± 28% +67.5% 280 ± 3% brickland1/will-it-scale/powersave-pwrite1
167 +67.5% 279 GEO-MEAN slabinfo.nfs_write_data.active_objs
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
2.00 ± 7% +75.5% 3.52 ± 3% brickland1/will-it-scale/powersave-pwrite1
2.00 +75.5% 3.52 GEO-MEAN perf-profile.cpu-cycles.sched_clock_tick.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
33 ± 13% -39.4% 20 ± 13% brickland1/will-it-scale/powersave-pwrite1
33 -39.4% 19 GEO-MEAN sched_debug.cpu#91.cpu_load[4]
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1.98 ± 7% +73.2% 3.44 ± 3% brickland1/will-it-scale/powersave-pwrite1
1.98 +73.2% 3.44 GEO-MEAN perf-profile.cpu-cycles.ktime_get.sched_clock_tick.scheduler_tick.update_process_times.tick_sched_handle
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1.96 ± 7% +71.8% 3.37 ± 3% brickland1/will-it-scale/powersave-pwrite1
1.96 +71.8% 3.37 GEO-MEAN perf-profile.cpu-cycles.read_hpet.ktime_get.sched_clock_tick.scheduler_tick.update_process_times
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
474 ± 17% +57.3% 746 ± 31% brickland1/will-it-scale/powersave-pwrite1
474 +57.3% 745 GEO-MEAN sched_debug.cpu#110.ttwu_count
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1585 ± 29% -37.8% 985 ± 30% brickland1/will-it-scale/powersave-pwrite1
1584 -37.8% 985 GEO-MEAN sched_debug.cpu#108.ttwu_count
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
23 ± 7% +81.0% 42 ± 22% brickland1/will-it-scale/powersave-pwrite1
23 +81.0% 42 GEO-MEAN sched_debug.cpu#106.cpu_load[4]
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
2741 ± 19% +43.2% 3924 ± 18% brickland1/will-it-scale/powersave-pwrite1
2741 +43.2% 3924 GEO-MEAN sched_debug.cpu#35.nr_switches
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1.32 ± 6% +41.2% 1.86 ± 12% brickland1/will-it-scale/powersave-pwrite1
1.32 +41.2% 1.86 GEO-MEAN perf-profile.cpu-cycles.apic_timer_interrupt.shmem_write_end.generic_file_buffered_write.__generic_file_aio_write.generic_file_aio_write
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
4 ± 0% +45.0% 5 ± 12% brickland1/will-it-scale/powersave-pwrite1
4 +45.0% 5 GEO-MEAN sched_debug.cpu#92.cpu_load[2]
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1.29 ± 6% +39.9% 1.80 ± 11% brickland1/will-it-scale/powersave-pwrite1
1.29 +39.9% 1.80 GEO-MEAN perf-profile.cpu-cycles.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.shmem_write_end.generic_file_buffered_write
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1.29 ± 7% +40.0% 1.80 ± 12% brickland1/will-it-scale/powersave-pwrite1
1.29 +40.0% 1.80 GEO-MEAN perf-profile.cpu-cycles.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.shmem_write_end
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1.29 ± 6% +39.9% 1.81 ± 11% brickland1/will-it-scale/powersave-pwrite1
1.29 +39.9% 1.81 GEO-MEAN perf-profile.cpu-cycles.smp_apic_timer_interrupt.apic_timer_interrupt.shmem_write_end.generic_file_buffered_write.__generic_file_aio_write
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
426 ± 39% -43.1% 242 ± 10% brickland1/will-it-scale/powersave-pwrite1
426 -43.1% 242 GEO-MEAN sched_debug.cpu#60.sched_goidle
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
28 ± 5% +57.4% 44 ± 23% brickland1/will-it-scale/powersave-pwrite1
28 +57.4% 44 GEO-MEAN sched_debug.cpu#106.cpu_load[3]
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
5 ± 7% +26.9% 6 ± 7% brickland1/will-it-scale/powersave-pwrite1
5 +26.9% 6 GEO-MEAN sched_debug.cpu#32.cpu_load[3]
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
7.81 ± 7% +39.6% 10.91 ± 2% brickland1/will-it-scale/powersave-pwrite1
7.81 +39.6% 10.91 GEO-MEAN perf-profile.cpu-cycles.__run_hrtimer.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
7.78 ± 7% +38.9% 10.80 ± 2% brickland1/will-it-scale/powersave-pwrite1
7.78 +38.9% 10.80 GEO-MEAN perf-profile.cpu-cycles.tick_sched_timer.__run_hrtimer.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
2961 ± 9% +30.2% 3856 ± 27% brickland1/will-it-scale/powersave-pwrite1
2960 +30.2% 3856 GEO-MEAN sched_debug.cpu#18.sched_goidle
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3.87 ± 5% +40.4% 5.44 ± 1% brickland1/will-it-scale/powersave-pwrite1
3.87 +40.4% 5.44 GEO-MEAN perf-profile.cpu-cycles.apic_timer_interrupt.generic_file_buffered_write.__generic_file_aio_write.generic_file_aio_write.do_sync_write
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
106 ± 7% +58.7% 169 ± 31% brickland1/will-it-scale/powersave-pwrite1
106 +58.7% 169 GEO-MEAN sched_debug.cpu#86.ttwu_count
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3.76 ± 5% +39.6% 5.24 ± 1% brickland1/will-it-scale/powersave-pwrite1
3.76 +39.6% 5.24 GEO-MEAN perf-profile.cpu-cycles.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.generic_file_buffered_write
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
803 ± 17% +40.3% 1127 ± 19% brickland1/will-it-scale/powersave-pwrite1
803 +40.3% 1127 GEO-MEAN sched_debug.cpu#39.sched_goidle
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3.78 ± 5% +39.5% 5.27 ± 1% brickland1/will-it-scale/powersave-pwrite1
3.78 +39.5% 5.27 GEO-MEAN perf-profile.cpu-cycles.smp_apic_timer_interrupt.apic_timer_interrupt.generic_file_buffered_write.__generic_file_aio_write.generic_file_aio_write
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3.76 ± 5% +39.4% 5.25 ± 1% brickland1/will-it-scale/powersave-pwrite1
3.76 +39.4% 5.25 GEO-MEAN perf-profile.cpu-cycles.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.generic_file_buffered_write.__generic_file_aio_write
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
19688 ± 19% +35.1% 26591 ± 0% brickland1/will-it-scale/powersave-pwrite1
19688 +35.1% 26591 GEO-MEAN sched_debug.cfs_rq[118]:/.exec_clock
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1712 ± 17% +44.7% 2477 ± 25% brickland1/will-it-scale/powersave-pwrite1
1712 +44.7% 2476 GEO-MEAN sched_debug.cpu#39.nr_switches
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
73 ± 1% +63.7% 119 ± 47% brickland1/will-it-scale/powersave-pwrite1
73 +63.7% 119 GEO-MEAN sched_debug.cpu#86.ttwu_local
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
429 ± 23% +46.7% 630 ± 29% brickland1/will-it-scale/powersave-pwrite1
429 +46.7% 630 GEO-MEAN sched_debug.cpu#26.ttwu_count
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
98 ± 15% +65.9% 162 ± 36% brickland1/will-it-scale/powersave-pwrite1
98 +65.9% 162 GEO-MEAN sched_debug.cpu#86.sched_goidle
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
71965 ± 14% +17.5% 84574 ± 0% brickland1/will-it-scale/powersave-pwrite1
71965 +17.5% 84574 GEO-MEAN sched_debug.cfs_rq[60]:/.exec_clock
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
367 ± 17% +43.8% 528 ± 24% brickland1/will-it-scale/powersave-pwrite1
367 +43.8% 528 GEO-MEAN sched_debug.cpu#82.nr_switches
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
371 ± 17% +45.0% 538 ± 26% brickland1/will-it-scale/powersave-pwrite1
371 +45.0% 538 GEO-MEAN sched_debug.cpu#82.sched_count
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
196924 ± 13% +17.8% 231931 ± 0% brickland1/will-it-scale/powersave-pwrite1
196924 +17.8% 231931 GEO-MEAN sched_debug.cfs_rq[60]:/.spread0
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
309 ± 10% +65.1% 511 ± 40% brickland1/will-it-scale/powersave-pwrite1
309 +65.1% 511 GEO-MEAN sched_debug.cpu#86.nr_switches
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3844 ± 17% +30.9% 5032 ± 0% brickland1/will-it-scale/powersave-pwrite1
3844 +30.9% 5032 GEO-MEAN sched_debug.cpu#118.curr->pid
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
313 ± 10% +64.4% 514 ± 39% brickland1/will-it-scale/powersave-pwrite1
313 +64.4% 514 GEO-MEAN sched_debug.cpu#86.sched_count
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
928 ± 32% -28.0% 668 ± 14% brickland1/will-it-scale/powersave-pwrite1
928 -28.0% 668 GEO-MEAN sched_debug.cpu#63.nr_switches
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3080 ± 6% +25.8% 3875 ± 2% brickland1/will-it-scale/powersave-pwrite1
3080 +25.8% 3875 GEO-MEAN sched_debug.cfs_rq[107]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3070 ± 6% +25.4% 3849 ± 2% brickland1/will-it-scale/powersave-pwrite1
3070 +25.4% 3849 GEO-MEAN sched_debug.cfs_rq[109]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
6762 ± 12% -18.8% 5493 ± 16% brickland1/will-it-scale/powersave-pwrite1
6762 -18.8% 5493 GEO-MEAN proc-vmstat.pgalloc_dma32
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3072 ± 6% +25.5% 3854 ± 2% brickland1/will-it-scale/powersave-pwrite1
3072 +25.5% 3854 GEO-MEAN sched_debug.cfs_rq[108]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
4 ± 0% +35.0% 5 ± 14% brickland1/will-it-scale/powersave-pwrite1
4 +35.0% 5 GEO-MEAN sched_debug.cpu#92.cpu_load[4]
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
4 ± 0% +45.0% 5 ± 20% brickland1/will-it-scale/powersave-pwrite1
4 +45.0% 5 GEO-MEAN sched_debug.cpu#92.cpu_load[3]
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1609011 ± 17% +29.1% 2076675 ± 0% brickland1/will-it-scale/powersave-pwrite1
1609011 +29.1% 2076675 GEO-MEAN sched_debug.cfs_rq[118]:/.min_vruntime
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3209 ± 2% +21.4% 3896 ± 2% brickland1/will-it-scale/powersave-pwrite1
3209 +21.4% 3896 GEO-MEAN sched_debug.cfs_rq[100]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3105 ± 5% +25.1% 3885 ± 1% brickland1/will-it-scale/powersave-pwrite1
3105 +25.1% 3885 GEO-MEAN sched_debug.cfs_rq[106]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3197 ± 2% +21.2% 3875 ± 2% brickland1/will-it-scale/powersave-pwrite1
3197 +21.2% 3875 GEO-MEAN sched_debug.cfs_rq[102]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3198 ± 2% +22.1% 3906 ± 2% brickland1/will-it-scale/powersave-pwrite1
3198 +22.1% 3906 GEO-MEAN sched_debug.cfs_rq[95]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3195 ± 2% +22.0% 3897 ± 2% brickland1/will-it-scale/powersave-pwrite1
3195 +22.0% 3897 GEO-MEAN sched_debug.cfs_rq[99]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3202 ± 2% +22.4% 3919 ± 2% brickland1/will-it-scale/powersave-pwrite1
3202 +22.4% 3919 GEO-MEAN sched_debug.cfs_rq[97]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3211 ± 2% +21.3% 3895 ± 2% brickland1/will-it-scale/powersave-pwrite1
3211 +21.3% 3895 GEO-MEAN sched_debug.cfs_rq[101]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3195 ± 2% +22.3% 3908 ± 2% brickland1/will-it-scale/powersave-pwrite1
3195 +22.3% 3908 GEO-MEAN sched_debug.cfs_rq[98]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3045 ± 5% +23.4% 3758 ± 2% brickland1/will-it-scale/powersave-pwrite1
3045 +23.4% 3758 GEO-MEAN sched_debug.cfs_rq[119]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3076 ± 5% +23.9% 3811 ± 2% brickland1/will-it-scale/powersave-pwrite1
3075 +23.9% 3810 GEO-MEAN sched_debug.cfs_rq[113]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3060 ± 5% +24.2% 3800 ± 2% brickland1/will-it-scale/powersave-pwrite1
3060 +24.2% 3800 GEO-MEAN sched_debug.cfs_rq[114]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3215 ± 3% +21.2% 3897 ± 1% brickland1/will-it-scale/powersave-pwrite1
3215 +21.2% 3897 GEO-MEAN sched_debug.cfs_rq[94]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3229 ± 3% +21.0% 3906 ± 1% brickland1/will-it-scale/powersave-pwrite1
3229 +21.0% 3906 GEO-MEAN sched_debug.cfs_rq[93]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3071 ± 6% +24.9% 3834 ± 2% brickland1/will-it-scale/powersave-pwrite1
3070 +24.9% 3834 GEO-MEAN sched_debug.cfs_rq[111]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
336 ± 4% +23.1% 413 ± 6% brickland1/will-it-scale/powersave-pwrite1
336 +23.1% 413 GEO-MEAN sched_debug.cfs_rq[106]:/.tg_runnable_contrib
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3081 ± 6% +24.9% 3848 ± 2% brickland1/will-it-scale/powersave-pwrite1
3081 +24.9% 3848 GEO-MEAN sched_debug.cfs_rq[110]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3202 ± 2% +21.9% 3904 ± 2% brickland1/will-it-scale/powersave-pwrite1
3202 +21.9% 3904 GEO-MEAN sched_debug.cfs_rq[96]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3102 ± 5% +25.3% 3888 ± 2% brickland1/will-it-scale/powersave-pwrite1
3102 +25.3% 3888 GEO-MEAN sched_debug.cfs_rq[105]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
19954 ± 1% +20.9% 24119 ± 2% brickland1/will-it-scale/powersave-pwrite1
19954 +20.9% 24119 GEO-MEAN sched_debug.cfs_rq[106]:/.exec_clock
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3056 ± 5% +24.0% 3790 ± 2% brickland1/will-it-scale/powersave-pwrite1
3056 +24.0% 3790 GEO-MEAN sched_debug.cfs_rq[115]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3063 ± 5% +23.7% 3791 ± 2% brickland1/will-it-scale/powersave-pwrite1
3063 +23.7% 3790 GEO-MEAN sched_debug.cfs_rq[116]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3204 ± 2% +20.8% 3872 ± 2% brickland1/will-it-scale/powersave-pwrite1
3204 +20.8% 3872 GEO-MEAN sched_debug.cfs_rq[103]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3062 ± 5% +23.4% 3778 ± 2% brickland1/will-it-scale/powersave-pwrite1
3062 +23.4% 3778 GEO-MEAN sched_debug.cfs_rq[117]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3072 ± 5% +23.5% 3795 ± 2% brickland1/will-it-scale/powersave-pwrite1
3072 +23.5% 3795 GEO-MEAN sched_debug.cfs_rq[112]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3060 ± 5% +23.1% 3768 ± 2% brickland1/will-it-scale/powersave-pwrite1
3060 +23.1% 3768 GEO-MEAN sched_debug.cfs_rq[118]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1.23 ± 5% +18.3% 1.46 ± 4% brickland1/will-it-scale/powersave-pwrite1
1.23 +18.3% 1.46 GEO-MEAN perf-profile.cpu-cycles.apic_timer_interrupt.__generic_file_aio_write.generic_file_aio_write.do_sync_write.vfs_write
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
15533 ± 4% +23.4% 19163 ± 6% brickland1/will-it-scale/powersave-pwrite1
15533 +23.4% 19162 GEO-MEAN sched_debug.cfs_rq[106]:/.avg->runnable_avg_sum
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3209 ± 2% +21.7% 3907 ± 2% brickland1/will-it-scale/powersave-pwrite1
3209 +21.7% 3907 GEO-MEAN sched_debug.cfs_rq[104]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3235 ± 3% +19.9% 3877 ± 1% brickland1/will-it-scale/powersave-pwrite1
3235 +19.9% 3877 GEO-MEAN sched_debug.cfs_rq[92]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1.21 ± 6% +17.1% 1.41 ± 4% brickland1/will-it-scale/powersave-pwrite1
1.21 +17.1% 1.41 GEO-MEAN perf-profile.cpu-cycles.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.__generic_file_aio_write.generic_file_aio_write
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1.21 ± 6% +17.1% 1.41 ± 4% brickland1/will-it-scale/powersave-pwrite1
1.21 +17.1% 1.41 GEO-MEAN perf-profile.cpu-cycles.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.__generic_file_aio_write
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1.21 ± 6% +17.2% 1.42 ± 4% brickland1/will-it-scale/powersave-pwrite1
1.21 +17.2% 1.42 GEO-MEAN perf-profile.cpu-cycles.smp_apic_timer_interrupt.apic_timer_interrupt.__generic_file_aio_write.generic_file_aio_write.do_sync_write
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3230 ± 2% +19.9% 3873 ± 2% brickland1/will-it-scale/powersave-pwrite1
3230 +19.9% 3873 GEO-MEAN sched_debug.cfs_rq[83]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3235 ± 2% +20.3% 3891 ± 1% brickland1/will-it-scale/powersave-pwrite1
3235 +20.3% 3891 GEO-MEAN sched_debug.cfs_rq[85]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3239 ± 3% +19.7% 3877 ± 1% brickland1/will-it-scale/powersave-pwrite1
3239 +19.7% 3877 GEO-MEAN sched_debug.cfs_rq[89]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3248 ± 3% +19.7% 3889 ± 1% brickland1/will-it-scale/powersave-pwrite1
3248 +19.7% 3889 GEO-MEAN sched_debug.cfs_rq[91]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3230 ± 2% +19.9% 3873 ± 2% brickland1/will-it-scale/powersave-pwrite1
3230 +19.9% 3873 GEO-MEAN sched_debug.cfs_rq[77]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1.38 ± 2% -14.6% 1.18 ± 2% brickland1/will-it-scale/powersave-pwrite1
1.38 -14.6% 1.18 GEO-MEAN perf-profile.cpu-cycles.mutex_unlock.do_sync_write.vfs_write.sys_pwrite64.system_call_fastpath
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
49 ± 9% +33.9% 66 ± 18% brickland1/will-it-scale/powersave-pwrite1
49 +33.9% 66 GEO-MEAN sched_debug.cpu#118.ttwu_local
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3235 ± 2% +20.3% 3891 ± 1% brickland1/will-it-scale/powersave-pwrite1
3235 +20.3% 3891 GEO-MEAN sched_debug.cfs_rq[84]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3226 ± 2% +19.7% 3862 ± 1% brickland1/will-it-scale/powersave-pwrite1
3226 +19.7% 3861 GEO-MEAN sched_debug.cfs_rq[87]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3227 ± 3% +20.0% 3872 ± 2% brickland1/will-it-scale/powersave-pwrite1
3227 +20.0% 3872 GEO-MEAN sched_debug.cfs_rq[82]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3236 ± 2% +20.0% 3882 ± 1% brickland1/will-it-scale/powersave-pwrite1
3235 +20.0% 3882 GEO-MEAN sched_debug.cfs_rq[86]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3239 ± 3% +20.1% 3891 ± 1% brickland1/will-it-scale/powersave-pwrite1
3239 +20.1% 3891 GEO-MEAN sched_debug.cfs_rq[90]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3239 ± 3% +19.5% 3870 ± 2% brickland1/will-it-scale/powersave-pwrite1
3239 +19.5% 3870 GEO-MEAN sched_debug.cfs_rq[81]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3242 ± 3% +18.3% 3834 ± 2% brickland1/will-it-scale/powersave-pwrite1
3242 +18.3% 3834 GEO-MEAN sched_debug.cfs_rq[79]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3240 ± 2% +18.9% 3851 ± 2% brickland1/will-it-scale/powersave-pwrite1
3240 +18.9% 3851 GEO-MEAN sched_debug.cfs_rq[78]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3241 ± 2% +18.1% 3828 ± 2% brickland1/will-it-scale/powersave-pwrite1
3241 +18.1% 3828 GEO-MEAN sched_debug.cfs_rq[80]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3232 ± 2% +18.9% 3842 ± 1% brickland1/will-it-scale/powersave-pwrite1
3232 +18.9% 3842 GEO-MEAN sched_debug.cfs_rq[88]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3245 ± 2% +18.2% 3835 ± 3% brickland1/will-it-scale/powersave-pwrite1
3245 +18.2% 3835 GEO-MEAN sched_debug.cfs_rq[69]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3248 ± 2% +17.9% 3828 ± 3% brickland1/will-it-scale/powersave-pwrite1
3247 +17.9% 3828 GEO-MEAN sched_debug.cfs_rq[70]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
25.35 ± 1% -15.1% 21.52 ± 1% brickland1/will-it-scale/powersave-pwrite1
25.35 -15.1% 21.52 GEO-MEAN turbostat.%pc2
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3239 ± 2% +18.6% 3841 ± 3% brickland1/will-it-scale/powersave-pwrite1
3239 +18.6% 3841 GEO-MEAN sched_debug.cfs_rq[76]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3254 ± 2% +17.4% 3821 ± 4% brickland1/will-it-scale/powersave-pwrite1
3254 +17.4% 3821 GEO-MEAN sched_debug.cfs_rq[71]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
78 ± 15% +30.0% 101 ± 12% brickland1/will-it-scale/powersave-pwrite1
78 +30.0% 101 GEO-MEAN sched_debug.cpu#118.ttwu_count
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3247 ± 2% +15.2% 3742 ± 2% brickland1/will-it-scale/powersave-pwrite1
3247 +15.2% 3742 GEO-MEAN sched_debug.cfs_rq[61]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3236 ± 2% +17.9% 3814 ± 4% brickland1/will-it-scale/powersave-pwrite1
3235 +17.9% 3814 GEO-MEAN sched_debug.cfs_rq[72]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
421 ± 5% +11.4% 470 ± 4% brickland1/will-it-scale/powersave-pwrite1
421 +11.4% 470 GEO-MEAN sched_debug.cfs_rq[91]:/.tg_runnable_contrib
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3249 ± 2% +15.0% 3735 ± 3% brickland1/will-it-scale/powersave-pwrite1
3249 +15.0% 3735 GEO-MEAN sched_debug.cfs_rq[62]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3246 ± 2% +17.5% 3815 ± 3% brickland1/will-it-scale/powersave-pwrite1
3246 +17.5% 3815 GEO-MEAN sched_debug.cfs_rq[74]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3255 ± 2% +16.7% 3799 ± 3% brickland1/will-it-scale/powersave-pwrite1
3254 +16.7% 3799 GEO-MEAN sched_debug.cfs_rq[68]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
19474 ± 5% +11.2% 21665 ± 4% brickland1/will-it-scale/powersave-pwrite1
19474 +11.2% 21665 GEO-MEAN sched_debug.cfs_rq[91]:/.avg->runnable_avg_sum
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3239 ± 2% +17.7% 3813 ± 3% brickland1/will-it-scale/powersave-pwrite1
3239 +17.7% 3813 GEO-MEAN sched_debug.cfs_rq[75]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3260 ± 3% +14.6% 3735 ± 2% brickland1/will-it-scale/powersave-pwrite1
3260 +14.6% 3735 GEO-MEAN sched_debug.cfs_rq[59]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3244 ± 3% +17.0% 3797 ± 3% brickland1/will-it-scale/powersave-pwrite1
3244 +17.0% 3796 GEO-MEAN sched_debug.cfs_rq[73]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
5.81 ± 5% -10.1% 5.22 ± 2% brickland1/will-it-scale/powersave-pwrite1
5.81 -10.1% 5.22 GEO-MEAN perf-profile.cpu-cycles.read_hpet.ktime_get_update_offsets.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3319 ± 2% +13.7% 3773 ± 2% brickland1/will-it-scale/powersave-pwrite1
3318 +13.7% 3773 GEO-MEAN sched_debug.cfs_rq[46]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3978 ± 4% +9.8% 4369 ± 4% brickland1/will-it-scale/powersave-pwrite1
3978 +9.8% 4369 GEO-MEAN sched_debug.cpu#90.curr->pid
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3257 ± 2% +14.8% 3740 ± 2% brickland1/will-it-scale/powersave-pwrite1
3257 +14.8% 3740 GEO-MEAN sched_debug.cfs_rq[60]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3321 ± 2% +13.7% 3775 ± 2% brickland1/will-it-scale/powersave-pwrite1
3321 +13.7% 3774 GEO-MEAN sched_debug.cfs_rq[47]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3318 ± 3% +13.4% 3764 ± 3% brickland1/will-it-scale/powersave-pwrite1
3318 +13.4% 3764 GEO-MEAN sched_debug.cfs_rq[44]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3272 ± 3% +14.2% 3736 ± 2% brickland1/will-it-scale/powersave-pwrite1
3271 +14.2% 3736 GEO-MEAN sched_debug.cfs_rq[58]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3301 ± 3% +13.9% 3759 ± 3% brickland1/will-it-scale/powersave-pwrite1
3301 +13.9% 3759 GEO-MEAN sched_debug.cfs_rq[50]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3294 ± 3% +13.5% 3740 ± 3% brickland1/will-it-scale/powersave-pwrite1
3294 +13.5% 3740 GEO-MEAN sched_debug.cfs_rq[53]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1.37 ± 5% +14.1% 1.57 ± 4% brickland1/will-it-scale/powersave-pwrite1
1.37 +14.1% 1.57 GEO-MEAN perf-profile.cpu-cycles.apic_timer_interrupt.fsnotify.vfs_write.sys_pwrite64.system_call_fastpath
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3241 ± 3% +16.2% 3766 ± 3% brickland1/will-it-scale/powersave-pwrite1
3241 +16.2% 3765 GEO-MEAN sched_debug.cfs_rq[65]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3323 ± 2% +13.3% 3763 ± 3% brickland1/will-it-scale/powersave-pwrite1
3323 +13.3% 3763 GEO-MEAN sched_debug.cfs_rq[49]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3290 ± 2% +14.0% 3752 ± 3% brickland1/will-it-scale/powersave-pwrite1
3290 +14.0% 3751 GEO-MEAN sched_debug.cfs_rq[56]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3240 ± 2% +15.9% 3755 ± 3% brickland1/will-it-scale/powersave-pwrite1
3240 +15.9% 3754 GEO-MEAN sched_debug.cfs_rq[66]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3242 ± 2% +14.5% 3712 ± 3% brickland1/will-it-scale/powersave-pwrite1
3242 +14.5% 3712 GEO-MEAN sched_debug.cfs_rq[63]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3307 ± 3% +13.9% 3768 ± 3% brickland1/will-it-scale/powersave-pwrite1
3307 +13.9% 3768 GEO-MEAN sched_debug.cfs_rq[51]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3255 ± 2% +15.6% 3762 ± 3% brickland1/will-it-scale/powersave-pwrite1
3255 +15.6% 3762 GEO-MEAN sched_debug.cfs_rq[67]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3301 ± 3% +13.4% 3744 ± 3% brickland1/will-it-scale/powersave-pwrite1
3301 +13.4% 3744 GEO-MEAN sched_debug.cfs_rq[55]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3315 ± 3% +13.7% 3769 ± 3% brickland1/will-it-scale/powersave-pwrite1
3315 +13.7% 3769 GEO-MEAN sched_debug.cfs_rq[43]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3481 ± 4% +10.4% 3842 ± 3% brickland1/will-it-scale/powersave-pwrite1
3481 +10.4% 3842 GEO-MEAN sched_debug.cfs_rq[35]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
14790 ± 1% +17.6% 17396 ± 10% brickland1/will-it-scale/powersave-pwrite1
14790 +17.6% 17396 GEO-MEAN sched_debug.cfs_rq[47]:/.exec_clock
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
5.07 ± 8% +13.7% 5.77 ± 1% brickland1/will-it-scale/powersave-pwrite1
5.07 +13.7% 5.77 GEO-MEAN perf-profile.cpu-cycles.ktime_get.tick_sched_timer.__run_hrtimer.hrtimer_interrupt.local_apic_timer_interrupt
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3.90 ± 2% -11.1% 3.47 ± 1% brickland1/will-it-scale/powersave-pwrite1
3.90 -11.1% 3.47 GEO-MEAN perf-profile.cpu-cycles.__sb_start_write.vfs_write.sys_pwrite64.system_call_fastpath.__libc_pwrite
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3295 ± 3% +13.5% 3739 ± 2% brickland1/will-it-scale/powersave-pwrite1
3295 +13.5% 3739 GEO-MEAN sched_debug.cfs_rq[54]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3274 ± 2% +14.1% 3737 ± 3% brickland1/will-it-scale/powersave-pwrite1
3274 +14.1% 3737 GEO-MEAN sched_debug.cfs_rq[57]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3449 ± 4% +9.0% 3761 ± 3% brickland1/will-it-scale/powersave-pwrite1
3449 +9.0% 3761 GEO-MEAN sched_debug.cfs_rq[42]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
73 ± 2% +21.0% 88 ± 18% brickland1/will-it-scale/powersave-pwrite1
73 +21.0% 88 GEO-MEAN sched_debug.cpu#87.ttwu_local
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3447 ± 4% +10.6% 3811 ± 3% brickland1/will-it-scale/powersave-pwrite1
3447 +10.6% 3811 GEO-MEAN sched_debug.cfs_rq[39]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3327 ± 2% +13.4% 3772 ± 2% brickland1/will-it-scale/powersave-pwrite1
3327 +13.4% 3772 GEO-MEAN sched_debug.cfs_rq[48]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3493 ± 4% +9.8% 3834 ± 2% brickland1/will-it-scale/powersave-pwrite1
3493 +9.8% 3834 GEO-MEAN sched_debug.cfs_rq[29]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3247 ± 3% +15.5% 3752 ± 3% brickland1/will-it-scale/powersave-pwrite1
3247 +15.5% 3752 GEO-MEAN sched_debug.cfs_rq[64]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3498 ± 4% +9.7% 3838 ± 2% brickland1/will-it-scale/powersave-pwrite1
3498 +9.7% 3838 GEO-MEAN sched_debug.cfs_rq[30]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3309 ± 3% +13.3% 3749 ± 3% brickland1/will-it-scale/powersave-pwrite1
3309 +13.3% 3749 GEO-MEAN sched_debug.cfs_rq[52]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3465 ± 5% +10.0% 3812 ± 3% brickland1/will-it-scale/powersave-pwrite1
3465 +10.0% 3812 GEO-MEAN sched_debug.cfs_rq[37]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1.34 ± 4% +12.6% 1.51 ± 4% brickland1/will-it-scale/powersave-pwrite1
1.34 +12.6% 1.51 GEO-MEAN perf-profile.cpu-cycles.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.fsnotify.vfs_write
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3449 ± 5% +10.4% 3806 ± 3% brickland1/will-it-scale/powersave-pwrite1
3449 +10.4% 3806 GEO-MEAN sched_debug.cfs_rq[38]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3492 ± 4% +9.5% 3824 ± 3% brickland1/will-it-scale/powersave-pwrite1
3492 +9.5% 3824 GEO-MEAN sched_debug.cfs_rq[31]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3471 ± 5% +10.1% 3820 ± 3% brickland1/will-it-scale/powersave-pwrite1
3471 +10.1% 3820 GEO-MEAN sched_debug.cfs_rq[36]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3498 ± 4% +9.2% 3819 ± 3% brickland1/will-it-scale/powersave-pwrite1
3498 +9.2% 3819 GEO-MEAN sched_debug.cfs_rq[27]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3478 ± 5% +10.0% 3827 ± 3% brickland1/will-it-scale/powersave-pwrite1
3478 +10.0% 3827 GEO-MEAN sched_debug.cfs_rq[34]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3474 ± 4% +10.3% 3834 ± 3% brickland1/will-it-scale/powersave-pwrite1
3474 +10.3% 3834 GEO-MEAN sched_debug.cfs_rq[33]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
5.04 ± 8% +12.8% 5.68 ± 1% brickland1/will-it-scale/powersave-pwrite1
5.04 +12.8% 5.68 GEO-MEAN perf-profile.cpu-cycles.read_hpet.ktime_get.tick_sched_timer.__run_hrtimer.hrtimer_interrupt
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3452 ± 4% +9.4% 3776 ± 3% brickland1/will-it-scale/powersave-pwrite1
3452 +9.4% 3776 GEO-MEAN sched_debug.cfs_rq[40]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3455 ± 4% +9.2% 3772 ± 4% brickland1/will-it-scale/powersave-pwrite1
3455 +9.2% 3771 GEO-MEAN sched_debug.cfs_rq[41]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3478 ± 5% +9.6% 3811 ± 3% brickland1/will-it-scale/powersave-pwrite1
3478 +9.6% 3811 GEO-MEAN sched_debug.cfs_rq[32]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1.83 ± 5% -11.3% 1.62 ± 5% brickland1/will-it-scale/powersave-pwrite1
1.83 -11.3% 1.62 GEO-MEAN perf-profile.cpu-cycles.rw_verify_area.vfs_write.sys_pwrite64.system_call_fastpath.__libc_pwrite
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3321 ± 3% +12.8% 3747 ± 3% brickland1/will-it-scale/powersave-pwrite1
3321 +12.8% 3747 GEO-MEAN sched_debug.cfs_rq[45]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1.34 ± 4% +12.7% 1.51 ± 4% brickland1/will-it-scale/powersave-pwrite1
1.34 +12.7% 1.51 GEO-MEAN perf-profile.cpu-cycles.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.fsnotify
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1.35 ± 5% +12.9% 1.52 ± 4% brickland1/will-it-scale/powersave-pwrite1
1.35 +12.9% 1.52 GEO-MEAN perf-profile.cpu-cycles.smp_apic_timer_interrupt.apic_timer_interrupt.fsnotify.vfs_write.sys_pwrite64
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3507 ± 4% +9.0% 3822 ± 2% brickland1/will-it-scale/powersave-pwrite1
3507 +9.0% 3822 GEO-MEAN sched_debug.cfs_rq[28]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3537 ± 4% +9.1% 3858 ± 3% brickland1/will-it-scale/powersave-pwrite1
3537 +9.1% 3858 GEO-MEAN sched_debug.cfs_rq[21]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1.18 ± 5% -8.0% 1.09 ± 6% brickland1/will-it-scale/powersave-pwrite1
1.18 -8.0% 1.09 GEO-MEAN perf-profile.cpu-cycles.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.__libc_pwrite
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1.89 ± 1% -10.3% 1.70 ± 1% brickland1/will-it-scale/powersave-pwrite1
1.89 -10.3% 1.70 GEO-MEAN perf-profile.cpu-cycles.system_call.__libc_pwrite
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3519 ± 5% +9.5% 3855 ± 3% brickland1/will-it-scale/powersave-pwrite1
3519 +9.5% 3855 GEO-MEAN sched_debug.cfs_rq[19]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
5.84 ± 4% -7.7% 5.40 ± 2% brickland1/will-it-scale/powersave-pwrite1
5.84 -7.7% 5.40 GEO-MEAN perf-profile.cpu-cycles.ktime_get_update_offsets.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
16504 ± 0% -9.7% 14896 ± 1% brickland1/will-it-scale/powersave-pwrite1
16504 -9.7% 14896 GEO-MEAN slabinfo.kmalloc-192.active_objs
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
16506 ± 0% -9.8% 14897 ± 1% brickland1/will-it-scale/powersave-pwrite1
16506 -9.8% 14897 GEO-MEAN slabinfo.kmalloc-192.num_objs
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
32046 ± 0% +11.3% 35673 ± 0% brickland1/will-it-scale/powersave-pwrite1
32046 +11.3% 35673 GEO-MEAN sched_debug.cfs_rq[91]:/.exec_clock
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3521 ± 5% +9.4% 3853 ± 3% brickland1/will-it-scale/powersave-pwrite1
3521 +9.4% 3853 GEO-MEAN sched_debug.cfs_rq[20]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1.18 ± 5% -7.8% 1.09 ± 6% brickland1/will-it-scale/powersave-pwrite1
1.18 -7.8% 1.09 GEO-MEAN perf-profile.cpu-cycles.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.__libc_pwrite
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
853455 ± 1% +10.8% 945556 ± 1% brickland1/will-it-scale/powersave-pwrite1
853455 +10.8% 945556 GEO-MEAN sched_debug.cpu#76.avg_idle
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3519 ± 4% +8.2% 3807 ± 3% brickland1/will-it-scale/powersave-pwrite1
3519 +8.2% 3807 GEO-MEAN sched_debug.cfs_rq[26]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1.19 ± 4% -7.9% 1.09 ± 5% brickland1/will-it-scale/powersave-pwrite1
1.19 -7.9% 1.09 GEO-MEAN perf-profile.cpu-cycles.smp_apic_timer_interrupt.apic_timer_interrupt.__libc_pwrite
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
842339 ± 2% +11.3% 937538 ± 1% brickland1/will-it-scale/powersave-pwrite1
842339 +11.3% 937537 GEO-MEAN sched_debug.cpu#64.avg_idle
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
15887 ± 4% +8.0% 17162 ± 3% brickland1/will-it-scale/powersave-pwrite1
15887 +8.0% 17162 GEO-MEAN sched_debug.cfs_rq[90]:/.avg->runnable_avg_sum
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
344 ± 4% +7.7% 370 ± 3% brickland1/will-it-scale/powersave-pwrite1
344 +7.7% 370 GEO-MEAN sched_debug.cfs_rq[90]:/.tg_runnable_contrib
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1.21 ± 4% -7.4% 1.12 ± 6% brickland1/will-it-scale/powersave-pwrite1
1.21 -7.4% 1.12 GEO-MEAN perf-profile.cpu-cycles.apic_timer_interrupt.__libc_pwrite
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
835125 ± 2% +11.7% 932608 ± 2% brickland1/will-it-scale/powersave-pwrite1
835125 +11.7% 932608 GEO-MEAN sched_debug.cpu#69.avg_idle
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3547 ± 4% +9.3% 3879 ± 3% brickland1/will-it-scale/powersave-pwrite1
3547 +9.3% 3878 GEO-MEAN sched_debug.cfs_rq[22]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
835405 ± 1% +10.3% 921248 ± 2% brickland1/will-it-scale/powersave-pwrite1
835405 +10.3% 921247 GEO-MEAN sched_debug.cpu#73.avg_idle
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
412312 ± 29% +32.7% 547007 ± 1% brickland1/will-it-scale/powersave-pwrite1
412312 +32.7% 547007 GEO-MEAN softirqs.SCHED
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
844305 ± 1% +10.6% 933588 ± 2% brickland1/will-it-scale/powersave-pwrite1
844305 +10.6% 933587 GEO-MEAN sched_debug.cpu#65.avg_idle
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
833909 ± 1% +11.6% 931037 ± 2% brickland1/will-it-scale/powersave-pwrite1
833909 +11.6% 931037 GEO-MEAN sched_debug.cpu#68.avg_idle
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
12247 ± 2% +6.3% 13016 ± 4% brickland1/will-it-scale/powersave-pwrite1
12247 +6.3% 13016 GEO-MEAN sched_debug.cfs_rq[105]:/.avg->runnable_avg_sum
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
3526 ± 4% +8.0% 3810 ± 3% brickland1/will-it-scale/powersave-pwrite1
3526 +8.0% 3810 GEO-MEAN sched_debug.cfs_rq[25]:/.tg_load_avg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
0.00 ± 0% -100.0% 0.00 ± 0% brickland1/will-it-scale/powersave-pwrite1
0.00 -100.0% 0.00 GEO-MEAN energy.energy-pkg
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
0.00 ± 0% -100.0% 0.00 ± 0% brickland1/will-it-scale/powersave-pwrite1
0.00 -100.0% 0.00 GEO-MEAN energy.energy-cores
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1220 ± 0% -56.2% 534 ± 0% brickland1/will-it-scale/powersave-pwrite1
1220 -56.2% 534 GEO-MEAN turbostat.Cor_W
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1233 ± 0% -55.6% 547 ± 0% brickland1/will-it-scale/powersave-pwrite1
1233 -55.6% 547 GEO-MEAN turbostat.Pkg_W
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
66125 ± 0% -12.5% 57879 ± 0% brickland1/will-it-scale/powersave-pwrite1
66125 -12.5% 57878 GEO-MEAN vmstat.system.in
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
495 ± 1% -6.0% 465 ± 2% brickland1/will-it-scale/powersave-pwrite1
495 -6.0% 465 GEO-MEAN time.user_time
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
1649 ± 0% +3.1% 1700 ± 0% brickland1/will-it-scale/powersave-pwrite1
1649 +3.1% 1699 GEO-MEAN vmstat.system.cs
v3.14-rc4 f3ca4164529b875374c410193b
---------------- --------------------------
311 ± 0% +1.0% 314 ± 0% brickland1/will-it-scale/powersave-pwrite1
311 +1.0% 314 GEO-MEAN time.elapsed_time
brickland1: Brickland Ivy Bridge-EX
Memory: 128G
testbox: fake
Memory: 12G
time.elapsed_time
314.5 O+---------------O----------------O----------------O----------------O
| |
314 ++ |
| |
313.5 ++ |
| |
313 ++ |
| |
312.5 ++ |
| |
312 ++ |
| |
311.5 ++ |
*................*................*................*................*
311 ++------------------------------------------------------------------+
will-it-scale.per_process_ops
650000 ++-----------------------------------------------------------------+
*................*................*...............*................*
600000 ++ |
| |
550000 ++ |
| |
500000 ++ |
| |
450000 ++ |
| |
400000 ++ |
| |
350000 ++ |
O O O O O
300000 ++-----------------------------------------------------------------+
will-it-scale.per_thread_ops
600000 ++-----------------------------------------------------------------+
*................*................*...............*................*
550000 ++ |
| |
500000 ++ |
| |
450000 ++ |
| |
400000 ++ |
| |
350000 ++ |
| |
300000 O+ O O O O
| |
250000 ++-----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
apt-get install ruby ruby-oj
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/setup-local job.yaml # the job file attached in this email
bin/run-local job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Fengguang
_______________________________________________
LKP mailing list
LKP(a)linux.intel.com
[virtio_blk] kernel BUG at drivers/virtio/virtio.c:116!
by kernel test robot
FYI, we noticed the below changes on
git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git vhost-next
commit 067da47bd4572e15532e303b6ced799540654589 ("virtio_blk: v1.0 support")
+------------------------------------------+------------+------------+
| | bea3a62baf | 067da47bd4 |
+------------------------------------------+------------+------------+
| boot_successes | 15 | 10 |
| early-boot-hang | 1 | |
| boot_failures | 0 | 5 |
| kernel_BUG_at_drivers/virtio/virtio.c | 0 | 5 |
| invalid_opcode | 0 | 5 |
| RIP:virtio_check_driver_offered_feature | 0 | 5 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 5 |
| backtrace:init | 0 | 5 |
| backtrace:kernel_init_freeable | 0 | 5 |
+------------------------------------------+------------+------------+
Thanks,
Fengguang
[mm] 3193913ce62: 4.7% will-it-scale.per_process_ops
by kernel test robot
FYI, we noticed the below changes on
commit 3193913ce62c63056bc67a6ae378beaf494afa66 ("mm: page_alloc: default node-ordering on 64-bit NUMA, zone-ordering on 32-bit")
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3 testbox/testcase/testparams
---------------- -------------------------- ---------------------------
%stddev %change %stddev
\ | \
0.08 ± 0% +6.2% 0.08 ± 0% lkp-wsx01/will-it-scale/performance-page_fault1
0.08 +6.2% 0.08 GEO-MEAN will-it-scale.scalability
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
238085 ± 0% +4.7% 249169 ± 0% lkp-wsx01/will-it-scale/performance-page_fault1
238085 +4.7% 249169 GEO-MEAN will-it-scale.per_process_ops
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
210740 ± 0% +2.2% 215479 ± 0% lkp-wsx01/will-it-scale/performance-page_fault1
210740 +2.2% 215479 GEO-MEAN will-it-scale.per_thread_ops
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
2733 ± 0% -1.6% 2689 ± 0% client7/netperf/performance-900s-200%-TCP_STREAM
2733 -1.6% 2689 GEO-MEAN netperf.Throughput_Mbps
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
4 ± 40% +2e+09% 92498119 ± 0% lkp-wsx01/will-it-scale/performance-page_fault1
4 +2e+09% 92498119 GEO-MEAN proc-vmstat.pgalloc_dma32
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
3 ± 45% -73.7% 1 ± 0% lkp-wsx01/will-it-scale/performance-page_fault1
3 -73.7% 1 GEO-MEAN sched_debug.cfs_rq[33]:/.runnable_load_avg
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
4069 ± 33% +201.3% 12262 ± 49% lkp-wsx01/will-it-scale/performance-page_fault1
4069 +201.3% 12262 GEO-MEAN sched_debug.cpu#25.sched_goidle
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
8353 ± 33% +195.8% 24712 ± 48% lkp-wsx01/will-it-scale/performance-page_fault1
8353 +195.8% 24712 GEO-MEAN sched_debug.cpu#25.nr_switches
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
8388 ± 33% +195.2% 24763 ± 48% lkp-wsx01/will-it-scale/performance-page_fault1
8388 +195.2% 24763 GEO-MEAN sched_debug.cpu#25.sched_count
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
544767 ± 41% -54.8% 246478 ± 47% client7/netperf/performance-900s-200%-TCP_STREAM
544767 -54.8% 246478 GEO-MEAN cpuidle.C1-NHM.time
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
3 ± 19% -57.9% 1 ± 30% lkp-wsx01/will-it-scale/performance-page_fault1
3 -57.9% 1 GEO-MEAN sched_debug.cfs_rq[30]:/.runnable_load_avg
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
4 ± 15% -55.0% 1 ± 41% lkp-wsx01/will-it-scale/performance-page_fault1
4 -55.0% 1 GEO-MEAN sched_debug.cpu#30.cpu_load[0]
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
4 ± 35% -60.9% 1 ± 41% lkp-wsx01/will-it-scale/performance-page_fault1
4 -60.9% 1 GEO-MEAN sched_debug.cpu#22.cpu_load[0]
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
3784 ± 35% +138.2% 9015 ± 49% lkp-wsx01/will-it-scale/performance-page_fault1
3784 +138.2% 9015 GEO-MEAN sched_debug.cpu#29.sched_goidle
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
1704 ± 36% +82.4% 3109 ± 32% lkp-wsx01/will-it-scale/performance-page_fault1
1704 +82.4% 3109 GEO-MEAN sched_debug.cpu#22.ttwu_local
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
7771 ± 34% +134.1% 18194 ± 48% lkp-wsx01/will-it-scale/performance-page_fault1
7770 +134.1% 18194 GEO-MEAN sched_debug.cpu#29.nr_switches
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
7806 ± 34% +133.3% 18216 ± 48% lkp-wsx01/will-it-scale/performance-page_fault1
7806 +133.3% 18216 GEO-MEAN sched_debug.cpu#29.sched_count
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
44 ± 16% -49.5% 22 ± 15% client7/netperf/performance-900s-200%-TCP_STREAM
44 -49.5% 22 GEO-MEAN sched_debug.cpu#11.nr_uninterruptible
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
1679 ± 12% -32.6% 1131 ± 13% lkp-wsx01/will-it-scale/performance-page_fault1
1678 -32.6% 1131 GEO-MEAN sched_debug.cpu#33.curr->pid
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
29 ± 14% -34.9% 19 ± 19% lkp-wsx01/will-it-scale/performance-page_fault1
29 -34.9% 18 GEO-MEAN sched_debug.cfs_rq[51]:/.nr_spread_over
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
1516 ± 24% -23.5% 1160 ± 21% lkp-wsx01/will-it-scale/performance-page_fault1
1516 -23.5% 1160 GEO-MEAN sched_debug.cpu#37.curr->pid
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
4991 ± 15% +42.4% 7106 ± 21% lkp-wsx01/will-it-scale/performance-page_fault1
4991 +42.4% 7106 GEO-MEAN sched_debug.cpu#22.ttwu_count
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
18 ± 24% +42.4% 26 ± 23% lkp-wsx01/will-it-scale/performance-page_fault1
18 +42.4% 26 GEO-MEAN sched_debug.cpu#51.cpu_load[3]
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
2077 ± 15% -34.3% 1365 ± 16% lkp-wsx01/will-it-scale/performance-page_fault1
2077 -34.3% 1365 GEO-MEAN sched_debug.cpu#30.curr->pid
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
6608 ± 13% +30.9% 8648 ± 21% lkp-wsx01/will-it-scale/performance-page_fault1
6608 +30.9% 8648 GEO-MEAN sched_debug.cpu#22.sched_goidle
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
13690 ± 13% +29.8% 17766 ± 20% lkp-wsx01/will-it-scale/performance-page_fault1
13690 +29.8% 17765 GEO-MEAN sched_debug.cpu#22.nr_switches
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
13732 ± 13% +29.6% 17792 ± 20% lkp-wsx01/will-it-scale/performance-page_fault1
13732 +29.6% 17792 GEO-MEAN sched_debug.cpu#22.sched_count
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
6728 ± 17% -21.9% 5252 ± 10% client7/netperf/performance-900s-200%-TCP_STREAM
6728 -21.9% 5252 GEO-MEAN cpuidle.C6-NHM.usage
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
5777 ± 16% +46.6% 8469 ± 23% lkp-wsx01/will-it-scale/performance-page_fault1
5777 +46.6% 8469 GEO-MEAN sched_debug.cpu#21.sched_goidle
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
12608 ± 14% +42.5% 17962 ± 22% lkp-wsx01/will-it-scale/performance-page_fault1
12607 +42.5% 17962 GEO-MEAN sched_debug.cpu#21.nr_switches
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
12665 ± 14% +42.0% 17983 ± 22% lkp-wsx01/will-it-scale/performance-page_fault1
12665 +42.0% 17983 GEO-MEAN sched_debug.cpu#21.sched_count
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
41168114 ± 17% -27.5% 29866057 ± 7% client7/netperf/performance-900s-200%-TCP_STREAM
41168114 -27.5% 29866057 GEO-MEAN cpuidle.C6-NHM.time
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
1848 ± 19% -26.0% 1367 ± 23% lkp-wsx01/will-it-scale/performance-page_fault1
1848 -26.0% 1367 GEO-MEAN sched_debug.cpu#29.curr->pid
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
59132 ± 0% -25.8% 43889 ± 0% client7/netperf/performance-900s-200%-TCP_STREAM
59132 -25.8% 43889 GEO-MEAN numa-vmstat.node0.numa_interleave
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
8 ± 21% -34.9% 5 ± 18% lkp-wsx01/will-it-scale/performance-page_fault1
8 -34.9% 5 GEO-MEAN sched_debug.cpu#65.cpu_load[0]
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
7 ± 9% -25.6% 5 ± 16% lkp-wsx01/will-it-scale/performance-page_fault1
7 -25.6% 5 GEO-MEAN sched_debug.cpu#65.cpu_load[1]
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
6 ± 12% +29.0% 8 ± 13% lkp-wsx01/will-it-scale/performance-page_fault1
6 +29.0% 7 GEO-MEAN sched_debug.cpu#19.cpu_load[0]
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
2464 ± 9% -24.6% 1857 ± 19% lkp-wsx01/will-it-scale/performance-page_fault1
2464 -24.6% 1857 GEO-MEAN sched_debug.cpu#21.curr->pid
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
642 ± 4% -20.5% 511 ± 9% lkp-wsx01/will-it-scale/performance-page_fault1
642 -20.5% 511 GEO-MEAN slabinfo.ip_fib_trie.num_objs
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
642 ± 4% -20.5% 511 ± 9% lkp-wsx01/will-it-scale/performance-page_fault1
642 -20.5% 511 GEO-MEAN slabinfo.ip_fib_trie.active_objs
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
314 ± 6% -22.6% 243 ± 15% lkp-wsx01/will-it-scale/performance-page_fault1
314 -22.6% 243 GEO-MEAN sched_debug.cfs_rq[79]:/.tg_runnable_contrib
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
14409 ± 6% -22.3% 11197 ± 14% lkp-wsx01/will-it-scale/performance-page_fault1
14409 -22.3% 11197 GEO-MEAN sched_debug.cfs_rq[79]:/.avg->runnable_avg_sum
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
203 ± 9% -13.8% 175 ± 8% lkp-wsx01/will-it-scale/performance-page_fault1
203 -13.8% 175 GEO-MEAN sched_debug.cfs_rq[35]:/.tg_runnable_contrib
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
9346 ± 9% -13.6% 8078 ± 8% lkp-wsx01/will-it-scale/performance-page_fault1
9346 -13.6% 8078 GEO-MEAN sched_debug.cfs_rq[35]:/.avg->runnable_avg_sum
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
0.24 ± 16% -26.9% 0.17 ± 5% client7/netperf/performance-900s-200%-TCP_STREAM
0.24 -26.9% 0.17 GEO-MEAN turbostat.%c6
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
17920 ± 7% -13.3% 15538 ± 7% lkp-wsx01/will-it-scale/performance-page_fault1
17920 -13.3% 15538 GEO-MEAN sched_debug.cfs_rq[65]:/.avg->runnable_avg_sum
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
389 ± 7% -13.2% 338 ± 7% lkp-wsx01/will-it-scale/performance-page_fault1
389 -13.2% 338 GEO-MEAN sched_debug.cfs_rq[65]:/.tg_runnable_contrib
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
1843 ± 1% -23.6% 1408 ± 2% client7/netperf/performance-900s-200%-TCP_STREAM
1295 ± 3% -10.7% 1156 ± 3% lkp-wsx01/will-it-scale/performance-page_fault1
1545 -17.4% 1275 GEO-MEAN numa-vmstat.node0.nr_alloc_batch
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
426745 ± 12% -18.6% 347562 ± 11% lkp-wsx01/will-it-scale/performance-page_fault1
426745 -18.6% 347562 GEO-MEAN sched_debug.cfs_rq[35]:/.min_vruntime
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
211 ± 4% -13.4% 182 ± 7% lkp-wsx01/will-it-scale/performance-page_fault1
211 -13.4% 182 GEO-MEAN sched_debug.cpu#73.ttwu_local
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
7560956 ± 0% +18.7% 8973197 ± 0% lkp-wsx01/will-it-scale/performance-page_fault1
7560956 +18.7% 8973197 GEO-MEAN numa-numastat.node0.local_node
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
7566122 ± 0% +18.7% 8978364 ± 0% lkp-wsx01/will-it-scale/performance-page_fault1
7566122 +18.7% 8978364 GEO-MEAN numa-numastat.node0.numa_hit
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
329055 ± 9% +23.0% 404724 ± 6% lkp-wsx01/will-it-scale/performance-page_fault1
329055 +23.0% 404724 GEO-MEAN softirqs.SCHED
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
3822373 ± 0% +17.6% 4496660 ± 0% lkp-wsx01/will-it-scale/performance-page_fault1
3822373 +17.6% 4496660 GEO-MEAN numa-vmstat.node0.numa_hit
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
3817085 ± 0% +17.7% 4490957 ± 0% lkp-wsx01/will-it-scale/performance-page_fault1
3817085 +17.7% 4490957 GEO-MEAN numa-vmstat.node0.numa_local
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
2.358e+09 ± 0% -15.0% 2.004e+09 ± 0% client7/netperf/performance-900s-200%-TCP_STREAM
2.358e+09 -15.0% 2.004e+09 GEO-MEAN proc-vmstat.pgalloc_normal
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
1777 ± 13% +34.4% 2387 ± 16% lkp-wsx01/will-it-scale/performance-page_fault1
1776 +34.4% 2387 GEO-MEAN sched_debug.cpu#74.sched_goidle
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
663710 ± 10% +22.1% 810425 ± 5% client7/netperf/performance-900s-200%-TCP_STREAM
663710 +22.1% 810425 GEO-MEAN sched_debug.cpu#9.avg_idle
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
3338 ± 0% -13.8% 2876 ± 1% client7/netperf/performance-900s-200%-TCP_STREAM
3338 -13.8% 2876 GEO-MEAN proc-vmstat.nr_alloc_batch
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
47807 ± 6% +19.2% 56984 ± 11% lkp-wsx01/will-it-scale/performance-page_fault1
47807 +19.2% 56984 GEO-MEAN sched_debug.cpu#25.nr_load_updates
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
3741 ± 13% +33.3% 4985 ± 16% lkp-wsx01/will-it-scale/performance-page_fault1
3740 +33.3% 4985 GEO-MEAN sched_debug.cpu#74.nr_switches
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
295529 ± 3% +16.2% 343525 ± 3% lkp-wsx01/will-it-scale/performance-page_fault1
295529 +16.2% 343525 GEO-MEAN softirqs.RCU
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
3374 ± 4% -9.0% 3070 ± 5% client7/netperf/performance-900s-200%-TCP_STREAM
3374 -9.0% 3070 GEO-MEAN sched_debug.cpu#4.curr->pid
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
3900 ± 12% +31.4% 5124 ± 15% lkp-wsx01/will-it-scale/performance-page_fault1
3900 +31.4% 5124 GEO-MEAN sched_debug.cpu#74.sched_count
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
14175 ± 3% -19.5% 11408 ± 13% lkp-wsx01/will-it-scale/performance-page_fault1
14175 -19.5% 11408 GEO-MEAN sched_debug.cfs_rq[78]:/.avg->runnable_avg_sum
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
308 ± 3% -19.5% 248 ± 13% lkp-wsx01/will-it-scale/performance-page_fault1
308 -19.5% 248 GEO-MEAN sched_debug.cfs_rq[78]:/.tg_runnable_contrib
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
605032 ± 8% +19.7% 723984 ± 11% client7/netperf/performance-900s-200%-TCP_STREAM
605032 +19.7% 723983 GEO-MEAN sched_debug.cpu#6.avg_idle
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
493721 ± 2% -10.8% 440327 ± 3% lkp-wsx01/will-it-scale/performance-page_fault1
493721 -10.8% 440327 GEO-MEAN sched_debug.cfs_rq[31]:/.min_vruntime
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
691172 ± 9% -14.7% 589776 ± 1% lkp-wsx01/will-it-scale/performance-page_fault1
691172 -14.7% 589776 GEO-MEAN sched_debug.cfs_rq[21]:/.min_vruntime
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
10083100 ± 0% -9.5% 9128227 ± 0% lkp-wsx01/will-it-scale/performance-page_fault1
10083100 -9.5% 9128227 GEO-MEAN time.maximum_resident_set_size
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
99367 ± 3% +8.2% 107474 ± 5% client7/netperf/performance-900s-200%-TCP_STREAM
99367 +8.2% 107474 GEO-MEAN sched_debug.cpu#11.ttwu_count
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
8369 ± 10% +11.0% 9289 ± 2% lkp-wsx01/will-it-scale/performance-page_fault1
8369 +11.0% 9289 GEO-MEAN sched_debug.cpu#51.sched_count
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
8073 ± 10% +11.1% 8968 ± 2% lkp-wsx01/will-it-scale/performance-page_fault1
8073 +11.1% 8968 GEO-MEAN sched_debug.cpu#51.nr_switches
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
81.24 ± 1% -16.7% 67.64 ± 1% lkp-wsx01/will-it-scale/performance-page_fault1
81.24 -16.7% 67.64 GEO-MEAN time.user_time
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
248649 ± 2% +6.0% 263514 ± 1% client7/netperf/performance-900s-200%-TCP_STREAM
13518 ± 2% -16.2% 11335 ± 2% lkp-wsx01/will-it-scale/performance-page_fault1
57977 -5.7% 54652 GEO-MEAN time.involuntary_context_switches
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
577 ± 0% +1.5% 585 ± 0% client7/netperf/performance-900s-200%-TCP_STREAM
1155 ± 0% -11.4% 1024 ± 0% lkp-wsx01/will-it-scale/performance-page_fault1
816 -5.2% 774 GEO-MEAN time.percent_of_cpu_this_job_got
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
5059 ± 0% +1.5% 5132 ± 0% client7/netperf/performance-900s-200%-TCP_STREAM
3521 ± 0% -11.3% 3123 ± 0% lkp-wsx01/will-it-scale/performance-page_fault1
4220 -5.1% 4004 GEO-MEAN time.system_time
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
6480 ± 0% +5.8% 6855 ± 0% lkp-wsx01/will-it-scale/performance-page_fault1
6480 +5.8% 6855 GEO-MEAN vmstat.system.cs
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
2613356 ± 0% -1.4% 2576765 ± 0% client7/netperf/performance-900s-200%-TCP_STREAM
791288 ± 0% +8.0% 854734 ± 0% lkp-wsx01/will-it-scale/performance-page_fault1
1438026 +3.2% 1484065 GEO-MEAN time.voluntary_context_switches
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
36.32 ± 0% -4.4% 34.72 ± 0% lkp-wsx01/will-it-scale/performance-page_fault1
36.32 -4.4% 34.72 GEO-MEAN turbostat.%c0
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
30544 ± 0% -4.3% 29236 ± 0% lkp-wsx01/will-it-scale/performance-page_fault1
30544 -4.3% 29236 GEO-MEAN vmstat.system.in
97ee4ba7cbd30f18 3193913ce62c63056bc67a6ae3
---------------- --------------------------
7423764 ± 0% +1.9% 7566579 ± 0% lkp-wsx01/will-it-scale/performance-page_fault1
7423764 +1.9% 7566579 GEO-MEAN time.minor_page_faults
lkp-wsx01: Westmere-EX
Memory: 128G
client7: Nehalem EP
Memory: 48G
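The GEO-MEAN rows in the tables above aggregate the per-testbox values into a single number. A minimal sketch of how such a geometric mean can be computed (assuming a plain unweighted geometric mean, which matches the two-machine rows above to within rounding; the `geomean` helper name is illustrative, not from lkp-tests):

```python
import math

def geomean(values):
    """Unweighted geometric mean: exp(mean(log(x))).

    Unlike the arithmetic mean, this is not dominated by the
    largest sample, which is why it is used to combine metrics
    from machines with very different magnitudes.
    """
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Example from the numa-vmstat.node0.nr_alloc_batch rows above:
# parent: 1843 (client7) and 1295 (lkp-wsx01) -> GEO-MEAN ~1545
# child:  1408 (client7) and 1156 (lkp-wsx01) -> GEO-MEAN ~1275
print(geomean([1843, 1295]))
print(geomean([1408, 1156]))
```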
time.system_time
3550 ++------*---------*--------------------------------*-------*---------*
3500 ++.*.*. *.. .*. + .*..*. .*. .*.*.. .*.. : *. .. *..*. ..|
*. * *. *. *. *.*. : * * |
3450 ++ * |
3400 ++ |
| |
3350 ++ |
3300 ++ |
3250 ++ |
| |
3200 ++ O |
3150 ++ O O O O |
O O O O O O O O O O O O O O O O O |
3100 ++ O O |
3050 ++-------------------------------------------------------------------+
time.percent_of_cpu_this_job_got
1180 ++-------------------------------------------------------------------+
| .*. *.. *
1160 ++.*.*. *.. .*..*. .*..*. .*. .*.*.. .*.. : *. .*.*..*. ..|
1140 *+ * *. *. *. *.*. : *. * |
| * |
1120 ++ |
1100 ++ |
| |
1080 ++ |
1060 ++ |
| O |
1040 ++ O O O |
1020 O+ O O O O O O O O O O O O O O O |
| O O O O |
1000 ++-------------------------------------------------------------------+
time.maximum_resident_set_size
1.02e+07 ++---------------------------------------------------------------+
| .*. .*. .*.. .*..*.*..* *..*.*..*.|
1e+07 *+*. *..*.*.*..*.*..* *..*.*..* *.* + + *
| * |
9.8e+06 ++ |
| |
9.6e+06 ++ |
| |
9.4e+06 ++ |
| |
9.2e+06 ++ O O O O O O |
| O O O O O O O O O O O O O O |
9e+06 O+ O O O |
| |
8.8e+06 ++---------------------------------------------------------------+
time.voluntary_context_switches
880000 ++-----------------------------------------------------------------+
| O |
860000 ++ O O O O O O |
O O O O O O O O O O O O O |
| O O O O |
840000 ++ |
| |
820000 ++ |
| |
800000 ++ .* *. *..* *. .*.. |
*.*. : : *.. + + .*. .*..*.*..*. .. *.. .*..*.*..* |
| : : * *. *..* * * *.|
780000 ++ :: *
| * |
760000 ++-----------------------------------------------------------------+
will-it-scale.per_process_ops
255000 ++-----------------------------------------------------------------+
| |
250000 ++ O O O O O O |
O O O O O O O O O O O O O |
| O O O O O |
245000 ++ |
| |
240000 ++ |
*.*..* .*..*.*..*.*..*.*.. .*.. *..*.*..*.*..*.*..*.*..*.*..*.*
235000 ++ : * * + |
| : + * |
| :+ |
230000 ++ * |
| |
225000 ++-----------------------------------------------------------------+
turbostat.%c0
36.6 ++-------------------------------------------------------------------+
36.4 ++ .*. *
| .*. *.. .*..*. .*..*. .*. .*.. .*.. *..*. .*.*..*. ..|
36.2 *+.* * *. *. *..* *.*. + *. * |
36 ++ * |
| |
35.8 ++ |
35.6 ++ |
35.4 ++ |
| |
35.2 ++ |
35 ++ O |
| O O O O |
34.8 O+ O O O O O O O O O O O O O |
34.6 ++----------------O----O-------------------O--O--O-------------------+
numa-numastat.node0.numa_hit
9.2e+06 ++----------------------------------------------------------------+
9e+06 O+ O O O O O O O O O O O O O |
| O O O O O O O O O O |
8.8e+06 ++ |
8.6e+06 ++ |
| |
8.4e+06 ++ |
8.2e+06 ++ |
8e+06 ++ |
| |
7.8e+06 ++ |
7.6e+06 *+*..* .*.. .*..*.*..*.*.. .*.*.. .*.. .*.. .*.|
| : *.*..* *.*.*..* * * * *.*. *
7.4e+06 ++ : .. |
7.2e+06 ++-----*----------------------------------------------------------+
numa-numastat.node0.local_node
9.2e+06 ++----------------------------------------------------------------+
9e+06 O+ O O O O O O O O O O O |
| O O O O O O O O O O O O |
8.8e+06 ++ |
8.6e+06 ++ |
| |
8.4e+06 ++ |
8.2e+06 ++ |
8e+06 ++ |
| |
7.8e+06 ++ |
7.6e+06 *+ .* .*.. .*.*..*.*.. .*.*.. .*.. .*.. .*.|
| *. : *.*..* *.*.*..*.*. * * * *.*. *
7.4e+06 ++ : .. |
7.2e+06 ++-----*----------------------------------------------------------+
vmstat.system.in
30800 ++------------------------------------------------------------------+
| .*. .*.. * |
30600 ++ .*. *.. .*..*.*..* .*.. .*..*. + + *..*. .*.*..*.*..*
30400 *+.* * * * *..* + + *. |
| * |
30200 ++ |
30000 ++ |
| |
29800 ++ |
29600 ++ |
| O |
29400 ++ O O O O |
29200 O+ O O O O O O O O O O O O O |
| O O O O O |
29000 ++------------------------------------------------------------------+
vmstat.system.cs
7100 ++-------------------------------------------------------------------+
| |
7000 ++ O |
6900 O+ O O O O O O O |
| O O O O O O O O O O O O |
6800 ++ O O O |
| |
6700 ++ |
| |
6600 ++ .* *.. *.. * |
6500 ++.* : : * .*.. .*..*.*.. .. .*. .. : |
*. : : + .* .*.*..* *.* *.*..*.*. * : |
6400 ++ : : *. *. *..*
| :: |
6300 ++------*------------------------------------------------------------+
proc-vmstat.pgalloc_dma32
1e+08 ++------------------------------------------------------------------+
9e+07 O+ O O O O O O O O O O O O O O O O O O O O O O O |
| |
8e+07 ++ |
7e+07 ++ |
| |
6e+07 ++ |
5e+07 ++ |
4e+07 ++ |
| |
3e+07 ++ |
2e+07 ++ |
| |
1e+07 ++ |
0 *+-*-*--*-*--*-*--*-*--*-*--*-*--*-*--*-*--*-*--*-*--*-*--*-*--*-*--*
numa-vmstat.node0.numa_hit
4.6e+06 ++----------------------------------------------------------------+
O O O O O O O O O O |
4.5e+06 ++O O O O O O O O O O O O O O |
4.4e+06 ++ |
| |
4.3e+06 ++ |
4.2e+06 ++ |
| |
4.1e+06 ++ |
4e+06 ++ |
| |
3.9e+06 ++ .*.. |
3.8e+06 *+*..* *. .* *.*.*..*.*..*.*..*.*..*.*.*..*.*..*.*..*.*..*.*
| + .. *. |
3.7e+06 ++-----*----------------------------------------------------------+
numa-vmstat.node0.numa_local
4.6e+06 ++----------------------------------------------------------------+
| O O O O O O O O |
4.5e+06 O+O O O O O O O O O O O O O O O |
4.4e+06 ++ |
| |
4.3e+06 ++ |
4.2e+06 ++ |
| |
4.1e+06 ++ |
4e+06 ++ |
| |
3.9e+06 ++ .*.. |
3.8e+06 *+*..* *. .* *.*.*.. .*..*.*..*.*..*.*.*..*.*..*.*..*.*..*.|
| + .. *. * *
3.7e+06 ++-----*----------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
apt-get install ruby ruby-oj
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/setup-local job.yaml # the job file attached in this email
bin/run-local job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Fengguang
_______________________________________________
LKP mailing list
LKP(a)linux.intel.com