Re: [LKP] [rcu] kernel BUG at include/linux/pagemap.h:149!
by Frederic Weisbecker
On Fri, Sep 11, 2015 at 10:19:47AM +0800, Boqun Feng wrote:
> Subject: [PATCH 01/27] rcu: Don't disable preemption for Tiny and Tree RCU
> readers
>
> Because preempt_disable() maps to barrier() for non-debug builds,
> it forces the compiler to spill and reload registers. Because Tree
> RCU and Tiny RCU now only appear in CONFIG_PREEMPT=n builds, these
> barrier() instances generate needless extra code for each instance of
> rcu_read_lock() and rcu_read_unlock(). This extra code slows down Tree
> RCU and bloats Tiny RCU.
>
> This commit therefore removes the preempt_disable() and preempt_enable()
> from the non-preemptible implementations of __rcu_read_lock() and
> __rcu_read_unlock(), respectively.
>
> For debug purposes, preempt_disable() and preempt_enable() are still
> kept if CONFIG_PREEMPT_COUNT=y, which makes the detection of sleeping
> inside atomic sections still work in non-preemptible kernels.
>
> Signed-off-by: Boqun Feng <boqun.feng(a)gmail.com>
> Signed-off-by: Paul E. McKenney <paulmck(a)linux.vnet.ibm.com>
> ---
> include/linux/rcupdate.h | 6 ++++--
> include/linux/rcutiny.h | 1 +
> kernel/rcu/tree.c | 9 +++++++++
> 3 files changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index d63bb77..6c3cece 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -297,12 +297,14 @@ void synchronize_rcu(void);
>
> static inline void __rcu_read_lock(void)
> {
> - preempt_disable();
> + if (IS_ENABLED(CONFIG_PREEMPT_COUNT))
> + preempt_disable();
preempt_disable() is a no-op when !CONFIG_PREEMPT_COUNT, right?
Or rather it's a barrier(), which is anyway implied by rcu_read_lock().
So perhaps we can get rid of the IS_ENABLED() check?
Re: [LKP] [PATCH v2 0/4] improve fault-tolerance of rhashtable runtime-test
by Herbert Xu
On Mon, Nov 30, 2015 at 11:14:01AM +0100, Phil Sutter wrote:
> On Mon, Nov 30, 2015 at 05:37:55PM +0800, Herbert Xu wrote:
> > Phil Sutter <phil(a)nwl.cc> wrote:
> > > The following series aims to improve lib/test_rhashtable in different
> > > situations:
> > >
> > > Patch 1 allows the kernel to reschedule so the test does not block too
> > > long on slow systems.
> > > Patch 2 fixes behaviour under pressure, retrying inserts in non-permanent
> > > error case (-EBUSY).
> > > Patch 3 auto-adjusts the upper table size limit according to the number
> > > of threads (in concurrency test). In fact, the current default is
> > > already too small.
> > > Patch 4 makes it possible to retry inserts even in supposedly permanent
> > > error case (-ENOMEM) to expose rhashtable's remaining problem of
> > > -ENOMEM being not as permanent as it is expected to be.
> >
> > I'm sorry but this patch series is simply bogus.
>
> The whole series?!
Well, at least patches two and four seem clearly wrong, because no
rhashtable user should need to retry insertions.
> Did you try with my bogus patch series applied? How many CPUs does your
> test system actually have?
>
> > So can someone please help me reproduce this? Because just loading
> > test_rhashtable isn't doing it.
>
> As said, maybe you need to increase the number of spawned threads
> (tcount=50 or so).
OK that's better. I think I see the problem. The test in
rhashtable_insert_rehash is racy and if two threads both try
to grow the table one of them may be tricked into doing a rehash
instead.
I'm working on a fix.
Thanks,
--
Email: Herbert Xu <herbert(a)gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
Re: [LKP] [lkp] [mm, page_alloc] d0164adc89: -100.0% fsmark.app_overhead
by Huang, Ying
Mel Gorman <mgorman(a)techsingularity.net> writes:
> On Fri, Nov 27, 2015 at 09:14:52AM +0800, Huang, Ying wrote:
>> Hi, Mel,
>>
>> Mel Gorman <mgorman(a)techsingularity.net> writes:
>>
>> > On Thu, Nov 26, 2015 at 08:56:12AM +0800, kernel test robot wrote:
>> >> FYI, we noticed the below changes on
>> >>
>> >> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
>> >> commit d0164adc89f6bb374d304ffcc375c6d2652fe67d ("mm, page_alloc:
>> >> distinguish between being unable to sleep, unwilling to sleep and
>> >> avoiding waking kswapd")
>> >>
>> >> Note: the testing machine is a virtual machine with only 1G memory.
>> >>
>> >
>> > I'm not actually seeing any problem here. Is this a positive report or
>> > am I missing something obvious?
>>
>> Sorry, the email subject was generated automatically and I forgot to
>> change it to something meaningful before sending it out. From the testing
>> results, we found that the commit increases the OOM probability from 0%
>> to 100% on this machine with small memory. I also added proc-vmstat
>> data to help diagnose it.
>>
>
> There is no reference to OOM possibility in the email that I can see. Can
> you give examples of the OOM messages that shows the problem sites? It was
> suspected that there may be some callers that were accidentally depending
> on access to emergency reserves. If so, either they need to be fixed (if
> the case is extremely rare) or a small reserve will have to be created
> for callers that are not high priority but still cannot reclaim.
>
> Note that I'm travelling a lot over the next two weeks so I'll be slow to
> respond but I will get to it.
Here is the kernel log, the full dmesg is attached too. The OOM
occurs during fsmark testing.
Best Regards,
Huang, Ying
[ 31.453514] kworker/u4:0: page allocation failure: order:0, mode:0x2200000
[ 31.463570] CPU: 0 PID: 6 Comm: kworker/u4:0 Not tainted 4.3.0-08056-gd0164ad #1
[ 31.466115] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 31.477146] Workqueue: writeback wb_workfn (flush-253:0)
[ 31.481450] 0000000000000000 ffff880035ac75e8 ffffffff8140a142 0000000002200000
[ 31.492582] ffff880035ac7670 ffffffff8117117b ffff880037586b28 ffff880000000040
[ 31.507631] ffff88003523b270 0000000000000040 ffff880035abc800 ffffffff00000000
[ 31.510568] Call Trace:
[ 31.511828] [<ffffffff8140a142>] dump_stack+0x4b/0x69
[ 31.513391] [<ffffffff8117117b>] warn_alloc_failed+0xdb/0x140
[ 31.523163] [<ffffffff81174ec4>] __alloc_pages_nodemask+0x874/0xa60
[ 31.524949] [<ffffffff811bcb62>] alloc_pages_current+0x92/0x120
[ 31.526659] [<ffffffff811c73e4>] new_slab+0x3d4/0x480
[ 31.536134] [<ffffffff811c7c36>] __slab_alloc+0x376/0x470
[ 31.537541] [<ffffffff814e0ced>] ? alloc_indirect+0x1d/0x50
[ 31.543268] [<ffffffff81338221>] ? xfs_submit_ioend_bio+0x31/0x40
[ 31.545104] [<ffffffff814e0ced>] ? alloc_indirect+0x1d/0x50
[ 31.546982] [<ffffffff811c8e8d>] __kmalloc+0x20d/0x260
[ 31.548334] [<ffffffff814e0ced>] alloc_indirect+0x1d/0x50
[ 31.549805] [<ffffffff814e0fec>] virtqueue_add_sgs+0x2cc/0x3a0
[ 31.555396] [<ffffffff81573a30>] __virtblk_add_req+0xb0/0x1f0
[ 31.556846] [<ffffffff8117a121>] ? pagevec_lookup_tag+0x21/0x30
[ 31.558318] [<ffffffff813e5d72>] ? blk_rq_map_sg+0x1e2/0x4f0
[ 31.563880] [<ffffffff81573c82>] virtio_queue_rq+0x112/0x280
[ 31.565307] [<ffffffff813e9de7>] __blk_mq_run_hw_queue+0x1d7/0x370
[ 31.571005] [<ffffffff813e9bef>] blk_mq_run_hw_queue+0x9f/0xc0
[ 31.572472] [<ffffffff813eb10a>] blk_mq_insert_requests+0xfa/0x1a0
[ 31.573982] [<ffffffff813ebdb3>] blk_mq_flush_plug_list+0x123/0x140
[ 31.583686] [<ffffffff813e1777>] blk_flush_plug_list+0xa7/0x200
[ 31.585138] [<ffffffff813e1c49>] blk_finish_plug+0x29/0x40
[ 31.586542] [<ffffffff81215f85>] wb_writeback+0x185/0x2c0
[ 31.592429] [<ffffffff812166a5>] wb_workfn+0xf5/0x390
[ 31.594037] [<ffffffff81091297>] process_one_work+0x157/0x420
[ 31.599804] [<ffffffff81091ef9>] worker_thread+0x69/0x4a0
[ 31.601484] [<ffffffff81091e90>] ? rescuer_thread+0x380/0x380
[ 31.611368] [<ffffffff8109746f>] kthread+0xef/0x110
[ 31.612953] [<ffffffff81097380>] ? kthread_park+0x60/0x60
[ 31.619418] [<ffffffff818bce8f>] ret_from_fork+0x3f/0x70
[ 31.621221] [<ffffffff81097380>] ? kthread_park+0x60/0x60
[ 31.635226] Mem-Info:
[ 31.636569] active_anon:4942 inactive_anon:1643 isolated_anon:0
[ 31.636569] active_file:23196 inactive_file:110131 isolated_file:251
[ 31.636569] unevictable:92329 dirty:2865 writeback:1925 unstable:0
[ 31.636569] slab_reclaimable:10588 slab_unreclaimable:3390
[ 31.636569] mapped:2848 shmem:1687 pagetables:876 bounce:0
[ 31.636569] free:1932 free_pcp:218 free_cma:0
[ 31.667096] Node 0 DMA free:3948kB min:60kB low:72kB high:88kB active_anon:264kB inactive_anon:128kB active_file:1544kB inactive_file:5296kB unevictable:3136kB isolated(anon):0kB isolated(file):236kB present:15992kB managed:15908kB mlocked:0kB dirty:0kB writeback:0kB mapped:440kB shmem:128kB slab_reclaimable:588kB slab_unreclaimable:304kB kernel_stack:112kB pagetables:80kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:3376 all_unreclaimable? no
[ 31.708140] lowmem_reserve[]: 0 972 972 972
[ 31.710104] Node 0 DMA32 free:3780kB min:3824kB low:4780kB high:5736kB active_anon:19504kB inactive_anon:6444kB active_file:91240kB inactive_file:435228kB unevictable:366180kB isolated(anon):0kB isolated(file):768kB present:1032064kB managed:997532kB mlocked:0kB dirty:11460kB writeback:7700kB mapped:10952kB shmem:6620kB slab_reclaimable:41764kB slab_unreclaimable:13256kB kernel_stack:2752kB pagetables:3424kB unstable:0kB bounce:0kB free_pcp:872kB local_pcp:232kB free_cma:0kB writeback_tmp:0kB pages_scanned:140404 all_unreclaimable? no
[ 31.743737] lowmem_reserve[]: 0 0 0 0
[ 31.745320] Node 0 DMA: 7*4kB (UME) 2*8kB (UM) 2*16kB (ME) 1*32kB (E) 0*64kB 2*128kB (ME) 2*256kB (ME) 2*512kB (UM) 2*1024kB (ME) 0*2048kB 0*4096kB = 3948kB
[ 31.757513] Node 0 DMA32: 1*4kB (U) 0*8kB 4*16kB (UME) 3*32kB (UE) 3*64kB (UM) 1*128kB (U) 1*256kB (U) 0*512kB 3*1024kB (UME) 0*2048kB 0*4096kB = 3812kB
[ 31.766470] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[ 31.772953] 227608 total pagecache pages
[ 31.774127] 0 pages in swap cache
[ 31.775428] Swap cache stats: add 0, delete 0, find 0/0
[ 31.776785] Free swap = 0kB
[ 31.777799] Total swap = 0kB
[ 31.779569] 262014 pages RAM
[ 31.780584] 0 pages HighMem/MovableOnly
[ 31.781744] 8654 pages reserved
[ 31.790944] 0 pages hwpoisoned
[ 31.792008] SLUB: Unable to allocate memory on node -1 (gfp=0x2080000)
[ 31.793537] cache: kmalloc-128, object size: 128, buffer size: 128, default order: 0, min order: 0
[ 31.796088] node 0: slabs: 27, objs: 864, free: 0
Re: [LKP] [PATCH v2 0/4] improve fault-tolerance of rhashtable runtime-test
by Herbert Xu
Phil Sutter <phil(a)nwl.cc> wrote:
> The following series aims to improve lib/test_rhashtable in different
> situations:
>
> Patch 1 allows the kernel to reschedule so the test does not block too
> long on slow systems.
> Patch 2 fixes behaviour under pressure, retrying inserts in non-permanent
> error case (-EBUSY).
> Patch 3 auto-adjusts the upper table size limit according to the number
> of threads (in concurrency test). In fact, the current default is
> already too small.
> Patch 4 makes it possible to retry inserts even in supposedly permanent
> error case (-ENOMEM) to expose rhashtable's remaining problem of
> -ENOMEM being not as permanent as it is expected to be.
I'm sorry but this patch series is simply bogus.
If rhashtable is indeed returning such errors under normal
conditions then rhashtable is broken and we must fix it instead
of working around it in the test code!
FWIW I still haven't been able to reproduce this problem, perhaps
because my machines have too few CPUs?
So can someone please help me reproduce this? Because just loading
test_rhashtable isn't doing it.
Thanks,
--
Email: Herbert Xu <herbert(a)gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
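A possible reproduction sketch, based on the tcount (spawned thread count) module parameter that Phil suggests later in the thread; test_rhashtable's parameter set varies across kernel versions, so treat the exact invocation as an assumption:

```shell
# Load the rhashtable self-test with many concurrent insert threads;
# tcount raises the number of spawned threads for the concurrency test.
modprobe test_rhashtable tcount=50

# The test reports its results to the kernel log.
dmesg | tail -n 20
```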
How to Perfectly Integrate Website Construction with Network Marketing
by 朱光俊
Dear friend, hello!
We are just passing by; if we have disturbed you, our sincere apologies!
We wish you a booming business and all the best!
Fengchao United Technology Co., Ltd. (蜂巢联合科技有限公司) is an Internet website-building platform built for enterprises and public institutions nationwide. Founded in June 2005 with a registered capital of 1.1 million, it mainly works on Internet network services and networked business applications in the information-technology field (including e-commerce, network marketing, online advertising, business website planning, and web design), providing basic and value-added Internet and Intranet services to government agencies, enterprises, public institutions, and individual users.
Please do not reply directly to this email.
If you need a website built, you can contact the planner at the personal email: zaikravec@sina.com
Friendly tip: emailing the programmer directly saves the company's profit margin and the salesperson's commission, and makes communication easier!
Taobao escrow transactions, reputation guaranteed!
We greatly look forward to your inquiry!
Best business regards!
Re: [LKP] [lkp] [mm, page_alloc] d0164adc89: -100.0% fsmark.app_overhead
by Huang, Ying
Hi, Mel,
Mel Gorman <mgorman(a)techsingularity.net> writes:
> On Thu, Nov 26, 2015 at 08:56:12AM +0800, kernel test robot wrote:
>> FYI, we noticed the below changes on
>>
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
>> commit d0164adc89f6bb374d304ffcc375c6d2652fe67d ("mm, page_alloc:
>> distinguish between being unable to sleep, unwilling to sleep and
>> avoiding waking kswapd")
>>
>> Note: the testing machine is a virtual machine with only 1G memory.
>>
>
> I'm not actually seeing any problem here. Is this a positive report or
> am I missing something obvious?
Sorry, the email subject was generated automatically and I forgot to
change it to something meaningful before sending it out. From the testing
results, we found that the commit increases the OOM probability from 0%
to 100% on this machine with small memory. I also added proc-vmstat
data to help diagnose it.
Best Regards,
Huang, Ying
Re: [LKP] [lkp] [mm, page_alloc] d0164adc89: -100.0% fsmark.app_overhead
by Rik van Riel
On 11/26/2015 08:25 AM, Mel Gorman wrote:
> On Thu, Nov 26, 2015 at 08:56:12AM +0800, kernel test robot wrote:
>> FYI, we noticed the below changes on
>>
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
>> commit d0164adc89f6bb374d304ffcc375c6d2652fe67d ("mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep and avoiding waking kswapd")
>>
>> Note: the testing machine is a virtual machine with only 1G memory.
>>
>
> I'm not actually seeing any problem here. Is this a positive report or
> am I missing something obvious?
I've gotten several reports that could be either
positive or negative, but where I am not quite
sure how to interpret the results.
The tool seems to CC the maintainers of the code
that was changed, so I am hoping they will pipe
up when they see a problem.
Of course, that doesn't help in this case :)
hugepage compaction causes performance drop
by Aaron Lu
Hi,
One VM-related test case run by LKP on a Haswell EP with 128GiB memory
showed that the compaction code causes a performance drop of about 30%. To
illustrate the problem, I've simplified the test with a program called
usemem (see attached). The test goes like this:
1 Boot up the server;
2 modprobe scsi_debug (a module that can use memory as a SCSI device),
with dev_size set to 4/5 of free memory, i.e. about 100GiB. Use it as swap.
3 Run the usemem test, which uses mmap to map a MAP_PRIVATE | MAP_ANON
region whose size is set to 3/4 of (remaining_free_memory + swap), and
then writes to that region sequentially to trigger page faults and
swap-out.
The above test runs with two configs regarding the below two sysfs files:
/sys/kernel/mm/transparent_hugepage/enabled
/sys/kernel/mm/transparent_hugepage/defrag
1 transparent hugepage and defrag are both set to always, let's call it
always-always case;
2 transparent hugepage is set to always while defrag is set to never,
let's call it always-never case.
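For reference, the two cases correspond to the following sysfs writes (run as root; a sketch of the setup described above):

```shell
# Case 1: always-always
echo always > /sys/kernel/mm/transparent_hugepage/enabled
echo always > /sys/kernel/mm/transparent_hugepage/defrag

# Case 2: always-never
echo always > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
```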
The output from the always-always case is:
Setting up swapspace version 1, size = 104627196 KiB
no label, UUID=aafa53ae-af9e-46c9-acb9-8b3d4f57f610
cmdline: /lkp/aaron/src/bin/usemem 99994672128
99994672128 transferred in 95 seconds, throughput: 1003 MB/s
And the output from the always-never case is:
Setting up swapspace version 1, size = 104629244 KiB
no label, UUID=60563c82-d1c6-4d86-b9fa-b52f208097e9
cmdline: /lkp/aaron/src/bin/usemem 99995965440
99995965440 transferred in 67 seconds, throughput: 1423 MB/s
The vmstat and perf-profile are also attached, please let me know if you
need any more information, thanks.
[lkp] [mm, page_alloc] d0164adc89: -100.0% fsmark.app_overhead
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit d0164adc89f6bb374d304ffcc375c6d2652fe67d ("mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep and avoiding waking kswapd")
Note: the testing machine is a virtual machine with only 1G memory.
=========================================================================================
compiler/disk/filesize/fs/iterations/kconfig/nr_directories/nr_files_per_directory/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
gcc-4.9/1HDD/16MB/xfs/1x/x86_64-rhel/16d/256fpd/32t/debian-x86_64-2015-02-07.cgz/fsyncBeforeClose/vm-vp-1G/60G/fsmark
commit:
016c13daa5c9e4827eca703e2f0621c131f2cca3
d0164adc89f6bb374d304ffcc375c6d2652fe67d
016c13daa5c9e482 d0164adc89f6bb374d304ffcc3
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
:4 100% 4:4 last_state.fsmark.exit_code.143
:4 50% 2:4 last_state.is_incomplete_run
:4 100% 4:4 dmesg.Mem-Info
:4 100% 4:4 dmesg.page_allocation_failure:order:#,mode
:4 100% 4:4 dmesg.warn_alloc_failed+0x
6327 23% -93.5% 409.00 80% proc-vmstat.allocstall
173495 58% -100.0% 0.00 -1% proc-vmstat.compact_free_scanned
4394 59% -100.0% 0.00 -1% proc-vmstat.compact_isolated
10055 44% -100.0% 0.00 -1% proc-vmstat.compact_migrate_scanned
443.25 13% -99.7% 1.50 100% proc-vmstat.kswapd_high_wmark_hit_quickly
28950 12% -91.4% 2502 81% proc-vmstat.kswapd_low_wmark_hit_quickly
15704144 0% -91.1% 1402050 73% proc-vmstat.nr_dirtied
12851 0% +26.3% 16235 18% proc-vmstat.nr_dirty_background_threshold
25704 0% +26.3% 32471 18% proc-vmstat.nr_dirty_threshold
2882 0% +1130.5% 35463 84% proc-vmstat.nr_free_pages
15693749 0% -91.3% 1365065 75% proc-vmstat.nr_written
16289593 0% -91.0% 1464689 72% proc-vmstat.numa_hit
16289593 0% -91.0% 1464689 72% proc-vmstat.numa_local
30453 12% -91.6% 2552 81% proc-vmstat.pageoutrun
16316641 0% -91.0% 1468330 72% proc-vmstat.pgalloc_dma32
642889 5% -90.5% 61326 56% proc-vmstat.pgfault
16218859 0% -91.6% 1355797 78% proc-vmstat.pgfree
69.25 68% -100.0% 0.00 -1% proc-vmstat.pgmigrate_fail
2066 58% -100.0% 0.00 -1% proc-vmstat.pgmigrate_success
62849613 0% -91.2% 5512004 74% proc-vmstat.pgpgout
417966 16% -80.1% 82999 36% proc-vmstat.pgscan_direct_dma32
15259915 0% -91.5% 1303209 76% proc-vmstat.pgscan_kswapd_dma32
360298 23% -93.5% 23325 82% proc-vmstat.pgsteal_direct_dma32
15224912 0% -91.7% 1270706 79% proc-vmstat.pgsteal_kswapd_dma32
236736 0% -96.1% 9216 100% proc-vmstat.slabs_scanned
108153 0% -98.0% 2154 100% proc-vmstat.workingset_nodereclaim
vm-vp-1G: qemu-system-x86_64 -enable-kvm -cpu Nehalem
Memory: 1G
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
[lkp] [drm/i915] 8f8c5663fc: [drm:i915_hangcheck_elapsed [i915]] *ERROR* Hangcheck timer elapsed... render ring idle
by kernel test robot
FYI, we noticed the below changes on
https://github.com/0day-ci/linux Chris-Wilson/drm-i915-Break-busywaiting-for-requests-on-pending-signals/20151115-213544
commit 8f8c5663fcfcf635678a194d9d19dba496ce87a8 ("drm/i915: Limit the busy wait on requests to 2us not 10ms!")
<6>[ 614.082484] [drm] GPU HANG: ecode 7:1:0x277fffff, in gem_reset_stats [6671], reason: Ring hung, action: reset
<6>[ 614.087816] [drm] Simulated gpu hang, resetting stop_rings
<5>[ 614.090737] drm/i915: Resetting chip after gpu hang
<3>[ 687.113593] [drm:i915_hangcheck_elapsed [i915]] *ERROR* Hangcheck timer elapsed... render ring idle
<6>[ 763.152171] [drm] stuck on render ring
<6>[ 763.155314] [drm] GPU HANG: ecode 7:0:0xe77fffff, in gem_reset_stats [9393], reason: Ring hung, action: reset
<6>[ 763.160212] [drm] Simulated gpu hang, resetting stop_rings
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Ying Huang