On Fri, Aug 07, 2020 at 02:24:00AM +0300, Kirill A. Shutemov wrote:
> On Tue, Aug 04, 2020 at 05:17:52PM +0100, Matthew Wilcox (Oracle) wrote:
> > diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> > index 484a36185bb5..a474a92a2a72 100644
> > --- a/include/linux/pagemap.h
> > +++ b/include/linux/pagemap.h
> > @@ -18,6 +18,11 @@
> >  
> >  struct pagevec;
> >  
> > +static inline bool page_cache_empty(struct address_space *mapping)
> > +{
> > +	return xa_empty(&mapping->i_pages);
> 
> What about something like
> 
> 	bool empty = xa_empty(&mapping->i_pages);
> 	VM_BUG_ON(empty && mapping->nrpages);
> 	return empty;
I tried this and it's triggered by generic/418.  The problem is that
it's called while the pagecache lock isn't held (by
invalidate_inode_pages2_range()), so it's possible for xa_empty() to
return true, then for a page to be added to the page cache and
mapping->nrpages to be incremented to 1.  That seems to be what's
happened here:
(gdb) p/x *(struct address_space *)0xffff88804b21b360
$2 = {host = 0xffff88804b21b200, i_pages = {xa_lock = {{rlock = {raw_lock = {{
val = {counter = 0x0}, {locked = 0x0, pending = 0x0}, {
locked_pending = 0x0, tail = 0x0}}}}}}, xa_flags = 0x21,
* xa_head = 0xffffea0001e187c0}, gfp_mask = 0x100c4a, i_mmap_writable = {
counter = 0x0}, nr_thps = {counter = 0x0}, i_mmap = {rb_root = {
rb_node = 0x0}, rb_leftmost = 0x0}, i_mmap_rwsem = {count = {
counter = 0x0}, owner = {counter = 0x0}, osq = {tail = {counter = 0x0}},
wait_lock = {raw_lock = {{val = {counter = 0x0}, {locked = 0x0,
pending = 0x0}, {locked_pending = 0x0, tail = 0x0}}}},
wait_list = {next = 0xffff88804b21b3b0, prev = 0xffff88804b21b3b0}},
* nrpages = 0x1, writeback_index = 0x0, a_ops = 0xffffffff81c2ed60,
flags = 0x40, wb_err = 0x0, private_lock = {{rlock = {raw_lock = {{val = {
counter = 0x0}, {locked = 0x0, pending = 0x0}, {
locked_pending = 0x0, tail = 0x0}}}}}}, private_list = {
next = 0xffff88804b21b3e8, prev = 0xffff88804b21b3e8}, private_data = 0x0}
(marked the critical lines with *)
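
For reference, here is a minimal sketch of what a race-free version of
the check would have to look like (hypothetical, not part of the patch:
it samples xa_empty() and nrpages under the i_pages lock so an
insertion can't slip in between the two reads):

	static inline bool page_cache_empty(struct address_space *mapping)
	{
		bool empty;

		/*
		 * Page cache insertions and deletions update i_pages and
		 * nrpages under this lock, so the two reads below see a
		 * consistent snapshot.
		 */
		xa_lock_irq(&mapping->i_pages);
		empty = xa_empty(&mapping->i_pages);
		VM_BUG_ON(empty && mapping->nrpages);
		xa_unlock_irq(&mapping->i_pages);

		return empty;
	}

Of course, taking the lock defeats the point of a cheap inline helper;
the sketch is only meant to show where the window is.  The dump above
looks like an insertion landing in exactly that window: xa_head is set
and nrpages is already 1 by the time the assertion reads it.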