mm: hwpoison: call shake_page() after try_to_unmap() for mlocked page
authorNaoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Wed, 3 May 2017 21:56:22 +0000 (14:56 -0700)
committerLinus Torvalds <torvalds@linux-foundation.org>
Wed, 3 May 2017 22:52:12 +0000 (15:52 -0700)
The memory error handler calls try_to_unmap() for error pages in various
states.  If the error page is an mlocked page, error handling can fail
with a "still referenced by 1 users" message.  This is because the page
is re-added to the lru cache and stays there after the following call
chain.

  try_to_unmap_one
    page_remove_rmap
      clear_page_mlock
        putback_lru_page
          lru_cache_add

memory_failure() calls shake_page() to handle a similar issue, but the
current code doesn't cover this case because shake_page() is called only
before try_to_unmap().  So this patch adds another shake_page() call
after try_to_unmap() for mlocked pages.

Fixes: 23a003bfd23ea9ea0b7756b920e51f64b284b468 ("mm/madvise: pass return code of memory_failure() to userspace")
Link: http://lkml.kernel.org/r/20170417055948.GM31394@yexl-desktop
Link: http://lkml.kernel.org/r/1493197841-23986-3-git-send-email-n-horiguchi@ah.jp.nec.com
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Reported-by: kernel test robot <lkp@intel.com>
Cc: Xiaolong Ye <xiaolong.ye@intel.com>
Cc: Chen Gong <gong.chen@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/memory-failure.c

index 9d87fcab96c99688a0a309d58c68d319fe1d298c..73066b80d14af70d0fdf12e2228823b84c258903 100644 (file)
@@ -916,6 +916,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
        bool unmap_success;
        int kill = 1, forcekill;
        struct page *hpage = *hpagep;
+       bool mlocked = PageMlocked(hpage);
 
        /*
         * Here we are interested only in user-mapped pages, so skip any
@@ -979,6 +980,13 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
                pr_err("Memory failure: %#lx: failed to unmap page (mapcount=%d)\n",
                       pfn, page_mapcount(hpage));
 
+       /*
+        * try_to_unmap() might put mlocked page in lru cache, so call
+        * shake_page() again to ensure that it's flushed.
+        */
+       if (mlocked)
+               shake_page(hpage, 0);
+
        /*
         * Now that the dirty bit has been propagated to the
         * struct page and all unmaps done we can decide if