mm, clear_huge_page: move order algorithm into a separate function
authorHuang Ying <ying.huang@intel.com>
Fri, 17 Aug 2018 22:45:46 +0000 (15:45 -0700)
committerLinus Torvalds <torvalds@linux-foundation.org>
Fri, 17 Aug 2018 23:20:29 +0000 (16:20 -0700)
Patch series "mm, huge page: Copy target sub-page last when copy huge
page", v2.

A huge page helps to reduce the TLB miss rate, but it has a larger cache
footprint, which can sometimes cause problems.  For example, when
copying a huge page on an x86_64 platform, the cache footprint is 4MB.  But
a Xeon E5 v3 2699 CPU has 18 cores, 36 threads, and only 45MB of LLC
(last level cache).  That is, on average, there is 2.5MB of LLC per
core and 1.25MB per thread.

If cache contention is heavy while copying the huge page, and we copy
the huge page from beginning to end, the beginning of the huge page may
already have been evicted from the cache by the time we finish copying
the end.  And the application is likely to access the beginning of the
huge page after the copy.

In commit c79b57e462b5d ("mm: hugetlb: clear target sub-page last when
clearing huge page"), to keep the cache lines of the target subpage hot,
the order in which clear_huge_page() clears the subpages of a huge page
was changed: the subpage furthest from the target subpage is cleared
first, and the target subpage last.  A similar order change helps huge
page copying too.  That is what this patchset implements.

The patchset is a generic optimization that should benefit a wide range
of workloads rather than a specific use case.  To demonstrate its
performance benefit, we tested it with vm-scalability running on
transparent huge pages.

With this patchset, throughput increases ~16.6% in the vm-scalability
anon-cow-seq test case with 36 processes on a 2-socket Xeon E5 v3 2699
system (36 cores, 72 threads).  The test case sets
/sys/kernel/mm/transparent_hugepage/enabled to "always", mmap()s a big
anonymous memory area and populates it, then forks 36 child processes,
each of which writes to the anonymous memory area from beginning to end,
triggering copy-on-write.  For each child process, the other child
processes can be seen as other workloads generating heavy cache pressure.
At the same time, the IPC (instructions per cycle) increased from 0.63 to
0.78, and the time spent in user space was reduced by ~7.2%.

This patch (of 4):

In commit c79b57e462b5d ("mm: hugetlb: clear target sub-page last when
clearing huge page"), to keep the cache lines of the target subpage hot,
the order in which clear_huge_page() clears the subpages of a huge page
was changed: the subpage furthest from the target subpage is cleared
first, and the target subpage last.  The same order algorithm can be
applied to huge page copying too.  To avoid code duplication and reduce
maintenance overhead, this patch moves the order algorithm out of
clear_huge_page() into a separate function, process_huge_page(), so that
it can be reused for huge page copying.

This changes the direct calls to clear_user_highpage() into indirect
calls.  But with proper inlining support in the compiler, the indirect
calls will be optimized into direct calls.  Our tests show no
performance change with the patch.

This patch is a code cleanup without functionality change.

Link: http://lkml.kernel.org/r/20180524005851.4079-2-ying.huang@intel.com
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Suggested-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Andi Kleen <andi.kleen@intel.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Shaohua Li <shli@fb.com>
Cc: Christopher Lameter <cl@linux.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/memory.c

index d7b5b22a1a0a35fbab402099ffe29fa53b74d928..65bb59e031c9a91c34d802aad8586fd0fc3de29e 100644 (file)
@@ -4599,71 +4599,93 @@ EXPORT_SYMBOL(__might_fault);
 #endif
 
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
-static void clear_gigantic_page(struct page *page,
-                               unsigned long addr,
-                               unsigned int pages_per_huge_page)
-{
-       int i;
-       struct page *p = page;
-
-       might_sleep();
-       for (i = 0; i < pages_per_huge_page;
-            i++, p = mem_map_next(p, page, i)) {
-               cond_resched();
-               clear_user_highpage(p, addr + i * PAGE_SIZE);
-       }
-}
-void clear_huge_page(struct page *page,
-                    unsigned long addr_hint, unsigned int pages_per_huge_page)
+/*
+ * Process all subpages of the specified huge page with the specified
+ * operation.  The target subpage will be processed last to keep its
+ * cache lines hot.
+ */
+static inline void process_huge_page(
+       unsigned long addr_hint, unsigned int pages_per_huge_page,
+       void (*process_subpage)(unsigned long addr, int idx, void *arg),
+       void *arg)
 {
        int i, n, base, l;
        unsigned long addr = addr_hint &
                ~(((unsigned long)pages_per_huge_page << PAGE_SHIFT) - 1);
 
-       if (unlikely(pages_per_huge_page > MAX_ORDER_NR_PAGES)) {
-               clear_gigantic_page(page, addr, pages_per_huge_page);
-               return;
-       }
-
-       /* Clear sub-page to access last to keep its cache lines hot */
+       /* Process target subpage last to keep its cache lines hot */
        might_sleep();
        n = (addr_hint - addr) / PAGE_SIZE;
        if (2 * n <= pages_per_huge_page) {
-               /* If sub-page to access in first half of huge page */
+               /* If target subpage in first half of huge page */
                base = 0;
                l = n;
-               /* Clear sub-pages at the end of huge page */
+               /* Process subpages at the end of huge page */
                for (i = pages_per_huge_page - 1; i >= 2 * n; i--) {
                        cond_resched();
-                       clear_user_highpage(page + i, addr + i * PAGE_SIZE);
+                       process_subpage(addr + i * PAGE_SIZE, i, arg);
                }
        } else {
-               /* If sub-page to access in second half of huge page */
+               /* If target subpage in second half of huge page */
                base = pages_per_huge_page - 2 * (pages_per_huge_page - n);
                l = pages_per_huge_page - n;
-               /* Clear sub-pages at the begin of huge page */
+               /* Process subpages at the begin of huge page */
                for (i = 0; i < base; i++) {
                        cond_resched();
-                       clear_user_highpage(page + i, addr + i * PAGE_SIZE);
+                       process_subpage(addr + i * PAGE_SIZE, i, arg);
                }
        }
        /*
-        * Clear remaining sub-pages in left-right-left-right pattern
-        * towards the sub-page to access
+        * Process remaining subpages in left-right-left-right pattern
+        * towards the target subpage
         */
        for (i = 0; i < l; i++) {
                int left_idx = base + i;
                int right_idx = base + 2 * l - 1 - i;
 
                cond_resched();
-               clear_user_highpage(page + left_idx,
-                                   addr + left_idx * PAGE_SIZE);
+               process_subpage(addr + left_idx * PAGE_SIZE, left_idx, arg);
                cond_resched();
-               clear_user_highpage(page + right_idx,
-                                   addr + right_idx * PAGE_SIZE);
+               process_subpage(addr + right_idx * PAGE_SIZE, right_idx, arg);
        }
 }
 
+static void clear_gigantic_page(struct page *page,
+                               unsigned long addr,
+                               unsigned int pages_per_huge_page)
+{
+       int i;
+       struct page *p = page;
+
+       might_sleep();
+       for (i = 0; i < pages_per_huge_page;
+            i++, p = mem_map_next(p, page, i)) {
+               cond_resched();
+               clear_user_highpage(p, addr + i * PAGE_SIZE);
+       }
+}
+
+static void clear_subpage(unsigned long addr, int idx, void *arg)
+{
+       struct page *page = arg;
+
+       clear_user_highpage(page + idx, addr);
+}
+
+void clear_huge_page(struct page *page,
+                    unsigned long addr_hint, unsigned int pages_per_huge_page)
+{
+       unsigned long addr = addr_hint &
+               ~(((unsigned long)pages_per_huge_page << PAGE_SHIFT) - 1);
+
+       if (unlikely(pages_per_huge_page > MAX_ORDER_NR_PAGES)) {
+               clear_gigantic_page(page, addr, pages_per_huge_page);
+               return;
+       }
+
+       process_huge_page(addr_hint, pages_per_huge_page, clear_subpage, page);
+}
+
 static void copy_user_gigantic_page(struct page *dst, struct page *src,
                                    unsigned long addr,
                                    struct vm_area_struct *vma,