sched/numa: Update the scan period without holding the numa_group lock
Author:     Srikar Dronamraju <srikar@linux.vnet.ibm.com>
AuthorDate: Wed, 20 Jun 2018 17:02:55 +0000 (22:32 +0530)
Commit:     Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 25 Jul 2018 09:41:08 +0000 (11:41 +0200)
The metrics used to update the scan period are local or task-specific.
Currently this update happens under the numa_group lock, which seems
unnecessary. Hence, move this update outside the lock.
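
For illustration, a minimal userspace sketch of the locking pattern the
patch moves to (all names here are illustrative, and pthreads stands in
for the kernel's spinlocks; this is not the actual kernel code): the
group lock is held only while touching shared group state, and the
task-local scan-period update runs after the lock is dropped.

/*
 * Userspace analogue of the change (names illustrative): group-wide
 * state is updated under the shared lock, while the scan period,
 * which is task-local, is updated after unlocking.
 */
#include <pthread.h>
#include <stdio.h>

struct group_like {
	pthread_mutex_t lock;
	int active_nodes;		/* shared, group-wide state */
};

struct task_like {
	struct group_like *group;
	int scan_period;		/* task-local, no lock needed */
	int faults[2];			/* task-local fault counts */
};

/* Touches only fields owned by @p, so it needs no group lock. */
static void update_task_scan_period(struct task_like *p, int shared, int priv)
{
	/* Toy policy: scan faster when private faults dominate. */
	p->scan_period = (priv > shared) ? 1000 : 4000;
}

static void task_numa_placement_like(struct task_like *p)
{
	if (p->group) {
		pthread_mutex_lock(&p->group->lock);
		p->group->active_nodes++;	/* group-wide, needs the lock */
		pthread_mutex_unlock(&p->group->lock);
	}

	/* As in the patch: runs after the lock is released. */
	update_task_scan_period(p, p->faults[0], p->faults[1]);
}

int main(void)
{
	struct group_like grp = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
	};
	struct task_like t = { .group = &grp, .faults = { 3, 7 } };

	task_numa_placement_like(&t);
	printf("scan_period=%d active_nodes=%d\n",
	       t.scan_period, grp.active_nodes);
	return 0;
}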

Running SPECjbb2005 on a 4-node machine and comparing bops/JVM:
JVMS  LAST_PATCH  WITH_PATCH  %CHANGE
16    25355.9     25645.4     1.141
1     72812       72142       -0.92
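
(Here %CHANGE = (WITH_PATCH - LAST_PATCH) / LAST_PATCH * 100; e.g. for
16 JVMS, (25645.4 - 25355.9) / 25355.9 * 100 ≈ +1.14%.)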

Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Rik van Riel <riel@surriel.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1529514181-9842-15-git-send-email-srikar@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/sched/fair.c

index 3bcf0e864613d079eb75c34176724e06d82898d1..fc33a4b40a097814266eccd55b5c2bf675484b9c 100644 (file)
@@ -2170,8 +2170,6 @@ static void task_numa_placement(struct task_struct *p)
                }
        }
 
-       update_task_scan_period(p, fault_types[0], fault_types[1]);
-
        if (p->numa_group) {
                numa_group_count_active_nodes(p->numa_group);
                spin_unlock_irq(group_lock);
@@ -2186,6 +2184,8 @@ static void task_numa_placement(struct task_struct *p)
                if (task_node(p) != p->numa_preferred_nid)
                        numa_migrate_preferred(p);
        }
+
+       update_task_scan_period(p, fault_types[0], fault_types[1]);
 }
 
 static inline int get_numa_group(struct numa_group *grp)