sched/fair: Defer calculation of 'prev_eff_load' in wake_affine_weight() until needed
Author:     Mel Gorman <mgorman@techsingularity.net>
AuthorDate: Tue, 13 Feb 2018 13:37:26 +0000 (13:37 +0000)
Commit:     Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 21 Feb 2018 07:49:07 +0000 (08:49 +0100)

On sync wakeups, the previous CPU's effective load may not be used, so defer
the calculation until it is needed: when the sync branch returns early,
prev_eff_load is never read and computing it up front is wasted work.
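
For reference, a minimal sketch of wake_affine_weight() as it reads after
this change. It is reconstructed from the diff below plus surrounding
context, so the lines not visible in the hunks (the variable types, the
early return, the final comparison) should be treated as assumptions rather
than a verbatim quote:

static int
wake_affine_weight(struct sched_domain *sd, struct task_struct *p,
		   int this_cpu, int prev_cpu, int sync)
{
	s64 this_eff_load, prev_eff_load;
	unsigned long task_load;

	this_eff_load = target_load(this_cpu, sd->wake_idx);

	if (sync) {
		unsigned long current_load = task_h_load(current);

		/*
		 * Early return: on this path prev_eff_load is never
		 * read, which is why computing it before the sync
		 * check was wasted work.
		 */
		if (current_load > this_eff_load)
			return this_cpu;

		this_eff_load -= current_load;
	}

	task_load = task_h_load(p);

	this_eff_load += task_load;
	if (sched_feat(WA_BIAS))
		this_eff_load *= 100;
	this_eff_load *= capacity_of(prev_cpu);

	/* Deferred: only computed once it is known to be needed. */
	prev_eff_load = source_load(prev_cpu, sd->wake_idx);
	prev_eff_load -= task_load;
	if (sched_feat(WA_BIAS))
		prev_eff_load *= 100 + (sd->imbalance_pct - 100) / 2;
	prev_eff_load *= capacity_of(this_cpu);

	return this_eff_load <= prev_eff_load ? this_cpu : nr_cpumask_bits;
}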

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Giovanni Gherdovich <ggherdovich@suse.cz>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20180213133730.24064-3-mgorman@techsingularity.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0132572d7523e99ebaecfa94560f18fb3ad57571..ae3e6f87771162eb22a683d24a8c69e980c08522 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5724,7 +5724,6 @@ wake_affine_weight(struct sched_domain *sd, struct task_struct *p,
        unsigned long task_load;
 
        this_eff_load = target_load(this_cpu, sd->wake_idx);
-       prev_eff_load = source_load(prev_cpu, sd->wake_idx);
 
        if (sync) {
                unsigned long current_load = task_h_load(current);
@@ -5742,6 +5741,7 @@ wake_affine_weight(struct sched_domain *sd, struct task_struct *p,
                this_eff_load *= 100;
        this_eff_load *= capacity_of(prev_cpu);
 
+       prev_eff_load = source_load(prev_cpu, sd->wake_idx);
        prev_eff_load -= task_load;
        if (sched_feat(WA_BIAS))
                prev_eff_load *= 100 + (sd->imbalance_pct - 100) / 2;
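
As a worked illustration of the final comparison (using an assumed
imbalance_pct of 125; the actual value depends on the topology level):
WA_BIAS scales this_eff_load by 100 and prev_eff_load by
100 + (125 - 100) / 2 = 112. Since this_cpu is chosen when
this_eff_load <= prev_eff_load, the waking CPU wins unless its
capacity-scaled load exceeds the previous CPU's by more than about 12%,
i.e. the bias deliberately favours an affine wakeup.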