author		Rik van Riel		2014-06-08 22:55:57 +0200
committer	Linus Torvalds		2014-06-08 23:35:05 +0200
commit		1662867a9b2574bfdb9d4e97186aa131218d7210 (patch)
tree		6c8408360b9dceecad772dee5b9152a1515cf401 /kernel/sched/fair.c
parent		Don't trigger congestion wait on dirty-but-not-writeout pages (diff)
numa,sched: fix load_too_imbalanced() logic inversion
This function is supposed to return true if the new load imbalance is worse than the old one. It didn't. I can only hope brown paper bags are in style.

Now things converge much better on both the 4 node and 8 node systems.

I am not sure why this did not seem to impact specjbb performance on the 4 node system, which is the system I have full-time access to.

This bug was introduced recently, with commit e63da03639cc ("sched/numa: Allow task switch if load imbalance improves")

Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'kernel/sched/fair.c')
-rw-r--r--	kernel/sched/fair.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 17de1956ddad..9855e87d671a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1120,7 +1120,7 @@ static bool load_too_imbalanced(long orig_src_load, long orig_dst_load,
 	old_imb = orig_dst_load * 100 - orig_src_load * env->imbalance_pct;
 
 	/* Would this change make things worse? */
-	return (old_imb > imb);
+	return (imb > old_imb);
 }
 
 /*
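The one-line change above flips the comparison so that load_too_imbalanced() returns true only when the proposed task move would leave the NUMA load imbalance worse than it already is. Below is a minimal, self-contained user-space sketch of just that comparison; the helper name and the sample imbalance values are made up for illustration and are not taken from the kernel source.

/*
 * Illustrative sketch only, not kernel code: it mimics the fixed
 * comparison with made-up numbers to show why the pre-fix version
 * rejected moves that actually reduce the imbalance.
 */
#include <assert.h>
#include <stdio.h>

/* True if the proposed move leaves things worse than before. */
static int too_imbalanced(long old_imb, long new_imb)
{
	return new_imb > old_imb;	/* the fixed comparison */
}

int main(void)
{
	long old_imb = 400;	/* imbalance before the task move */
	long new_imb = 100;	/* imbalance after the task move  */

	/* Fixed logic: an improving move is not "too imbalanced". */
	assert(!too_imbalanced(old_imb, new_imb));

	/*
	 * The inverted pre-fix check, (old_imb > new_imb), returns
	 * true for the same numbers and would have blocked the move.
	 */
	printf("pre-fix check would block the move: %d\n",
	       old_imb > new_imb);
	return 0;
}

With the inverted check, improving moves were rejected and worsening moves allowed, which is why convergence on the 4 and 8 node systems suffered until the comparison was flipped.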