| field | value | date |
|---|---|---|
| author | Michael J Wang | 2012-03-19 23:26:19 +0100 |
| committer | Ingo Molnar | 2012-03-27 14:52:12 +0200 |
| commit | 1b028abc779b67b699daff55e27d2432f8d92666 (patch) | |
| tree | 0b6deda71cb6ee5a17773716912665b0ad8ddabc | |
| parent | sched: Fix select_fallback_rq() vs cpu_active/cpu_online (diff) | |
sched/rt: Improve pick_next_highest_task_rt()
Avoid extra work by continuing on to the next rt_rq if the highest-priority
task in the current rt_rq has the same priority as our candidate task.

More detailed explanation: if next is not NULL, then we have already found a
candidate task, and its priority is next->prio. Now we are looking for an
even higher-priority task in the other rt_rq's. idx is the highest priority
in the current candidate rt_rq. In the current 3.3 code, if idx is equal to
next->prio, we would start scanning the tasks in that rt_rq and replace the
current candidate task with a task from that rt_rq. But the new task would
only have a priority equal to that of our previous candidate task, so we
would not have advanced our goal of finding a higher-priority task. So we
should avoid the extra work by continuing on to the next rt_rq if idx is
equal to next->prio.
Signed-off-by: Michael J Wang <mjwang@broadcom.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Reviewed-by: Yong Zhang <yong.zhang0@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/2EF88150C0EF2C43A218742ED384C1BC0FC83D6B@IRVEXCHMB08.corp.ad.broadcom.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-rw-r--r-- | kernel/sched/rt.c | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index b60dad720173..44af55e6d5d0 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1428,7 +1428,7 @@ static struct task_struct *pick_next_highest_task_rt(struct rq *rq, int cpu)
 next_idx:
 		if (idx >= MAX_RT_PRIO)
 			continue;
-		if (next && next->prio < idx)
+		if (next && next->prio <= idx)
 			continue;
 		list_for_each_entry(rt_se, array->queue + idx, run_list) {
 			struct task_struct *p;
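
For readers who want to see the effect of the one-character change in isolation, below is a minimal stand-alone C sketch of the candidate-selection loop. It is not kernel code: the fake_task/fake_rt_rq types, pick_highest(), highest_idx() and the FAKE_MAX_PRIO stand-in are invented for illustration, and only the `next && next->prio <= idx` test mirrors the patched line. As in the kernel, a lower numeric ->prio value means a higher priority, so the `<=` test skips any runqueue whose best task could at most tie the current candidate.

/*
 * Stand-alone sketch (NOT kernel code) of the scan that
 * pick_next_highest_task_rt() performs across rt_rq's.
 * All names here are hypothetical stand-ins for illustration.
 */
#include <stdio.h>

struct fake_task {
	const char *name;
	int prio;			/* lower value == higher priority */
};

struct fake_rt_rq {
	struct fake_task *tasks;
	int nr;
};

#define FAKE_MAX_PRIO 100		/* stand-in for MAX_RT_PRIO */

/* Best (numerically lowest) priority queued in this runqueue. */
static int highest_idx(const struct fake_rt_rq *rq)
{
	int i, idx = FAKE_MAX_PRIO;

	for (i = 0; i < rq->nr; i++)
		if (rq->tasks[i].prio < idx)
			idx = rq->tasks[i].prio;
	return idx;
}

static struct fake_task *pick_highest(struct fake_rt_rq *rqs, int nr_rqs)
{
	struct fake_task *next = NULL;
	int i, j;

	for (i = 0; i < nr_rqs; i++) {
		int idx = highest_idx(&rqs[i]);

		if (idx >= FAKE_MAX_PRIO)
			continue;		/* empty runqueue */
		/*
		 * The patched test: also skip when this runqueue's best
		 * priority merely *equals* the candidate's (<= instead of <);
		 * replacing the candidate with an equal-priority task gains
		 * nothing.
		 */
		if (next && next->prio <= idx)
			continue;
		for (j = 0; j < rqs[i].nr; j++) {
			if (rqs[i].tasks[j].prio == idx) {
				next = &rqs[i].tasks[j];
				break;
			}
		}
	}
	return next;
}

int main(void)
{
	struct fake_task a[] = { { "A", 50 } };
	struct fake_task b[] = { { "B", 50 }, { "C", 70 } };
	struct fake_rt_rq rqs[] = { { a, 1 }, { b, 2 } };

	/* With <=, the second runqueue is skipped and "A" stays the pick. */
	printf("picked %s\n", pick_highest(rqs, 2)->name);
	return 0;
}

Compiled and run, the sketch prints "picked A" whether the test uses < or <=; the difference is that with < the second runqueue is rescanned only to swap in an equal-priority task, which is exactly the redundant work the patch avoids.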