sched: Change cfs_rq load avg to unsigned long

Since the 'u64 runnable_load_avg, blocked_load_avg' members of struct cfs_rq
are bounded by the 'unsigned long' cfs_rq->load.weight, we don't need u64
variables to describe them; unsigned long is more efficient and more convenient.
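
For reference, a minimal sketch of the resulting declarations in struct cfs_rq; the surrounding members are omitted and the CONFIG_SMP guard is assumed from the usual placement of the load-tracking fields, so treat this as illustrative rather than the exact upstream layout:

/* Sketch: the two load-tracking sums shrink from u64 to unsigned long. */
struct cfs_rq {
	struct load_weight load;	/* load.weight is an unsigned long */
	/* ... */
#ifdef CONFIG_SMP
	/*
	 * Both sums are bounded by the contributing entities' weights,
	 * so unsigned long is wide enough on 32-bit and 64-bit alike.
	 */
	unsigned long runnable_load_avg, blocked_load_avg;
#endif
	/* ... */
};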

Signed-off-by: Alex Shi <alex.shi@intel.com>
Reviewed-by: Paul Turner <pjt@google.com>
Tested-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1371694737-29336-10-git-send-email-alex.shi@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Alex Shi 2013-06-20 10:18:53 +08:00 committed by Ingo Molnar
parent a003a25b22
commit 72a4cf20cb
3 changed files with 5 additions and 8 deletions

@@ -4181,12 +4181,9 @@ static int tg_load_down(struct task_group *tg, void *data)
 	if (!tg->parent) {
 		load = cpu_rq(cpu)->avg.load_avg_contrib;
 	} else {
-		unsigned long tmp_rla;
-		tmp_rla = tg->parent->cfs_rq[cpu]->runnable_load_avg + 1;
-
 		load = tg->parent->cfs_rq[cpu]->h_load;
-		load *= tg->se[cpu]->avg.load_avg_contrib;
-		load /= tmp_rla;
+		load = div64_ul(load * tg->se[cpu]->avg.load_avg_contrib,
+				tg->parent->cfs_rq[cpu]->runnable_load_avg + 1);
 	}
 
 	tg->cfs_rq[cpu]->h_load = load;
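
The divisor is now an unsigned long while 'load' stays u64 (the h_load * load_avg_contrib product can exceed 32 bits), so the open-coded division is replaced by div64_ul() from <linux/math64.h>, which divides a u64 by an unsigned long and also works on 32-bit kernels where a plain '/' on a u64 is not available. A self-contained sketch of the same pattern; the helper name and parameters below are made up for illustration:

#include <linux/math64.h>

/* Illustrative only: mirrors the replaced arithmetic in tg_load_down(). */
static u64 scaled_h_load(u64 parent_h_load, unsigned long load_avg_contrib,
			 unsigned long runnable_load_avg)
{
	/* +1 avoids a divide-by-zero when the parent has no runnable load. */
	return div64_ul(parent_h_load * load_avg_contrib,
			runnable_load_avg + 1);
}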