sched: put back some stack hog changes that were undone in kernel/sched.c

Impact: prevents panic from stack overflow on numa-capable machines.

Some of the "removal of stack hogs" changes in kernel/sched.c that used
node_to_cpumask_ptr were undone by the early cpumask API updates, causing
a panic due to stack overflow.  This patch restores them by using
cpumask_of_node(), which returns a 'const struct cpumask *'.
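
For illustration only (not part of this patch; the helpers below are
hypothetical), the difference is between holding a full cpumask_t on the
stack and holding just a pointer.  With NR_CPUS=4096 a cpumask_t is 512
bytes, which is what blows the stack in deep scheduler paths:

    /* old pattern: full cpumask_t copied onto the stack */
    static int cpus_on_node_old(int node)
    {
            cpumask_t mask = node_to_cpumask(node); /* NR_CPUS bits on the stack */

            return cpus_weight(mask);
    }

    /* new pattern: only a pointer lives on the stack */
    static int cpus_on_node_new(int node)
    {
            const struct cpumask *mask = cpumask_of_node(node);

            return cpumask_weight(mask);
    }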

In addition, cpu_coregroup_map is replaced with cpu_coregroup_mask, further
reducing stack usage.  (Both of these updates removed 9 FIXMEs!)
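
For context, the interface change behind that replacement (sketched here for
illustration, not taken from the diff) is a by-value return becoming a
pointer return:

    /* old: returns the whole NR_CPUS-bit mask by value */
    cpumask_t cpu_coregroup_map(int cpu);

    /* new: returns a pointer to the topology mask, no stack copy */
    const struct cpumask *cpu_coregroup_mask(int cpu);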

Also:
   Pick up some remaining changes from the old 'cpumask_t' functions to
   the new 'struct cpumask *' functions.

   Optimize memory traffic by allocating each percpu local_cpu_mask on the
   same node as the referring cpu.

Signed-off-by: Mike Travis <travis@sgi.com>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

@@ -1383,7 +1383,8 @@ static inline void init_sched_rt_class(void)
 	unsigned int i;
 
 	for_each_possible_cpu(i)
-		alloc_cpumask_var(&per_cpu(local_cpu_mask, i), GFP_KERNEL);
+		alloc_cpumask_var_node(&per_cpu(local_cpu_mask, i),
+					GFP_KERNEL, cpu_to_node(i));
 }
 #endif /* CONFIG_SMP */