author    | Glauber Costa | 2012-06-20 22:59:18 +0200
committer | Pekka Enberg | 2012-07-02 12:56:59 +0200
commit    | a164f89628fa813a2b012ec033625e9e507c29bb (patch)
tree      | 5da295dece37926ab5da8d018373a80ab8388bb9 /mm
parent    | slab: Fix a typo in commit 8c138b "slab: Get rid of obj_size macro" (diff)
slab: move FULL state transition to an initcall
During kmem_cache_init_late(), we transition to the LATE state,
and after some more work, to the FULL state, its last state.
This is quite different from slub, which only transitions to
its last state (previously SYSFS) in a (late)initcall, after a lot
more of the kernel is ready.
This means that in slab we have no way of taking actions that depend
on the initialization of other pieces of the kernel that are supposed
to start well after kmem_cache_init_late(), such as cgroups
initialization.
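For illustration, here is a minimal userspace sketch (not part of the
patch) of the ordering this change establishes. The enum values mirror
g_cpucache_up in mm/slab.c of this era; cgroups_like_init() and the
printf tracing are hypothetical stand-ins:

```c
#include <stdio.h>

/* States mirror the g_cpucache_up enum in mm/slab.c of this era. */
enum { NONE, PARTIAL_AC, PARTIAL_L3, EARLY, LATE, FULL } g_cpucache_up = NONE;

/* After this patch, kmem_cache_init_late() stops at LATE... */
static void kmem_cache_init_late(void)
{
	g_cpucache_up = LATE;
	printf("kmem_cache_init_late: state=LATE\n");
}

/* ...leaving a window in which later subsystems (e.g. cgroups)
 * can initialize and still observe that slab is not yet FULL. */
static void cgroups_like_init(void)
{
	printf("later subsystem init: slab FULL? %s\n",
	       g_cpucache_up == FULL ? "yes" : "no");
}

/* The FULL transition now happens in the cpucache initcall,
 * after the reap timers have been started. */
static int cpucache_init(void)
{
	g_cpucache_up = FULL;
	printf("cpucache_init (initcall): state=FULL\n");
	return 0;
}

int main(void)
{
	kmem_cache_init_late();	/* early, from start_kernel() */
	cgroups_like_init();	/* the rest of the kernel comes up */
	cpucache_init();	/* __initcall()s run last */
	return 0;
}
```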
To make this behavior more consistent, this patch only transitions
up to the LATE state in kmem_cache_init_late(). In my analysis,
setup_cpu_cache() should be happy to test for >= LATE instead of
== FULL. The change has also passed the tests I've run.
We then mark the FULL state only after the reap timers are in place,
meaning that no further setup is expected.
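The `>=` test is safe because the states are declared in increasing
order, so one comparison covers both LATE and FULL. A small sketch of
the relaxed check (wants_full_cpucache() is a hypothetical stand-in
for the test inside setup_cpu_cache()):

```c
#include <assert.h>

enum slab_state { NONE, PARTIAL_AC, PARTIAL_L3, EARLY, LATE, FULL };

/* Hypothetical stand-in for the test in setup_cpu_cache(): a cache
 * created between kmem_cache_init_late() and the cpucache initcall
 * sees state == LATE, which the old "== FULL" test would miss. */
static int wants_full_cpucache(enum slab_state s)
{
	return s >= LATE;	/* the patched comparison */
}

int main(void)
{
	assert(!wants_full_cpucache(EARLY));
	assert(wants_full_cpucache(LATE));	/* the newly covered window */
	assert(wants_full_cpucache(FULL));
	return 0;
}
```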
Signed-off-by: Glauber Costa <glommer@parallels.com>
Acked-by: Christoph Lameter <cl@linux.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
Diffstat (limited to 'mm')
-rw-r--r-- | mm/slab.c | 8
1 file changed, 4 insertions, 4 deletions
```diff
diff --git a/mm/slab.c b/mm/slab.c
index 8b7cb802a754..105f188d14a3 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1668,9 +1668,6 @@ void __init kmem_cache_init_late(void)
 			BUG();
 	mutex_unlock(&cache_chain_mutex);
 
-	/* Done! */
-	g_cpucache_up = FULL;
-
 	/*
 	 * Register a cpu startup notifier callback that initializes
 	 * cpu_cache_get for all new cpus
@@ -1700,6 +1697,9 @@ static int __init cpucache_init(void)
 	 */
 	for_each_online_cpu(cpu)
 		start_cpu_timer(cpu);
+
+	/* Done! */
+	g_cpucache_up = FULL;
 	return 0;
 }
 __initcall(cpucache_init);
@@ -2167,7 +2167,7 @@ static size_t calculate_slab_order(struct kmem_cache *cachep,
 
 static int __init_refok setup_cpu_cache(struct kmem_cache *cachep, gfp_t gfp)
 {
-	if (g_cpucache_up == FULL)
+	if (g_cpucache_up >= LATE)
 		return enable_cpucache(cachep, gfp);
 
 	if (g_cpucache_up == NONE) {
```