author     Shaohua Li <shaohua.li@intel.com>  2011-11-11 07:54:14 +0100
committer  Pekka Enberg <penberg@kernel.org>  2011-11-27 21:08:15 +0100
commit     4c493a5a5c0bab6c434af2723328edd79c49aa0c (patch)
tree       184c48e7c1759127de931d903bdbbdcc786acac6 /mm/slub.c
parent     Merge branch 'slab/urgent' into slab/next (diff)
slub: add missed accounting
With the per-cpu partial list, a slab is added to the partial list first
and then moved to the node list. The add/remove_partial accounting in the
__slab_free() code path is almost deprecated (except for slub debug), but
we forgot to account add/remove_partial when moving per-cpu partial pages
to the node list, so the statistics for such events were always 0. Add the
corresponding accounting.

This is against the patch "slub: use correct parameter to add a page to
partial list tail".

Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
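The stat() calls added below are SLUB's per-cpu event counters, compiled in
only with CONFIG_SLUB_STATS. The following is a minimal userspace sketch of
that pattern, showing why a missed accounting call reads back as 0; the
names stat_inc, stat_sum, and the NR_CPUS value are illustrative assumptions
for this sketch, not kernel API.

/*
 * Userspace analogue of SLUB's per-cpu statistics: each CPU bumps
 * its own slot, and a reader sums all slots. An event that is never
 * accounted (as add/remove_partial was on this path) always sums
 * to 0, which is exactly the symptom the patch fixes.
 */
#include <stdio.h>

enum stat_item { FREE_ADD_PARTIAL, FREE_REMOVE_PARTIAL, NR_STAT_ITEMS };

#define NR_CPUS 4	/* illustrative; the kernel uses percpu data */

static unsigned long cpu_stats[NR_CPUS][NR_STAT_ITEMS];

/* Analogue of stat(s, si): bump the current CPU's counter. */
static void stat_inc(int cpu, enum stat_item si)
{
	cpu_stats[cpu][si]++;
}

/* Analogue of a stats read-out: sum the counter across all CPUs. */
static unsigned long stat_sum(enum stat_item si)
{
	unsigned long sum = 0;

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		sum += cpu_stats[cpu][si];
	return sum;
}

int main(void)
{
	stat_inc(0, FREE_ADD_PARTIAL);
	stat_inc(2, FREE_ADD_PARTIAL);
	stat_inc(1, FREE_REMOVE_PARTIAL);

	printf("free_add_partial:    %lu\n", stat_sum(FREE_ADD_PARTIAL));
	printf("free_remove_partial: %lu\n", stat_sum(FREE_REMOVE_PARTIAL));
	return 0;
}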
Diffstat (limited to 'mm/slub.c')
-rw-r--r--  mm/slub.c  7
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index c3138233a6e8..108ed03fb422 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1901,11 +1901,14 @@ static void unfreeze_partials(struct kmem_cache *s)
 		}
 
 		if (l != m) {
-			if (l == M_PARTIAL)
+			if (l == M_PARTIAL) {
 				remove_partial(n, page);
-			else
+				stat(s, FREE_REMOVE_PARTIAL);
+			} else {
 				add_partial(n, page,
 					DEACTIVATE_TO_TAIL);
+				stat(s, FREE_ADD_PARTIAL);
+			}
 
 			l = m;
 		}
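Usage note (describing the surrounding interface, not part of the patch):
with CONFIG_SLUB_STATS enabled, SLUB exposes these event counters per cache
in sysfs, so the effect of this fix should be visible as non-zero values in
files such as /sys/kernel/slab/<cache>/free_add_partial and
/sys/kernel/slab/<cache>/free_remove_partial, which previously stayed at 0
on this path.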