path: root/mm/slub.c
author      Joonsoo Kim       2015-02-10 23:09:35 +0100
committer   Linus Torvalds    2015-02-10 23:30:30 +0100
commit      ccaafd7fd039aebc9359a9799f8558b01f1c2adc (patch)
tree        c4a32ede5bb661489da8846cfe947bcb251f6c11 /mm/slub.c
parent      mm/slub: optimize alloc/free fastpath by removing preemption on/off (diff)
download    kernel-qcow2-linux-ccaafd7fd039aebc9359a9799f8558b01f1c2adc.tar.gz
            kernel-qcow2-linux-ccaafd7fd039aebc9359a9799f8558b01f1c2adc.tar.xz
            kernel-qcow2-linux-ccaafd7fd039aebc9359a9799f8558b01f1c2adc.zip
mm: don't use compound_head() in virt_to_head_page()
compound_head() is implemented with the assumption that there may be a race condition when checking the tail flag. This assumption only holds when we access an arbitrarily positioned struct page.

The situation in which virt_to_head_page() is called is a different case: we call virt_to_head_page() only within the range of allocated pages, so there is no race condition on the tail flag. In this case, we don't need to handle the race and can reduce the overhead slightly.

This patch implements compound_head_fast(), which is the same as compound_head() except that it omits the tail-flag race handling. virt_to_head_page() then uses this optimized function to improve performance.

I saw a 1.8% win in a fast-path loop over kmem_cache_alloc()/kmem_cache_free() (14.063 ns -> 13.810 ns) when the target object is on a tail page.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
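For context, this is roughly what the two helpers look like in include/linux/mm.h of that era (pre-4.0, when tail pages carried a first_page back-pointer). It is a sketch of the idea, not the verbatim diff, and the comments paraphrase the rationale from the changelog above:

	/*
	 * Safe for an arbitrary struct page: a concurrent THP split can turn
	 * a tail page back into a head page between the flag test and the
	 * first_page load, so re-check after a read barrier.
	 */
	static inline struct page *compound_head(struct page *page)
	{
		if (unlikely(PageTail(page))) {
			struct page *head = page->first_page;

			smp_rmb();
			if (likely(PageTail(page)))
				return head;
		}
		return page;
	}

	/*
	 * What the patch adds: callers of virt_to_head_page() stay within
	 * the range of allocated pages, so the page cannot be split
	 * underneath them and the barrier plus re-check can be dropped.
	 */
	static inline struct page *compound_head_fast(struct page *page)
	{
		if (unlikely(PageTail(page)))
			return page->first_page;
		return page;
	}

	static inline struct page *virt_to_head_page(const void *x)
	{
		struct page *page = virt_to_page(x);

		return compound_head_fast(page);
	}

The ~0.25 ns/op saving reported above comes from skipping the smp_rmb() and the second PageTail() test on every tail-page lookup in the kmem_cache fast path.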
Diffstat (limited to 'mm/slub.c')
0 files changed, 0 insertions, 0 deletions