From 3a79d52aa3c63c939f5a1f86e80e634f84e987c4 Mon Sep 17 00:00:00 2001
From: Waiman Long
Date: Wed, 6 Aug 2014 16:05:38 -0700
Subject: mm, thp: replace smp_mb after atomic_add by smp_mb__after_atomic

On some architectures, such as x86, atomic_add() is a full memory
barrier.  In that case, an additional smp_mb() is just a waste of time.
This patch replaces that smp_mb() with smp_mb__after_atomic(), which
avoids the redundant memory barrier on such architectures.

With a 3.16-rc1 based kernel, this patch reduced the execution time of
breaking 1000 transparent huge pages from 38,245us to 30,964us, a
reduction of 19%, which is quite sizeable.  It also reduces the %cpu
time of the __split_huge_page_refcount() function in the perf profile
from 2.18% to 1.15%.

Signed-off-by: Waiman Long
Acked-by: Kirill A. Shutemov
Cc: Andrea Arcangeli
Cc: Mel Gorman
Cc: Rik van Riel
Cc: Scott J Norton
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 mm/huge_memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2161490526f0..4b95ff4120f5 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1681,7 +1681,7 @@ static void __split_huge_page_refcount(struct page *page,
 			   &page_tail->_count);
 
 		/* after clearing PageTail the gup refcount can be released */
-		smp_mb();
+		smp_mb__after_atomic();
 
 		/*
 		 * retain hwpoison flag of the poisoned tail page:
--
cgit v1.2.3-55-g7522
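
For illustration, a minimal kernel-style sketch of the pattern this patch
applies.  This is hypothetical code, not taken from the commit; the names
pending, ready and publish_work() are invented.  The point is that
smp_mb__after_atomic() orders a preceding atomic RMW operation such as
atomic_add() against later accesses, and compiles to a no-op on
architectures like x86 where the atomic op already acts as a full
barrier, whereas a plain smp_mb() always emits one.

	#include <linux/atomic.h>
	#include <linux/compiler.h>

	static atomic_t pending;
	static int ready;

	static void publish_work(void)
	{
		/* Atomic RMW op; on x86 this already implies a full barrier. */
		atomic_add(1, &pending);

		/*
		 * Order the atomic_add() above against the store to 'ready'
		 * below.  smp_mb__after_atomic() is a no-op on architectures
		 * whose atomic RMW ops are full barriers (e.g. x86), while a
		 * plain smp_mb() here would unconditionally emit a barrier.
		 */
		smp_mb__after_atomic();

		WRITE_ONCE(ready, 1);
	}

In __split_huge_page_refcount() the preceding atomic RMW op is the
atomic_add() on &page_tail->_count visible in the hunk above, which is
why the cheaper barrier variant is safe there.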