From d4fc5069a39405d67937b25c0dc9249a9b6c50f1 Mon Sep 17 00:00:00 2001
From: Matthew Wilcox
Date: Thu, 7 Jun 2018 17:08:26 -0700
Subject: mm: switch s_mem and slab_cache in struct page

This will allow us to store slub's counters in the same bits as slab's
s_mem.  slub now needs to set page->mapping to NULL as it frees the page,
just like slab does.

Link: http://lkml.kernel.org/r/20180518194519.3820-5-willy@infradead.org
Signed-off-by: Matthew Wilcox
Acked-by: Christoph Lameter
Acked-by: Vlastimil Babka
Cc: Dave Hansen
Cc: Jérôme Glisse
Cc: "Kirill A . Shutemov"
Cc: Lai Jiangshan
Cc: Martin Schwidefsky
Cc: Pekka Enberg
Cc: Randy Dunlap
Cc: Andrey Ryabinin
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 mm/slub.c | 1 +
 1 file changed, 1 insertion(+)

(limited to 'mm/slub.c')

diff --git a/mm/slub.c b/mm/slub.c
index 48f75872c356..0170ea8a97fe 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1695,6 +1695,7 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
 	__ClearPageSlab(page);
 	page_mapcount_reset(page);
+	page->mapping = NULL;
 	if (current->reclaim_state)
 		current->reclaim_state->reclaimed_slab += pages;
 	memcg_uncharge_slab(page, order, s);
--
cgit v1.2.3-55-g7522
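
Not part of the patch: below is a minimal, self-contained sketch of the idea the commit message describes. It is not the kernel's real struct page; the names fake_page, fake_kmem_cache and fake_free_slab are hypothetical. The point it illustrates, under that assumption, is that once slab_cache is moved into the word that also holds page->mapping (freeing the old s_mem slot for slub's counters), slub must clear page->mapping when it frees a slab page, just as slab already did.

/*
 * Illustrative sketch only -- not the kernel's struct page.
 * After the field swap, slab_cache occupies the word that doubles as
 * page->mapping, so the free path must clear it.
 */
#include <stddef.h>
#include <stdio.h>

struct fake_kmem_cache { const char *name; };

struct fake_page {
	unsigned long flags;
	union {					/* first union word */
		void *mapping;			/* pagecache owner */
		struct fake_kmem_cache *slab_cache;	/* slab/slub owner cache */
	};
	union {					/* second union word */
		void *s_mem;			/* slab: address of first object */
		unsigned int counters;		/* slub: inuse/objects/frozen bits */
	};
};

/* Mirrors the one-line change: clear the aliased word on free. */
static void fake_free_slab(struct fake_page *page)
{
	page->mapping = NULL;	/* otherwise a stale cache pointer lingers in mapping */
}

int main(void)
{
	struct fake_kmem_cache cache = { "demo" };
	struct fake_page page = { 0 };

	page.slab_cache = &cache;	/* allocator records the owning cache ... */
	fake_free_slab(&page);		/* ... and must NULL mapping when the page is freed */
	printf("mapping after free: %p\n", page.mapping);
	return 0;
}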