path: root/mm/page_alloc.c
author    Andy Lutomirski    2016-07-29 00:48:14 +0200
committer Linus Torvalds    2016-07-29 01:07:41 +0200
commit    d30dd8be06a5ae640766b20ea9ae288832bd12ac (patch)
tree      c82789cc86b558aefa3a2eec522f0c63aadc457c /mm/page_alloc.c
parent    mm: cleanup ifdef guards for vmem_altmap (diff)
mm: track NR_KERNEL_STACK in KiB instead of number of stacks
Currently, NR_KERNEL_STACK tracks the number of kernel stacks in a zone. This only makes sense if each kernel stack exists entirely in one zone, and allowing vmapped stacks could break this assumption.

Since frv has THREAD_SIZE < PAGE_SIZE, we need to track kernel stack allocations in a unit that divides both THREAD_SIZE and PAGE_SIZE on all architectures. Keep it simple and use KiB.

Link: http://lkml.kernel.org/r/083c71e642c5fa5f1b6898902e1b2db7b48940d4.1468523549.git.luto@kernel.org
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Vladimir Davydov <vdavydov@virtuozzo.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Reviewed-by: Vladimir Davydov <vdavydov@virtuozzo.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
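The hunk below only converts the reporting side in mm/page_alloc.c. For context, the counter has to be charged in KiB on the allocation side as well; a minimal sketch of that counterpart (assuming it lives in an account_kernel_stack() helper in kernel/fork.c, which this diff does not show) could look like:

    /* Sketch only: the KiB-based accounting assumed by this change. */
    #include <linux/mm.h>
    #include <linux/vmstat.h>

    static void account_kernel_stack(unsigned long *stack, int account)
    {
            struct zone *zone = page_zone(virt_to_page(stack));

            /*
             * Charge the stack in KiB rather than in whole stacks, so the
             * unit divides both THREAD_SIZE and PAGE_SIZE (frv has
             * THREAD_SIZE < PAGE_SIZE).  'account' is +1 on allocation
             * and -1 on free.
             */
            mod_zone_page_state(zone, NR_KERNEL_STACK_KB,
                                THREAD_SIZE / 1024 * account);
    }

With the counter already in KiB, show_free_areas() can print zone_page_state() directly instead of multiplying a stack count by THREAD_SIZE / 1024, which is exactly what the hunk below does.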
Diffstat (limited to 'mm/page_alloc.c')
-rw-r--r--  mm/page_alloc.c  3
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index dfdb608f7b3d..c281125b2349 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4359,8 +4359,7 @@ void show_free_areas(unsigned int filter)
K(zone_page_state(zone, NR_MLOCK)),
K(zone_page_state(zone, NR_SLAB_RECLAIMABLE)),
K(zone_page_state(zone, NR_SLAB_UNRECLAIMABLE)),
- zone_page_state(zone, NR_KERNEL_STACK) *
- THREAD_SIZE / 1024,
+ zone_page_state(zone, NR_KERNEL_STACK_KB),
K(zone_page_state(zone, NR_PAGETABLE)),
K(zone_page_state(zone, NR_BOUNCE)),
K(free_pcp),