path: root/mm/slab_common.c
author	Vladimir Davydov	2014-12-11 00:44:16 +0100
committer	Linus Torvalds	2014-12-11 02:41:07 +0100
commit	4ef461e8f4dd13a2e64c6c8f00c420d62294e2d4 (patch)
tree	18137acb6cceb84855c370abb5555c830966bd55	/mm/slab_common.c
parent	mm, hugetlb: correct bit shift in hstate_sizelog() (diff)
memcg: remove mem_cgroup_reclaimable check from soft reclaim
mem_cgroup_reclaimable() checks whether a cgroup has reclaimable pages on *any* NUMA node. However, the only place where it's called is mem_cgroup_soft_reclaim(), which tries to reclaim memory from a *specific* zone. So the way it is used is incorrect: it will return true even if the cgroup doesn't have pages on the zone we're scanning.

I think we can get rid of this check completely, because mem_cgroup_shrink_node_zone(), which is called by mem_cgroup_soft_reclaim() if mem_cgroup_reclaimable() returns true, is equivalent to shrink_lruvec(), which exits almost immediately if the lruvec passed to it is empty. So there's no need to optimize anything here. Besides, we don't have such a check in the general scan path (shrink_zone) either.

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
Acked-by: Michal Hocko <mhocko@suse.cz>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
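To illustrate the reasoning above, here is a minimal, self-contained C sketch of the pattern the commit relies on. It is not the real memcontrol.c/vmscan.c code; all names (toy_lruvec, toy_shrink_lruvec, toy_soft_reclaim_zone) are hypothetical stand-ins. The point it models is that the per-zone scanner already bails out almost for free when the target lruvec is empty, so an up-front "reclaimable on *any* node?" pre-check buys nothing.

/*
 * Sketch only: hypothetical stand-ins for shrink_lruvec() and the
 * soft-reclaim path, not the actual kernel implementation.
 */
#include <stdio.h>

struct toy_lruvec {
	unsigned long nr_pages;	/* pages on this zone's LRU lists */
};

/* Stand-in for shrink_lruvec(): exits almost immediately if empty. */
static unsigned long toy_shrink_lruvec(struct toy_lruvec *lruvec)
{
	if (lruvec->nr_pages == 0)
		return 0;	/* nothing to scan, cheap exit */

	/* ... scan and reclaim pages from this lruvec ... */
	unsigned long reclaimed = lruvec->nr_pages / 2;
	lruvec->nr_pages -= reclaimed;
	return reclaimed;
}

/*
 * Stand-in for the soft-reclaim path after the patch: no
 * "does the cgroup have pages on *any* node?" pre-check, just scan
 * the target zone's lruvec and let the empty case fall out naturally.
 */
static unsigned long toy_soft_reclaim_zone(struct toy_lruvec *zone_lruvec)
{
	return toy_shrink_lruvec(zone_lruvec);
}

int main(void)
{
	struct toy_lruvec empty = { .nr_pages = 0 };
	struct toy_lruvec busy  = { .nr_pages = 128 };

	printf("empty zone: reclaimed %lu\n", toy_soft_reclaim_zone(&empty));
	printf("busy zone:  reclaimed %lu\n", toy_soft_reclaim_zone(&busy));
	return 0;
}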
Diffstat (limited to 'mm/slab_common.c')
0 files changed, 0 insertions, 0 deletions