path: root/mm/kmemleak.c
author    Vlastimil Babka 2017-07-07 00:40:13 +0200
committer Linus Torvalds 2017-07-07 01:24:34 +0200
commit    e0dd7d53a6d2788f9616e6d7e3e725f8f84e4636 (patch)
tree      fb7a27bf39582c5c95141e3a78409bbe62101f45 /mm/kmemleak.c
parent    mm, cpuset: always use seqlock when changing task's nodemask (diff)
mm, mempolicy: don't check cpuset seqlock where it doesn't matter
Two wrappers of __alloc_pages_nodemask() are checking task->mems_allowed_seq themselves to retry allocation that has raced with a cpuset update. This has been shown to be ineffective in preventing premature OOM's which can happen in __alloc_pages_slowpath() long before it returns back to the wrappers to detect the race at that level. Previous patches have made __alloc_pages_slowpath() more robust, so we can now simply remove the seqlock checking in the wrappers to prevent further wrong impression that it can actually help.

Link: http://lkml.kernel.org/r/20170517081140.30654-7-vbabka@suse.cz
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Dimitri Sivanich <sivanich@sgi.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Li Zefan <lizefan@huawei.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/kmemleak.c')
0 files changed, 0 insertions, 0 deletions