author		Kirill A. Shutemov	2016-01-16 01:53:49 +0100
committer	Linus Torvalds		2016-01-16 02:56:32 +0100
commit		e81c48024f43b4aabe1ec4709786fa1f96814717 (patch)
tree		1e65d154b4f0786782526d72da3c9b09a330aab8 /mm/memory.c
parent		mm: differentiate page_mapped() from page_mapcount() for compound pages (diff)
mm, numa: skip PTE-mapped THP on numa fault
We're going to have THP mapped with PTEs. It will confuse NUMA balancing. Let's skip them for now.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Tested-by: Sasha Levin <sasha.levin@oracle.com>
Tested-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Jerome Marchand <jmarchan@redhat.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Steve Capper <steve.capper@linaro.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/memory.c')
-rw-r--r--	mm/memory.c	6
1 file changed, 6 insertions(+), 0 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 9b0dbc2f0b9a..9d5b40892d4d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3193,6 +3193,12 @@ static int do_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		return 0;
 	}
 
+	/* TODO: handle PTE-mapped THP */
+	if (PageCompound(page)) {
+		pte_unmap_unlock(ptep, ptl);
+		return 0;
+	}
+
 	/*
 	 * Avoid grouping on RO pages in general. RO pages shouldn't hurt as
 	 * much anyway since they can be in shared cache state. This misses
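
For context on where this check lands: in do_numa_page() at the time of this commit, the PTE lock has already been taken and the faulting PTE resolved to a struct page by the point the new hunk runs, so the early exit has to drop that lock itself, exactly as the other bail-out paths do. The following is a minimal, hedged sketch of that control flow, assuming a kernel build environment; the function name numa_fault_skip_sketch and its argument list are invented for illustration and are not the upstream signature.

/*
 * Hedged sketch, not the upstream do_numa_page(): the helper name and
 * argument list are made up to keep the excerpt self-contained.
 */
#include <linux/mm.h>		/* struct page, PageCompound(), pte_unmap_unlock() */
#include <linux/spinlock.h>	/* spinlock_t */

static int numa_fault_skip_sketch(struct page *page, pte_t *ptep,
				  spinlock_t *ptl)
{
	/*
	 * A THP mapped with PTEs is still a compound page, so each small
	 * piece of it reports PageCompound().  The NUMA-balancing
	 * heuristics past this point assume ordinary small pages, so
	 * bail out before applying them to one slice of a huge page.
	 */
	if (PageCompound(page)) {
		/* The PTE lock is held here; release it on the early exit. */
		pte_unmap_unlock(ptep, ptl);
		return 0;
	}

	/* ... ordinary NUMA hinting fault handling would continue here ... */
	pte_unmap_unlock(ptep, ptl);
	return 0;
}

The design choice to note is that the new path returns 0 rather than a VM_FAULT_* code: the hinting fault is simply dismissed (no grouping, no migration) until PTE-mapped THP gets proper handling, which is what the TODO comment in the hunk signals.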