path: root/mm
Commit message | Author | Age | Files | Lines
* mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUB | Alexander Potapenko | 2016-07-29 | 6 | -53/+83
* mm, kasan: account for object redzone in SLUB's nearest_obj() | Alexander Potapenko | 2016-07-29 | 1 | -1/+1
* mm: fix use-after-free if memory allocation failed in vma_adjust() | Kirill A. Shutemov | 2016-07-29 | 1 | -5/+15
* zsmalloc: Delete an unnecessary check before the function call "iput" | Markus Elfring | 2016-07-29 | 1 | -2/+1
* mm/memblock.c: fix index adjustment error in __next_mem_range_rev() | zijun_hu | 2016-07-29 | 1 | -1/+1
* mem-hotplug: alloc new page from a nearest neighbor node when mem-offline | Xishi Qiu | 2016-07-29 | 1 | -5/+33
* mm: add cond_resched() to generic_swapfile_activate() | Mikulas Patocka | 2016-07-29 | 1 | -0/+2
* Revert "mm, mempool: only set __GFP_NOMEMALLOC if there are free elements" | Michal Hocko | 2016-07-29 | 1 | -15/+3
* mm, compaction: don't isolate PageWriteback pages in MIGRATE_SYNC_LIGHT mode | Hugh Dickins | 2016-07-29 | 1 | -1/+1
* mm: hwpoison: remove incorrect comments | Naoya Horiguchi | 2016-07-29 | 2 | -3/+0
* make __section_nr() more efficient | Zhou Chengming | 2016-07-29 | 1 | -5/+7
* kmemleak: don't hang if user disables scanning early | Vegard Nossum | 2016-07-29 | 1 | -1/+3
* mm/memblock.c: add new infrastructure to address the mem limit issue | Dennis Chen | 2016-07-29 | 1 | -5/+52
* mm: fix memcg stack accounting for sub-page stacks | Andy Lutomirski | 2016-07-29 | 1 | -1/+1
* mm: track NR_KERNEL_STACK in KiB instead of number of stacks | Andy Lutomirski | 2016-07-29 | 1 | -2/+1
* mm: CONFIG_ZONE_DEVICE stop depending on CONFIG_EXPERT | Dan Williams | 2016-07-29 | 1 | -1/+1
* memblock: include <asm/sections.h> instead of <asm-generic/sections.h> | Christoph Hellwig | 2016-07-29 | 1 | -1/+1
* mm, THP: clean up return value of madvise_free_huge_pmd | Huang Ying | 2016-07-29 | 1 | -7/+8
* mm/zsmalloc: use helper to clear page->flags bit | Ganesh Mahendran | 2016-07-29 | 1 | -2/+2
* mm/zsmalloc: add __init,__exit attribute | Ganesh Mahendran | 2016-07-29 | 1 | -1/+1
* mm/zsmalloc: keep comments consistent with code | Ganesh Mahendran | 2016-07-29 | 1 | -4/+3
* mm/zsmalloc: avoid calculate max objects of zspage twice | Ganesh Mahendran | 2016-07-29 | 1 | -16/+10
* mm/zsmalloc: use class->objs_per_zspage to get num of max objects | Ganesh Mahendran | 2016-07-29 | 1 | -11/+7
* mm/zsmalloc: take obj index back from find_alloced_obj | Ganesh Mahendran | 2016-07-29 | 1 | -2/+6
* mm/zsmalloc: use obj_index to keep consistent with others | Ganesh Mahendran | 2016-07-29 | 1 | -7/+7
* mm: bail out in shrink_inactive_list() | Minchan Kim | 2016-07-29 | 1 | -0/+27
* mm, vmscan: account for skipped pages as a partial scan | Mel Gorman | 2016-07-29 | 1 | -2/+18
* mm: consider whether to decivate based on eligible zones inactive ratio | Mel Gorman | 2016-07-29 | 1 | -5/+29
* mm: remove reclaim and compaction retry approximations | Mel Gorman | 2016-07-29 | 6 | -58/+37
* mm, vmscan: remove highmem_file_pages | Mel Gorman | 2016-07-29 | 1 | -8/+4
* mm: add per-zone lru list stat | Minchan Kim | 2016-07-29 | 3 | -9/+15
* mm, vmscan: release/reacquire lru_lock on pgdat change | Mel Gorman | 2016-07-29 | 1 | -11/+10
* mm, vmscan: remove redundant check in shrink_zones() | Mel Gorman | 2016-07-29 | 1 | -3/+0
* mm, vmscan: Update all zone LRU sizes before updating memcg | Mel Gorman | 2016-07-29 | 2 | -11/+34
* mm: show node_pages_scanned per node, not zone | Minchan Kim | 2016-07-29 | 1 | -3/+3
* mm, pagevec: release/reacquire lru_lock on pgdat change | Mel Gorman | 2016-07-29 | 1 | -10/+10
* mm, page_alloc: fix dirtyable highmem calculation | Minchan Kim | 2016-07-29 | 1 | -6/+10
* mm, vmstat: remove zone and node double accounting by approximating retries | Mel Gorman | 2016-07-29 | 6 | -42/+67
* mm, vmstat: print node-based stats in zoneinfo file | Mel Gorman | 2016-07-29 | 1 | -0/+24
* mm: vmstat: account per-zone stalls and pages skipped during reclaim | Mel Gorman | 2016-07-29 | 2 | -3/+15
* mm: vmstat: replace __count_zone_vm_events with a zone id equivalent | Mel Gorman | 2016-07-29 | 1 | -1/+1
* mm: page_alloc: cache the last node whose dirty limit is reached | Mel Gorman | 2016-07-29 | 1 | -2/+11
* mm, page_alloc: remove fair zone allocation policy | Mel Gorman | 2016-07-29 | 3 | -78/+2
* mm, vmscan: add classzone information to tracepoints | Mel Gorman | 2016-07-29 | 1 | -5/+9
* mm, vmscan: Have kswapd reclaim from all zones if reclaiming and buffer_heads... | Mel Gorman | 2016-07-29 | 1 | -8/+14
* mm, vmscan: avoid passing in `remaining' unnecessarily to prepare_kswapd_sleep() | Mel Gorman | 2016-07-29 | 1 | -8/+4
* mm, vmscan: avoid passing in classzone_idx unnecessarily to compaction_ready | Mel Gorman | 2016-07-29 | 1 | -20/+7
* mm, vmscan: avoid passing in classzone_idx unnecessarily to shrink_node | Mel Gorman | 2016-07-29 | 1 | -11/+9
* mm: convert zone_reclaim to node_reclaim | Mel Gorman | 2016-07-29 | 4 | -53/+60
* mm, page_alloc: wake kswapd based on the highest eligible zone | Mel Gorman | 2016-07-29 | 1 | -1/+1