path: root/mm/vmscan.c
Commit message | Author | Age | Files | Lines
* mm: vmscan: check if mem cgroup is disabled or not before calling memcg slab ... | Yang Shi | 2019-08-03 | 1 | -1/+8
* mm/vmscan.c: add checks for incorrect handling of current->reclaim_state | Andrew Morton | 2019-07-17 | 1 | -13/+24
* mm/vmscan.c: calculate reclaimed slab caches in all reclaim paths | Yafang Shao | 2019-07-17 | 1 | -0/+7
* mm/vmscan.c: add a new member reclaim_state in struct shrink_control | Yafang Shao | 2019-07-17 | 1 | -12/+8
* mm: vmscan: correct some vmscan counters for THP swapout | Yang Shi | 2019-07-12 | 1 | -14/+49
* mm: vmscan: remove double slab pressure by inc'ing sc->nr_scanned | Yang Shi | 2019-07-12 | 1 | -5/+0
* mm: vmscan: scan anonymous pages on file refaults | Kuo-Hsin Yang | 2019-07-12 | 1 | -3/+3
* mm/vmscan.c: prevent useless kswapd loops | Shakeel Butt | 2019-07-05 | 1 | -12/+15
* mm/vmscan.c: fix trying to reclaim unevictable LRU page | Minchan Kim | 2019-06-14 | 1 | -1/+1
* mm/vmscan.c: fix recent_rotated history | Kirill Tkhai | 2019-06-14 | 1 | -2/+2
* mm: memcontrol: make cgroup stats and events query API explicitly local | Johannes Weiner | 2019-05-15 | 1 | -3/+3
* mm/vmscan.c: don't disable irq again when count pgrefill for memcg | Yafang Shao | 2019-05-14 | 1 | -1/+1
* mm/vmscan.c: simplify shrink_inactive_list() | Kirill Tkhai | 2019-05-14 | 1 | -22/+9
* mm/vmscan: drop may_writepage and classzone_idx from direct reclaim begin tem... | Yafang Shao | 2019-05-14 | 1 | -11/+3
* mm: memcontrol: replace zone summing with lruvec_page_state() | Johannes Weiner | 2019-05-14 | 1 | -1/+1
* mm/vmscan: add tracepoints for node reclaim | Yafang Shao | 2019-05-14 | 1 | -0/+6
* mm: generalize putback scan functions | Kirill Tkhai | 2019-05-14 | 1 | -82/+40
* mm: remove pages_to_free argument of move_active_pages_to_lru() | Kirill Tkhai | 2019-05-14 | 1 | -6/+13
* mm: move nr_deactivate accounting to shrink_active_list() | Kirill Tkhai | 2019-05-14 | 1 | -6/+4
* mm: move recent_rotated pages calculation to shrink_inactive_list() | Kirill Tkhai | 2019-05-14 | 1 | -8/+7
* Merge tag 'printk-for-5.2' of git://git.kernel.org/pub/scm/linux/kernel/git/p... | Linus Torvalds | 2019-05-07 | 1 | -1/+1
|\
| * treewide: Switch printk users from %pf and %pF to %ps and %pS, respectively | Sakari Ailus | 2019-04-09 | 1 | -1/+1
* | mm: fix inactive list balancing between NUMA nodes and cgroups | Johannes Weiner | 2019-04-19 | 1 | -20/+9
|/
* mm: remove zone_lru_lock() function, access ->lru_lock directly | Andrey Ryabinin | 2019-03-06 | 1 | -8/+8
* mm/workingset: remove unused @mapping argument in workingset_eviction() | Andrey Ryabinin | 2019-03-06 | 1 | -1/+1
* numa: make "nr_node_ids" unsigned int | Alexey Dobriyan | 2019-03-06 | 1 | -1/+1
* mm/vmscan.c: do not allocate duplicate stack variables in shrink_page_list() | Kirill Tkhai | 2019-03-06 | 1 | -30/+14
* mm: vmscan: do not iterate all mem cgroups for global direct reclaim | Yang Shi | 2019-03-06 | 1 | -4/+3
* mm/vmscan.c: remove 7th argument of isolate_lru_pages() | Kirill Tkhai | 2019-03-06 | 1 | -11/+4
* mm: fix some typos in mm directory | Wei Yang | 2019-03-06 | 1 | -1/+1
* Revert "mm: slowly shrink slabs with a relatively small number of objects" | Dave Chinner | 2019-02-13 | 1 | -10/+0
* mm: put_and_wait_on_page_locked() while page is migrated | Hugh Dickins | 2018-12-28 | 1 | -8/+2
* mm: reclaim small amounts of memory when an external fragmentation event occurs | Mel Gorman | 2018-12-28 | 1 | -9/+124
* Merge drm/drm-next into drm-intel-next-queued | Jani Nikula | 2018-11-20 | 1 | -14/+34
|\
| * Merge branch 'xarray' of git://git.infradead.org/users/willy/linux-dax | Linus Torvalds | 2018-10-28 | 1 | -5/+5
| |\
| | * mm: Convert is_page_cache_freeable to XArray | Matthew Wilcox | 2018-10-21 | 1 | -4/+4
| | * mm: Convert delete_from_swap_cache to XArray | Matthew Wilcox | 2018-10-21 | 1 | -1/+1
| * | mm: zero-seek shrinkers | Johannes Weiner | 2018-10-27 | 1 | -3/+12
| * | psi: pressure stall information for CPU, memory, and IO | Johannes Weiner | 2018-10-27 | 1 | -0/+9
| * | mm: workingset: tell cache transitions from workingset thrashing | Johannes Weiner | 2018-10-27 | 1 | -0/+1
| * | mm: don't miss the last page because of round-off error | Roman Gushchin | 2018-10-27 | 1 | -2/+4
| * | mm/vmscan.c: fix int overflow in callers of do_shrink_slab() | Kirill Tkhai | 2018-10-06 | 1 | -4/+3
| |/
* / mm, drm/i915: mark pinned shmemfs pages as unevictable | Kuo-Hsin Yang | 2018-11-07 | 1 | -11/+11
|/
* mm: slowly shrink slabs with a relatively small number of objects | Roman Gushchin | 2018-09-20 | 1 | -0/+11
* mm: fix page_freeze_refs and page_unfreeze_refs in comments | Jiang Biao | 2018-08-22 | 1 | -1/+1
* mm: check shrinker is memcg-aware in register_shrinker_prepared() | Kirill Tkhai | 2018-08-22 | 1 | -1/+2
* mm: use special value SHRINKER_REGISTERING instead of list_empty() check | Kirill Tkhai | 2018-08-18 | 1 | -22/+21
* mm/vmscan.c: move check for SHRINKER_NUMA_AWARE to do_shrink_slab() | Kirill Tkhai | 2018-08-18 | 1 | -3/+3
* mm/vmscan.c: clear shrinker bit if there are no objects related to memcg | Kirill Tkhai | 2018-08-18 | 1 | -2/+24
* mm: add SHRINK_EMPTY shrinker methods return value | Kirill Tkhai | 2018-08-18 | 1 | -3/+9