path: root/mm/slub.c
Commit message  (Author, Date, Files, Lines changed)
* mm/slub.c: run free_partial() outside of the kmem_cache_node->list_lock  (Chris Wilson, 2016-08-11, 1 file, -1/+5)
* Merge tag 'usercopy-v4.8' of git://git.kernel.org/pub/scm/linux/kernel/git/ke...  (Linus Torvalds, 2016-08-08, 1 file, -0/+40)
|\
| * mm: SLUB hardened usercopy support  (Kees Cook, 2016-07-26, 1 file, -0/+40)
* | slub: drop bogus inline for fixup_red_left()  (Geert Uytterhoeven, 2016-08-05, 1 file, -1/+1)
* | mm/kasan: get rid of ->state in struct kasan_alloc_meta  (Andrey Ryabinin, 2016-08-02, 1 file, -0/+1)
* | mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUB  (Alexander Potapenko, 2016-07-29, 1 file, -13/+44)
* | mm, kasan: account for object redzone in SLUB's nearest_obj()  (Alexander Potapenko, 2016-07-29, 1 file, -1/+1)
* | mm: charge/uncharge kmemcg from generic page allocator paths  (Vladimir Davydov, 2016-07-27, 1 file, -3/+3)
* | slab: do not panic on invalid gfp_mask  (Michal Hocko, 2016-07-27, 1 file, -2/+3)
* | slab: make GFP_SLAB_BUG_MASK information more human readable  (Michal Hocko, 2016-07-27, 1 file, -1/+2)
* | mm: SLUB freelist randomization  (Thomas Garnier, 2016-07-27, 1 file, -7/+126)
|/
* mm, kasan: don't call kasan_krealloc() from ksize().  (Alexander Potapenko, 2016-05-21, 1 file, -2/+3)
* mm: rename _count, field of the struct page, to _refcount  (Joonsoo Kim, 2016-05-20, 1 file, -2/+2)
* mm/slub.c: fix sysfs filename in comment  (Li Peng, 2016-05-20, 1 file, -5/+5)
* mm/slub.c: replace kick_all_cpus_sync() with synchronize_sched() in kmem_cach...  (Vladimir Davydov, 2016-05-20, 1 file, -1/+1)
* mm, kasan: add GFP flags to KASAN API  (Alexander Potapenko, 2016-03-26, 1 file, -7/+8)
* mm: coalesce split strings  (Joe Perches, 2016-03-17, 1 file, -10/+9)
* mm: thp: set THP defrag by default to madvise and add a stall-free defrag option  (Mel Gorman, 2016-03-17, 1 file, -1/+1)
* mm/slub: query dynamic DEBUG_PAGEALLOC setting  (Joonsoo Kim, 2016-03-17, 1 file, -4/+3)
* mm: memcontrol: report slab usage in cgroup2 memory.stat  (Vladimir Davydov, 2016-03-17, 1 file, -1/+2)
* mm, sl[au]b: print gfp_flags as strings in slab_out_of_memory()  (Vlastimil Babka, 2016-03-16, 1 file, -2/+2)
* mm/slub: support left redzone  (Joonsoo Kim, 2016-03-16, 1 file, -29/+71)
* slub: relax CMPXCHG consistency restrictions  (Laura Abbott, 2016-03-16, 1 file, -3/+9)
* slub: convert SLAB_DEBUG_FREE to SLAB_CONSISTENCY_CHECKS  (Laura Abbott, 2016-03-16, 1 file, -35/+59)
* slub: fix/clean free_debug_processing return paths  (Laura Abbott, 2016-03-16, 1 file, -11/+10)
* slub: drop lock at the end of free_debug_processing  (Laura Abbott, 2016-03-16, 1 file, -14/+11)
* mm: new API kfree_bulk() for SLAB+SLUB allocators  (Jesper Dangaard Brouer, 2016-03-16, 1 file, -3/+18)
* mm/slab: move SLUB alloc hooks to common mm/slab.h  (Jesper Dangaard Brouer, 2016-03-16, 1 file, -54/+0)
* slub: clean up code for kmem cgroup support to kmem_cache_free_bulk  (Jesper Dangaard Brouer, 2016-03-16, 1 file, -11/+11)
* mm: slab: free kmem_cache_node after destroy sysfs file  (Dmitry Safonov, 2016-02-19, 1 file, -21/+17)
* mm: memcontrol: move kmem accounting code to CONFIG_MEMCG  (Johannes Weiner, 2016-01-21, 1 file, -5/+5)
* page-flags: define PG_locked behavior on compound pages  (Kirill A. Shutemov, 2016-01-16, 1 file, -0/+2)
* slab: add SLAB_ACCOUNT flag  (Vladimir Davydov, 2016-01-15, 1 file, -0/+2)
* slab/slub: adjust kmem_cache_alloc_bulk API  (Jesper Dangaard Brouer, 2015-11-22, 1 file, -4/+4)
* slub: add missing kmem cgroup support to kmem_cache_free_bulk  (Jesper Dangaard Brouer, 2015-11-22, 1 file, -1/+5)
* slub: fix kmem cgroup bug in kmem_cache_alloc_bulk  (Jesper Dangaard Brouer, 2015-11-22, 1 file, -18/+22)
* slub: optimize bulk slowpath free by detached freelist  (Jesper Dangaard Brouer, 2015-11-22, 1 file, -30/+79)
* slub: support for bulk free with SLUB freelists  (Jesper Dangaard Brouer, 2015-11-22, 1 file, -18/+67)
* slub: mark the dangling ifdef #else of CONFIG_SLUB_DEBUG  (Jesper Dangaard Brouer, 2015-11-21, 1 file, -1/+1)
* slub: avoid irqoff/on in bulk allocation  (Christoph Lameter, 2015-11-21, 1 file, -13/+11)
* slub: create new ___slab_alloc function that can be called with irqs disabled  (Christoph Lameter, 2015-11-21, 1 file, -15/+29)
* slab, slub: use page->rcu_head instead of page->lru plus cast  (Kirill A. Shutemov, 2015-11-07, 1 file, -4/+1)
* mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep...  (Mel Gorman, 2015-11-07, 1 file, -5/+5)
* mm, slub, kasan: enable user tracking by default with KASAN=y  (Andrey Ryabinin, 2015-11-06, 1 file, -1/+3)
* memcg: unify slab and other kmem pages charging  (Vladimir Davydov, 2015-11-06, 1 file, -7/+5)
* mm/slub: calculate start order with reserved in consideration  (Wei Yang, 2015-11-06, 1 file, -5/+1)
* mm/slub: use get_order() instead of fls()  (Wei Yang, 2015-11-06, 1 file, -2/+1)
* mm/slub: correct the comment in calculate_order()  (Wei Yang, 2015-11-06, 1 file, -1/+1)
* mm: rename alloc_pages_exact_node() to __alloc_pages_node()  (Vlastimil Babka, 2015-09-09, 1 file, -1/+1)
* mm/slub: don't wait for high-order page allocation  (Joonsoo Kim, 2015-09-05, 1 file, -0/+2)