author    Mark Rutland 2016-01-25 12:44:57 +0100
committer Catalin Marinas 2016-02-16 16:10:44 +0100
commit    5227cfa71f9e8574373f4d0e9e754942d76cdf67 (patch)
tree      c2d38ccb7f75c7c74bd0f666444dd3ea644a02f6 /arch/arm64/mm/mmu.c
parent    arm64: mm: specialise pagetable allocators (diff)
arm64: mm: place empty_zero_page in bss
Currently the zero page is set up in paging_init, and thus we cannot use the zero page earlier. We use the zero page as a reserved TTBR value from which no TLB entries may be allocated (e.g. when uninstalling the idmap). To enable such usage earlier (as may be required for invasive changes to the kernel page tables), and to minimise the time that the idmap is active, we need to be able to use the zero page before paging_init.

This patch follows the example set by x86, by allocating the zero page at compile time, in .bss. This means that the zero page itself is available immediately upon entry to start_kernel (as we zero .bss before this), and also means that the zero page takes up no space in the raw Image binary. The associated struct page is allocated in bootmem_init, and remains unavailable until this time.

Outside of arch code, the only users of empty_zero_page assume that the empty_zero_page symbol refers to the zeroed memory itself, and that ZERO_PAGE(x) must be used to acquire the associated struct page, following the example of x86. This patch also brings arm64 inline with these assumptions.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
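As a rough sketch of the arrangement the commit message describes: the array declaration below is taken from this patch, while the ZERO_PAGE() definition and usage are assumptions modelled on the x86 convention the message cites, and would live in the arch pgtable header rather than in mmu.c.

/* The zero page is a page-sized, page-aligned array placed in .bss,
 * so it is zeroed and usable as soon as .bss has been cleared. */
unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;

/* Assumed accessor, following x86: callers outside arch code use
 * ZERO_PAGE(x) to obtain the struct page, which is only valid once
 * bootmem_init() has set up the memmap. */
#define ZERO_PAGE(vaddr)	virt_to_page(empty_zero_page)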
Diffstat (limited to 'arch/arm64/mm/mmu.c')
-rw-r--r--  arch/arm64/mm/mmu.c | 9
1 file changed, 1 insertion(+), 8 deletions(-)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index b25d5cbe4db1..cdbf055a325d 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -49,7 +49,7 @@ u64 idmap_t0sz = TCR_T0SZ(VA_BITS);
* Empty_zero_page is a special page that is used for zero-initialized data
* and COW.
*/
-struct page *empty_zero_page;
+unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
EXPORT_SYMBOL(empty_zero_page);
pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
@@ -459,18 +459,11 @@ void fixup_init(void)
*/
void __init paging_init(void)
{
- void *zero_page;
-
map_mem();
fixup_executable();
- /* allocate the zero page. */
- zero_page = early_pgtable_alloc();
-
bootmem_init();
- empty_zero_page = virt_to_page(zero_page);
-
/*
* TTBR0 is only used for the identity mapping at this stage. Make it
* point to zero page to avoid speculatively fetching new entries.
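For context on the comment just above (a hedged sketch, not part of this diff): the reserved TTBR0 value it refers to is assumed to be installed by the arm64 helper cpu_set_reserved_ttbr0(), which after this patch can simply point TTBR0_EL1 at the physical address of the .bss-resident empty_zero_page so that no valid translations can be fetched through TTBR0. Roughly:

/*
 * Illustrative sketch, assuming the helper in the arm64 mmu_context
 * header looks roughly like this at the time of the patch; exact
 * details may differ in the tree this commit applies to.
 */
static inline void cpu_set_reserved_ttbr0(void)
{
	unsigned long ttbr = virt_to_phys(empty_zero_page);

	/*
	 * Every entry in the zero page is invalid, so no TLB entries
	 * can be speculatively allocated via TTBR0 while it points here.
	 */
	asm("msr	ttbr0_el1, %0	// set TTBR0\n"
	    "isb"
	    :
	    : "r" (ttbr));
}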