path: root/arch/arm64/mm/mmu.c
...
* arm64: unmap idmap earlier (Mark Rutland, 2016-02-16; 1 file, -6/+0)

    During boot we leave the idmap in place until paging_init, as we previously had to wait for the zero page to become allocated and accessible. Now that we have a statically-allocated zero page, we can uninstall the idmap much earlier in the boot process, making it far easier to spot accidental use of physical addresses. This also brings the cold boot path in line with the secondary boot path.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Tested-by: Jeremy Linton <jeremy.linton@arm.com>
    Cc: Laura Abbott <labbott@fedoraproject.org>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: unify idmap removal (Mark Rutland, 2016-02-16; 1 file, -3/+1)

    We currently open-code the removal of the idmap and restoration of the current task's MMU state in a few places. Before introducing yet more copies of this sequence, unify these to call a new helper, cpu_uninstall_idmap.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Tested-by: Jeremy Linton <jeremy.linton@arm.com>
    Cc: Laura Abbott <labbott@fedoraproject.org>
    Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
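    The helper ends up in asm/mmu_context.h; roughly (a sketch reconstructed from the description, not quoted from the patch):

        static inline void cpu_uninstall_idmap(void)
        {
                struct mm_struct *mm = current->active_mm;

                cpu_set_reserved_ttbr0();       /* TTBR0 -> reserved (zero) page */
                local_flush_tlb_all();          /* drop stale idmap TLB entries */
                cpu_set_default_tcr_t0sz();

                if (mm != &init_mm)
                        cpu_switch_mm(mm->pgd, mm);     /* restore task's tables */
        }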
* arm64: mm: place empty_zero_page in bss (Mark Rutland, 2016-02-16; 1 file, -8/+1)

    Currently the zero page is set up in paging_init, and thus we cannot use the zero page earlier. We use the zero page as a reserved TTBR value from which no TLB entries may be allocated (e.g. when uninstalling the idmap). To enable such usage earlier (as may be required for invasive changes to the kernel page tables), and to minimise the time that the idmap is active, we need to be able to use the zero page before paging_init.

    This patch follows the example set by x86, by allocating the zero page at compile time, in .bss. This means that the zero page itself is available immediately upon entry to start_kernel (as we zero .bss before this), and also means that the zero page takes up no space in the raw Image binary. The associated struct page is allocated in bootmem_init, and remains unavailable until this time.

    Outside of arch code, the only users of empty_zero_page assume that the empty_zero_page symbol refers to the zeroed memory itself, and that ZERO_PAGE(x) must be used to acquire the associated struct page, following the example of x86. This patch also brings arm64 in line with these assumptions.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Tested-by: Jeremy Linton <jeremy.linton@arm.com>
    Cc: Laura Abbott <labbott@fedoraproject.org>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
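    In outline, the change replaces the runtime allocation with a static definition along these lines (approximate, not the literal diff):

        /* mm/mmu.c: the zero page now lives in .bss, zeroed before start_kernel */
        unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
        EXPORT_SYMBOL(empty_zero_page);

        /* asm/pgtable.h: the struct page is only valid once bootmem_init has run */
        #define ZERO_PAGE(vaddr)        virt_to_page(empty_zero_page)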
* arm64: mm: specialise pagetable allocators (Mark Rutland, 2016-02-16; 1 file, -25/+27)

    We pass a size parameter to early_alloc and late_alloc, but these are only ever used to allocate single pages. In late_alloc we always allocate a single page.

    Both allocators provide us with zeroed pages (such that all entries are invalid), but we have no barriers between allocating a page and adding that page to existing (live) tables. A concurrent page table walk may see stale data, leading to a number of issues.

    This patch specialises the two allocators for page tables. The size parameter is removed and the necessary dsb(ishst) is folded into each. To make it clear that the functions are intended for use for page table allocation, they are renamed to {early,late}_pgtable_alloc, with the related function pointer renamed to pgtable_alloc.

    As the dsb(ishst) is now in the allocator, the existing barrier for the zero page is redundant and thus is removed. The previously missing include of barrier.h is added.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Tested-by: Jeremy Linton <jeremy.linton@arm.com>
    Cc: Laura Abbott <labbott@fedoraproject.org>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
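    A minimal sketch of the early allocator this describes (assuming memblock_alloc() returns a physical address, as it did at the time):

        static void *__init early_pgtable_alloc(void)
        {
                void *ptr = __va(memblock_alloc(PAGE_SIZE, PAGE_SIZE));

                memset(ptr, 0, PAGE_SIZE);      /* all entries invalid */

                /*
                 * Publish the zeroed page before it can be hooked into a
                 * (possibly live) table, so a concurrent walker never sees
                 * stale entries.
                 */
                dsb(ishst);
                return ptr;
        }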
* Merge branch 'aarch64/efi' into aarch64/for-next/core (Will Deacon, 2015-12-15; 1 file, -0/+2)

    Merge in EFI memblock changes from Ard, which form the preparatory work for UEFI support on 32-bit ARM.
| * arm64: only consider memblocks with NOMAP cleared for linear mapping (Ard Biesheuvel, 2015-12-09; 1 file, -0/+2)

    Take the new memblock attribute MEMBLOCK_NOMAP into account when deciding whether a certain region is or should be covered by the kernel direct mapping.

    Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
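    In map_mem() this amounts to skipping NOMAP regions while walking memblock (sketch):

        for_each_memblock(memory, reg) {
                if (memblock_is_nomap(reg))
                        continue;       /* keep out of the linear mapping */

                __map_memblock(reg->base, reg->base + reg->size);
        }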
* | arm64: mm: ensure that the zero page is visible to the page table walker (Will Deacon, 2015-12-11; 1 file, -0/+3)

    In paging_init, we allocate the zero page, memset it to zero and then point TTBR0 to it in order to avoid speculative fetches through the identity mapping. In order to guarantee that the freshly zeroed page is indeed visible to the page table walker, we need to execute a dsb instruction prior to writing the TTBR.

    Cc: <stable@vger.kernel.org> # v3.14+, for older kernels need to drop the 'ishst'
    Signed-off-by: Will Deacon <will.deacon@arm.com>
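    The fix is a single barrier between zeroing the page and installing it (sketch of the paging_init tail at the time):

        void *zero_page = early_alloc(PAGE_SIZE);       /* returns zeroed memory */

        empty_zero_page = virt_to_page(zero_page);

        /* Ensure the zero page is visible to the page table walker */
        dsb(ishst);

        /* ...only now is it safe to point TTBR0 at the zero page */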
* | arm64: mm: remove pointless PAGE_MASKing (Mark Rutland, 2015-12-10; 1 file, -2/+2)

    As pgd_offset{,_k} shift the input address by PGDIR_SHIFT, the sub-page bits will always be shifted out. There is no need to apply PAGE_MASK before this.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Jeremy Linton <jeremy.linton@arm.com>
    Cc: Laura Abbott <labbott@fedoraproject.org>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
* | arm64: mm: allow sections for unaligned bases (Mark Rutland, 2015-12-01; 1 file, -0/+1)

    Callees of __create_mapping may decide to create section mappings if sufficient low bits of the physical and virtual addresses they were passed are zero. While __create_mapping rounds the virtual base address down, it does not similarly round the physical base address down, and hence non-zero bits in the physical address can prevent use of a section mapping, even where a whole next-level table would be used instead.

    Round down the physical base address in __create_mapping to enable all callees to always create section mappings when such a mapping is possible.

    Cc: Laura Abbott <labbott@fedoraproject.org>
    Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Reviewed-by: Steve Capper <steve.capper@linaro.org>
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
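    The rounding in question (sketch; addr and length are the locals __create_mapping iterates with):

        phys &= PAGE_MASK;              /* round the physical base down too */
        addr = virt & PAGE_MASK;
        length = PAGE_ALIGN(size + (virt & ~PAGE_MASK));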
* | arm64: mm: detect bad __create_mapping uses (Mark Rutland, 2015-12-01; 1 file, -0/+7)

    If a caller of __create_mapping provides a PA and VA which have different sub-page offsets, it is not clear which offset they expect to apply to the mapping, which is indicative of a bad caller.

    In some cases, the region we wish to map may validly have a sub-page offset in the physical and virtual addresses. For example, EFI runtime regions have 4K granularity, yet may be mapped by a 64K page kernel. So long as the physical and virtual offsets are the same, the region will be mapped at the expected VAs.

    Disallow calls with differing sub-page offsets, and WARN when they are encountered, so that we can detect and fix such cases.

    Cc: Laura Abbott <labbott@fedoraproject.org>
    Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Reviewed-by: Steve Capper <steve.capper@linaro.org>
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
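    The added check, essentially as it appears at the top of __create_mapping:

        /*
         * If the virtual and physical addresses have different sub-page
         * offsets, there is no single mapping that honours both.
         */
        if (WARN_ON((phys ^ virt) & ~PAGE_MASK))
                return;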
* Revert "arm64: Mark kernel page ranges contiguous"Catalin Marinas2015-11-261-61/+8Star
| | | | | | | | | | | | | | | | This reverts commit 348a65cdcbbf243073ee39d1f7d4413081ad7eab. Incorrect page table manipulation that does not respect the ARM ARM recommended break-before-make sequence may lead to TLB conflicts. The contiguous PTE patch makes the system even more susceptible to such errors by changing the mapping from a single page to a contiguous range of pages. An additional TLB invalidation would reduce the risk window, however, the correct fix is to switch to a temporary swapper_pg_dir. Once the correct workaround is done, the reverted commit will be re-applied. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: Jeremy Linton <jeremy.linton@arm.com>
* arm64: early_alloc: Fix check for allocation failure (Suzuki K. Poulose, 2015-11-25; 1 file, -2/+6)

    In early_alloc we check whether memblock_alloc failed by testing the virtual address of the result; since the virtual address computed from the result is never zero, that check can never trigger. This patch fixes it to check the actual (physical) result for failure.

    Acked-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
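    The fixed allocator, close to the actual patch:

        static void __init *early_alloc(unsigned long sz)
        {
                phys_addr_t phys;
                void *ptr;

                phys = memblock_alloc(sz, sz);
                BUG_ON(!phys);          /* test phys itself: __va(result) is never NULL */
                ptr = __va(phys);
                memset(ptr, 0, sz);
                return ptr;
        }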
* arm64: Fix R/O permissions in mark_rodata_ro (Laura Abbott, 2015-11-18; 1 file, -1/+1)

    The permissions in mark_rodata_ro trigger a build error with STRICT_MM_TYPECHECKS. Fix this by introducing PAGE_KERNEL_ROX for the same reasons as PAGE_KERNEL_RO.

    From Ard: "PAGE_KERNEL_EXEC has PTE_WRITE set as well, making the range writeable under the ARMv8.1 DBM feature, that manages the dirty bit in hardware (writing to a page with the PTE_RDONLY and PTE_WRITE bits both set will clear the PTE_RDONLY bit in that case)"

    Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
    Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
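    For reference, the kernel protection macros then take roughly this shape (approximate, from asm/pgtable.h of that era):

        #define PAGE_KERNEL        __pgprot(_PAGE_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE)
        #define PAGE_KERNEL_RO     __pgprot(_PAGE_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_RDONLY)
        /* ROX: no PTE_WRITE, so ARMv8.1 DBM can never clear PTE_RDONLY */
        #define PAGE_KERNEL_ROX    __pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_RDONLY)
        #define PAGE_KERNEL_EXEC   __pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE)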
* arm64: mm: use correct mapping granularity under DEBUG_RODATA (Ard Biesheuvel, 2015-11-17; 1 file, -6/+6)

    When booting a 64k pages kernel that is built with CONFIG_DEBUG_RODATA and resides at an offset that is not a multiple of 512 MB, the rounding that occurs in __map_memblock() and fixup_executable() results in incorrect regions being mapped.

    The following snippet from /sys/kernel/debug/kernel_page_tables shows how, when the kernel is loaded 2 MB above the base of DRAM at 0x40000000, the first 2 MB of memory (which may be inaccessible from non-secure EL1 or just reserved by the firmware) is inadvertently mapped into the end of the module region.

        ---[ Modules start ]---
        0xfffffdffffe00000-0xfffffe0000000000     2M RW NX ... UXN MEM/NORMAL
        ---[ Modules end ]---
        ---[ Kernel Mapping ]---
        0xfffffe0000000000-0xfffffe0000090000   576K RW NX ... UXN MEM/NORMAL
        0xfffffe0000090000-0xfffffe0000200000  1472K ro x  ... UXN MEM/NORMAL
        0xfffffe0000200000-0xfffffe0000800000     6M ro x  ... UXN MEM/NORMAL
        0xfffffe0000800000-0xfffffe0000810000    64K ro x  ... UXN MEM/NORMAL
        0xfffffe0000810000-0xfffffe0000a00000  1984K RW NX ... UXN MEM/NORMAL
        0xfffffe0000a00000-0xfffffe00ffe00000  4084M RW NX ... UXN MEM/NORMAL

    The same issue is likely to occur on 16k pages kernels whose load address is not a multiple of 32 MB (i.e., SECTION_SIZE). So round to SWAPPER_BLOCK_SIZE instead of SECTION_SIZE.

    Fixes: da141706aea5 ("arm64: add better page protections to arm64")
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Acked-by: Mark Rutland <mark.rutland@arm.com>
    Acked-by: Laura Abbott <labbott@redhat.com>
    Cc: <stable@vger.kernel.org> # 4.0+
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: mmu: make split_pud and fixup_executable static (Jisheng Zhang, 2015-11-12; 1 file, -2/+2)

    split_pud and fixup_executable are only called from within mmu.c, so they can be declared static.

    Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: fix R/O permissions of FDT mapping (Ard Biesheuvel, 2015-11-09; 1 file, -1/+1)

    The mapping permissions of the FDT are set to 'PAGE_KERNEL | PTE_RDONLY' in an attempt to map the FDT as read-only. However, not only does this break at build time under STRICT_MM_TYPECHECKS (since the two terms are of different types in that case), it also results in both the PTE_WRITE and PTE_RDONLY attributes being set, which means the region is still writable under ARMv8.1 DBM (an attempted write will simply clear the PTE_RDONLY bit). So instead, define PAGE_KERNEL_RO (which already has an established meaning across architectures) and use that instead.

    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: fix STRICT_MM_TYPECHECKS issue in PTE_CONT manipulation (Ard Biesheuvel, 2015-11-09; 1 file, -1/+1)

    The new page table code that manipulates the PTE_CONT flags does so in a way that is inconsistent with STRICT_MM_TYPECHECKS. Fix it by using the correct combination of __pgprot() and pgprot_val().

    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
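    Under STRICT_MM_TYPECHECKS pgprot_t is a wrapper struct, so the flag must be set via the accessors; this is the idiom the fix switches to:

        /* wrong under STRICT_MM_TYPECHECKS: prot | PTE_CONT (mismatched types) */
        prot = __pgprot(pgprot_val(prot) | PTE_CONT);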
* arm64: Handle section maps for swapper/idmap (Suzuki K. Poulose, 2015-10-19; 1 file, -41/+33)

    We use section maps with 4K page size to create the swapper/idmaps. So far we have used !64K or 4K checks to handle the case where we use the section maps. This patch adds a new symbol, ARM64_SWAPPER_USES_SECTION_MAPS, to handle cases where we use section maps, instead of using the page size symbols.

    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Acked-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
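    The symbol and the block size it selects look approximately like this (asm/kernel-pgtable.h, reconstructed):

        #ifdef CONFIG_ARM64_64K_PAGES
        #define ARM64_SWAPPER_USES_SECTION_MAPS 0
        #else
        #define ARM64_SWAPPER_USES_SECTION_MAPS 1
        #endif

        #if ARM64_SWAPPER_USES_SECTION_MAPS
        #define SWAPPER_BLOCK_SIZE      SECTION_SIZE
        #else
        #define SWAPPER_BLOCK_SIZE      PAGE_SIZE
        #endif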
* arm64: Mark kernel page ranges contiguous (Jeremy Linton, 2015-10-08; 1 file, -8/+61)

    With 64k pages, the next larger segment size is 512M. The Linux kernel also uses different protection flags to cover its code and data. Because of this requirement, the vast majority of the kernel code and data structures end up being mapped with 64k pages instead of the larger pages common with a 4k page kernel.

    Recent ARM processors support a contiguous bit in the page tables which allows a single TLB entry to cover a range larger than a single PTE if that range is mapped into physically contiguous RAM. So, for the kernel, it is a good idea to set this flag. Some basic micro benchmarks show it can significantly reduce the number of L1 dTLB refills.

    Add a boot option to enable/disable CONT marking, as well as fix a bug found by Steve Capper.

    Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
    [catalin.marinas@arm.com: remove CONFIG_ARM64_CONT_PTE altogether]
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: flush: use local TLB and I-cache invalidation (Will Deacon, 2015-10-07; 1 file, -1/+1)

    There are a number of places where a single CPU is running with a private page-table and we need to perform maintenance on the TLB and I-cache in order to ensure correctness, but do not require the operation to be broadcast to other CPUs. This patch adds local variants of flush_tlb_all and __flush_icache_all to support these use-cases and updates the callers respectively. __local_flush_icache_all also implies an isb, since it is intended to be used synchronously.

    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Acked-by: David Daney <david.daney@cavium.com>
    Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
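    The local variants drop the inner-shareable ("is") qualifier and use non-shareable barriers; this is close to the code the patch adds:

        static inline void local_flush_tlb_all(void)
        {
                dsb(nshst);
                asm("tlbi vmalle1");            /* no "is": this CPU only */
                dsb(nsh);
                isb();
        }

        static inline void __local_flush_icache_all(void)
        {
                asm("ic iallu");
                dsb(nsh);
                isb();                          /* implied isb for synchronous use */
        }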
* arm64: mm: mark create_mapping as __init (Mark Rutland, 2015-07-28; 1 file, -1/+1)

    Currently create_mapping is marked with __ref, apparently because it refers to early_alloc. However, create_mapping has no logic to prevent erroneous use of early_alloc after it has been freed, and is only ever called by __init functions anyway. Thus the __ref marker is misleading and unnecessary.

    Instead, this patch marks create_mapping as __init, resulting in warnings if it is used from a non-__init function, and allowing its memory to be reclaimed.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64: remove dead code (Mark Salter, 2015-07-27; 1 file, -11/+0)

    Commit 68234df4ea79 ("arm64: kill flush_cache_all()") removed soft_reset() from the kernel. This was the only caller of setup_mm_for_reboot(), so remove that also.

    Signed-off-by: Mark Salter <msalter@redhat.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64: fix incorrect use of pgprot_t variable (Ard Biesheuvel, 2015-06-30; 1 file, -1/+1)

    This fixes a build failure under STRICT_MM_TYPECHECKS, by adding a missing pgprot_val() around a pgprot_t reference.

    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: use fixmap region for permanent FDT mapping (Ard Biesheuvel, 2015-06-02; 1 file, -0/+66)

    Currently, the FDT blob needs to be in the same 512 MB region as the kernel, so that it can be mapped into the kernel virtual memory space very early on using a minimal set of statically allocated translation tables.

    Now that we have early fixmap support, we can relax this restriction, by moving the permanent FDT mapping to the fixmap region instead. This way, the FDT blob may be anywhere in memory.

    This also moves the vetting of the FDT to mmu.c, since the early init code in head.S does not handle mapping of the FDT anymore. At the same time, fix up some comments in head.S that have gone stale.

    Reviewed-by: Mark Rutland <mark.rutland@arm.com>
    Tested-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux (Linus Torvalds, 2015-04-16; 1 file, -5/+7)

    Pull arm64 updates from Will Deacon:
     "Here are the core arm64 updates for 4.1.

      Highlights include a significant rework to head.S (allowing us to boot on machines with physical memory at a really high address), an AES performance boost on Cortex-A57 and the ability to run a 32-bit userspace with 64k pages (although this requires said userspace to be built with a recent binutils).

      The head.S rework spilt over into KVM, so there are some changes under arch/arm/ which have been acked by Marc Zyngier (KVM co-maintainer). In particular, the linker script changes caused us some issues in -next, so there are a few merge commits where we had to apply fixes on top of a stable branch.

      Other changes include:
       - AES performance boost for Cortex-A57
       - AArch32 (compat) userspace with 64k pages
       - Cortex-A53 erratum workaround for #845719
       - defconfig updates (new platforms, PCI, ...)"

    * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (39 commits)
      arm64: fix midr range for Cortex-A57 erratum 832075
      arm64: errata: add workaround for cortex-a53 erratum #845719
      arm64: Use bool function return values of true/false not 1/0
      arm64: defconfig: updates for 4.1
      arm64: Extract feature parsing code from cpu_errata.c
      arm64: alternative: Allow immediate branch as alternative instruction
      arm64: insn: Add aarch64_insn_decode_immediate
      ARM: kvm: round HYP section to page size instead of log2 upper bound
      ARM: kvm: assert on HYP section boundaries not actual code size
      arm64: head.S: ensure idmap_t0sz is visible
      arm64: pmu: add support for interrupt-affinity property
      dt: pmu: extend ARM PMU binding to allow for explicit interrupt affinity
      arm64: head.S: ensure visibility of page tables
      arm64: KVM: use ID map with increased VA range if required
      arm64: mm: increase VA range of identity map
      ARM: kvm: implement replacement for ld's LOG2CEIL()
      arm64: proc: remove unused cpu_get_pgd macro
      arm64: enforce x1|x2|x3 == 0 upon kernel entry as per boot protocol
      arm64: remove __calc_phys_offset
      arm64: merge __enable_mmu and __turn_mmu_on
      ...
| * arm64: mm: increase VA range of identity map (Ard Biesheuvel, 2015-03-23; 1 file, -1/+6)

    The page size and the number of translation levels, and hence the supported virtual address range, are build-time configurables on arm64 whose optimal values are use case dependent. However, in the current implementation, if the system's RAM is located at a very high offset, the virtual address range needs to reflect that merely because the identity mapping, which is only used to enable or disable the MMU, requires the extended virtual range to map the physical memory at an equal virtual offset.

    This patch relaxes that requirement, by increasing the number of translation levels for the identity mapping only, and only when actually needed, i.e., when system RAM's offset is found to be out of reach at runtime.

    Tested-by: Laura Abbott <lauraa@codeaurora.org>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Tested-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: fixmap: check idx is definitely valid (Mark Rutland, 2015-03-19; 1 file, -4/+1)

    Fixmap indices are in the interval (FIX_HOLE, __end_of_fixed_addresses), but in __set_fixmap we only check idx <= __end_of_fixed_addresses, and therefore indices <= FIX_HOLE are erroneously accepted. If called with such an idx, __set_fixmap may corrupt page tables outside of the fixmap region.

    This patch ensures that we validate the idx against both endpoints of the interval.

    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Kees Cook <keescook@chromium.org>
    Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Acked-by: Laura Abbott <lauraa@codeaurora.org>
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
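    The patched check validates both endpoints (essentially the actual fix):

        void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t flags)
        {
                unsigned long addr = __fix_to_virt(idx);
                pte_t *pte;

                /* reject idx <= FIX_HOLE as well as idx >= __end_of_fixed_addresses */
                BUG_ON(idx <= FIX_HOLE || idx >= __end_of_fixed_addresses);
                /* ... */
        }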
* | arm64: expose number of page table levels on Kconfig level (Kirill A. Shutemov, 2015-04-15; 1 file, -2/+2)

    We want to use the number of page table levels to define mm_struct. Let's expose it as CONFIG_PGTABLE_LEVELS. ARM64_PGTABLE_LEVELS is renamed to PGTABLE_LEVELS and defined before sourcing init/Kconfig: arch/Kconfig will define the default value, and it's sourced from init/Kconfig.

    Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Tested-by: Guenter Roeck <linux@roeck-us.net>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* arm64: Fix section mismatch on alloc_init_p[mu]d() (Catalin Marinas, 2015-01-29; 1 file, -4/+5)

    Commit 523d6e9fae93 (arm64:mm: free the useless initial page table) introduced a BUG_ON checking for the allocation type but it was referring to the early_alloc() function in the __init section. This patch changes the check to slab_is_available() and also relaxes the BUG to a WARN_ON_ONCE.

    Reported-by: Will Deacon <will.deacon@arm.com>
    Acked-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: mm: use *_sect to check for section maps (Mark Rutland, 2015-01-28; 1 file, -2/+4)

    The {pgd,pud,pmd}_bad family of macros have slightly fuzzy cross-architecture semantics, and seem to imply a populated entry that is not a next-level table, rather than a particular type of entry (e.g. a section map).

    In arm64 code, for those cases where we care about whether an entry is a section mapping, we can instead use the {pud,pmd}_sect macros to explicitly check for this case. This helps to document precisely what we care about, making the code easier to read, and allows for future relaxation of the *_bad macros to check for other "bad" entries.

    To that end this patch updates the table dumping and initial table setup to check for section mappings with {pud,pmd}_sect, and adds/restores BUG_ON(*_bad((*p)) checks after we've handled the *_sect and *_none cases so as to catch remaining "bad" cases.

    In the fault handling code, show_pte is left with *_bad checks as it only cares about whether it can walk the next level table, and this path is used for both kernel and userspace fault handling. The former case will be followed by a die() where we'll report the address that triggered the fault, which can be useful context for debugging.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Acked-by: Steve Capper <steve.capper@linaro.org>
    Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Cc: Kees Cook <keescook@chromium.org>
    Cc: Laura Abbott <lauraa@codeaurora.org>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
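    In the alloc_init_pud()/alloc_init_pmd() paths the pattern becomes (a sketch; allocation and section-splitting details elided):

        if (pud_none(*pud) || pud_sect(*pud)) {
                pmd_t *pmd = alloc(PTRS_PER_PMD * sizeof(pmd_t));
                /* (split an existing section mapping here if need be) */
                pud_populate(mm, pud, pmd);
        }
        BUG_ON(pud_bad(*pud));  /* catch remaining "bad" (non-table) entries */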
* arm64: drop unnecessary cache+tlb maintenance (Mark Rutland, 2015-01-28; 1 file, -7/+0)

    In paging_init, we call flush_cache_all, but this is backed by Set/Way operations which may not achieve anything in the presence of cache line migration and/or system caches. If the caches are already in an inconsistent state at this point, there is nothing we can do (short of flushing the entire physical address space by VA) to empty architected and system caches. As such, flush_cache_all only serves to mask other potential bugs. Hence, this patch removes the boot-time call to flush_cache_all.

    Immediately after the cache maintenance we flush the TLBs, but this is also unnecessary. Before enabling the MMU, the TLBs are invalidated, and thus are initially clean. When changing the contents of active tables (e.g. in fixup_executable() for DEBUG_RODATA) we perform the required TLB maintenance following the update, and therefore no additional maintenance is required to ensure the new table entries are in effect. Since activating the MMU we will not have modified system register fields permitted to be cached in a TLB, and therefore do not need maintenance for any cached system register fields. Hence, the TLB flush is unnecessary.

    Shortly after the unnecessary TLB flush, we update TTBR0 to point to an empty zero page rather than the idmap, and flush the TLBs. This maintenance is necessary to remove the global idmap entries from the TLBs (as they would conflict with userspace mappings), and is retained.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Acked-by: Steve Capper <steve.capper@linaro.org>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: mm: free the useless initial page table (zhichang.yuan, 2015-01-28; 1 file, -3/+12)

    For a 64K page system, after mapping a PMD section, the corresponding initial page table is not needed any more. That page can be freed.

    Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org>
    [catalin.marinas@arm.com: added BUG_ON() to catch late memblock freeing]
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
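    A sketch of the PMD-section path after this change (prot_sect is an illustrative name; the BUG_ON was later relaxed, see the section-mismatch fix above):

        pmd_t old_pmd = *pmd;

        set_pmd(pmd, __pmd(phys | prot_sect));  /* install the section mapping */

        if (!pmd_none(old_pmd)) {
                flush_tlb_all();
                if (pmd_table(old_pmd)) {
                        /* the initial pte table is no longer referenced */
                        phys_addr_t table = __pa(pte_offset_map(&old_pmd, 0));

                        BUG_ON(slab_is_available());    /* memblock_free() is early-only */
                        memblock_free(table, PAGE_SIZE);
                }
        }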
* arm64/efi: move virtmap init to early initcall (Ard Biesheuvel, 2015-01-22; 1 file, -1/+1)

    Now that the create_mapping() code in mm/mmu.c is able to support setting up kernel page tables at initcall time, we can move the whole virtmap creation to arm64_enable_runtime_services() instead of having a distinct stage during early boot. This also allows us to drop the arm64-specific EFI_VIRTMAP flag.

    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: add better page protections to arm64 (Laura Abbott, 2015-01-22; 1 file, -24/+187)

    Add page protections for arm64 similar to those in arm. This is for security reasons to prevent certain classes of exploits. The current method:

    - Map all memory as either RWX or RW. We round to the nearest section to avoid creating page tables before everything is mapped.
    - Once everything is mapped, if either end of the RWX section should not be X, we split the PMD and remap as necessary.
    - When initmem is to be freed, we change the permissions back to RW (using stop machine if necessary to flush the TLB).
    - If CONFIG_DEBUG_RODATA is set, the read only sections are set read only.

    Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Tested-by: Kees Cook <keescook@chromium.org>
    Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: remove broken cachepolicy code (Mark Rutland, 2015-01-13; 1 file, -74/+0)

    The cachepolicy kernel parameter was intended to aid in the debugging of coherency issues, but it is fundamentally broken for several reasons:

    * On SMP platforms, only the boot CPU's tcr_el1 is altered. Secondary CPUs may therefore differ w.r.t. the attributes they apply to MT_NORMAL memory, resulting in a loss of coherency.
    * The cache maintenance using flush_dcache_all (based on Set/Way operations) is not guaranteed to empty a given CPU's cache hierarchy while said CPU has caches enabled, it cannot empty the caches of other coherent PEs, nor is it guaranteed to flush data to the PoC even when caches are disabled.
    * The TLBs are not invalidated around the modification of MAIR_EL1 and TCR_EL1, as required by the architecture (as both are permitted to be cached in a TLB). This may result in CPUs using attributes other than those expected for some memory accesses, resulting in a loss of coherency.
    * Exclusive accesses are not architecturally guaranteed to function as expected on memory marked as Write-Through or Non-Cacheable. Thus changing the attributes of MT_NORMAL away from the (architecturally safe) defaults may cause uses of these instructions (e.g. atomics) to behave erratically.

    Given this, the cachepolicy code cannot be used for debugging purposes as it alone is likely to cause coherency issues. This patch removes the broken cachepolicy code.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Acked-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64/efi: remove idmap manipulations from UEFI code (Ard Biesheuvel, 2015-01-12; 1 file, -12/+0)

    Now that we have moved the call to SetVirtualAddressMap() to the stub, UEFI has no use for the ID map, so we can drop the code that installs ID mappings for UEFI memory regions.

    Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
    Acked-by: Will Deacon <will.deacon@arm.com>
    Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
* arm64/mm: add create_pgd_mapping() to create private page tables (Ard Biesheuvel, 2015-01-12; 1 file, -21/+22)

    For UEFI, we need to install the memory mappings used for Runtime Services in a dedicated set of page tables. Add create_pgd_mapping(), which allows us to allocate and install those page table entries early.

    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
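    The new entry point simply routes a caller-supplied mm and pgd into the existing __create_mapping() machinery (close to the actual code):

        void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
                                       unsigned long virt, phys_addr_t size,
                                       pgprot_t prot)
        {
                __create_mapping(mm, pgd_offset(mm, virt), phys, virt, size, prot);
        }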
* arm64/mm: add explicit struct_mm argument to __create_mapping() (Ard Biesheuvel, 2015-01-12; 1 file, -15/+16)

    Currently, swapper_pg_dir and idmap_pg_dir share the init_mm mm_struct instance. To allow the introduction of other pg_dir instances, for instance, for UEFI's mapping of Runtime Services, make the mm_struct instance an explicit argument that gets passed down to the pmd and pte instantiation functions. Note that the consumers (pmd_populate/pgd_populate) of the mm_struct argument don't actually inspect it, but let's fix it for correctness' sake.

    Acked-by: Steve Capper <steve.capper@linaro.org>
    Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
* arm64: Factor out fixmap initialization from ioremap (Laura Abbott, 2014-11-25; 1 file, -0/+94)

    The fixmap API was originally added for arm64 for early_ioremap purposes. It can be used for other purposes too, so move the initialization from ioremap to somewhere more generic. This makes it obvious where the fixmap is being set up and allows for a cleaner implementation of __set_fixmap.

    Reviewed-by: Kees Cook <keescook@chromium.org>
    Acked-by: Mark Rutland <mark.rutland@arm.com>
    Tested-by: Mark Rutland <mark.rutland@arm.com>
    Tested-by: Kees Cook <keescook@chromium.org>
    Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64: fix data type for physical address (Min-Hua Chen, 2014-11-06; 1 file, -1/+1)

    Use phys_addr_t for the physical address in alloc_init_pud. Although phys_addr_t and unsigned long are both 64-bit on arm64, it is better to use phys_addr_t to describe physical addresses.

    Signed-off-by: Min-Hua Chen <orca.chen@gmail.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64: Fix memblock current_limit with 64K pages and 48-bit VA (Catalin Marinas, 2014-10-24; 1 file, -4/+8)

    With 48-bit VA space, the 64K page configuration uses 3 levels instead of 2 and PUD_SIZE != PMD_SIZE. Since with 64K pages we only cover PMD_SIZE with the initial swapper_pg_dir populated in head.S, the memblock current_limit needs to be set accordingly in map_mem() to avoid allocating unmapped memory. The memblock current_limit is progressively increased as more blocks are mapped.

    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: LLVMLinux: Fix inline arm64 assembly for use with clang (Mark Charlebois, 2014-09-15; 1 file, -1/+1)

    Remove '#' from the immediate parameter in the AArch64 inline assembly in mmu.c. This code now works with both gcc and clang.

    Signed-off-by: Mark Charlebois <charlebm@gmail.com>
    Signed-off-by: Behan Webster <behanw@converseincode.com>
    Acked-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: mm: Implement 4 levels of translation tables (Jungseok Lee, 2014-07-23; 1 file, -3/+11)

    This patch implements 4 levels of translation tables, since 3 levels of page tables with 4KB pages cannot support the 40-bit physical address space described in [1], due to the following issue.

    It is a restriction that the kernel logical memory map with 4KB + 3 levels (0xffffffc000000000-0xffffffffffffffff) cannot cover the RAM region from 544GB to 1024GB in [1]. Specifically, the ARM64 kernel fails to create a mapping for this region in the map_mem function, since __phys_to_virt for this region reaches an address overflow.

    If an SoC design follows the document [1], over 32GB of RAM would be placed from 544GB. Even a 64GB system is supposed to use the region from 544GB to 576GB for only 32GB of RAM. Naturally, this leads to enabling 4 levels of page tables to avoid hacking __virt_to_phys and __phys_to_virt. However, it is recommended that 4 levels of page tables should only be enabled if the memory map is too sparse or there is about 512GB of RAM.

    References
    ----------
    [1]: Principles of ARM Memory Maps, White Paper, Issue C

    Signed-off-by: Jungseok Lee <jays.lee@samsung.com>
    Reviewed-by: Sungjinn Chung <sungjinn.chung@samsung.com>
    Acked-by: Kukjin Kim <kgene.kim@samsung.com>
    Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
    Reviewed-by: Steve Capper <steve.capper@linaro.org>
    [catalin.marinas@arm.com: MEMBLOCK_INITIAL_LIMIT removed, same as PUD_SIZE]
    [catalin.marinas@arm.com: early_ioremap_init() updated for 4 levels]
    [catalin.marinas@arm.com: 48-bit VA depends on BROKEN until KVM is fixed]
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
* Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux into next (Linus Torvalds, 2014-06-06; 1 file, -37/+30)

    Pull arm64 updates from Catalin Marinas:

     - Optimised assembly string/memory routines (based on the AArch64 Cortex Strings library contributed to glibc but re-licensed under GPLv2)
     - Optimised crypto algorithms making use of the ARMv8 crypto extensions (together with kernel API for using FPSIMD instructions in interrupt context)
     - Ftrace support
     - CPU topology parsing from DT
     - ESR_EL1 (Exception Syndrome Register) exposed to user space signal handlers for SIGSEGV/SIGBUS (useful to emulation tools like Qemu)
     - 1GB section linear mapping if applicable
     - Barriers usage clean-up
     - Default pgprot clean-up

    Conflicts as per Catalin.

    * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (57 commits)
      arm64: kernel: initialize broadcast hrtimer based clock event device
      arm64: ftrace: Add system call tracepoint
      arm64: ftrace: Add CALLER_ADDRx macros
      arm64: ftrace: Add dynamic ftrace support
      arm64: Add ftrace support
      ftrace: Add arm64 support to recordmcount
      arm64: Add 'notrace' attribute to unwind_frame() for ftrace
      arm64: add __ASSEMBLY__ in asm/insn.h
      arm64: Fix linker script entry point
      arm64: lib: Implement optimized string length routines
      arm64: lib: Implement optimized string compare routines
      arm64: lib: Implement optimized memcmp routine
      arm64: lib: Implement optimized memset routine
      arm64: lib: Implement optimized memmove routine
      arm64: lib: Implement optimized memcpy routine
      arm64: defconfig: enable a few more common/useful options in defconfig
      ftrace: Make CALLER_ADDRx macros more generic
      arm64: Fix deadlock scenario with smp_send_stop()
      arm64: Fix machine_shutdown() definition
      arm64: Support arch_irq_work_raise() via self IPIs
      ...
| * arm64: mm: Create gigabyte kernel logical mappings where possible (Steve Capper, 2014-05-09; 1 file, -1/+27)

    We have the capability to map 1GB level 1 blocks when using a 4K granule.

    This patch adjusts the create_mapping logic such that, when mapping physical memory on boot, we attempt to use a 1GB block if both the VA and PA start and end are 1GB aligned. This both reduces the levels of lookup required to resolve a kernel logical address, and reduces TLB pressure on cores that support 1GB TLB entries.

    Signed-off-by: Steve Capper <steve.capper@linaro.org>
    Tested-by: Jungseok Lee <jays.lee@samsung.com>
    [catalin.marinas@arm.com: s/prot_sect_kernel/PROT_SECT_NORMAL_EXEC/]
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
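    The gating condition in alloc_init_pud(), roughly (4K granule only; the fallback path is sketched):

        if (!map_io && (PAGE_SHIFT == 12) &&
            ((addr | next | phys) & ~PUD_MASK) == 0) {
                /* VA, PA and extent all 1GB aligned: use a level 1 block */
                set_pud(pud, __pud(phys | PROT_SECT_NORMAL_EXEC));
        } else {
                alloc_init_pmd(pud, addr, next, phys);  /* fall back to 2M/4K */
        }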
| * arm64: Clean up the default pgprot setting (Catalin Marinas, 2014-05-09; 1 file, -34/+2)

    The primary aim of this patchset is to remove the pgprot_default and prot_sect_default global variables and rely strictly on predefined values. The original goal was to be able to run SMP kernels on UP hardware by not setting the Shareability bit. However, it is unlikely to see UP ARMv8 hardware and even if we do, the Shareability bit is no longer assumed to disable cacheable accesses.

    A side effect is that the device mappings now have the Shareability attribute set. The hardware, however, should ignore it since Device accesses are always Outer Shareable.

    Following the removal of the two global variables, there is some PROT_* macro reshuffling and cleanup, including the __PAGE_* macros (replaced by PAGE_*).

    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Acked-by: Will Deacon <will.deacon@arm.com>
* | Merge branch 'arm64-efi-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip into next (Linus Torvalds, 2014-06-05; 1 file, -18/+47)

    Pull ARM64 EFI update from Peter Anvin:
     "By agreement with the ARM64 EFI maintainers, we have agreed to make -tip the upstream for all EFI patches. That is why this patchset comes from me :)

      This patchset enables EFI stub support for ARM64, like we already have on x86"

    * 'arm64-efi-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
      arm64: efi: only attempt efi map setup if booting via EFI
      efi/arm64: ignore dtb= when UEFI SecureBoot is enabled
      doc: arm64: add description of EFI stub support
      arm64: efi: add EFI stub
      doc: arm: add UEFI support documentation
      arm64: add EFI runtime services
      efi: Add shared FDT related functions for ARM/ARM64
      arm64: Add function to create identity mappings
      efi: add helper function to get UEFI params from FDT
      doc: efi-stub.txt updates for ARM
      lib: add fdt_empty_tree.c
| * arm64: Add function to create identity mappings (Mark Salter, 2014-04-30; 1 file, -18/+47)

    At boot time, before switching to a virtual UEFI memory map, firmware expects UEFI memory and IO regions to be identity mapped whenever the kernel makes runtime services calls. The existing early boot code creates an identity map of kernel text/data but this is not sufficient for UEFI. This patch adds a create_id_mapping() function which reuses the core code of the existing create_mapping().

    Signed-off-by: Mark Salter <msalter@redhat.com>
    [ Fixed error message formatting (%pa). ]
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Matt Fleming <matt.fleming@intel.com>
* | arm64: Fix for the arm64 kern_addr_valid() function (Dave Anderson, 2014-05-03; 1 file, -0/+3)

    Fix for the arm64 kern_addr_valid() function to recognize virtual addresses in the kernel logical memory map. The function fails as written because it does not check whether the addresses in that region are mapped at the pmd level to 2MB or 512MB pages, continues the page table walk to the pte level, and issues a garbage value to pfn_valid(). Tested on 4K-page and 64K-page kernels.

    Signed-off-by: Dave Anderson <anderson@redhat.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
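    The fix stops the walk at the pmd level for section mappings (essentially the patched hunk):

        pmd = pmd_offset(pud, addr);
        if (pmd_none(*pmd))
                return 0;

        if (pmd_sect(*pmd))
                /* 2MB/512MB block: the walk must not descend to pte level */
                return pfn_valid(pmd_pfn(*pmd));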
* arm64: add early_ioremap support (Mark Salter, 2014-04-08; 1 file, -41/+0)

    Add support for early IO or memory mappings which are needed before the normal ioremap() is usable. This also adds fixmap support for permanent fixed mappings such as that used by the earlyprintk device register region.

    Signed-off-by: Mark Salter <msalter@redhat.com>
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Borislav Petkov <borislav.petkov@amd.com>
    Cc: Dave Young <dyoung@redhat.com>
    Cc: H. Peter Anvin <hpa@zytor.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: Ingo Molnar <mingo@elte.hu>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>