path: root/arch/arm64/kernel
Commit message  [Author, Age, Files changed, Lines -/+]
* Merge tag 'kvmarm-for-v4.19' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD  [Paolo Bonzini, 2018-08-22, 1 file changed, -0/+20]
    KVM/arm updates for 4.19:
    - Support for Group0 interrupts in guests
    - Cache management optimizations for ARMv8.4 systems
    - Userspace interface for RAS, allowing error retrieval and injection
    - Fault path optimization
    - Emulated physical timer fixes
    - Random cleanups
| * arm64: KVM: Add support for Stage-2 control of memory types and cacheability  [Marc Zyngier, 2018-07-09, 1 file changed, -0/+20]
    Up to ARMv8.3, the combination of Stage-1 and Stage-2 attributes results in the strongest attribute of the two stages. This means that the hypervisor has to perform quite a lot of cache maintenance just in case the guest has some non-cacheable mappings around. ARMv8.4 solves this problem by offering a different mode (FWB) where Stage-2 has total control over the memory attribute (this is limited to systems where both I/O and instruction fetches are coherent with the dcache). This is achieved by having a different set of memory attributes in the page tables, and a new bit set in HCR_EL2. On such a system, we can then safely sidestep any form of dcache management.
    Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* | Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  [Linus Torvalds, 2018-08-17, 1 file changed, -1/+1]
    Pull arm64 fixes from Will Deacon: "A couple of arm64 fixes:
    - Fix boot on Hikey-960 by avoiding an IPI with interrupts disabled
    - Fix address truncation in pfn_valid() implementation"
    * tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
      arm64: mm: check for upper PAGE_SHIFT bits in pfn_valid()
      arm64: Avoid calling stop_machine() when patching jump labels
| * | arm64: Avoid calling stop_machine() when patching jump labels  [Will Deacon, 2018-08-17, 1 file changed, -1/+1]
    Patching a jump label involves patching a single instruction at a time, swizzling between a branch and a NOP. The architecture treats these instructions specially, so a concurrently executing CPU is guaranteed to see either the NOP or the branch, rather than an amalgamation of the two instruction encodings.
    However, in order to guarantee that the new instruction is visible, it is necessary to send an IPI to the concurrently executing CPU so that it discards any previously fetched instructions from its pipeline. This operation therefore cannot be completed from a context with IRQs disabled, but this is exactly what happens on the jump label path where the hotplug lock is held and irqs are subsequently disabled by stop_machine_cpuslocked(). This results in a deadlock during boot on Hikey-960.
    Due to the architectural guarantees around patching NOPs and branches, we don't actually need to stop_machine() at all on the jump label path, so we can avoid the deadlock by using the "nosync" variant of our instruction patching routine.
    Fixes: 693350a79980 ("arm64: insn: Don't fallback on nosync path for general insn patching") Reported-by: Tuomas Tynkkynen <tuomas.tynkkynen@iki.fi> Reported-by: John Stultz <john.stultz@linaro.org> Tested-by: Valentin Schneider <valentin.schneider@arm.com> Tested-by: Tuomas Tynkkynen <tuomas@tuxera.com> Tested-by: John Stultz <john.stultz@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
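    For illustration, a minimal sketch of what the jump-label transform looks like once the stop_machine() rendezvous is dropped (based on the description above; field accesses and details are simplified, not copied verbatim from the patch):

        #include <linux/jump_label.h>
        #include <asm/insn.h>

        /*
         * Patch a single jump label site in place.  The B<->NOP guarantees in
         * the architecture mean a concurrently executing CPU sees either the
         * old or the new instruction, so no stop_machine() rendezvous is
         * needed; the "nosync" patching helper writes the instruction and
         * performs the required cache maintenance itself.
         */
        void arch_jump_label_transform(struct jump_entry *entry,
                                       enum jump_label_type type)
        {
                void *addr = (void *)entry->code;
                u32 insn;

                if (type == JUMP_LABEL_JMP)
                        insn = aarch64_insn_gen_branch_imm(entry->code, entry->target,
                                                           AARCH64_INSN_BRANCH_NOLINK);
                else
                        insn = aarch64_insn_gen_nop();

                aarch64_insn_patch_text_nosync(addr, insn);
        }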
* | | Merge tag 'kbuild-v4.19' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild  [Linus Torvalds, 2018-08-15, 1 file changed, -0/+3]
    Pull Kbuild updates from Masahiro Yamada:
    - verify depmod is installed before modules_install
    - support build salt in case build ids must be unique between builds
    - allow users to specify additional host compiler flags via HOST*FLAGS, and rename internal variables to KBUILD_HOST*FLAGS
    - update buildtar script to drop vax support, add arm64 support
    - update builddeb script for better debarch support
    - document the pitfalls of if_changed usage
    - fix parallel build of UML with O= option
    - make 'samples' target depend on headers_install to fix build errors
    - remove deprecated host-progs variable
    - add a new coccinelle script for refcount_t vs atomic_t check
    - improve double-test coccinelle script
    - misc cleanups and fixes
    * tag 'kbuild-v4.19' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild: (41 commits)
      coccicheck: return proper error code on fail
      Coccinelle: doubletest: reduce side effect false positives
      kbuild: remove deprecated host-progs variable
      kbuild: make samples really depend on headers_install
      um: clean up archheaders recipe
      kbuild: add %asm-generic to no-dot-config-targets
      um: fix parallel building with O= option
      scripts: Add Python 3 support to tracing/draw_functrace.py
      builddeb: Add automatic support for sh{3,4}{,eb} architectures
      builddeb: Add automatic support for riscv* architectures
      builddeb: Add automatic support for m68k architecture
      builddeb: Add automatic support for or1k architecture
      builddeb: Add automatic support for sparc64 architecture
      builddeb: Add automatic support for mips{,64}r6{,el} architectures
      builddeb: Add automatic support for mips64el architecture
      builddeb: Add automatic support for ppc64 and powerpcspe architectures
      builddeb: Introduce functions to simplify kconfig tests in set_debarch
      builddeb: Drop check for 32-bit s390
      builddeb: Change architecture detection fallback to use dpkg-architecture
      builddeb: Skip architecture detection when KBUILD_DEBARCH is set
      ...
| * | | arm64: Add build salt to the vDSO  [Laura Abbott, 2018-07-17, 1 file changed, -0/+3]
    The vDSO needs to have a unique build id in a similar manner to the kernel and modules. Use the build salt macro.
    Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Laura Abbott <labbott@redhat.com> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
* | | Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  [Linus Torvalds, 2018-08-15, 27 files changed, -610/+885]
    Pull arm64 updates from Will Deacon: "A bunch of good stuff in here. Worth noting is that we've pulled in the x86/mm branch from -tip so that we can make use of the core ioremap changes which allow us to put down huge mappings in the vmalloc area without screwing up the TLB. Much of the positive diffstat is because of the rseq selftest for arm64. Summary:
    - Wire up support for qspinlock, replacing our trusty ticket lock code
    - Add an IPI to flush_icache_range() to ensure that stale instructions fetched into the pipeline are discarded along with the I-cache lines
    - Support for the GCC "stackleak" plugin
    - Support for restartable sequences, plus an arm64 port for the selftest
    - Kexec/kdump support on systems booting with ACPI
    - Rewrite of our syscall entry code in C, which allows us to zero the GPRs on entry from userspace
    - Support for chained PMU counters, allowing 64-bit event counters to be constructed on current CPUs
    - Ensure scheduler topology information is kept up-to-date with CPU hotplug events
    - Re-enable support for huge vmalloc/IO mappings now that the core code has the correct hooks to use break-before-make sequences
    - Miscellaneous, non-critical fixes and cleanups"
    * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (90 commits)
      arm64: alternative: Use true and false for boolean values
      arm64: kexec: Add comment to explain use of __flush_icache_range()
      arm64: sdei: Mark sdei stack helper functions as static
      arm64, kaslr: export offset in VMCOREINFO ELF notes
      arm64: perf: Add cap_user_time aarch64
      efi/libstub: Only disable stackleak plugin for arm64
      arm64: drop unused kernel_neon_begin_partial() macro
      arm64: kexec: machine_kexec should call __flush_icache_range
      arm64: svc: Ensure hardirq tracing is updated before return
      arm64: mm: Export __sync_icache_dcache() for xen-privcmd
      drivers/perf: arm-ccn: Use devm_ioremap_resource() to map memory
      arm64: Add support for STACKLEAK gcc plugin
      arm64: Add stack information to on_accessible_stack
      drivers/perf: hisi: update the sccl_id/ccl_id when MT is supported
      arm64: fix ACPI dependencies
      rseq/selftests: Add support for arm64
      arm64: acpi: fix alignment fault in accessing ACPI
      efi/arm: map UEFI memory map even w/o runtime services enabled
      efi/arm: preserve early mapping of UEFI memory map longer for BGRT
      drivers: acpi: add dependency of EFI for arm64
      ...
| * | arm64: alternative: Use true and false for boolean values  [Gustavo A. R. Silva, 2018-08-08, 1 file changed, -2/+2]
    Return statements in functions returning bool should use true or false instead of an integer value. This code was detected with the help of Coccinelle.
    Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | arm64: kexec: Add comment to explain use of __flush_icache_range()  [Will Deacon, 2018-07-31, 1 file changed, -1/+8]
    Now that we understand the deadlock arising from flush_icache_range() on the kexec crash kernel path, add a comment to justify the use of __flush_icache_range() here.
    Reported-by: Dave Kleikamp <dave.kleikamp@oracle.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | arm64: sdei: Mark sdei stack helper functions as static  [Will Deacon, 2018-07-31, 1 file changed, -6/+3]
    The SDEI stack helper functions are only used by _on_sdei_stack() and refer to symbols (e.g. sdei_stack_normal_ptr) that are only defined if CONFIG_VMAP_STACK=y. Mark these functions as static, so we don't run into errors at link-time due to references to undefined symbols. Stick all the parameters onto the same line whilst we're passing through.
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | arm64, kaslr: export offset in VMCOREINFO ELF notes  [Bhupesh Sharma, 2018-07-31, 1 file changed, -0/+1]
    Include KASLR offset in arm64 VMCOREINFO ELF notes to assist in debugging. vmcore parsing in user-space already expects this value in the notes and we are providing it for portability of those existing tools with x86.
    Ideally we would like core code to do this (so that way this information won't be missed when an architecture adds KASLR support), but mips has CONFIG_RANDOMIZE_BASE, and doesn't provide kaslr_offset(), so I am not sure if this is needed for mips (and other such similar arch cases in future). So, let's keep this architecture-specific for now.
    As an example of a user-space use-case, consider the makedumpfile user-space utility which will need a fixup to use this KASLR offset to work with cases where we need to find a way to translate symbol addresses from vmlinux to kernel run time addresses in case of KASLR boot on arm64. I have already submitted the makedumpfile user-space patch upstream and the maintainer has suggested to wait for the kernel changes to be included (see [0]).
    I tested this on my qualcomm amberwing board both for KASLR and non-KASLR boot cases:
    Without this patch:
        # cat > scrub.conf << EOF
        [vmlinux]
        erase jiffies
        erase init_task.utime
        for tsk in init_task.tasks.next within task_struct:tasks
        erase tsk.utime
        endfor
        EOF
        # makedumpfile --split -d 31 -x vmlinux --config scrub.conf vmcore dumpfile_{1,2,3}
        readpage_elf: Attempt to read non-existent page at 0xffffa8a5bf180000.
        readmem: type_addr: 1, addr:ffffa8a5bf180000, size:8
        vaddr_to_paddr_arm64: Can't read pgd
        readmem: Can't convert a virtual address(ffff0000092a542c) to physical address.
        readmem: type_addr: 0, addr:ffff0000092a542c, size:390
        check_release: Can't get the address of system_utsname
    After this patch check_release() is ok, and also we are able to erase symbols from vmcore (I checked this with kernel 4.18.0-rc4+):
        # makedumpfile --split -d 31 -x vmlinux --config scrub.conf vmcore dumpfile_{1,2,3}
        The kernel version is not supported.
        The makedumpfile operation may be incomplete.
        Checking for memory holes          : [100.0 %] \
        Excluding unnecessary pages        : [100.0 %] \
        The dumpfiles are saved to dumpfile_1, dumpfile_2, and dumpfile_3.
        makedumpfile Completed.
    [0] https://www.spinics.net/lists/kexec/msg21195.html
    Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Acked-by: James Morse <james.morse@arm.com> Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
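    On the kernel side, the export amounts to one extra line in the arm64 vmcoreinfo hook; a sketch (the existing kimage_voffset/PHYS_OFFSET entries are elided and assumed, not shown verbatim from the patch):

        #include <linux/crash_core.h>
        #include <asm/memory.h>

        void arch_crash_save_vmcoreinfo(void)
        {
                /* ... existing entries (e.g. kimage_voffset, PHYS_OFFSET) elided ... */

                /* Export the KASLR offset so vmcore parsers can translate
                 * vmlinux symbol addresses into run-time addresses. */
                vmcoreinfo_append_str("KERNELOFFSET=%lx\n", kaslr_offset());
        }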
| * | arm64: perf: Add cap_user_time aarch64  [Michael O'Farrell, 2018-07-31, 1 file changed, -0/+30]
    It is useful to get the running time of a thread. Doing so in an efficient manner can be important for performance of user applications. Avoiding system calls in `clock_gettime` when handling CLOCK_THREAD_CPUTIME_ID is important. Other clocks are handled in the VDSO, but CLOCK_THREAD_CPUTIME_ID falls back on the system call.
    CLOCK_THREAD_CPUTIME_ID is not handled in the VDSO since it would have costs associated with maintaining updated user space accessible time offsets. These offsets have to be updated every time a thread is scheduled or descheduled. However, for programs regularly checking the running time of a thread, this is a performance improvement.
    This patch takes a middle ground, and adds support for cap_user_time, an optional feature of the perf_event API. This way costs are only incurred when the perf_event API is enabled. This is done the same way as it is in x86.
    Ultimately this allows calculating the thread running time in userspace on aarch64 as follows (adapted from the perf_event_open manpage):
        u32 seq, time_mult, time_shift;
        u64 running, count, time_offset, quot, rem, delta;
        struct perf_event_mmap_page *pc;
        pc = buf;  // buf is the perf event mmaped page as documented in the API.
        if (pc->cap_usr_time) {
            do {
                seq = pc->lock;
                barrier();
                running = pc->time_running;
                count = readCNTVCT_EL0();  // Read ARM hardware clock.
                time_offset = pc->time_offset;
                time_mult = pc->time_mult;
                time_shift = pc->time_shift;
                barrier();
            } while (pc->lock != seq);
            quot = (count >> time_shift);
            rem = count & (((u64)1 << time_shift) - 1);
            delta = time_offset + quot * time_mult + ((rem * time_mult) >> time_shift);
            running += delta;
            // running now has the current nanosecond level thread time.
        }
    Summary of changes in the patch: For aarch64 systems, make arch_perf_update_userpage update the timing information stored in the perf_event page. This requires the following calculations:
    - Calculate the appropriate time_mult and time_shift factors to convert ticks to nanoseconds for the current clock frequency.
    - Adjust the mult and shift factors to avoid shift factors of 32 bits. (possibly unnecessary)
    - The time_offset userspace should apply when doing calculations: the negative of the current sched time (now), because the time_running and time_enabled fields of the perf_event page have just been updated.
    Toggle bits to appropriate values:
    - Enable cap_user_time
    Signed-off-by: Michael O'Farrell <micpof@gmail.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | arm64: kexec: machine_kexec should call __flush_icache_range  [Dave Kleikamp, 2018-07-30, 1 file changed, -1/+1]
    machine_kexec flushes the reboot_code_buffer from the icache after stopping the other cpus. Commit 3b8c9f1cdfc5 ("arm64: IPI each CPU after invalidating the I-cache for kernel mappings") added an IPI call to flush_icache_range, which causes a hang here, so replace the call with __flush_icache_range.
    Signed-off-by: Dave Kleikamp <dave.kleikamp@oracle.com> Cc: AKASHI Takahiro <takahiro.akashi@linaro.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | arm64: svc: Ensure hardirq tracing is updated before return  [Will Deacon, 2018-07-30, 1 file changed, -1/+8]
    We always run userspace with interrupts enabled, but with the recent conversion of the syscall entry/exit code to C, we don't inform the hardirq tracing code that interrupts are about to become enabled by virtue of restoring the EL0 SPSR. This patch ensures that trace_hardirqs_on() is called on the syscall return path when we return to the assembly code with interrupts still disabled.
    Fixes: f37099b6992a ("arm64: convert syscall trace logic to C") Reported-by: Julien Grall <julien.grall@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | Merge branch 'for-next/perf' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into aarch64/for-next/core  [Will Deacon, 2018-07-27, 1 file changed, -50/+201]
    Pull in arm perf updates, including support for 64-bit (chained) event counters and some non-critical fixes for some of the system PMU drivers.
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | arm64: perf: Add support for chaining event counters  [Suzuki K Poulose, 2018-07-10, 1 file changed, -27/+158]
    Add support for 64bit events by using chained event counters and 64bit cycle counters.
    PMUv3 allows chaining a pair of adjacent 32-bit counters, effectively forming a 64-bit counter. The low/even counter is programmed to count the event of interest, and the high/odd counter is programmed to count the CHAIN event, taken when the low/even counter overflows.
    For CPU cycles, when 64bit mode is requested, the cycle counter is used in 64bit mode. If the cycle counter is not available, we fall back to chaining.
    Cc: Will Deacon <will.deacon@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
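    A generic sketch of how a 64-bit value is composed from a chained pair of 32-bit counters (the read_counter32() helper is hypothetical; the driver's actual read path differs in detail):

        #include <linux/types.h>

        /* Hypothetical helper: read the 32-bit hardware counter 'idx'. */
        extern u32 read_counter32(int idx);

        /*
         * With chaining, the even counter 'idx' counts the event and the odd
         * counter 'idx + 1' counts the CHAIN event, i.e. overflows of 'idx'.
         * Re-read the high half to catch an overflow racing with the low read.
         */
        static u64 read_chained_counter(int idx)
        {
                u32 hi, lo;

                do {
                        hi = read_counter32(idx + 1);
                        lo = read_counter32(idx);
                } while (hi != read_counter32(idx + 1));

                return ((u64)hi << 32) | lo;
        }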
| | * | arm64: perf: Disable PMU while processing counter overflowsSuzuki K Poulose2018-07-101-22/+28
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The arm64 PMU updates the event counters and reprograms the counters in the overflow IRQ handler without disabling the PMU. This could potentially cause skews in for group counters, where the overflowed counters may potentially loose some event counts, while they are reprogrammed. To prevent this, disable the PMU while we process the counter overflows and enable it right back when we are done. This patch also moves the PMU stop/start routines to avoid a forward declaration. Suggested-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
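    The shape of the change, as a heavily simplified sketch (the handler signature and body are assumptions for illustration; armv8pmu_stop()/armv8pmu_start() are the driver's existing routines):

        /*
         * Overflow handling pattern: stop the PMU, process and reprogram the
         * overflowed counters, then start the PMU again.  The real handler
         * also reads the overflow status register and walks the events.
         */
        static irqreturn_t armv8pmu_handle_irq(int irq_num, void *dev)
        {
                struct arm_pmu *cpu_pmu = dev;

                armv8pmu_stop(cpu_pmu);

                /* ... read overflow status, update counts, set new periods ... */

                armv8pmu_start(cpu_pmu);

                return IRQ_HANDLED;
        }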
| | * | arm64: perf: Clean up armv8pmu_select_counter  [Suzuki K Poulose, 2018-07-10, 1 file changed, -10/+19]
    armv8pmu_select_counter always returns the passed idx. So let us make that void and get rid of the pointless checks.
    Suggested-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
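    The cleanup roughly amounts to the following (sketch; ARMV8_IDX_TO_COUNTER() is the driver's index-to-counter macro, shown here as an assumption):

        #include <asm/sysreg.h>
        #include <asm/barrier.h>

        static inline void armv8pmu_select_counter(int idx)
        {
                u32 counter = ARMV8_IDX_TO_COUNTER(idx);

                /* Select the counter in PMSELR_EL0 so that subsequent
                 * PMXEVCNTR/PMXEVTYPER accesses target it. */
                write_sysreg(counter, pmselr_el0);
                isb();
        }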
| | * | arm_pmu: Tidy up clear_event_idx call backs  [Suzuki K Poulose, 2018-07-10, 1 file changed, -0/+7]
    The armpmu uses the get_event_idx callback to allocate an event counter for a given event, which marks the selected counter as "used". Now, when we delete the counter, the arm_pmu goes ahead and clears the "used" bit and then invokes the "clear_event_idx" call back, which kind of splits the job between the core code and the backend. To keep things tidy, mandate the implementation of clear_event_idx() and add it for existing backends. This will be useful for adding the chained event support, where we leave the event idx maintenance to the backend.
    Also, when an event is removed from the PMU, reset the hw.idx to indicate that a counter is not allocated for this event, to help the backends do better checks. This will also be used for the chain counter support.
    Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | arm_pmu: Change API to support 64bit counter values  [Suzuki K Poulose, 2018-07-10, 1 file changed, -5/+4]
    Convert the {read/write}_counter APIs to handle 64bit values to enable supporting chained event counters. The backends still use 32bit values and we pass them 32bit values only. So in effect there are no functional changes.
    Cc: Will Deacon <will.deacon@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
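    To illustrate what the widened API means for a 32-bit backend, a sketch with a hypothetical backend (all "example_*" names are invented for illustration, not taken from the driver):

        #include <linux/perf_event.h>
        #include <linux/types.h>

        static u32 example_hw_counters[32];        /* hypothetical counter state */

        static u32 example_hw_read(int idx)
        {
                return example_hw_counters[idx];
        }

        static void example_hw_write(int idx, u32 val)
        {
                example_hw_counters[idx] = val;
        }

        /* The read/write callbacks now exchange u64 values with the core,
         * even though a 32-bit backend simply truncates internally. */
        static u64 example_pmu_read_counter(struct perf_event *event)
        {
                return (u64)example_hw_read(event->hw.idx);
        }

        static void example_pmu_write_counter(struct perf_event *event, u64 val)
        {
                example_hw_write(event->hw.idx, (u32)val);
        }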
| | * | arm_pmu: Clean up maximum period handling  [Suzuki K Poulose, 2018-07-10, 1 file changed, -1/+0]
    Each PMU defines its max_period as the maximum value that the counter can count. Since all the PMU backends support 32bit counters by default, let us remove the redundant field. No functional changes.
    Cc: Will Deacon <will.deacon@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | | arm64: Add support for STACKLEAK gcc plugin  [Laura Abbott, 2018-07-26, 2 files changed, -0/+25]
    This adds support for the STACKLEAK gcc plugin to arm64 by implementing stackleak_check_alloca(), based heavily on the x86 version, and adding the two helpers used by the stackleak common code: current_top_of_stack() and on_thread_stack(). The stack erasure calls are made at syscall returns. Additionally, this disables the plugin in hypervisor and EFI stub code, which are out of scope for the protection.
    Acked-by: Alexander Popov <alex.popov@linux.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: Laura Abbott <labbott@redhat.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | | arm64: Add stack information to on_accessible_stack  [Laura Abbott, 2018-07-26, 3 files changed, -11/+44]
    In preparation for enabling the stackleak plugin on arm64, we need a way to get the bounds of the current stack. Extend on_accessible_stack to get this information.
    Acked-by: Alexander Popov <alex.popov@linux.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Laura Abbott <labbott@redhat.com> [will: folded in fix for allmodconfig build breakage w/ sdei] Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | | arm64: acpi: fix alignment fault in accessing ACPI  [AKASHI Takahiro, 2018-07-23, 1 file changed, -8/+3]
    This is a fix against the issue that the crash dump kernel may hang up during booting, which can happen on any ACPI-based system with "ACPI Reclaim Memory."
    (kernel messages after panic kicked off kdump)
        (snip...)
        Bye!
        (snip...)
        ACPI: Core revision 20170728
        pud=000000002e7d0003, *pmd=000000002e7c0003, *pte=00e8000039710707
        Internal error: Oops: 96000021 [#1] SMP
        Modules linked in:
        CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.14.0-rc6 #1
        task: ffff000008d05180 task.stack: ffff000008cc0000
        PC is at acpi_ns_lookup+0x25c/0x3c0
        LR is at acpi_ds_load1_begin_op+0xa4/0x294
        (snip...)
        Process swapper/0 (pid: 0, stack limit = 0xffff000008cc0000)
        Call trace:
        (snip...)
        [<ffff0000084a6764>] acpi_ns_lookup+0x25c/0x3c0
        [<ffff00000849b4f8>] acpi_ds_load1_begin_op+0xa4/0x294
        [<ffff0000084ad4ac>] acpi_ps_build_named_op+0xc4/0x198
        [<ffff0000084ad6cc>] acpi_ps_create_op+0x14c/0x270
        [<ffff0000084acfa8>] acpi_ps_parse_loop+0x188/0x5c8
        [<ffff0000084ae048>] acpi_ps_parse_aml+0xb0/0x2b8
        [<ffff0000084a8e10>] acpi_ns_one_complete_parse+0x144/0x184
        [<ffff0000084a8e98>] acpi_ns_parse_table+0x48/0x68
        [<ffff0000084a82cc>] acpi_ns_load_table+0x4c/0xdc
        [<ffff0000084b32f8>] acpi_tb_load_namespace+0xe4/0x264
        [<ffff000008baf9b4>] acpi_load_tables+0x48/0xc0
        [<ffff000008badc20>] acpi_early_init+0x9c/0xd0
        [<ffff000008b70d50>] start_kernel+0x3b4/0x43c
        Code: b9008fb9 2a000318 36380054 32190318 (b94002c0)
        ---[ end trace c46ed37f9651c58e ]---
        Kernel panic - not syncing: Fatal exception
        Rebooting in 10 seconds..
    (diagnosis)
    * This fault is a data abort, alignment fault (ESR=0x96000021) during reading out an ACPI table.
    * Initial ACPI tables are normally stored in system ram and marked as "ACPI Reclaim memory" by the firmware.
    * After the commit f56ab9a5b73c ("efi/arm: Don't mark ACPI reclaim memory as MEMBLOCK_NOMAP"), those regions are differently handled as they are "memblock-reserved", without the NOMAP bit.
    * So they are now excluded from the device tree's "usable-memory-range" which kexec-tools determines based on a current view of /proc/iomem.
    * When the crash dump kernel boots up, it tries to access ACPI tables by mapping them with ioremap(), not ioremap_cache(), in acpi_os_ioremap() since they are no longer part of mapped system ram.
    * Given that ACPI accessor/helper functions are compiled in without unaligned access support (ACPI_MISALIGNMENT_NOT_SUPPORTED), any unaligned access to ACPI tables can cause a fatal panic.
    With this patch, acpi_os_ioremap() always honors memory attribute information provided by the firmware (EFI) and retaining cacheability allows the kernel safe access to ACPI tables.
    Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Reviewed-by: James Morse <james.morse@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reported-by and Tested-by: Bhupesh Sharma <bhsharma@redhat.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | | arm64: export memblock_reserve()d regions via /proc/iomem  [James Morse, 2018-07-23, 1 file changed, -0/+38]
    There has been some confusion around what is necessary to prevent kexec overwriting important memory regions. memblock: reserve, or nomap? Only memblock nomap regions are reported via /proc/iomem; kexec's user-space doesn't know about memblock_reserve()d regions.
    Until commit f56ab9a5b73ca ("efi/arm: Don't mark ACPI reclaim memory as MEMBLOCK_NOMAP") the ACPI tables were nomap; now they are reserved and thus possible for kexec to overwrite with the new kernel or initrd. But this was always broken, as the UEFI memory map is also reserved and not marked as nomap.
    Exporting both nomap and reserved memblock types is a nuisance as they live in different memblock structures which we can't walk at the same time.
    Take a second walk over memblock.reserved and add new 'reserved' subnodes for the memblock_reserved() regions that aren't already described by the existing code (e.g. Kernel Code). We use reserve_region_with_split() to find the gaps in existing named regions. This handles the gap between 'kernel code' and 'kernel data' which is memblock_reserve()d, but already partially described by request_standard_resources(). e.g.:
        | 80000000-dfffffff : System RAM
        |   80080000-80ffffff : Kernel code
        |   81000000-8158ffff : reserved
        |   81590000-8237efff : Kernel data
        |   a0000000-dfffffff : Crash kernel
        | e00f0000-f949ffff : System RAM
    reserve_region_with_split needs kzalloc() which isn't available when request_standard_resources() is called, so use an initcall.
    Reported-by: Bhupesh Sharma <bhsharma@redhat.com> Reported-by: Tyler Baicar <tbaicar@codeaurora.org> Suggested-by: Akashi Takahiro <takahiro.akashi@linaro.org> Signed-off-by: James Morse <james.morse@arm.com> Fixes: d28f6df1305a ("arm64/kexec: Add core kexec support") Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> CC: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
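    The overall shape of the initcall, as a simplified sketch (the real patch also checks for overlap with already-named resources; the function name and initcall level are assumptions):

        #include <linux/init.h>
        #include <linux/ioport.h>
        #include <linux/memblock.h>

        /*
         * Walk memblock.reserved and describe each region as a "reserved"
         * child resource.  reserve_region_with_split() only fills the gaps,
         * so ranges already named by request_standard_resources() (e.g.
         * "Kernel code") keep their existing entries.
         */
        static int __init reserve_memblock_reserved_regions(void)
        {
                phys_addr_t start, end;
                u64 i;

                for_each_reserved_mem_region(i, &start, &end)
                        reserve_region_with_split(&iomem_resource, start,
                                                  end - 1, "reserved");

                return 0;
        }
        arch_initcall(reserve_memblock_reserved_regions);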
| * | | arm64: fix possible spectre-v1 write in ptrace_hbp_set_event()  [Mark Rutland, 2018-07-23, 1 file changed, -8/+11]
    It's possible for userspace to control idx. Sanitize idx when using it as an array index, to inhibit the potential spectre-v1 write gadget. Found by smatch.
    Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
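    The standard mitigation pattern looks like the sketch below, applied here to the hardware-breakpoint index; the container, slot array and bound are hypothetical names for illustration:

        #include <linux/errno.h>
        #include <linux/nospec.h>

        #define NR_SLOTS 16                     /* hypothetical bound */

        struct example_ctx {                    /* hypothetical container */
                void *slots[NR_SLOTS];
        };

        static int example_set_slot(struct example_ctx *ctx, unsigned int idx,
                                    void *val)
        {
                if (idx >= NR_SLOTS)
                        return -EINVAL;

                /* Clamp the userspace-controlled index under speculation so a
                 * mispredicted bounds check cannot become a write gadget. */
                idx = array_index_nospec(idx, NR_SLOTS);
                ctx->slots[idx] = val;
                return 0;
        }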
| * | | arm64: Drop asmlinkage qualifier from syscall_trace_{enter,exit}  [Will Deacon, 2018-07-12, 1 file changed, -2/+2]
    syscall_trace_{enter,exit} are only called from C code, so drop the asmlinkage qualifier from their definitions.
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | | arm64: implement syscall wrappers  [Mark Rutland, 2018-07-12, 3 files changed, -5/+19]
    To minimize the risk of userspace-controlled values being used under speculation, this patch adds pt_regs based syscall wrappers for arm64, which pass the minimum set of required userspace values to syscall implementations. For each syscall, a wrapper which takes a pt_regs argument is automatically generated, and this extracts the arguments before calling the "real" syscall implementation.
    Each syscall has three functions generated:
    * __do_<compat_>sys_<name> is the "real" syscall implementation, with the expected prototype.
    * __se_<compat_>sys_<name> is the sign-extension/narrowing wrapper, inherited from common code. This takes a series of long parameters, casting each to the requisite types required by the "real" syscall implementation in __do_<compat_>sys_<name>. This wrapper *may* not be necessary on arm64 given the AAPCS rules on unused register bits, but it seemed safer to keep the wrapper for now.
    * __arm64_<compat_>_sys_<name> takes a struct pt_regs pointer, and extracts *only* the relevant register values, passing these on to the __se_<compat_>sys_<name> wrapper.
    The syscall invocation code is updated to handle the calling convention required by __arm64_<compat_>_sys_<name>, and passes a single struct pt_regs pointer. The compiler can fold the syscall implementation and its wrappers, such that the overhead of this approach is minimized.
    Note that we play games with sys_ni_syscall(). It can't be defined with SYSCALL_DEFINE0() because we must avoid the possibility of error injection. Additionally, there are a couple of locations where we need to call it from C code, and we don't (currently) have a ksys_ni_syscall(). While it has no wrapper, passing in a redundant pt_regs pointer is benign per the AAPCS. When ARCH_HAS_SYSCALL_WRAPPER is selected, no prototype is defined for sys_ni_syscall(). Since we need to treat it differently for in-kernel calls and the syscall tables, the prototype is defined as required.
    The wrappers are largely the same as their x86 counterparts, but simplified as we don't have a variety of compat calling conventions that require separate stubs. Unlike x86, we have some zero-argument compat syscalls, and must define COMPAT_SYSCALL_DEFINE0() to ensure that these are also given an __arm64_compat_sys_ prefix.
    Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
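    To make the three generated functions concrete, here is roughly what a two-argument syscall expands to under this scheme (hand-written illustrative expansion for a hypothetical "xyzzy" syscall, not the literal macro output):

        #include <linux/syscalls.h>
        #include <asm/ptrace.h>

        /* __do_*: the "real" implementation with the expected prototype. */
        static long __do_sys_xyzzy(unsigned int fd, unsigned long arg)
        {
                /* ... syscall body ... */
                return 0;
        }

        /* __se_*: sign-extension/narrowing wrapper taking longs. */
        static long __se_sys_xyzzy(long fd, long arg)
        {
                return __do_sys_xyzzy((unsigned int)fd, (unsigned long)arg);
        }

        /* __arm64_*: pt_regs-based entry point; only the needed regs are read. */
        long __arm64_sys_xyzzy(const struct pt_regs *regs)
        {
                return __se_sys_xyzzy(regs->regs[0], regs->regs[1]);
        }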
| * | | arm64: convert compat wrappers to C  [Mark Rutland, 2018-07-12, 3 files changed, -123/+104]
    In preparation for converting to pt_regs syscall wrappers, convert our existing compat wrappers to C. This will allow the pt_regs wrappers to be automatically generated, and will allow for the compat register manipulation to be folded in with the pt_regs accesses.
    To avoid confusion with the upcoming pt_regs wrappers and existing compat wrappers provided by core code, the C wrappers are renamed to compat_sys_aarch32_<syscall>. With the assembly wrappers gone, we can get rid of entry32.S and the associated boilerplate.
    Note that these must call the ksys_* syscall entry points, as the usual sys_* entry points will be modified to take a single pt_regs pointer argument.
    Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | | arm64: use SYSCALL_DEFINE6() for mmap  [Mark Rutland, 2018-07-12, 1 file changed, -3/+3]
    We don't currently annotate our mmap implementation as a syscall, as we need to do to use pt_regs syscall wrappers. Let's mark it as a real syscall. There should be no functional change as a result of this patch.
    Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
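    Annotated as a syscall, the arm64 mmap entry point looks roughly like this (sketch; the page-offset conversion mirrors what the existing sys.c implementation does):

        #include <linux/mm.h>
        #include <linux/syscalls.h>

        SYSCALL_DEFINE6(mmap, unsigned long, addr, unsigned long, len,
                        unsigned long, prot, unsigned long, flags,
                        unsigned long, fd, off_t, off)
        {
                if (offset_in_page(off) != 0)
                        return -EINVAL;

                return ksys_mmap_pgoff(addr, len, prot, flags, fd,
                                       off >> PAGE_SHIFT);
        }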
| * | | arm64: use {COMPAT,}SYSCALL_DEFINE0 for sigreturn  [Mark Rutland, 2018-07-12, 2 files changed, -3/+3]
    We don't currently annotate our various sigreturn functions as syscalls, as we need to do to use pt_regs syscall wrappers. Let's mark them as real syscalls. For compat_sys_sigreturn and compat_sys_rt_sigreturn, this changes the return type from int to long, matching the prototypes in sys32.c.
    Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | | arm64: remove in-kernel call to sys_personality()  [Mark Rutland, 2018-07-12, 1 file changed, -1/+1]
    With pt_regs syscall wrappers, the calling convention for sys_personality() will change. Use ksys_personality(), which is functionally equivalent.
    Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | | arm64: drop alignment from syscall tables  [Mark Rutland, 2018-07-12, 2 files changed, -10/+2]
    Our syscall tables are aligned to 4096 bytes, which allowed their addresses to be generated with a single adrp in entry.S. This has the unfortunate property of wasting space in .rodata for the necessary padding. Now that the address is generated by C code, we can rely on the compiler to do the right thing, and drop the alignment.
    Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | | arm64: zero GPRs upon entry from EL0  [Mark Rutland, 2018-07-12, 1 file changed, -1/+7]
    We can zero GPRs x0 - x29 upon entry from EL0 to make it harder for userspace to control values consumed by speculative gadgets. We don't blat x30, since this is stashed much later, and we'll blat it before invoking C code.
    Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | | arm64: don't reload GPRs after apply_ssbd  [Mark Rutland, 2018-07-12, 1 file changed, -13/+7]
    Now that all of the syscall logic works on the saved pt_regs, apply_ssbd can safely corrupt x0-x3 in the entry paths, and we no longer need to restore them. So let's remove the logic doing so.
    With that logic gone, we can fold the branch target into the macro, so that callers need not deal with this. GAS provides \@, which provides a unique value per macro invocation, which we can use to create a unique label.
    Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | | arm64: don't restore GPRs when context tracking  [Mark Rutland, 2018-07-12, 1 file changed, -11/+1]
    Now that syscalls are invoked with pt_regs, we no longer need to ensure that the argument registers are live in the entry assembly, and it's fine to not restore them after context_tracking_user_exit() has corrupted them.
    Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | | arm64: convert native/compat syscall entry to C  [Mark Rutland, 2018-07-12, 2 files changed, -40/+39]
    Now that the syscall invocation logic is in C, we can migrate the rest of the syscall entry logic over, so that the entry assembly needn't look at the register values at all.
    The SVE reset across syscall logic now unconditionally clears TIF_SVE, but sve_user_disable() will only write back to CPACR_EL1 when SVE is actually enabled.
    Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Dave Martin <dave.martin@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | | arm64: convert syscall trace logic to C  [Mark Rutland, 2018-07-12, 2 files changed, -56/+56]
    Currently syscall tracing is a tricky assembly state machine, which can be rather difficult to follow, and even harder to modify. Before we start fiddling with it for pt_regs syscalls, let's convert it to C. This is not intended to have any functional change.
    Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | | arm64: convert raw syscall invocation to C  [Mark Rutland, 2018-07-12, 4 files changed, -43/+59]
    As a first step towards invoking syscalls with a pt_regs argument, convert the raw syscall invocation logic to C. We end up with a bit more register shuffling, but the unified invocation logic means we can unify the tracing paths, too.
    Previously, assembly had to open-code calls to ni_sys() when the system call number was out-of-bounds for the relevant syscall table. This case is now handled by invoke_syscall(), and the assembly no longer needs to handle this case explicitly. This allows the tracing paths to be simplified and unified, as we no longer need the __ni_sys_trace path and the __sys_trace_return label.
    This only converts the invocation of the syscall. The rest of the syscall triage and tracing is left in assembly for now, and will be converted in subsequent patches.
    Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
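    A simplified sketch of the C invocation logic described above (the real code also funnels the call through a helper and handles compat quirks in the ni path):

        #include <linux/errno.h>
        #include <linux/nospec.h>
        #include <asm/ptrace.h>

        typedef long (*syscall_fn_t)(unsigned long, unsigned long, unsigned long,
                                     unsigned long, unsigned long, unsigned long);

        static long do_ni_syscall(struct pt_regs *regs)
        {
                return -ENOSYS;         /* compat handling elided in this sketch */
        }

        static void invoke_syscall(struct pt_regs *regs, unsigned int scno,
                                   unsigned int sc_nr,
                                   const syscall_fn_t syscall_table[])
        {
                long ret;

                if (scno < sc_nr) {
                        syscall_fn_t fn;

                        fn = syscall_table[array_index_nospec(scno, sc_nr)];
                        ret = fn(regs->regs[0], regs->regs[1], regs->regs[2],
                                 regs->regs[3], regs->regs[4], regs->regs[5]);
                } else {
                        ret = do_ni_syscall(regs);
                }

                regs->regs[0] = ret;
        }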
| * | | arm64: introduce syscall_fn_t  [Mark Rutland, 2018-07-12, 2 files changed, -6/+10]
    In preparation for invoking arbitrary syscalls from C code, let's define a type for an arbitrary syscall, matching the parameter passing rules of the AAPCS. There should be no functional change as a result of this patch.
    Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | | arm64: remove sigreturn wrappers  [Mark Rutland, 2018-07-12, 6 files changed, -25/+9]
    The arm64 sigreturn* syscall handlers are non-standard. Rather than taking a number of user parameters in registers as per the AAPCS, they expect the pt_regs as their sole argument. To make this work, we override the syscall definitions to invoke wrappers written in assembly, which mov the SP into x0, and branch to their respective C functions.
    On other architectures (such as x86), the sigreturn* functions take no argument and instead use current_pt_regs() to acquire the user registers. This requires less boilerplate code, and allows for other features such as interposing C code in this path. This patch takes the same approach for arm64.
    Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tentatively-reviewed-by: Dave Martin <dave.martin@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | | arm64: move sve_user_{enable,disable} to <asm/fpsimd.h>  [Mark Rutland, 2018-07-12, 1 file changed, -10/+0]
    In subsequent patches, we'll want to make use of sve_user_enable() and sve_user_disable() outside of kernel/fpsimd.c. Let's move these to <asm/fpsimd.h> where we can make use of them.
    To avoid ifdeffery in sequences like:
        if (system_supports_sve() && some_condition)
            sve_user_disable();
    ... empty stubs are provided when support for SVE is not enabled. Note that system_supports_sve() contains an IS_ENABLED(CONFIG_ARM64_SVE) check, so the sve_user_disable() call should be optimized away entirely when CONFIG_ARM64_SVE is not selected. To ensure that this is the case, the stub definitions contain a BUILD_BUG(), as we do for other stubs for which calls should always be optimized away when the relevant config option is not selected.
    At the same time, the include list of <asm/fpsimd.h> is sorted while adding <asm/sysreg.h>.
    Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Dave Martin <dave.martin@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
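    The stub pattern described above, sketched for <asm/fpsimd.h> (the enabled-case bodies are elided here; only the ifdef/BUILD_BUG structure is shown):

        #include <linux/build_bug.h>

        #ifdef CONFIG_ARM64_SVE

        static inline void sve_user_enable(void)
        {
                /* ... flip CPACR_EL1.ZEN to allow EL0 SVE accesses ... */
        }

        static inline void sve_user_disable(void)
        {
                /* ... flip CPACR_EL1.ZEN to trap EL0 SVE accesses ... */
        }

        #else /* !CONFIG_ARM64_SVE */

        /*
         * Callers are expected to be guarded by system_supports_sve(), which
         * contains an IS_ENABLED(CONFIG_ARM64_SVE) check, so these calls must
         * be optimized away entirely; BUILD_BUG() enforces that at build time.
         */
        static inline void sve_user_enable(void) { BUILD_BUG(); }
        static inline void sve_user_disable(void) { BUILD_BUG(); }

        #endif /* !CONFIG_ARM64_SVE */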
| * | | arm64: kill change_cpacr()  [Mark Rutland, 2018-07-12, 1 file changed, -11/+2]
    Now that we have sysreg_clear_set(), we can use this instead of change_cpacr(). Note that the order of the set and clear arguments differs between change_cpacr() and sysreg_clear_set(), so these are flipped as part of the conversion. Also, sve_user_enable() redundantly clears CPACR_EL1_ZEN_EL0EN before setting it; this is removed for clarity.
    Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Dave Martin <dave.martin@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
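    A sketch of the conversion; sysreg_clear_set() takes the bits to clear first, then the bits to set, which is the reverse of the old helper's argument order (the old helper is shown for comparison and reflects the description above, not a verbatim copy):

        #include <asm/sysreg.h>

        /* Old open-coded helper, now removed. */
        static void change_cpacr(u64 val, u64 mask)
        {
                u64 cpacr = read_sysreg(cpacr_el1);
                u64 new = (cpacr & ~mask) | val;

                if (new != cpacr)
                        write_sysreg(new, cpacr_el1);
        }

        /* After the conversion: */
        static void sve_user_enable(void)
        {
                sysreg_clear_set(cpacr_el1, 0, CPACR_EL1_ZEN_EL0EN);
        }

        static void sve_user_disable(void)
        {
                sysreg_clear_set(cpacr_el1, CPACR_EL1_ZEN_EL0EN, 0);
        }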
| * | | arm64: kill config_sctlr_el1()  [Mark Rutland, 2018-07-12, 3 files changed, -7/+6]
    Now that we have sysreg_clear_set(), we can consistently use this instead of config_sctlr_el1().
    Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Dave Martin <dave.martin@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | | arm64: consistently use unsigned long for thread flags  [Mark Rutland, 2018-07-12, 1 file changed, -1/+1]
    In do_notify_resume, we manipulate thread_flags as a 32-bit unsigned int, whereas thread_info::flags is a 64-bit unsigned long, and elsewhere (e.g. in the entry assembly) we manipulate the flags as a 64-bit quantity. For consistency, and to avoid problems if we end up with more than 32 flags, let's make do_notify_resume take the flags as a 64-bit unsigned long.
    Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Dave Martin <dave.martin@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | | Revert "arm64: fix infinite stacktrace"Will Deacon2018-07-121-3/+0Star
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This reverts commit 7e7df71fd57ff2894d96abb0080922bf39460a79. When unwinding out of the IRQ stack and onto the interrupted EL1 stack, we cannot rely on the frame pointer being strictly increasing, as this could terminate the backtrace early depending on how the stacks have been allocated. Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | | arm64: rseq: Implement backend rseq calls and select HAVE_RSEQ  [Will Deacon, 2018-07-11, 3 files changed, -0/+7]
    Implement calls to rseq_signal_deliver, rseq_handle_notify_resume and rseq_syscall so that we can select HAVE_RSEQ on arm64.
    Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | | arm64: numa: rework ACPI NUMA initialization  [Lorenzo Pieralisi, 2018-07-09, 2 files changed, -46/+81]
    Current ACPI ARM64 NUMA initialization code in acpi_numa_gicc_affinity_init() carries out NUMA nodes creation and cpu<->node mappings at the same time in the arch backend so that a single SRAT walk is needed to parse both pieces of information. This implies that the cpu<->node mappings must be stashed in an array (sized NR_CPUS) so that SMP code can later use the stashed values to avoid another SRAT table walk to set up the early cpu<->node mappings.
    If the kernel is configured with a NR_CPUS value less than the actual processor entries in the SRAT (and MADT), the logic in acpi_numa_gicc_affinity_init() is broken in that the cpu<->node mapping is only carried out (and stashed for future use) for a number of SRAT entries up to NR_CPUS, which do not necessarily correspond to the possible cpus detected at SMP initialization in acpi_map_gic_cpu_interface() (ie MADT and SRAT processor entries order is not enforced), which leaves the kernel with broken cpu<->node mappings.
    Furthermore, given the current ACPI NUMA code parsing logic in acpi_numa_gicc_affinity_init(), PXM domains for CPUs that are not parsed because they exceed NR_CPUS entries are not mapped to NUMA nodes (ie the PXM corresponding node is not created in the kernel), leaving the system with a broken NUMA topology.
    Rework the ACPI ARM64 NUMA initialization process so that the NUMA nodes creation and cpu<->node mappings are decoupled. cpu<->node mappings are moved to SMP initialization code (where they are needed), at the cost of an extra SRAT walk so that ACPI NUMA mappings can be batched before being applied, fixing current parsing pitfalls.
    Acked-by: Hanjun Guo <hanjun.guo@linaro.org> Tested-by: John Garry <john.garry@huawei.com> Fixes: d8b47fca8c23 ("arm64, ACPI, NUMA: NUMA support based on SRAT and SLIT") Link: http://lkml.kernel.org/r/1527768879-88161-2-git-send-email-xiexiuqi@huawei.com Reported-by: Xie XiuQi <xiexiuqi@huawei.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Punit Agrawal <punit.agrawal@arm.com> Cc: Jonathan Cameron <jonathan.cameron@huawei.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Hanjun Guo <guohanjun@huawei.com> Cc: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com> Cc: Jeremy Linton <jeremy.linton@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Xie XiuQi <xiexiuqi@huawei.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | | arm64: topology: re-introduce numa mask check for scheduler MC selection  [Sudeep Holla, 2018-07-06, 1 file changed, -1/+6]
    Commit 37c3ec2d810f ("arm64: topology: divorce MC scheduling domain from core_siblings") selected the smallest of LLC, socket siblings, and NUMA node siblings to ensure that the sched domain we build for the MC layer isn't larger than the DIE above it, or that it is shrunk to the socket or NUMA node if the LLC exists across NUMA nodes/chiplets.
    Commit acd32e52e4e0 ("arm64: topology: Avoid checking numa mask for scheduler MC selection") reverted the NUMA siblings checks since the CPU topology masks weren't updated on hotplug at that time. This patch re-introduces the numa mask check as the CPU and NUMA topology is now updated in hotplug paths. Effectively, this patch does a partial revert of commit acd32e52e4e0.
    Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Tested-by: Ganapatrao Kulkarni <ganapatrao.kulkarni@cavium.com> Tested-by: Hanjun Guo <hanjun.guo@linaro.org> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
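    A sketch of the resulting selection logic (field names follow the arm64 cpu_topology structure after the llc_sibling rename below; the exact shrinking order and checks are simplified):

        #include <linux/cpumask.h>
        #include <linux/topology.h>
        #include <asm/topology.h>

        const struct cpumask *cpu_coregroup_mask(int cpu)
        {
                const cpumask_t *core_mask = cpumask_of_node(cpu_to_node(cpu));

                /* Start from the NUMA node siblings and shrink to the package
                 * siblings if the package is smaller than the node. */
                if (cpumask_subset(&cpu_topology[cpu].core_sibling, core_mask))
                        core_mask = &cpu_topology[cpu].core_sibling;

                /* Shrink further to the LLC siblings if an LLC was reported
                 * and it is smaller still. */
                if (cpu_topology[cpu].llc_id != -1 &&
                    cpumask_subset(&cpu_topology[cpu].llc_sibling, core_mask))
                        core_mask = &cpu_topology[cpu].llc_sibling;

                return core_mask;
        }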
| * | | arm64: topology: rename llc_siblings to align with other struct members  [Sudeep Holla, 2018-07-06, 1 file changed, -6/+6]
    Similar to core_sibling and thread_sibling, it's better to align and rename llc_siblings to llc_sibling.
    Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Tested-by: Ganapatrao Kulkarni <ganapatrao.kulkarni@cavium.com> Tested-by: Hanjun Guo <hanjun.guo@linaro.org> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>