path: root/target/arm/helper.c
Commit message | Author | Age | Files | Lines
* semihosting: Move include/hw/semihosting/ -> include/semihosting/ | Philippe Mathieu-Daudé | 2021-03-10 | 1 | -2/+2
  We want to move the semihosting code out of hw/ in the next patch. This patch contains the mechanical steps, created using:
    $ git mv include/hw/semihosting/ include/
    $ sed -i s,hw/semihosting,semihosting, $(git grep -l hw/semihosting)
  Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Message-Id: <20210226131356.3964782-2-f4bug@amsat.org>
  Message-Id: <20210305135451.15427-2-alex.bennee@linaro.org>
* target/arm: Use TCF0 and TFSRE0 for unprivileged tag checks | Peter Collingbourne | 2021-03-05 | 1 | -1/+1
  Section D6.7 of the ARM ARM states: For the purpose of determining Tag Check Fault handling, unprivileged load and store instructions are treated as if executed at EL0 when executed at either:
    - EL1, when the Effective value of PSTATE.UAO is 0.
    - EL2, when both the Effective value of HCR_EL2.{E2H, TGE} is {1, 1} and the Effective value of PSTATE.UAO is 0.
  ARM has confirmed a defect in the pseudocode function AArch64.TagCheckFault that makes it inconsistent with the above wording. The remedy is to adjust references to PSTATE.EL in that function to instead refer to AArch64.AccessUsesEL(acctype), so that unprivileged instructions use SCTLR_EL1.TCF0 and TFSRE0_EL1. The exception type for synchronous tag check faults remains unchanged. This patch implements the described change by partially reverting commits 50244cc76abc and cc97b0019bb5.
  Signed-off-by: Peter Collingbourne <pcc@google.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210219201820.2672077-1-pcc@google.com
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
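  The selection rule quoted above is compact enough to model in isolation. A minimal, self-contained C sketch (illustrative only, not the QEMU change itself; the SCTLR_EL1.TCF0/TCF bit positions are architectural, all function names here are made up):

    #include <stdbool.h>
    #include <stdint.h>

    /* Does this access use the EL0 tag-check controls? */
    static bool access_uses_el0(int current_el, bool unprivileged,
                                bool uao, bool e2h, bool tge)
    {
        if (current_el == 0) {
            return true;
        }
        if (!unprivileged || uao) {
            return false;
        }
        /* Unprivileged access, UAO clear: EL1, or EL2 with {E2H,TGE} == {1,1}. */
        return current_el == 1 || (current_el == 2 && e2h && tge);
    }

    /* Returns the 2-bit tag-check-fault mode controlling this access. */
    static unsigned tag_check_fault_mode(uint64_t sctlr_el1, int current_el,
                                         bool unprivileged, bool uao,
                                         bool e2h, bool tge)
    {
        if (access_uses_el0(current_el, unprivileged, uao, e2h, tge)) {
            return (sctlr_el1 >> 38) & 3; /* SCTLR_EL1.TCF0, faults recorded in TFSRE0_EL1 */
        }
        return (sctlr_el1 >> 40) & 3;     /* SCTLR_EL1.TCF, faults recorded in TFSR_EL1 */
    }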
* target/arm: Add support for FEAT_SSBS, Speculative Store Bypass Safe | Rebecca Cran | 2021-03-05 | 1 | -0/+37
  Add support for FEAT_SSBS. SSBS (Speculative Store Bypass Safe) is an optional feature in ARMv8.0, and mandatory in ARMv8.5.
  Signed-off-by: Rebecca Cran <rebecca@nuviainc.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210216224543.16142-2-rebecca@nuviainc.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Correctly initialize MDCR_EL2.HPMN | Daniel Müller | 2021-02-11 | 1 | -5/+4
  When working with performance monitoring counters, we look at MDCR_EL2.HPMN as part of the check whether a counter is enabled. This check fails, because MDCR_EL2.HPMN is reset to 0, meaning that no counters are "enabled" for < EL2. That's in violation of the Arm specification, which states that
    > On a Warm reset, this field [MDCR_EL2.HPMN] resets to the value in
    > PMCR_EL0.N
  That's also what a comment in the code acknowledges, but the necessary adjustment seems to have been forgotten when support for more counters was added. This change fixes the issue by setting the reset value to PMCR.N, which is four.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
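  A standalone sketch of the partitioning rule this fixes, assuming the four-counter PMU mentioned above (model code, not the helper.c implementation):

    #include <stdbool.h>
    #include <stdint.h>

    #define PMCR_N        4            /* number of event counters implemented */
    #define MDCR_HPMN(m)  ((m) & 0x1f) /* MDCR_EL2.HPMN is bits [4:0] */

    /*
     * Counters [0 .. HPMN-1] are controlled from EL1/EL0; counters
     * [HPMN .. N-1] are reserved for EL2.  With HPMN reset to 0 every
     * counter looked EL2-owned, so "enabled below EL2" was always false;
     * resetting HPMN to PMCR.N restores the architected behaviour.
     */
    static bool counter_visible_below_el2(unsigned counter, uint64_t mdcr_el2)
    {
        return counter < MDCR_HPMN(mdcr_el2);
    }

    static uint64_t mdcr_el2_reset_value(void)
    {
        return PMCR_N; /* HPMN field resets to PMCR_EL0.N */
    }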
* target/arm: Support AA32 DIT by moving PSTATE_SS from cpsr into env->pstate | Rebecca Cran | 2021-02-11 | 1 | -6/+18
  cpsr has been treated as being the same as spsr, but it isn't. Since PSTATE_SS isn't in cpsr, remove it and move it into env->pstate. This allows us to add support for CPSR_DIT, adding helper functions to merge SPSR_ELx to and from CPSR.
  Signed-off-by: Rebecca Cran <rebecca@nuviainc.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210208065700.19454-3-rebecca@nuviainc.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Add support for FEAT_DIT, Data Independent Timing | Rebecca Cran | 2021-02-11 | 1 | -0/+22
  Add support for FEAT_DIT. DIT (Data Independent Timing) is a required feature for ARMv8.4. Since virtual machine execution is largely nondeterministic and TCG is outside of the security domain, it's implemented as a NOP.
  Signed-off-by: Rebecca Cran <rebecca@nuviainc.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210208065700.19454-2-rebecca@nuviainc.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Fix SCR RES1 handling | Mike Nawrocki | 2021-02-11 | 1 | -2/+14
  The FW and AW bits of SCR_EL3 are RES1 only in some contexts. Force them to 1 only when there is no support for AArch32 at EL1 or above. The reset value will be 0x30 only if the CPU is AArch64-only; if there is support for AArch32 at EL1 or above, it will be reset to 0. Also adds helper function isar_feature_aa64_aa32_el1 to check if AArch32 is supported at EL1 or above.
  Signed-off-by: Mike Nawrocki <michael.nawrocki@gtri.gatech.edu>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210203165552.16306-2-michael.nawrocki@gtri.gatech.edu
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
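  As a rough model of the reset rule described here (SCR.FW is bit 4 and SCR.AW is bit 5, so 0x30 is exactly those two bits; a sketch only, not the actual helper.c code):

    #include <stdbool.h>
    #include <stdint.h>

    #define SCR_FW  (1u << 4)   /* RES1 unless AArch32 is supported at EL1 or above */
    #define SCR_AW  (1u << 5)   /* RES1 unless AArch32 is supported at EL1 or above */

    /* 0x30 == SCR_FW | SCR_AW, matching the reset value quoted above. */
    static uint32_t scr_reset_value(bool aarch32_supported_at_el1)
    {
        return aarch32_supported_at_el1 ? 0 : (SCR_FW | SCR_AW);
    }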
* target/arm: do not use cc->do_interrupt for KVM directly | Claudio Fontana | 2021-02-05 | 1 | -0/+4
  cc->do_interrupt is in theory a TCG callback used in accel/tcg only, to prepare the emulated architecture to take an interrupt as defined in the hardware specifications, but in reality the _do_interrupt style of functions in targets is also occasionally reused by KVM to prepare the architecture state in a similar way, where userspace code has identified that it needs to deliver an exception to the guest. In the case of ARM, that includes:
    1) the vcpu thread got a SIGBUS indicating a memory error, and we need to deliver a Synchronous External Abort to the guest to let it know about the error.
    2) the kernel told us about a debug exception (breakpoint, watchpoint) but it is not for one of QEMU's own gdbstub breakpoints/watchpoints, so it must be a breakpoint the guest itself has set up, therefore we need to deliver it to the guest.
  So in order to reuse code, the same arm_do_interrupt function is used. This is all fine, but we need to avoid calling it using the callback registered in CPUClass, since that one is now TCG-only. Fortunately this is easily solved by replacing calls to CPUClass::do_interrupt() with explicit calls to arm_do_interrupt().
  Signed-off-by: Claudio Fontana <cfontana@suse.de>
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Cc: Peter Maydell <peter.maydell@linaro.org>
  Message-Id: <20210204163931.7358-9-cfontana@suse.de>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Replace magic value by MMU_DATA_LOAD definition | Philippe Mathieu-Daudé | 2021-01-29 | 1 | -1/+1
  cpu_get_phys_page_debug() uses 'DATA LOAD' MMU access type.
  Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Message-id: 20210127232822.3530782-1-f4bug@amsat.org
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Conditionalize DBGDIDR | Richard Henderson | 2021-01-29 | 1 | -6/+15
  Only define the register if it exists for the cpu.
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210120031656.737646-1-richard.henderson@linaro.org
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Implement ID_PFR2 | Richard Henderson | 2021-01-29 | 1 | -2/+2
  This was defined at some point before ARMv8.4, and will shortly be used by new processor descriptions.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210120204400.1056582-1-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: refactor vae1_tlbmask() | Rémi Denis-Courmont | 2021-01-19 | 1 | -14/+11
  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210112104511.36576-19-remi.denis.courmont@huawei.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Implement SCR_EL2.EEL2 | Rémi Denis-Courmont | 2021-01-19 | 1 | -3/+16
  This adds handling for the SCR_EL3.EEL2 bit.
  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Message-id: 20210112104511.36576-17-remi.denis.courmont@huawei.com
  [PMM: Applied fixes for review issues noted by RTH:
   - check for FEATURE_AARCH64 before checking sel2 isar feature
   - correct the commit message subject line]
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: set HPFAR_EL2.NS on secure stage 2 faults | Rémi Denis-Courmont | 2021-01-19 | 1 | -0/+6
  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210112104511.36576-15-remi.denis.courmont@huawei.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: secure stage 2 translation regime | Rémi Denis-Courmont | 2021-01-19 | 1 | -24/+54
  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210112104511.36576-14-remi.denis.courmont@huawei.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: generalize 2-stage page-walk condition | Rémi Denis-Courmont | 2021-01-19 | 1 | -7/+6
  The stage_1_mmu_idx() already effectively keeps track of which translation regimes have two stages. Don't hard-code another test.
  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210112104511.36576-13-remi.denis.courmont@huawei.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: translate NS bit in page-walks | Rémi Denis-Courmont | 2021-01-19 | 1 | -0/+12
  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210112104511.36576-12-remi.denis.courmont@huawei.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: do S1_ptw_translate() before address space lookup | Rémi Denis-Courmont | 2021-01-19 | 1 | -3/+6
  In the secure stage 2 translation regime, the VSTCR.SW and VTCR.NSW bits can invert the secure flag for pagetable walks. This patchset allows S1_ptw_translate() to change the non-secure bit.
  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210112104511.36576-11-remi.denis.courmont@huawei.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: handle VMID change in secure state | Rémi Denis-Courmont | 2021-01-19 | 1 | -4/+9
  The VTTBR write callback so far assumes that the underlying VM lies in non-secure state. This handles the secure state scenario.
  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210112104511.36576-10-remi.denis.courmont@huawei.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: add ARMv8.4-SEL2 system registers | Rémi Denis-Courmont | 2021-01-19 | 1 | -0/+24
  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210112104511.36576-9-remi.denis.courmont@huawei.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: add MMU stage 1 for Secure EL2 | Rémi Denis-Courmont | 2021-01-19 | 1 | -43/+84
  This adds the MMU indices for EL2 stage 1 in secure state. The code is largely identical between secure and non-secure modes; to keep it contained, the MMU indices are reassigned. The new assignments provide a systematic pattern with a non-secure bit.
  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210112104511.36576-8-remi.denis.courmont@huawei.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: add 64-bit S-EL2 to EL exception table | Rémi Denis-Courmont | 2021-01-19 | 1 | -5/+5
  With the ARMv8.4-SEL2 extension, EL2 is a legal exception level in secure mode, though it can only be AArch64. This patch adds the target EL for exceptions from 64-bit S-EL2. It also fixes the target EL to EL2 when HCR.{A,F,I}MO are set in secure mode. Those values were never used in practice as the effective value of HCR was always 0 in secure mode.
  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210112104511.36576-7-remi.denis.courmont@huawei.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: factor MDCR_EL2 common handling | Rémi Denis-Courmont | 2021-01-19 | 1 | -16/+22
  This adds a common helper to compute the effective value of MDCR_EL2. That is the actual value if EL2 is enabled in the current security context, or 0 otherwise.
  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210112104511.36576-5-remi.denis.courmont@huawei.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
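  The helper's contract is simple enough to sketch standalone (model code; the real helper derives both inputs from the CPU state rather than taking them as arguments):

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * "Effective MDCR_EL2": the register only has an effect when EL2 is
     * enabled in the current security state, so callers read 0 otherwise.
     */
    static uint64_t mdcr_el2_eff(bool el2_enabled, uint64_t mdcr_el2)
    {
        return el2_enabled ? mdcr_el2 : 0;
    }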
* target/arm: use arm_hcr_el2_eff() where applicable | Rémi Denis-Courmont | 2021-01-19 | 1 | -13/+18
  This will simplify accessing HCR conditionally in secure state.
  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210112104511.36576-4-remi.denis.courmont@huawei.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: use arm_is_el2_enabled() where applicable | Rémi Denis-Courmont | 2021-01-19 | 1 | -20/+13
  Do not assume that EL2 is available in and only in non-secure context. That equivalence is broken by ARMv8.4-SEL2.
  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210112104511.36576-3-remi.denis.courmont@huawei.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
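  A standalone sketch of the predicate this series relies on (SCR_EL3.EEL2 is bit 18; model code, not the QEMU function itself):

    #include <stdbool.h>
    #include <stdint.h>

    #define SCR_EEL2 (UINT64_C(1) << 18)   /* SCR_EL3.EEL2 (ARMv8.4-SEL2) */

    /*
     * Before ARMv8.4-SEL2, "EL2 enabled" was synonymous with "non-secure";
     * with SEL2, secure EL2 exists when SCR_EL3.EEL2 is set.
     */
    static bool el2_enabled(bool has_el2, bool secure, uint64_t scr_el3)
    {
        if (!has_el2) {
            return false;
        }
        return !secure || (scr_el3 & SCR_EEL2);
    }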
* target/arm: remove redundant tests | Rémi Denis-Courmont | 2021-01-19 | 1 | -6/+4
  In this context, the HCR value is the effective value, and thus is zero in secure mode. The tests for HCR.{F,I}MO are sufficient.
  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210112104511.36576-1-remi.denis.courmont@huawei.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* semihosting: Change common-semi API to be architecture-independent | Keith Packard | 2021-01-18 | 1 | -2/+3
  The public API is now defined in hw/semihosting/common-semi.h. do_common_semihosting takes CPUState * instead of CPUARMState *. All internal functions have been renamed common_semi_ instead of arm_semi_ or arm_. Aside from the API change, there are no functional changes in this patch.
  Signed-off-by: Keith Packard <keithp@keithp.com>
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
  Message-Id: <20210107170717.2098982-3-keithp@keithp.com>
  Message-Id: <20210108224256.2321-14-alex.bennee@linaro.org>
* target/arm: use official org.gnu.gdb.aarch64.sve layout for registers | Alex Bennée | 2021-01-18 | 1 | -1/+1
  While GDB can work with any XML description given to it, there is special handling for SVE registers on the GDB side which makes the user's life a little better. The changes aren't that major, and all the registers save $vg are reported the same. All that changes is:
    - report org.gnu.gdb.aarch64.sve
    - use gdb nomenclature for names and types
    - minor re-ordering of the types to match reference
    - re-enable ieee_half (as we know gdb supports it now)
    - $vg is now a 64 bit int
    - check $vN and $zN aliasing in test
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Luis Machado <luis.machado@linaro.org>
  Message-Id: <20210108224256.2321-11-alex.bennee@linaro.org>
* target/arm: ARMv8.4-TTST extension | Rémi Denis-Courmont | 2021-01-12 | 1 | -2/+13
  This adds support for the Small Translation tables extension in AArch64 state.
  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Fix MTE0_ACTIVE | Richard Henderson | 2021-01-08 | 1 | -1/+1
  In 50244cc76abc we updated mte_check_fail to match the ARM pseudocode, using the correct EL to select the TCF field. But we failed to update MTE0_ACTIVE the same way, which led to g_assert_not_reached().
  Cc: qemu-stable@nongnu.org
  Buglink: https://bugs.launchpad.net/bugs/1907137
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20201221204426.88514-1-richard.henderson@linaro.org
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* qapi: Use QAPI_LIST_PREPEND() where possible | Eric Blake | 2020-12-19 | 1 | -5/+1
  Anywhere we create a list of just one item or by prepending items (typically because order doesn't matter), we can use QAPI_LIST_PREPEND(). But places where we must keep the list in order by appending remain open-coded until later patches.
  Note that as a side effect, this also performs a cleanup of two minor issues in qga/commands-posix.c: the old code was performing
    new = g_malloc0(sizeof(*ret));
  which 1) is confusing because you have to verify whether 'new' and 'ret' are variables with the same type, and 2) would conflict with C++ compilation (not an actual problem for this file, but makes copy-and-paste harder).
  Signed-off-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20201113011340.463563-5-eblake@redhat.com>
  Reviewed-by: Markus Armbruster <armbru@redhat.com>
  Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
  [Straightforward conflicts due to commit a8aa94b5f8 "qga: update schema for guest-get-disks 'dependents' field" and commit a10b453a52 "target/mips: Move mips_cpu_add_definition() from helper.c to cpu.c" resolved. Commit message tweaked.]
  Signed-off-by: Markus Armbruster <armbru@redhat.com>
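  A self-contained illustration of the pattern being cleaned up; the list node shape mirrors what the QAPI generator produces, but the macro below is only a sketch of the idea, not the exact QAPI_LIST_PREPEND() definition:

    #include <stdlib.h>

    typedef struct StrList {
        struct StrList *next;
        char *value;
    } StrList;

    /* Sketch of a prepend helper: allocate a node, link it in at the head. */
    #define LIST_PREPEND(list, element) do {            \
        typeof(list) node_ = calloc(1, sizeof(*node_)); \
        node_->value = (element);                       \
        node_->next = (list);                           \
        (list) = node_;                                 \
    } while (0)

    static StrList *build(char *a, char *b)
    {
        StrList *head = NULL;

        /* Before: each call site open-coded the malloc + link-in dance.   */
        /* After: one line per element, for order-insensitive prepends.    */
        LIST_PREPEND(head, a);
        LIST_PREPEND(head, b);
        return head;
    }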
* target/arm: Implement v8.1M PXN extension | Peter Maydell | 2020-12-10 | 1 | -1/+6
  In v8.1M the PXN architecture extension adds a new PXN bit to the MPU_RLAR registers, which forbids execution of code in the region from a privileged mode. This is another feature which is just in the generic "in v8.1M" set and has no ID register field indicating its presence.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20201119215617.29887-3-peter.maydell@linaro.org
* target/arm: fix stage 2 page-walks in 32-bit emulation | Rémi Denis-Courmont | 2020-11-23 | 1 | -2/+2
  Using a target unsigned long would limit the Input Address to an LPAE page-walk to 32 bits on AArch32 and 64 bits on AArch64. This is okay for stage 1 or on AArch64, but it is insufficient for stage 2 on AArch32. In the latter case, the Input Address can have up to 40 bits.
  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20201118150414.18360-1-remi@remlab.net
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
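  The width problem can be seen in isolation with a few lines of C (hypothetical values; a 40-bit IPA loses its top bits when squeezed through a 32-bit type):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /*
         * A 40-bit stage-2 input address (IPA), as allowed on AArch32 with
         * LPAE.  Narrowing it through a 32-bit "target_ulong"-sized type,
         * as the old code effectively did, silently drops the top 8 bits.
         */
        uint64_t ipa = (UINT64_C(0x80) << 32) | 0x1234;  /* bit 39 set */
        uint32_t truncated = (uint32_t)ipa;

        printf("ipa=0x%" PRIx64 " truncated=0x%" PRIx32 "\n", ipa, truncated);
        return 0;
    }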
* target/arm: add spaces around operator | Xinhao Zhang | 2020-11-10 | 1 | -1/+1
  Fix code style: operators need spaces on both sides.
  Signed-off-by: Xinhao Zhang <zhangxinhao1@huawei.com>
  Signed-off-by: Kai Deng <dengkai1@huawei.com>
  Message-id: 20201103114529.638233-1-zhangxinhao1@huawei.com
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: fix LORID_EL1 access check | Rémi Denis-Courmont | 2020-11-02 | 1 | -14/+5
  Secure mode is not exempted from checking SCR_EL3.TLOR, and in the future HCR_EL2.TLOR when S-EL2 is enabled.
  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: fix handling of HCR.FB | Rémi Denis-Courmont | 2020-11-02 | 1 | -3/+2
  HCR should be applied when NS is set, not when it is cleared.
  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Ignore HCR_EL2.ATA when {E2H,TGE} != 11 | Richard Henderson | 2020-10-20 | 1 | -4/+5
  Unlike many other bits in HCR_EL2, the description for this bit does not contain the phrase "if ... this field behaves as 0 for all purposes other than", so do not squash the bit in arm_hcr_el2_eff. Instead, replicate the E2H+TGE test in the two places that require it.
  Reported-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
  Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
  Message-id: 20201008162155.161886-4-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Use tlb_flush_page_bits_by_mmuidx* | Richard Henderson | 2020-10-20 | 1 | -7/+39
  When TBI is enabled in a given regime, 56 bits of the address are significant and we need to clear out any other matching virtual addresses with differing tags. The other uses of tlb_flush_page (without mmuidx) in this file are only used by aarch32 mode.
  Fixes: 38d931687fa1
  Reported-by: Jordan Frank <jordanfrank@fb.com>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Message-id: 20201016210754.818257-3-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
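  A standalone sketch of the "significant bits" idea (model code; the actual change passes the bit count to the tlb_flush_page_bits_by_mmuidx* family named in the subject rather than comparing pages like this):

    #include <stdbool.h>
    #include <stdint.h>

    /* With TBI, bits [63:56] are an ignored tag; only 56 bits matter. */
    static unsigned significant_va_bits(bool tbi_enabled)
    {
        return tbi_enabled ? 56 : 64;
    }

    /* Two addresses that differ only in the tag byte name the same page,
     * which is why a flush must match every tag variant of that page. */
    static bool same_page_under_tbi(uint64_t a, uint64_t b, unsigned page_bits)
    {
        uint64_t mask = (UINT64_C(1) << 56) - 1;   /* keep bits [55:0] */
        return (((a ^ b) & mask) >> page_bits) == 0;
    }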
* icount: rename functions to be consistent with the module name | Claudio Fontana | 2020-10-05 | 1 | -2/+2
  Signed-off-by: Claudio Fontana <cfontana@suse.de>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* cpu-timers, icount: new modules | Claudio Fontana | 2020-10-05 | 1 | -1/+2
  refactoring of cpus.c continues with cpu timer state extraction.
  cpu-timers: responsible for the softmmu cpu timers state, including cpu clocks and ticks.
  icount: counts the TCG instructions executed. As such it is specific to the TCG accelerator. Therefore, it is built only under CONFIG_TCG.
  One complication is due to qtest, which uses an icount field to warp time as part of qtest (qtest_clock_warp). In order to solve this problem, provide a separate counter for qtest. This requires fixing assumptions scattered in the code that qtest_enabled() implies icount_enabled(), checking each specific case.
  Signed-off-by: Claudio Fontana <cfontana@suse.de>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  [remove redundant initialization with qemu_spice_init]
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  [fix lingering calls to icount_get]
  Signed-off-by: Claudio Fontana <cfontana@suse.de>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* target/arm: Move id_pfr0, id_pfr1 into ARMISARegisters | Peter Maydell | 2020-10-01 | 1 | -2/+2
  Move the id_pfr0 and id_pfr1 fields into the ARMISARegisters sub-struct. We're going to want id_pfr1 for an isar_features check, and moving both at the same time avoids an odd inconsistency. Changes other than the ones to cpu.h and kvm64.c made automatically with:
    perl -p -i -e 's/cpu->id_pfr/cpu->isar.id_pfr/' target/arm/*.c hw/intc/armv7m_nvic.c
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20200910173855.4068-3-peter.maydell@linaro.org
* target/arm: Replace ARM_FEATURE_PXN with ID_MMFR0.VMSA check | Peter Maydell | 2020-10-01 | 1 | -2/+3
  The ARM_FEATURE_PXN bit indicates whether the CPU supports the PXN bit in short-descriptor translation table format descriptors. This is indicated by ID_MMFR0.VMSA being at least 0b0100. Replace the feature bit with an ID register check, in line with our preference for ID register checks over feature bits.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20200910173855.4068-2-peter.maydell@linaro.org
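  A minimal model of the ID-register check that replaces the feature bit (VMSA is ID_MMFR0 bits [3:0]; sketch only, the real change expresses this as one of QEMU's isar_feature predicates):

    #include <stdbool.h>
    #include <stdint.h>

    /* PXN in short-descriptor tables is implied by ID_MMFR0.VMSA >= 0b0100. */
    static bool short_descriptor_pxn_supported(uint32_t id_mmfr0)
    {
        return (id_mmfr0 & 0xf) >= 4;
    }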
* target/arm: Count PMU events when MDCR.SPME is set | Aaron Lindsay | 2020-09-14 | 1 | -1/+1
  This check was backwards when introduced in commit 033614c47de78409ad3fb39bb7bd1483b71c6789:
    target/arm: Filter cycle counter based on PMCCFILTR_EL0
  Cc: qemu-stable@nongnu.org
  Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Clarify HCR_EL2 ARMCPRegInfo type | Philippe Mathieu-Daudé | 2020-08-28 | 1 | -1/+0
  In commit ce4afed839 ("target/arm: Implement AArch32 HCR and HCR2") the HCR_EL2 register was changed from type NO_RAW (no underlying state and does not support raw access for state saving/loading) to type CONST (TCG can assume the value to be constant), removing the read/write accessors. We forgot to remove the previous type ARM_CP_NO_RAW. This is not really a problem since the field is overwritten. However, it makes code review confusing, so remove it.
  Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20200812111223.7787-1-f4bug@amsat.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Convert A32 coprocessor insns to decodetree | Peter Maydell | 2020-08-24 | 1 | -0/+29
  Convert the A32 coprocessor instructions to decodetree. Note that this corrects an underdecoding: for the 64-bit access case (MRRC/MCRR) we did not check that bits [24:21] were 0b0010, so we would incorrectly treat LDC/STC as MRRC/MCRR rather than UNDEFing them.
  The decodetree versions of these insns assume the coprocessor is in the range 0..7 or 14..15. This is architecturally sensible (as per the comments) and OK in practice for QEMU because the only uses of the ARMCPRegInfo infrastructure we have that aren't for coprocessors 14 or 15 are the pxa2xx use of coprocessor 6. We add an assertion to the define_one_arm_cp_reg_with_opaque() function to catch any accidental future attempts to use it to define coprocessor registers for invalid coprocessors.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20200803111849.13368-4-peter.maydell@linaro.org
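  A sketch of the guard described above (illustrative; the real assertion lives in define_one_arm_cp_reg_with_opaque() and may be phrased differently):

    #include <assert.h>

    /*
     * Registers defined through the ARMCPRegInfo machinery must target
     * coprocessors 0..7 or 14..15 now that the decoder assumes that range.
     */
    static void check_cp_number(int cp)
    {
        assert((cp >= 0 && cp <= 7) || cp == 14 || cp == 15);
    }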
* target/arm: Fix Rt/Rt2 in ESR_ELx for copro traps from AArch32 to 64 | Peter Maydell | 2020-08-05 | 1 | -1/+91
  When a coprocessor instruction in an AArch32 guest traps to AArch32 Hyp mode, the syndrome register (HSR) includes Rt and Rt2 fields which are simply copies of the Rt and Rt2 fields from the trapped instruction. However, if the instruction is trapped from AArch32 to an AArch64 higher exception level, the Rt and Rt2 fields in the syndrome register (ESR_ELx) must be the AArch64 view of the register. This makes a difference if the AArch32 guest was in a mode other than User or System and it was using r13 or r14, or if it was in FIQ mode and using r8-r14.
  We don't know at translate time which AArch32 CPU mode we are in, so we leave the values we generate in our prototype syndrome register value at translate time as the raw Rt/Rt2 from the instruction, and instead correct them to the AArch64 view when we find we need to take an exception from AArch32 to AArch64 with one of these syndrome values.
  Fixes: https://bugs.launchpad.net/qemu/+bug/1879587
  Reported-by: Julien Freche <julien@bedrocksystems.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Message-id: 20200804193903.31240-1-peter.maydell@linaro.org
* target/arm: Always pass cacheattr in S1_ptw_translate | Richard Henderson | 2020-07-27 | 1 | -13/+6
  When we changed the interface of get_phys_addr_lpae to require the cacheattr parameter, this spot was missed. The compiler is unable to detect the use of NULL vs the nonnull attribute here.
  Fixes: 7e98e21c098
  Reported-by: Jan Kiszka <jan.kiszka@siemens.com>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Tested-by: Jan Kiszka <jan.kiskza@siemens.com>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Don't do raw writes for PMINTENCLR | Aaron Lindsay | 2020-07-13 | 1 | -2/+2
  Raw writes to this register when in KVM mode can cause interrupts to be raised (even when the PMU is disabled). Because the underlying state is already aliased to PMINTENSET (which already provides raw write functions), we can safely disable raw accesses to PMINTENCLR entirely.
  Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
  Message-id: 20200707152616.1917154-1-aaron@os.amperecomputing.com
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Cache the Tagged bit for a page in MemTxAttrs | Richard Henderson | 2020-06-26 | 1 | -3/+45
  This "bit" is a particular value of the page's MemAttr.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20200626033144.790098-43-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
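  A sketch of the idea, assuming the architectural encoding of Normal Tagged memory (memory attribute 0xF0); this is a model, not the helper.c code, and the caching location in MemTxAttrs is QEMU-specific:

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * MTE's "Tagged" property is not a separate page-table flag but one
     * specific memory-attribute encoding (Normal, Write-Back,
     * Read/Write-Allocate, Tagged), so it can be derived once per page
     * during the walk and cached with the other transaction attributes.
     */
    static bool page_is_mte_tagged(uint8_t memattr)
    {
        return memattr == 0xF0;
    }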
* target/arm: Always pass cacheattr to get_phys_addr | Richard Henderson | 2020-06-26 | 1 | -30/+30
  We need to check the memattr of a page in order to determine whether it is Tagged for MTE. Between Stage1 and Stage2, this becomes simpler if we always collect this data, instead of occasionally being presented with NULL. Use the nonnull attribute to allow the compiler to check that all pointer arguments are non-null.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20200626033144.790098-42-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>