path: root/target/arm/cpu.h
* target/arm: Optimize cpu_mmu_index  (Richard Henderson, 2020-03-05, 1 file, -10/+13)

    We now cache the core mmu_idx in env->hflags. Rather than recompute from scratch, extract the field. All of the uses of cpu_mmu_index within target/arm are within helpers, and env->hflags is always stable within a translation block from whence helpers are called.

    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Message-id: 20200302175829.2183-3-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
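The extraction the message describes amounts to a one-line field read; a minimal sketch, assuming the TBFLAG_ANY.MMUIDX field and FIELD_EX32 macro defined elsewhere in cpu.h:

    /* Sketch: return the core mmu_idx cached in env->hflags instead of
     * recomputing it from the current CPU state. */
    static inline int cpu_mmu_index(CPUARMState *env, bool ifetch)
    {
        return FIELD_EX32(env->hflags, TBFLAG_ANY, MMUIDX);
    }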
* target/arm: Add HCR_EL2 bit definitions from ARMv8.6  (Richard Henderson, 2020-03-05, 1 file, -0/+7)

    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200229012811.24129-3-richard.henderson@linaro.org
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Implement ARMv8.3-CCIDX  (Peter Maydell, 2020-02-28, 1 file, -1/+16)

    The ARMv8.3-CCIDX extension makes the CCSIDR_EL1 system ID registers have a format that uses the full 64 bit width of the register, and adds a new CCSIDR2 register so AArch32 can get at the high 32 bits.

    QEMU doesn't implement caches, so we just treat these ID registers as opaque values that are set to the correct constant values for each CPU. The only thing we need to do is allow 64-bit values in our ccsidr[] array and provide the CCSIDR2 accessors.

    We don't set the CCIDX field in our 'max' CPU because the CCSIDR constant values we use are the same as the ones used by the Cortex-A57 and they are in the old 32-bit format. This means that the extra regdef added here is unused currently, but it means that whenever in the future we add a CPU that does need the new 64-bit format it will just work when we set the ccsidr values and the ID registers for it.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200224182626.29252-1-peter.maydell@linaro.org
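For illustration only (a sketch of the shape, not this commit's literal diff): the feature is advertised via the CCIDX ID fields, and a CCSIDR2 read simply exposes the top half of the 64-bit CCSIDR value.

    /* Sketch: CCIDX presence tests, assuming the ARMISARegisters layout
     * and FIELD definitions used elsewhere in cpu.h. */
    static inline bool isar_feature_aa64_ccidx(const ARMISARegisters *id)
    {
        return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, CCIDX) != 0;
    }

    static inline bool isar_feature_aa32_ccidx(const ARMISARegisters *id)
    {
        return FIELD_EX32(id->id_mmfr4, ID_MMFR4, CCIDX) != 0;
    }

    /* With ccsidr[] widened to uint64_t, CCSIDR2 is just bits [63:32]. */
    static inline uint32_t ccsidr2_of(uint64_t ccsidr)
    {
        return ccsidr >> 32;
    }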
* target/arm: Implement v8.4-RCPC  (Peter Maydell, 2020-02-28, 1 file, -0/+5)

    The v8.4-RCPC extension implements some new instructions:
     * LDAPUR, LDAPURB, LDAPURH, LDAPRSB, LDAPRSH, LDAPRSW
     * STLUR, STLURB, STLURH

    These are all in a new subgroup of encodings that sits below the top-level "Loads and Stores" group in the Arm ARM.

    The STLUR* instructions have standard store-release semantics; the LDAPUR* have Load-AcquirePC semantics, but (as with LDAPR*) we choose to implement them as the slightly stronger Load-Acquire.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200224172846.13053-4-peter.maydell@linaro.org
* target/arm: Implement v8.3-RCPC  (Peter Maydell, 2020-02-28, 1 file, -0/+5)

    The v8.3-RCPC extension implements three new load instructions which provide slightly weaker consistency guarantees than the existing load-acquire operations. For QEMU we choose to simply implement them with a full LDAQ barrier.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200224172846.13053-3-peter.maydell@linaro.org
* target/arm: Fix wrong use of FIELD_EX32 on ID_AA64DFR0  (Peter Maydell, 2020-02-28, 1 file, -2/+2)

    We missed an instance of using FIELD_EX32 on a 64-bit ID register, in isar_feature_aa64_pmu_8_4(). Fix it.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200224172846.13053-2-peter.maydell@linaro.org
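The corrected check looks roughly like this (a sketch, not necessarily the commit's literal code; 0xf is the "non-standard IMPDEF PMU" encoding and is excluded):

    /* Sketch: ID_AA64DFR0_EL1 is a 64-bit ID register, so its fields must
     * be extracted with FIELD_EX64 rather than FIELD_EX32. */
    static inline bool isar_feature_aa64_pmu_8_4(const ARMISARegisters *id)
    {
        return FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) >= 5 &&
               FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) != 0xf;
    }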
* target/arm: Remove ARM_FEATURE_VFP*  (Richard Henderson, 2020-02-28, 1 file, -3/+0)

    We have converted all tests against these features to ISAR tests.

    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200224222232.13807-15-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Replace ARM_FEATURE_VFP4 with isar_feature_aa32_simdfmac  (Richard Henderson, 2020-02-28, 1 file, -0/+12)

    All remaining tests for VFP4 are for fused multiply-add insns.

    Since the MVFR1 field is used for both VFP and NEON, move its adjustment from the !has_neon block to the (!has_vfp && !has_neon) block.

    Test for vfp of the appropriate width alongside the test for simdfmac within translate-vfp.inc.c. Within disas_neon_data_insn, we have already tested for ARM_FEATURE_NEON.

    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Message-id: 20200224222232.13807-10-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Add isar_feature_aa64_fp_simd, isar_feature_aa32_vfp  (Richard Henderson, 2020-02-28, 1 file, -0/+11)

    We cannot easily create "any" functions for these, because the ID_AA64PFR0 fields for FP and SIMD signal "enabled" with zero. Which means that an aarch32-only cpu will return incorrect results when testing the aarch64 registers. To use these, we must either have context or additionally test vs ARM_FEATURE_AARCH64.

    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Message-id: 20200224222232.13807-5-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
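A sketch of the inverted-sense test the message refers to (not necessarily the commit's literal code): in ID_AA64PFR0_EL1, FP == 0 means "implemented" and FP == 0xf means "not implemented".

    /* Sketch: AArch64 FP/AdvSIMD presence; note the comparison is against
     * the "not implemented" value rather than against zero. */
    static inline bool isar_feature_aa64_fp_simd(const ARMISARegisters *id)
    {
        return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, FP) != 0xf;
    }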
* target/arm: Add isar_feature_aa32_{fpsp_v2, fpsp_v3, fpdp_v3}  (Richard Henderson, 2020-02-28, 1 file, -0/+18)

    We will shortly use these to test for VFPv2 and VFPv3 in different situations.

    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200224222232.13807-4-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
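Sketches of what such tests look like, assuming the architectural MVFR0.FPSP encoding (1 = single-precision VFPv2, 2 = VFPv3 or later); the fpdp variant would be derived from MVFR0.FPDP in the same way:

    /* Sketch: single-precision VFP feature tests from MVFR0.FPSP. */
    static inline bool isar_feature_aa32_fpsp_v2(const ARMISARegisters *id)
    {
        return FIELD_EX32(id->mvfr0, MVFR0, FPSP) > 0;   /* VFPv2 or later */
    }

    static inline bool isar_feature_aa32_fpsp_v3(const ARMISARegisters *id)
    {
        return FIELD_EX32(id->mvfr0, MVFR0, FPSP) >= 2;  /* VFPv3 or later */
    }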
* target/arm: Rename isar_feature_aa32_fpdp_v2  (Richard Henderson, 2020-02-28, 1 file, -2/+2)

    The old name, isar_feature_aa32_fpdp, does not reflect that the test includes VFPv2. We will introduce another feature test for VFPv3.

    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200224222232.13807-3-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Add isar_feature_aa32_vfp_simd  (Richard Henderson, 2020-02-28, 1 file, -0/+9)

    Use this in the places that were checking ARM_FEATURE_VFP, and are obviously testing for the existence of the register set as opposed to testing for some particular instruction extension.

    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Message-id: 20200224222232.13807-2-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Rename isar_feature_aa32_simd_r32  (Richard Henderson, 2020-02-21, 1 file, -1/+1)

    The old name, isar_feature_aa32_fp_d32, does not reflect the MVFR0 field name, SIMDReg.

    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Message-id: 20200214181547.21408-3-richard.henderson@linaro.org
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    [PMM: wrapped one long line]
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Correctly implement ACTLR2, HACTLR2  (Peter Maydell, 2020-02-21, 1 file, -0/+5)

    The ACTLR2 and HACTLR2 AArch32 system registers didn't exist in ARMv7 or the original ARMv8. They were later added as optional registers, whose presence is signaled by the ID_MMFR4.AC2 field. From ARMv8.2 they are mandatory (ie ID_MMFR4.AC2 must be non-zero).

    We implemented HACTLR2 in commit 0e0456ab8895a5e85, but we incorrectly made it exist for all v8 CPUs, and we didn't implement ACTLR2 at all. Sort this out by implementing both registers only when they are supposed to exist, and setting the ID_MMFR4 bit for -cpu max.

    Note that this removes HACTLR2 from our Cortex-A53, -A57 and -A72 CPU models; this is correct, because those CPUs do not implement this register.

    Fixes: 0e0456ab8895a5e85
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200214175116.9164-22-peter.maydell@linaro.org
* target/arm: Use FIELD_EX32 for testing 32-bit fields  (Peter Maydell, 2020-02-21, 1 file, -9/+9)

    Cut-and-paste errors mean we're using FIELD_EX64() to extract fields from some 32-bit ID register fields. Use FIELD_EX32() instead. (This makes no difference in behaviour, it's just more consistent.)

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200214175116.9164-21-peter.maydell@linaro.org
* target/arm: Use isar_feature function for testing AA32HPD feature  (Peter Maydell, 2020-02-21, 1 file, -0/+5)

    Now we have moved ID_MMFR4 into the ARMISARegisters struct, we can define and use an isar_feature for the presence of the ARMv8.2-AA32HPD feature, rather than open-coding the test.

    While we're here, correct a comment typo which missed an 'A' from the feature name.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200214175116.9164-20-peter.maydell@linaro.org
* target/arm: Test correct register in aa32_pan and aa32_ats1e1 checks  (Peter Maydell, 2020-02-21, 1 file, -7/+7)

    The isar_feature_aa32_pan and isar_feature_aa32_ats1e1 functions are supposed to be testing fields in ID_MMFR3; but a cut-and-paste error meant we were looking at MVFR0 instead.

    Fix the functions to look at the right register; this requires us to move at least id_mmfr3 to the ARMISARegisters struct; we choose to move all the ID_MMFRn registers for consistency.

    Fixes: 3d6ad6bb466f
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200214175116.9164-19-peter.maydell@linaro.org
* target/arm: Implement ARMv8.4-PMU extension  (Peter Maydell, 2020-02-21, 1 file, -0/+18)

    The ARMv8.4-PMU extension adds:
     * one new required event, STALL
     * one new system register PMMIR_EL1

    (There are also some more L1-cache related events, but since we don't implement any cache we don't provide these, in the same way we don't provide the base-PMUv3 cache events.)

    The STALL event "counts every attributable cycle on which no attributable instruction or operation was sent for execution on this PE". QEMU doesn't stall in this sense, so this is another always-reads-zero event.

    The PMMIR_EL1 register is a read-only register providing implementation-specific information about the PMU; currently it has only one field, SLOTS, which defines behaviour of the STALL_SLOT PMU event. Since QEMU doesn't implement the STALL_SLOT event, we can validly make the register read zero.

    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Message-id: 20200214175116.9164-15-peter.maydell@linaro.org
* target/arm: Read debug-related ID registers from KVM  (Peter Maydell, 2020-02-21, 1 file, -0/+5)

    Now we have isar_feature test functions that look at fields in the ID_AA64DFR0_EL1 and ID_DFR0 ID registers, add the code that reads these register values from KVM so that the checks behave correctly when we're using KVM.

    No isar_feature function tests ID_AA64DFR1_EL1 or DBGDIDR yet, but we add it to maintain the invariant that every field in the ARMISARegisters struct is populated for a KVM CPU and can be relied on. This requirement isn't actually written down yet, so add a note to the relevant comment.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200214175116.9164-13-peter.maydell@linaro.org
* target/arm: Move DBGDIDR into ARMISARegisters  (Peter Maydell, 2020-02-21, 1 file, -1/+1)

    We're going to want to read the DBGDIDR register from KVM in a subsequent commit, which means it needs to be in the ARMISARegisters sub-struct. Move it.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200214175116.9164-12-peter.maydell@linaro.org
* target/arm: Stop assuming DBGDIDR always exists  (Peter Maydell, 2020-02-21, 1 file, -0/+7)

    The AArch32 DBGDIDR defines properties like the number of breakpoints, watchpoints and context-matching comparators. On an AArch64 CPU, the register may not even exist if AArch32 is not supported at EL1.

    Currently we hard-code use of DBGDIDR to identify the number of breakpoints etc; this works for all our TCG CPUs, but will break if we ever add an AArch64-only CPU. We also have an assert() that the AArch32 and AArch64 registers match, which currently works only by luck for KVM because we don't populate either of these ID registers from the KVM vCPU and so they are both zero.

    Clean this up so we have functions for finding the number of breakpoints, watchpoints and context comparators which look in the appropriate ID register.

    This allows us to drop the "check that AArch64 and AArch32 agree on the number of breakpoints etc" asserts:
     * we no longer look at the AArch32 versions unless that's the right place to be looking
     * it's valid to have a CPU (eg AArch64-only) where they don't match
     * we shouldn't have been asserting the validity of ID registers in a codepath used with KVM anyway

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200214175116.9164-11-peter.maydell@linaro.org
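A sketch of the "look in the appropriate ID register" accessor pattern described above (the real commit adds analogous helpers for watchpoints and context comparators):

    /* Sketch: number of hardware breakpoints, read from ID_AA64DFR0_EL1 on
     * an AArch64 CPU and from the AArch32 DBGDIDR otherwise.  Both fields
     * hold (number of breakpoints - 1). */
    static inline int arm_num_brps(ARMCPU *cpu)
    {
        if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
            return FIELD_EX64(cpu->isar.id_aa64dfr0, ID_AA64DFR0, BRPS) + 1;
        } else {
            return FIELD_EX32(cpu->isar.dbgdidr, DBGDIDR, BRPS) + 1;
        }
    }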
* target/arm: Add _aa64_ and _any_ versions of pmu_8_1 isar checks  (Peter Maydell, 2020-02-21, 1 file, -2/+13)

    Add the 64-bit version of the "is this a v8.1 PMUv3?" ID register check function, and the _any_ version that checks for either AArch32 or AArch64 support. We'll use this in a later commit.

    We don't (yet) do any isar_feature checks on ID_AA64DFR1_EL1, but we move id_aa64dfr1 into the ARMISARegisters struct with id_aa64dfr0, for consistency.

    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Message-id: 20200214175116.9164-10-peter.maydell@linaro.org
* target/arm: Define an aa32_pmu_8_1 isar feature test function  (Peter Maydell, 2020-02-21, 1 file, -1/+8)

    Instead of open-coding a check on the ID_DFR0 PerfMon ID register field, create a standardly-named isar_feature for "does AArch32 have a v8.1 PMUv3" and use it.

    This entails moving the id_dfr0 field into the ARMISARegisters struct.

    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Message-id: 20200214175116.9164-9-peter.maydell@linaro.org
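A sketch of such a test, assuming the architectural ID_DFR0.PerfMon encoding where 4 or greater means a v8.1 PMUv3 and 0xf means a non-architectural IMPDEF PMU:

    /* Sketch: "does AArch32 have a v8.1 PMUv3?" from ID_DFR0.PerfMon. */
    static inline bool isar_feature_aa32_pmu_8_1(const ARMISARegisters *id)
    {
        return FIELD_EX32(id->id_dfr0, ID_DFR0, PERFMON) >= 4 &&
               FIELD_EX32(id->id_dfr0, ID_DFR0, PERFMON) != 0xf;
    }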
* target/arm: Add and use FIELD definitions for ID_AA64DFR0_EL1  (Peter Maydell, 2020-02-21, 1 file, -0/+10)

    Add FIELD() definitions for the ID_AA64DFR0_EL1 and use them where we currently have hard-coded bit values.

    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Message-id: 20200214175116.9164-7-peter.maydell@linaro.org
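For illustration, FIELD() definitions for this register follow the architectural layout along these lines (field positions per the Arm ARM; treat the exact list as a sketch rather than the commit's literal contents):

    /* Sketch: ID_AA64DFR0_EL1 fields via QEMU's FIELD(reg, field, start, width). */
    FIELD(ID_AA64DFR0, DEBUGVER, 0, 4)
    FIELD(ID_AA64DFR0, TRACEVER, 4, 4)
    FIELD(ID_AA64DFR0, PMUVER, 8, 4)
    FIELD(ID_AA64DFR0, BRPS, 12, 4)
    FIELD(ID_AA64DFR0, WRPS, 20, 4)
    FIELD(ID_AA64DFR0, CTX_CMPS, 28, 4)
    FIELD(ID_AA64DFR0, PMSVER, 32, 4)
    FIELD(ID_AA64DFR0, DOUBLELOCK, 36, 4)
    FIELD(ID_AA64DFR0, TRACEFILT, 40, 4)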
* target/arm: Define and use any_predinv isar_feature test  (Peter Maydell, 2020-02-21, 1 file, -0/+5)

    Instead of open-coding "ARM_FEATURE_AARCH64 ? aa64_predinv: aa32_predinv", define and use an any_predinv isar_feature test function.

    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Message-id: 20200214175116.9164-5-peter.maydell@linaro.org
* target/arm: Add isar_feature_any_fp16 and document naming/usage conventions  (Peter Maydell, 2020-02-21, 1 file, -1/+18)

    Our current usage of the isar_feature feature tests almost always uses an _aa32_ test when the code path is known to be AArch32 specific and an _aa64_ test when the code path is known to be AArch64 specific. There is just one exception: in the vfp_set_fpscr helper we check aa64_fp16 to determine whether the FZ16 bit in the FP(S)CR exists, but this code is also used for AArch32.

    There are other places in future where we're likely to want a general "does this feature exist for either AArch32 or AArch64" check (typically where architecturally the feature exists for both CPU states if it exists at all, but the CPU might be AArch32-only or AArch64-only, and so only have one set of ID registers).

    Introduce a new category of isar_feature_* functions: isar_feature_any_foo() should be tested when what we want to know is "does this feature exist for either AArch32 or AArch64", and always returns the logical OR of isar_feature_aa32_foo() and isar_feature_aa64_foo().

    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Message-id: 20200214175116.9164-4-peter.maydell@linaro.org
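The _any_ convention reduces to an OR of the two state-specific tests; a minimal sketch for fp16 (the AArch32 half is the fp16 arithmetic test, which is the AArch32 view of the feature):

    /* Sketch: "does FP16 exist for either AArch32 or AArch64?" */
    static inline bool isar_feature_any_fp16(const ARMISARegisters *id)
    {
        return isar_feature_aa64_fp16(id) || isar_feature_aa32_fp16_arith(id);
    }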
* target/arm: Add _aa32_ to isar_feature functions testing 32-bit ID registers  (Peter Maydell, 2020-02-21, 1 file, -3/+10)

    Enforce a convention that an isar_feature function that tests a 32-bit ID register always has _aa32_ in its name, and one that tests a 64-bit ID register always has _aa64_ in its name. We already follow this except for three cases: thumb_div, arm_div and jazelle, which all need _aa32_ adding.

    (As noted in the comment, isar_feature_aa32_fp16_arith() is an exception in that it currently tests ID_AA64PFR0_EL1, but will switch to MVFR1 once we've properly implemented FP16 for AArch32.)

    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Message-id: 20200214175116.9164-2-peter.maydell@linaro.org
* target/arm: Update MSR access to UAO  (Richard Henderson, 2020-02-13, 1 file, -0/+6)

    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200208125816.14954-19-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Add ID_AA64MMFR2_EL1  (Richard Henderson, 2020-02-13, 1 file, -0/+17)

    Add definitions for all of the fields, up to ARMv8.5. Convert the existing RESERVED register to a full register. Query KVM for the value of the register for the host.

    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200208125816.14954-18-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Update MSR access for PAN  (Richard Henderson, 2020-02-13, 1 file, -0/+2)

    For aarch64, there's a dedicated msr (imm, reg) insn. For aarch32, this is done via msr to cpsr. Writes from el0 are ignored, which is already handled by the CPSR_USER mask.

    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200208125816.14954-12-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Remove CPSR_RESERVED  (Richard Henderson, 2020-02-13, 1 file, -6/+0)

    The only remaining use was in op_helper.c. Use PSTATE_SS directly, and move the commentary so that it is more obvious what is going on.

    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Message-id: 20200208125816.14954-10-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Replace CPSR_ERET_MASK with aarch32_cpsr_valid_mask  (Richard Henderson, 2020-02-13, 1 file, -2/+0)

    CPSR_ERET_MASK was a useless renaming of CPSR_RESERVED. The replacement function, aarch32_cpsr_valid_mask, also takes into account bits that the cpu does not support.

    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200208125816.14954-8-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Add isar_feature tests for PAN + ATS1E1  (Richard Henderson, 2020-02-13, 1 file, -0/+29)

    Include definitions for all of the bits in ID_MMFR3. We already have a definition for ID_AA64MMFR1.PAN.

    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200208125816.14954-4-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Add mmu_idx for EL1 and EL2 w/ PAN enabled  (Richard Henderson, 2020-02-13, 1 file, -11/+22)

    To implement PAN, we will want to swap, for short periods of time, to a different privileged mmu_idx. In addition, we cannot do this with flushing alone, because the AT* instructions have both PAN and PAN-less versions.

    Add the ARMMMUIdx*_PAN constants where necessary next to the corresponding ARMMMUIdx* constant.

    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200208125816.14954-3-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Move arm_excp_unmasked to cpu.c  (Richard Henderson, 2020-02-07, 1 file, -111/+0)

    This inline function has one user in cpu.c, and need not be exposed otherwise. Code movement only, with fixups for checkpatch.

    Tested-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200206105448.4726-39-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Update get_a64_user_mem_index for VHE  (Richard Henderson, 2020-02-07, 1 file, -4/+5)

    The EL2&0 translation regime is affected by Load Register (unpriv). The code structure used here will facilitate later changes in this area for implementing UAO and NV.

    Tested-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200206105448.4726-36-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
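A sketch of the structure this sets up in translate-a64.c, using the mmu_idx names from the reorganization further down this log; the exact case list and the s->unpriv flag are illustrative assumptions, not the commit's verbatim diff:

    /* Sketch: choose the mmu_idx for LDTR/STTR-style "unprivileged" accesses.
     * In the EL2&0 regime an unprivileged access from E20_2 must use E20_0
     * rather than the EL1&0 user index. */
    static inline int get_a64_user_mem_index(DisasContext *s)
    {
        ARMMMUIdx useridx = s->mmu_idx;

        if (s->unpriv) {
            switch (useridx) {
            case ARMMMUIdx_E10_1:
                useridx = ARMMMUIdx_E10_0;
                break;
            case ARMMMUIdx_E20_2:
                useridx = ARMMMUIdx_E20_0;
                break;
            case ARMMMUIdx_SE10_1:
                useridx = ARMMMUIdx_SE10_0;
                break;
            default:
                break;
            }
        }
        return arm_to_core_mmu_idx(useridx);
    }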
* target/arm: Add VHE system register redirection and aliasing  (Richard Henderson, 2020-02-07, 1 file, -0/+13)

    Several of the EL1/0 registers are redirected to the EL2 version when in EL2 and HCR_EL2.E2H is set. Many of these registers have side effects. Link together the two ARMCPRegInfo structures after they have been properly instantiated. Install common dispatch routines to all of the relevant registers.

    The same set of registers that are redirected also have additional EL12/EL02 aliases created to access the original register that was redirected.

    Omit the generic timer registers from redirection here, because we'll need multiple kinds of redirection from both EL0 and EL2.

    Tested-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200206105448.4726-29-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Add the hypervisor virtual counter  (Richard Henderson, 2020-02-07, 1 file, -5/+6)

    Tested-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200206105448.4726-26-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Update arm_sctlr for VHE  (Richard Henderson, 2020-02-07, 1 file, -9/+1)

    Use the correct sctlr for EL2&0 regime. Due to header ordering, and where arm_mmu_idx_el is declared, we need to move the function out of line.

    Use the function in many more places in order to select the correct control.

    Tested-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200206105448.4726-23-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
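A sketch of the selection logic, assuming the arm_mmu_idx_el() helper and the EL2&0 mmu_idx names used in this series; only EL0 is ambiguous about which SCTLR controls it:

    /* Sketch: return the SCTLR that controls the given exception level.
     * EL0 is controlled by SCTLR_EL2 when it runs in the EL2&0 regime
     * and by SCTLR_EL1 otherwise. */
    uint64_t arm_sctlr(CPUARMState *env, int el)
    {
        if (el == 0) {
            ARMMMUIdx mmu_idx = arm_mmu_idx_el(env, 0);
            el = (mmu_idx == ARMMMUIdx_E20_0 ? 2 : 1);
        }
        return env->cp15.sctlr_el[el];
    }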
* target/arm: Reorganize ARMMMUIdx  (Richard Henderson, 2020-02-07, 1 file, -76/+58)

    Prepare for, but do not yet implement, the EL2&0 regime. This involves adding the new MMUIdx enumerators and adjusting some of the MMUIdx related predicates to match.

    Tested-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200206105448.4726-20-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Tidy ARMMMUIdx m-profile definitions  (Richard Henderson, 2020-02-07, 1 file, -8/+8)

    Replace the magic numbers with the relevant ARM_MMU_IDX_M_* constants. Keep the definitions short by referencing previous symbols.

    Tested-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200206105448.4726-19-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Rearrange ARMMMUIdxBit  (Richard Henderson, 2020-02-07, 1 file, -16/+23)

    Define via macro expansion, so that renumbering of the base ARMMMUIdx symbols is automatically reflected in the bit definitions.

    Tested-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200206105448.4726-18-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
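The idea, sketched (the enumerator list is illustrative, not the commit's exact contents): each ARMMMUIdxBit_* value is generated from the corresponding ARMMMUIdx_* enumerator, so renumbering the base symbols automatically renumbers the masks.

    #define TO_CORE_BIT(NAME) \
        ARMMMUIdxBit_##NAME = 1 << (ARMMMUIdx_##NAME & ARM_MMU_IDX_COREIDX_MASK)

    typedef enum ARMMMUIdxBit {
        TO_CORE_BIT(E10_0),
        TO_CORE_BIT(E10_1),
        TO_CORE_BIT(E2),
        TO_CORE_BIT(SE10_0),
        TO_CORE_BIT(SE10_1),
        TO_CORE_BIT(SE3),
        TO_CORE_BIT(Stage2),
        TO_CORE_BIT(MUser),
        TO_CORE_BIT(MPriv),
    } ARMMMUIdxBit;

    #undef TO_CORE_BIT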
* target/arm: Expand TBFLAG_ANY.MMUIDX to 4 bits  (Richard Henderson, 2020-02-07, 1 file, -8/+8)

    We are about to expand the number of mmuidx to 10, and so need 4 bits. For the benefit of reading the number out of -d exec, align it to the penultimate nibble.

    Tested-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200206105448.4726-17-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Recover 4 bits from TBFLAGs  (Richard Henderson, 2020-02-07, 1 file, -26/+44)

    We had completely run out of TBFLAG bits. Split A- and M-profile bits into two overlapping buckets. This results in 4 free bits.

    We used to initialize all of the a32 and m32 fields in DisasContext by assignment, in arm_tr_init_disas_context. Now we only initialize either the a32 or m32 by assignment, because the bits overlap in tbflags. So zero the entire structure in gen_intermediate_code.

    Tested-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200206105448.4726-16-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Rename ARMMMUIdx_S1E2 to ARMMMUIdx_E2  (Richard Henderson, 2020-02-07, 1 file, -2/+2)

    This is part of a reorganization to the set of mmu_idx. The non-secure EL2 regime only has a single stage translation; there is no point in pointing out that the idx is for stage1.

    Tested-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200206105448.4726-15-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Rename ARMMMUIdx*_S1E3 to ARMMMUIdx*_SE3  (Richard Henderson, 2020-02-07, 1 file, -2/+2)

    This is part of a reorganization to the set of mmu_idx. The EL3 regime only has a single stage translation, and is always secure.

    Tested-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200206105448.4726-14-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Rename ARMMMUIdx_S1SE[01] to ARMMMUIdx_SE10_[01]  (Richard Henderson, 2020-02-07, 1 file, -4/+4)

    This is part of a reorganization to the set of mmu_idx. This emphasizes that they apply to the Secure EL1&0 regime.

    Tested-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200206105448.4726-13-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Rename ARMMMUIdx_S1NSE* to ARMMMUIdx_Stage1_E*  (Richard Henderson, 2020-02-07, 1 file, -2/+2)

    This is part of a reorganization to the set of mmu_idx. The EL1&0 regime is the only one that uses 2-stage translation. Spelling out Stage avoids confusion with Secure.

    Tested-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200206105448.4726-12-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Rename ARMMMUIdx_S2NS to ARMMMUIdx_Stage2  (Richard Henderson, 2020-02-07, 1 file, -2/+2)

    The EL1&0 regime is the only one that uses 2-stage translation.

    Tested-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200206105448.4726-11-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Rename ARMMMUIdx*_S12NSE* to ARMMMUIdx*_E10_*  (Richard Henderson, 2020-02-07, 1 file, -4/+4)

    This is part of a reorganization to the set of mmu_idx. This emphasizes that they apply to the EL1&0 regime.

    The ultimate goal is:

    -- Non-secure regimes:
       ARMMMUIdx_E10_0,
       ARMMMUIdx_E20_0,
       ARMMMUIdx_E10_1,
       ARMMMUIdx_E2,
       ARMMMUIdx_E20_2,

    -- Secure regimes:
       ARMMMUIdx_SE10_0,
       ARMMMUIdx_SE10_1,
       ARMMMUIdx_SE3,

    -- Helper mmu_idx for non-secure EL1&0 stage1 and stage2:
       ARMMMUIdx_Stage2,
       ARMMMUIdx_Stage1_E0,
       ARMMMUIdx_Stage1_E1,

    The 'S' prefix is reserved for "Secure". Unless otherwise specified, each mmu_idx represents all stages of translation.

    Tested-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Message-id: 20200206105448.4726-10-richard.henderson@linaro.org
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>