path: root/target/arm
Commit log (newest first): subject, author, date, files changed, lines changed
* target/arm: Use signed quantity to represent VMSAv8-64 translation level (Ard Biesheuvel, 2022-11-22; 1 file changed, -2/+2)
  The LPA2 extension implements 52-bit virtual addressing for 4k and 16k translation granules, and for the former, this means an additional level of translation is needed. This means we start counting at -1 instead of 0 when doing a walk, and so 'level' is now a signed quantity, and should be typed as such. So turn it from uint32_t into int32_t. This avoids a level of -1 getting misinterpreted as being >= 3, and terminating a page table walk prematurely with a bogus output address.
  Cc: Peter Maydell <peter.maydell@linaro.org>
  Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Cc: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
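  The unsigned-wrap hazard described above is easy to demonstrate in isolation. The following is a standalone illustrative sketch, not the QEMU walker code; the variable names are made up for the example.

      /* Illustrative only: why an unsigned 'level' breaks a walk that
       * starts at level -1 (LPA2). */
      #include <stdint.h>
      #include <stdio.h>

      int main(void)
      {
          uint32_t ulevel = -1;   /* wraps to 0xffffffff */
          int32_t  slevel = -1;   /* stays -1, as intended */

          /* A termination check such as "level >= 3" misfires for the
           * unsigned variant and ends the walk prematurely. */
          printf("unsigned: %s\n", ulevel >= 3 ? "terminate (bogus)" : "continue");
          printf("signed:   %s\n", slevel >= 3 ? "terminate" : "continue (correct)");
          return 0;
      }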
* target/arm: Don't do two-stage lookup if stage 2 is disabled (Peter Maydell, 2022-11-22; 1 file changed, -3/+4)
  In get_phys_addr_with_struct(), we call get_phys_addr_twostage() if the CPU supports EL2. However, we don't check here that stage 2 is actually enabled. Instead we only check that inside get_phys_addr_twostage() to skip stage 2 translation. This means that even if stage 2 is disabled we still tell the stage 1 lookup to do its page table walks via stage 2.
  This works by luck for normal CPU accesses, but it breaks for debug accesses, which are used by the disassembler and also by semihosting file reads and writes, because the debug case takes a different code path inside S1_ptw_translate(). This means that setups that use semihosting for file loads are broken (a regression since 7.1, introduced in recent ptw refactoring), and that sometimes disassembly in debug logs reports "unable to read memory" rather than showing the guest insns.
  Fix the bug by hoisting the "is stage 2 enabled?" check up to get_phys_addr_with_struct(), so that we handle S2 disabled the same way we do the "no EL2" case, with a simple single stage lookup.
  Reported-by: Jens Wiklander <jens.wiklander@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Message-id: 20221121212404.1450382-1-peter.maydell@linaro.org
* target/arm: Limit LPA2 effective output address when TCR.DS == 0 (Ard Biesheuvel, 2022-11-21; 1 file changed, -0/+8)
  With LPA2, the effective output address size is at most 48 bits when TCR.DS == 0. This case is currently unhandled in the page table walker, where we happily assume LVA/64k granule when outputsize > 48 and param.ds == 0, resulting in the wrong conversion being used from a page table descriptor to a physical address.

      if (outputsize > 48) {
          if (param.ds) {
              descaddr |= extract64(descriptor, 8, 2) << 50;
          } else {
              descaddr |= extract64(descriptor, 12, 4) << 48;
          }

  So cap the outputsize to 48 when TCR.DS is cleared, as per the architecture.
  Cc: Peter Maydell <peter.maydell@linaro.org>
  Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Cc: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20221116170316.259695-1-ardb@kernel.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
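  A minimal sketch of the capping rule the commit describes, written as a hypothetical standalone helper rather than the actual QEMU hunk: with TCR.DS clear, the effective output size is limited to 48 bits before the descriptor-to-PA conversion above is chosen.

      #include <stdbool.h>

      /* Sketch only: cap the effective output address size when DS == 0. */
      static int effective_outputsize(int outputsize, bool ds)
      {
          if (!ds && outputsize > 48) {
              outputsize = 48;   /* 52-bit LPA2 outputs require TCR.DS == 1 */
          }
          return outputsize;
      }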
* target/arm: Two fixes for secure ptw (Richard Henderson, 2022-11-04; 1 file changed, -7/+8)
  Reversed the sense of non-secure in get_phys_addr_lpae, and failed to initialize attrs.secure for ARMMMUIdx_Phys_S.
  Fixes: 48da29e4 ("target/arm: Add ptw_idx to S1Translate")
  Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1293
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Honor HCR_E2H and HCR_TGE in ats_write64() (Ake Koomsin, 2022-11-04; 1 file changed, -6/+9)
  We need to check HCR_E2H and HCR_TGE to select the right MMU index for the correct translation regime.
  To check for the EL2&0 translation regime:
  - For S1E0*, S1E1* and S12E* ops, check both HCR_E2H and HCR_TGE
  - For S1E2* ops, check only HCR_E2H
  Signed-off-by: Ake Koomsin <ake@igel.co.jp>
  Message-id: 20221101064250.12444-1-ake@igel.co.jp
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
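  A rough sketch of the regime selection the commit describes, for the S1E0* case; it assumes QEMU's arm_hcr_el2_eff() helper, the HCR_E2H/HCR_TGE bit masks and the ARMMMUIdx enumerators, and is not the literal patch.

      /* Sketch: pick the EL2&0 regime only when both E2H and TGE are set. */
      uint64_t hcr = arm_hcr_el2_eff(env);
      bool e2h_tge = (hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE);
      ARMMMUIdx mmu_idx = e2h_tge ? ARMMMUIdx_E20_0 : ARMMMUIdx_E10_0;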
* target/arm: Copy the entire vector in DO_ZIP (Richard Henderson, 2022-11-04; 1 file changed, -2/+2)
  With odd_ofs set, we weren't copying enough data.
  Fixes: 09eb6d7025d1 ("target/arm: Move sve zip high_ofs into simd_data")
  Reported-by: Idan Horowitz <idan.horowitz@gmail.com>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Message-id: 20221031054144.3574-1-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Fix Privileged Access Never (PAN) for aarch32 (Timofey Kutergin, 2022-11-04; 2 files changed, -7/+41)
  When we implemented the PAN support we theoretically wanted to support it for both AArch32 and AArch64, but in practice several bugs made it essentially unusable with an AArch32 guest. Fix all those problems:
  - Use CPSR.PAN to check for PAN state in aarch32 mode
  - throw a permission fault during address translation when PAN is enabled and the kernel tries to access a user-accessible page
  - ignore the SCTLR_XP bit for armv7 and armv8 (conflicts with SCTLR_SPAN).
  Signed-off-by: Timofey Kutergin <tkutergin@gmail.com>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Message-id: 20221027112619.2205229-1-tkutergin@gmail.com
  [PMM: tweak commit message]
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
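  For reference, a self-contained sketch of the AArch32 PAN state check mentioned in the first bullet; CPSR.PAN is bit 22 in the architecture, and the helper name here is purely illustrative.

      #include <stdbool.h>
      #include <stdint.h>

      #define CPSR_PAN_BIT 22   /* CPSR.PAN in AArch32 */

      /* Sketch only: report whether PAN is set in an AArch32 CPSR value. */
      static bool aarch32_pan_enabled(uint32_t cpsr)
      {
          return (cpsr >> CPSR_PAN_BIT) & 1;
      }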
* target/arm: Make TLBIOS and TLBIRANGE ops trap on HCR_EL2.TTLB (Peter Maydell, 2022-11-04; 1 file changed, -18/+18)
  The HCR_EL2.TTLB bit is supposed to trap all EL1 execution of TLB maintenance instructions. However we have added new TLB insns for FEAT_TLBIOS and FEAT_TLBIRANGE, and forgot to set their accessfn to access_ttlb. Add the missing accessfns.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* accel/tcg: Remove will_exit argument from cpu_restore_state (Richard Henderson, 2022-10-31; 2 files changed, -5/+5)
  The value passed is always true, and if the target's synchronize_from_tb hook is non-trivial, not exiting may be erroneous.
  Reviewed-by: Claudio Fontana <cfontana@suse.de>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Use the max page size in a 2-stage ptw (Richard Henderson, 2022-10-27; 1 file changed, -1/+10)
  We had only been reporting the stage2 page size. This causes problems if stage1 is using a larger page size (16k, 2M, etc), but stage2 is using a smaller page size, because cputlb does not set large_page_{addr,mask} properly. Fix by using the max of the two page sizes.
  Reported-by: Marc Zyngier <maz@kernel.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20221024051851.3074715-15-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
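  The fix amounts to reporting the larger of the two log2 page sizes. A standalone sketch under that reading; the function and parameter names are illustrative, not the QEMU field names.

      #include <stdint.h>

      /* Sketch: the page size reported for a two-stage walk is the max of
       * the stage 1 and stage 2 sizes, expressed as log2 of bytes. */
      static uint8_t combined_lg_page_size(uint8_t s1_lg, uint8_t s2_lg)
      {
          return s1_lg > s2_lg ? s1_lg : s2_lg;
      }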
* target/arm: Implement FEAT_HAFDBS, dirty bit portion (Richard Henderson, 2022-10-27; 2 files changed, -1/+17)
  Perform the atomic update for hardware management of the dirty bit.
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20221024051851.3074715-14-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Implement FEAT_HAFDBS, access flag portion (Richard Henderson, 2022-10-27; 2 files changed, -22/+156)
  Perform the atomic update for hardware management of the access flag.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20221024051851.3074715-13-richard.henderson@linaro.org
  [PMM: Fix accidental PROT_WRITE to PAGE_WRITE; add missing main-loop.h include]
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Tidy merging of attributes from descriptor and table (Richard Henderson, 2022-10-27; 1 file changed, -18/+16)
  Replace some gotos with some nested if statements.
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Message-id: 20221024051851.3074715-12-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Consider GP an attribute in get_phys_addr_lpae (Richard Henderson, 2022-10-27; 1 file changed, -4/+2)
  Both GP and DBM are in the upper attribute block. Extend the computation of attrs to include them, then simplify the setting of guarded.
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Message-id: 20221024051851.3074715-11-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Don't shift attrs in get_phys_addr_lpae (Richard Henderson, 2022-10-27; 1 file changed, -16/+15)
  Leave the upper and lower attributes in the place they originate from in the descriptor. Shifting them around is confusing, since one cannot read the bit numbers out of the manual. Also, new attributes have been added which would alter the shifts.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Message-id: 20221024051851.3074715-10-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Fix fault reporting in get_phys_addr_lpae (Richard Henderson, 2022-10-27; 1 file changed, -18/+13)
  Always overriding fi->type was incorrect, as we would not properly propagate the fault type from S1_ptw_translate, or arm_ldq_ptw. Simplify things by providing a new label for a translation fault. For other faults, store into fi directly.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Message-id: 20221024051851.3074715-9-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Remove loop from get_phys_addr_lpae (Richard Henderson, 2022-10-27; 1 file changed, -92/+92)
  The unconditional loop was used both to iterate over levels and to control parsing of attributes. Use an explicit goto in both cases. While this appears less clean for iterating over levels, we will need to jump back into the middle of this loop for atomic updates, which is even uglier.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20221024051851.3074715-8-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Add ARMFault_UnsuppAtomicUpdate (Richard Henderson, 2022-10-27; 1 file changed, -0/+4)
  This fault type is to be used with FEAT_HAFDBS when the guest enables hw updates, but places the tables in memory where atomic updates are unsupported.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Message-id: 20221024051851.3074715-7-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Move S1_ptw_translate outside arm_ld[lq]_ptw (Richard Henderson, 2022-10-27; 1 file changed, -19/+22)
  Separate S1 translation from the actual lookup. Will enable lpae hardware updates.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20221024051851.3074715-6-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Extract HA and HD in aa64_va_parameters (Richard Henderson, 2022-10-27; 2 files changed, -1/+9)
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Message-id: 20221024051851.3074715-5-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Add isar predicates for FEAT_HAFDBS (Richard Henderson, 2022-10-27; 1 file changed, -0/+10)
  The MMFR1 field may indicate support for hardware update of access flag alone, or access flag and dirty bit.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20221024051851.3074715-4-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
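  The predicates presumably key off ID_AA64MMFR1.HAFDBS, where 1 means access-flag-only and 2 means access flag plus dirty bit. A sketch in the style of QEMU's existing isar_feature_aa64_* helpers, assuming the FIELD_EX64 macro and the register field definitions; the helper names here are an assumption, not a quote of the patch.

      /* Sketch only: hardware access flag / dirty bit feature tests. */
      static inline bool isar_feature_aa64_hafs(const ARMISARegisters *id)
      {
          return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, HAFDBS) != 0;
      }

      static inline bool isar_feature_aa64_hdbs(const ARMISARegisters *id)
      {
          return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, HAFDBS) >= 2;
      }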
* target/arm: Add ptw_idx to S1Translate (Richard Henderson, 2022-10-27; 1 file changed, -17/+54)
  Hoist the computation of the mmu_idx for the ptw up to get_phys_addr_with_struct and get_phys_addr_twostage. This removes the duplicate check for stage2 disabled from the middle of the walk, performing it only once.
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Tested-by: Alex Bennée <alex.bennee@linaro.org>
  Message-id: 20221024051851.3074715-3-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Introduce regime_is_stage2 (Richard Henderson, 2022-10-27; 3 files changed, -17/+16)
  Reduce the amount of typing required for this check.
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20221024051851.3074715-2-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
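  The helper is presumably a one-line test over the two stage-2 mmu indexes, roughly along these lines (assuming the ARMMMUIdx enumerators; this is a sketch, not a quote of the patch):

      /* Sketch: true for either the non-secure or secure stage 2 regime. */
      static inline bool regime_is_stage2(ARMMMUIdx mmu_idx)
      {
          return mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S;
      }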
* target/arm: honor HCR_E2H and HCR_TGE in arm_excp_unmasked() (Ake Koomsin, 2022-10-27; 1 file changed, -7/+17)
  An exception targeting EL2 from lower EL is actually maskable when HCR_E2H and HCR_TGE are both set. This applies to both secure and non-secure Security state.
  We can remove the conditions that try to suppress masking of interrupts when we are Secure and the exception targets EL2 and Secure EL2 is disabled. This is OK because in that situation arm_phys_excp_target_el() will never return 2 as the target EL. The 'not if secure' check in this function was originally written before arm_hcr_el2_eff(), and back then the target EL returned by arm_phys_excp_target_el() could be 2 even if we were in Secure EL0/EL1; but it is no longer needed.
  Signed-off-by: Ake Koomsin <ake@igel.co.jp>
  Message-id: 20221017092432.546881-1-ake@igel.co.jp
  [PMM: Add commit message paragraph explaining why it's OK to remove the checks on secure and SCR_EEL2]
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Implement FEAT_E0PD (Peter Maydell, 2022-10-27; 5 files changed, -19/+34)
  FEAT_E0PD adds new bits E0PD0 and E0PD1 to TCR_EL1, which allow the OS to forbid EL0 access to half of the address space. Since this is an EL0-specific variation on the existing TCR_ELx.{EPD0,EPD1}, we can implement it entirely in aa64_va_parameters().
  This requires moving the existing regime_is_user() to internals.h so that the code in helper.c can get at it.
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Message-id: 20221021160131.3531787-1-peter.maydell@linaro.org
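  Architecturally, E0PD0 and E0PD1 live at TCR_EL1 bits 55 and 56 and apply only to EL0. A hedged sketch of how the check could slot into aa64_va_parameters(), assuming QEMU's regime_is_user(), cpu_isar_feature(), env_archcpu() and extract64() helpers and an aa64_e0pd feature test; the exact placement and names in the real patch may differ.

      /* Sketch only: for the EL0 regime, the selected half of the address
       * space behaves as if its EPDx bit were set when E0PDx is set. */
      if (regime_is_user(env, mmu_idx) &&
          cpu_isar_feature(aa64_e0pd, env_archcpu(env))) {
          epd = epd || extract64(tcr, select ? 56 : 55, 1);
      }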
* target/arm: Convert to tcg_ops restore_state_to_opc (Richard Henderson, 2022-10-26; 2 files changed, -22/+26)
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* accel/tcg: Simplify page_get/alloc_target_data (Richard Henderson, 2022-10-26; 1 file changed, -4/+0)
  Since the only user, Arm MTE, always requires allocation, merge the get and alloc functions to always produce a non-null result. Also assume that the user has already checked page validity.
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* accel/tcg: Make page_alloc_target_data allocation constant (Richard Henderson, 2022-10-26; 3 files changed, -6/+9)
  Use a constant target data allocation size for all pages. This will be necessary to reduce overhead of page tracking. Since TARGET_PAGE_DATA_SIZE is now required, we can use this to omit data tracking for targets that don't require it.
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Enable TARGET_TB_PCREL (Richard Henderson, 2022-10-20; 6 files changed, -71/+178)
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20221020030641.2066807-10-richard.henderson@linaro.org
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Introduce gen_pc_plus_diff for aarch32 (Richard Henderson, 2022-10-20; 1 file changed, -17/+21)
  In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20221020030641.2066807-9-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Introduce gen_pc_plus_diff for aarch64 (Richard Henderson, 2022-10-20; 1 file changed, -12/+29)
  In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20221020030641.2066807-8-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Change gen_jmp* to work on displacements (Richard Henderson, 2022-10-20; 1 file changed, -16/+21)
  In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20221020030641.2066807-7-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Remove gen_exception_internal_insn pc argument (Richard Henderson, 2022-10-20; 2 files changed, -8/+8)
  In preparation for TARGET_TB_PCREL, reduce reliance on absolute values. Since we always pass dc->pc_curr, fold the arithmetic to zero displacement.
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20221020030641.2066807-6-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Change gen_exception_insn* to work on displacements (Richard Henderson, 2022-10-20; 6 files changed, -46/+43)
  In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20221020030641.2066807-5-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Change gen_*set_pc_im to gen_*update_pc (Richard Henderson, 2022-10-20; 5 files changed, -54/+56)
  In preparation for TARGET_TB_PCREL, reduce reliance on absolute values by passing in pc difference.
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20221020030641.2066807-4-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Change gen_goto_tb to work on displacements (Richard Henderson, 2022-10-20; 2 files changed, -23/+27)
  In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20221020030641.2066807-3-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Introduce curr_insn_len (Richard Henderson, 2022-10-20; 3 files changed, -4/+8)
  A simple helper to retrieve the length of the current insn.
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20221020030641.2066807-2-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
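  Such a helper is presumably just the difference between the next PC and the PC of the instruction being translated; a sketch assuming the DisasContext fields used elsewhere in the translator (not necessarily the literal patch):

      /* Sketch: length in bytes of the insn currently being translated. */
      static inline int curr_insn_len(DisasContext *s)
      {
          return s->base.pc_next - s->pc_curr;
      }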
* target/arm: Use bool consistently for get_phys_addr subroutines (Richard Henderson, 2022-10-20; 1 file changed, -4/+3)
  The return type of the functions is already bool, but in a few instances we used an integer type with the return statement.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20221011031911.2408754-13-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Split out get_phys_addr_twostage (Richard Henderson, 2022-10-20; 1 file changed, -91/+100)
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20221011031911.2408754-12-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Use softmmu tlbs for page table walking (Richard Henderson, 2022-10-20; 3 files changed, -75/+145)
  So far, limit the change to S1_ptw_translate, arm_ldl_ptw, and arm_ldq_ptw. Use probe_access_full to find the host address, and if so use a host load. If the probe fails, we've got our fault info already. On the off chance that page tables are not in RAM, continue to use the address_space_ld* functions.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20221011031911.2408754-11-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Move be test for regime into S1TranslateResult (Richard Henderson, 2022-10-20; 1 file changed, -2/+4)
  Hoist this test out of arm_ld[lq]_ptw into S1_ptw_translate.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20221011031911.2408754-10-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Plumb debug into S1Translate (Richard Henderson, 2022-10-20; 1 file changed, -18/+37)
  Before using softmmu page tables for the ptw, plumb down a debug parameter so that we can query page table entries from gdbstub without modifying cpu state.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20221011031911.2408754-9-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Split out S1Translate type (Richard Henderson, 2022-10-20; 1 file changed, -61/+79)
  Consolidate most of the inputs and outputs of S1_ptw_translate into a single structure. Plumb this through arm_ld*_ptw from the controlling get_phys_addr_* routine.
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Message-id: 20221011031911.2408754-8-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Restrict tlb flush from vttbr_write to vmid change (Richard Henderson, 2022-10-20; 1 file changed, -2/+2)
  Compare only the VMID field when considering whether we need to flush.
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Message-id: 20221011031911.2408754-7-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
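  VTTBR_EL2.VMID occupies bits [63:48] of the register (the upper half of that field is only valid with 16-bit VMIDs), so "did the VMID change?" reduces to comparing one field. A hedged sketch of the idea, assuming QEMU's extract64(), raw_read()/raw_write(), env_cpu(), tlb_flush_by_mmuidx() and alle1_tlbmask() helpers; this is not the verbatim patch.

      /* Sketch only: flush the EL1&0 TLBs just when the VMID field differs. */
      if (extract64(raw_read(env, ri) ^ value, 48, 16) != 0) {
          tlb_flush_by_mmuidx(env_cpu(env), alle1_tlbmask(env));
      }
      raw_write(env, ri, value);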
* target/arm: Move ARMMMUIdx_Stage2 to a real tlb mmu_idx (Richard Henderson, 2022-10-20; 3 files changed, -49/+127)
  We had been marking this ARM_MMU_IDX_NOTLB, move it to a real tlb. Flush the tlb when invalidating stage 1+2 translations. Re-use alle1_tlbmask() for other instances of EL1&0 + Stage2.
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Message-id: 20221011031911.2408754-6-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Add ARMMMUIdx_Phys_{S,NS} (Richard Henderson, 2022-10-20; 3 files changed, -4/+24)
  Not yet used, but add mmu indexes for 1-1 mapping to physical addresses.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20221011031911.2408754-5-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Use probe_access_full for BTI (Richard Henderson, 2022-10-20; 5 files changed, -31/+20)
  Add a field to TARGET_PAGE_ENTRY_EXTRA to hold the guarded bit. In is_guarded_page, use probe_access_full instead of just guessing that the tlb entry is still present. Also handles the FIXME about executing from device memory.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20221011031911.2408754-4-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Use probe_access_full for MTE (Richard Henderson, 2022-10-20; 5 files changed, -86/+36)
  The CPUTLBEntryFull structure now stores the original pte attributes, as well as the physical address. Therefore, we no longer need a separate bit in MemTxAttrs, nor do we need to walk the tree of memory regions.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20221011031911.2408754-3-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Enable TARGET_PAGE_ENTRY_EXTRA (Richard Henderson, 2022-10-20; 2 files changed, -0/+15)
  Copy attrs and shareability into the TLB. This will eventually be used by S1_ptw_translate to report stage1 translation failures, and by do_ats_write to fill in PAR_EL1.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20221011031911.2408754-2-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: update the cortex-a15 MIDR to latest rev (Alex Bennée, 2022-10-20; 1 file changed, -1/+3)
  QEMU doesn't model micro-architectural details, which includes most chip errata. The ARM_ERRATA_798181 workaround in the Linux kernel (see erratum_a15_798181_init) currently detects QEMU's cortex-a15 as broken and triggers additional expensive TLB flushes as a result.
  Change the MIDR to report what the latest silicon would (r4p0). We explicitly set the IMPDEF revidr bits to 0 because we don't need to set anything other than the silicon revision to indicate these flushes are not needed. This cuts about 5s from my Debian kernel boot with the latest 6.0rc1 kernel (29s->24s).
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Tested-by: Anders Roxell <anders.roxell@linaro.org>
  Message-id: 20221010153225.506394-1-alex.bennee@linaro.org
  Cc: Arnd Bergmann <arnd@linaro.org>
  Cc: Anders Roxell <anders.roxell@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Tested-by: Anders Roxell <anders.roxell@linaro.org>
  Message-Id: <20220906172257.2776521-1-alex.bennee@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
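  With the standard MIDR encoding (implementer 0x41, variant = major revision, partnum 0xC0F for Cortex-A15, revision = minor), an r4p0 MIDR works out to 0x414fc0f0. A sketch of the kind of change described, assuming the ARMCPU initfn style used elsewhere in QEMU; the exact values written by the patch are not quoted here.

      /* Sketch only: report r4p0 silicon and an all-zero (IMPDEF) REVIDR. */
      cpu->midr = 0x414fc0f0;   /* Cortex-A15 r4p0 */
      cpu->revidr = 0;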