path: root/target/arm/translate.c
Commit message | Author | Date | Files | Lines

* target/arm: Assert thumb pc is aligned (Richard Henderson, 2021-12-15; 1 file, +3/-0)
  Misaligned thumb PC is architecturally impossible. Assert is better than proceeding, in case we've missed something somewhere. Expand a comment about aligning the pc in gdbstub. Fail an incoming migrate if a thumb pc is misaligned.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
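
  A minimal sketch of the invariant being asserted (the placement and exact expression are assumptions, not copied from the patch):

      /* Thumb execution means bit 0 of the PC can never be set. */
      assert((dc->base.pc_next & 1) == 0);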

* target/arm: Take an exception if PC is misaligned (Richard Henderson, 2021-12-15; 1 file, +21/-1)
  For A64, any input to an indirect branch can cause this. For A32, many indirect branch paths force the branch to be aligned, but BXWritePC does not. This includes the BX instruction but also other interworking changes to PC.

  Prior to v8, this case is UNDEFINED. With v8, this is CONSTRAINED UNPREDICTABLE and may either raise an exception or force align the PC. We choose to raise an exception because we have the infrastructure, it makes the generated code for gen_bx simpler, and it has the possibility of catching more guest bugs.

  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
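
  A rough sketch of the kind of check this adds at translation time (helper and field names are assumptions based on the description, not taken from the patch):

      /* A32: a PC with either of its low 2 bits set cannot be fetched. */
      if (dc->base.pc_next & 3) {
          gen_helper_exception_pc_alignment(cpu_env,
                                            tcg_constant_tl(dc->base.pc_next));
          dc->base.is_jmp = DISAS_NORETURN;
          return;
      }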

* target/arm: Split arm_pre_translate_insn (Richard Henderson, 2021-12-15; 1 file, +7/-3)
  Create arm_check_ss_active and arm_check_kernelpage. Reverse the order of the tests. While it doesn't matter in practice, because only user-only has a kernel page and user-only never sets ss_active, ss_active has priority over execution exceptions and it is best to keep them in the proper order.
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
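
  A sketch of the resulting call order (the return/skip handling is simplified; the point is that single-step is tested first because it has priority):

      /* Singlestep exceptions have the highest priority. */
      if (arm_check_ss_active(dc)) {
          return;
      }
      if (arm_check_kernelpage(dc)) {
          return;
      }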

* target/arm: Hoist pc_next to a local variable in thumb_tr_translate_insn (Richard Henderson, 2021-12-15; 1 file, +8/-8)
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* target/arm: Hoist pc_next to a local variable in arm_tr_translate_insn (Richard Henderson, 2021-12-15; 1 file, +5/-4)
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
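
  The hoist in both functions is purely mechanical; roughly (a sketch, with most of the real function body omitted):

      uint32_t pc = dc->base.pc_next;
      unsigned int insn = arm_ldl_code(env, &dc->base, pc, dc->sctlr_b);

      dc->pc_curr = pc;
      dc->insn = insn;
      dc->base.pc_next = pc + 4;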

* target/arm: Use tcg_constant_i32() in gen_rev16() (Philippe Mathieu-Daudé, 2021-11-02; 1 file, +1/-2)
  Since the mask is a constant value, use tcg_constant_i32() instead of a TCG temporary.
  Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-Id: <20211029231834.2476117-6-f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
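
  Roughly the pattern involved (a sketch rather than the literal diff; 0x00ff00ff is the usual REV16-style byte-swap mask):

      TCGv_i32 mask = tcg_constant_i32(0x00ff00ff);
      tcg_gen_and_i32(tmp, tmp, mask);
      tcg_gen_and_i32(var, var, mask);
      /* no tcg_temp_free_i32(mask): constants are pooled, not temporaries */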

* target/arm: Use the constant variant of store_cpu_field() when possible (Philippe Mathieu-Daudé, 2021-11-02; 1 file, +6/-15)
  When storing a constant value, we can replace the store_cpu_field() call with store_cpu_field_constant(), which avoids using TCG temporaries.
  Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-Id: <20211029231834.2476117-4-f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
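
  An illustrative before/after (the CPU field shown is picked for illustration, not taken from the patch):

      /* before: a temporary exists only to hold the constant */
      TCGv_i32 tmp = tcg_temp_new_i32();
      tcg_gen_movi_i32(tmp, 1);
      store_cpu_field(tmp, thumb);        /* the macro stores and frees tmp */

      /* after: no temporary at all */
      store_cpu_field_constant(1, thumb);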

* target/arm: Use tcg_constant_i32() in op_smlad() (Philippe Mathieu-Daudé, 2021-11-02; 1 file, +1/-2)
  Avoid using a TCG temporary for a read-only constant.
  Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-Id: <20211029231834.2476117-2-f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* target/arm: Drop checks for singlestep_enabled (Richard Henderson, 2021-10-16; 1 file, +6/-30)
  GDB single-stepping is now handled generically.
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* target/arm: Add TB flag for "MVE insns not predicated" (Peter Maydell, 2021-09-21; 1 file, +8/-0)
  Our current codegen for MVE always calls out to helper functions, because some byte lanes might be predicated. The common case is that in fact there is no predication active and all lanes should be updated together, so we can produce better code by detecting that and using the TCG generic vector infrastructure.

  Add a TB flag that is set when we can guarantee that there is no active MVE predication, and a bool in the DisasContext. Subsequent patches will use this flag to generate improved code for some instructions.

  In most cases when the predication state changes we simply end the TB after that instruction. For the code called from vfp_access_check() that handles lazy state preservation and creating a new FP context, we can usually avoid having to try to end the TB because luckily the new value of the flag following the register changes in those sequences doesn't depend on any runtime decisions.

  We do have to end the TB if the guest has enabled lazy FP state preservation but not automatic state preservation, but this is an odd corner case that is not going to be common in real-world code.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210913095440.13462-4-peter.maydell@linaro.org
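
  A sketch of how a later patch can use the flag (the DisasContext field name is assumed to be mve_no_pred, and the gvec call is only one example of the whole-vector path):

      if (s->mve_no_pred) {
          /* no predication active: emit a plain whole-vector TCG op */
          tcg_gen_gvec_add(vece, dofs, aofs, bofs, oprsz, maxsz);
      } else {
          /* some byte lanes may be predicated: fall back to the MVE helper */
      }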

* target/arm: Avoid goto_tb if we're trying to exit to the main loop (Peter Maydell, 2021-09-21; 1 file, +33/-1)
  Currently gen_jmp_tb() assumes that if it is called then the jump it is handling is the only reason that we might be trying to end the TB, so it will use goto_tb if it can. This is usually the case: mostly "we did something that means we must end the TB" happens on a non-branch instruction. However, there are cases where we decide early in handling an instruction that we need to end the TB and return to the main loop, and then the insn is a complex one that involves gen_jmp_tb(). For instance, for M-profile FP instructions, in gen_preserve_fp_state() which is called from vfp_access_check() we want to force an exit to the main loop if lazy state preservation is active and we are in icount mode.

  Make gen_jmp_tb() look at the current value of is_jmp, and only use goto_tb if the previous is_jmp was DISAS_NEXT or DISAS_TOO_MANY.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210913095440.13462-2-peter.maydell@linaro.org
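
  The core of the decision, heavily simplified (the real patch handles more is_jmp states than shown here):

      /* only chain to the next TB if nothing else has asked to end it */
      if (s->base.is_jmp == DISAS_NEXT || s->base.is_jmp == DISAS_TOO_MANY) {
          gen_goto_tb(s, tbno, dest);
      } else {
          /* something already wants the main loop: just set the PC and
           * let arm_tr_tb_stop() emit the appropriate kind of exit */
          gen_set_pc_im(s, dest);
      }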

* accel/tcg: Add DisasContextBase argument to translator_ld* (Ilya Leoshkevich, 2021-09-14; 1 file, +5/-4)
  Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
  [rth: Split out of a larger patch.]
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
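
  After this change the loaders take the DisasContextBase in addition to the CPU state, for example (call shape only; target/arm actually goes through its arm_ldl_code/translator_ldl_swap wrappers):

      uint32_t insn = translator_ldl(env, &dc->base, dc->base.pc_next);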

* target/arm: Take an exception if PSTATE.IL is set (Peter Maydell, 2021-09-13; 1 file, +21/-0)
  In v8A, the PSTATE.IL bit is set for various kinds of illegal exception return or mode-change attempts. We already set PSTATE.IL (or its AArch32 equivalent CPSR.IL) in all those cases, but we weren't implementing the part of the behaviour where attempting to execute an instruction with PSTATE.IL takes an immediate exception with an appropriate syndrome value.

  Add a new TB flags bit tracking PSTATE.IL/CPSR.IL, and generate code to take an exception instead of whatever the instruction would have been.

  PSTATE.IL and CPSR.IL change only on exception entry, attempted exception exit, and various AArch32 mode changes via cpsr_write(). These places generally already rebuild the hflags, so the only place we need an extra rebuild_hflags call is in the illegal-return codepath of the AArch64 exception_return helper.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210821195958.41312-2-richard.henderson@linaro.org
  Message-Id: <20210817162118.24319-1-peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  [rth: Added missing returns; set IL bit in syndrome]
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
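
  A hedged sketch of the translate-time behaviour this describes (the field, syndrome and helper names are assumptions, not checked against the patch):

      if (s->pstate_il) {
          /* any insn executed with PSTATE.IL set takes an exception instead */
          gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_illegalstate(),
                             default_exception_el(s));
          return;
      }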

* target/arm: Implement HSTR.TJDBX (Peter Maydell, 2021-08-26; 1 file, +12/-0)
  In v7A, the HSTR register has a TJDBX bit which traps NS EL0/EL1 access to the JOSCR and JMCR trivial Jazelle registers, and also BXJ. Implement these traps. In v8A this HSTR bit doesn't exist, so don't trap for v8A CPUs.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210816180305.20137-3-peter.maydell@linaro.org

* target/arm: Implement M-profile trapping on division by zero (Peter Maydell, 2021-08-25; 1 file, +2/-2)
  Unlike A-profile, for M-profile the UDIV and SDIV insns can be configured to raise an exception on division by zero, using the CCR DIV_0_TRP bit. Implement support for setting this bit by making the helper functions raise the appropriate exception.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210730151636.17254-3-peter.maydell@linaro.org

* target/arm: Implement MVE VCTP (Peter Maydell, 2021-08-25; 1 file, +33/-0)
  Implement the MVE VCTP insn, which sets the VPR.P0 predicate bits so that any element at index Rn or greater is predicated. As with VPNOT, this insn itself is predicable and subject to beatwise execution.

  The calculation of the mask is the same as is used to determine ltpmask in mve_element_mask(), but we precalculate masklen in generated code to avoid having to have 4 helpers specialized by size.

  We put the decode line in with the low-overhead-loop insns in t32.decode because it's logically part of that collection of insn patterns, even though it is an MVE only insn.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

* target/arm: Enforce that M-profile SP low 2 bits are always zero (Peter Maydell, 2021-07-27; 1 file, +3/-0)
  For M-profile, unlike A-profile, the low 2 bits of SP are defined to be RES0H, which is to say that they must be hardwired to zero so that guest attempts to write non-zero values to them are ignored.

  Implement this behaviour by masking out the low bits:
   * for writes to r13 by the gdbstub
   * for writes to any of the various flavours of SP via MSR
   * for writes to r13 via store_reg() in generated code

  Note that all the direct uses of cpu_R[] in translate.c are in places where the register is definitely not r13 (usually because that has been checked for as an UNDEFINED or UNPREDICTABLE case and handled as UNDEF).

  All the other writes to regs[13] in C code are either:
   * A-profile only code
   * writes of values we can guarantee to be aligned, such as
     - writes of previous-SP-value plus or minus a 4-aligned constant
     - writes of the value in an SP limit register (which we already enforce to be aligned)

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210723162146.5167-2-peter.maydell@linaro.org
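
  A simplified sketch of the store_reg() part of this (the reg == 15 interworking handling that the real function also does is omitted):

      void store_reg(DisasContext *s, int reg, TCGv_i32 var)
      {
          if (reg == 13 && arm_dc_feature(s, ARM_FEATURE_M)) {
              /* SP[1:0] must read as zero: ignore writes to those bits */
              tcg_gen_andi_i32(var, var, ~3);
          }
          tcg_gen_mov_i32(cpu_R[reg], var);
          tcg_temp_free_i32(var);
      }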

* accel/tcg: Remove TranslatorOps.breakpoint_check (Richard Henderson, 2021-07-21; 1 file, +0/-29)
  The hook is now unused, with breakpoints checked outside translation.
  Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* target/arm: Use translator_use_goto_tb for aarch32 (Richard Henderson, 2021-07-09; 1 file, +1/-11)
  Just use translator_use_goto_tb directly at the one call site, rather than maintaining a local wrapper.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
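
  A sketch of the call-site shape after dropping the local wrapper (simplified from gen_goto_tb; exact surroundings not reproduced here):

      if (translator_use_goto_tb(&s->base, dest)) {
          tcg_gen_goto_tb(n);
          gen_set_pc_im(s, dest);
          tcg_gen_exit_tb(s->base.tb, n);
      } else {
          gen_set_pc_im(s, dest);
          gen_goto_ptr();
      }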

* target/arm: Use DISAS_TOO_MANY for ISB and SB (Richard Henderson, 2021-07-09; 1 file, +2/-2)
  Using gen_goto_tb directly misses the single-step check. Let the branch or debug exception be emitted by arm_tr_tb_stop.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
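
  In effect, instead of emitting the branch itself the ISB/SB handling just marks the TB as finished (sketch):

      s->base.is_jmp = DISAS_TOO_MANY;   /* arm_tr_tb_stop() emits the exit */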

* tcg: Avoid including 'trace-tcg.h' in target translate.c (Philippe Mathieu-Daudé, 2021-07-09; 1 file, +0/-1)
  The root trace-events only declares a single TCG event:

      $ git grep -w tcg trace-events
      trace-events:115:# tcg/tcg-op.c
      trace-events:137:vcpu tcg guest_mem_before(TCGv vaddr, uint16_t info) "info=%d", "vaddr=0x%016"PRIx64" info=%d"

  and only tcg/tcg-op.c uses it:

      $ git grep -l trace_guest_mem_before_tcg
      tcg/tcg-op.c

  therefore it is pointless to include "trace-tcg.h" in each target (because it is not used). Remove it.

  Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Message-Id: <20210629050935.2570721-1-f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* target/arm: Implement MVE shifts by register (Peter Maydell, 2021-07-02; 1 file, +30/-0)
  Implement the MVE shifts by register, which perform shifts on a single general-purpose register.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210628135835.6690-19-peter.maydell@linaro.org

* target/arm: Implement MVE shifts by immediate (Peter Maydell, 2021-07-02; 1 file, +66/-2)
  Implement the MVE shifts by immediate, which perform shifts on a single general-purpose register.

  These patterns overlap with the long-shift-by-immediates, so we have to rearrange the grouping a little here.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210628135835.6690-18-peter.maydell@linaro.org

* target/arm: Implement MVE long shifts by register (Peter Maydell, 2021-07-02; 1 file, +69/-0)
  Implement the MVE long shifts by register, which perform shifts on a pair of general-purpose registers treated as a 64-bit quantity, with the shift count in another general-purpose register, which might be either positive or negative.

  Like the long-shifts-by-immediate, these encodings sit in the space that was previously the UNPREDICTABLE MOVS/ORRS with Rm==13,15.

  Because LSLL_rr and ASRL_rr overlap with both MOV_rxri/ORR_rrri and also with CSEL (as one of the previously-UNPREDICTABLE Rm==13 cases), we have to move the CSEL pattern into the same decodetree group.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210628135835.6690-17-peter.maydell@linaro.org

* target/arm: Implement MVE long shifts by immediate (Peter Maydell, 2021-07-02; 1 file, +90/-0)
  The MVE extension to v8.1M includes some new shift instructions which sit entirely within the non-coprocessor part of the encoding space and which operate only on general-purpose registers. They take up the space which was previously UNPREDICTABLE MOVS and ORRS encodings with Rm == 13 or 15.

  Implement the long shifts by immediate, which perform shifts on a pair of general-purpose registers treated as a 64-bit quantity, with an immediate shift count between 1 and 32.

  Awkwardly, because the MOVS and ORRS trans functions do not UNDEF for the Rm==13,15 case, we need to explicitly emit code to UNDEF for the cases where v8.1M now requires that. (Trying to change MOVS and ORRS is too difficult, because the functions that generate the code are shared between a dozen different kinds of arithmetic or logical instruction for all A32, T16 and T32 encodings, and for some insns and some encodings Rm==13,15 are valid.)

  We make the helper functions we need for UQSHLL and SQSHLL take a 32-bit value which the helper casts to int8_t because we'll need these helpers also for the shift-by-register insns, where the shift count might be < 0 or > 32.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210628135835.6690-16-peter.maydell@linaro.org

* target/arm: Use asimd_imm_const for A64 decode (Peter Maydell, 2021-07-02; 1 file, +15/-2)
  The A64 AdvSIMD modified-immediate grouping uses almost the same constant encoding that A32 Neon does; reuse asimd_imm_const() (to which we add the AArch64-specific case for cmode 15 op 1) instead of reimplementing it all.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210628135835.6690-5-peter.maydell@linaro.org

* target/arm: Make asimd_imm_const() public (Peter Maydell, 2021-07-02; 1 file, +57/-0)
  The function asimd_imm_const() in translate-neon.c is an implementation of the pseudocode AdvSIMDExpandImm(), which we will also want for MVE. Move the implementation to translate.c, with a prototype in translate.h.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210628135835.6690-4-peter.maydell@linaro.org
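
  The shared interface, roughly (the prototype shown is an assumption matching the Neon implementation's arguments; the commit message only says a prototype goes in translate.h):

      /* expand an 8-bit modified immediate as AdvSIMDExpandImm() does */
      uint64_t asimd_imm_const(uint32_t imm, int cmode, int op);

      uint64_t imm64 = asimd_imm_const(abcdefgh, cmode, op);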

* target/arm: Improve REVSH (Richard Henderson, 2021-06-29; 1 file, +1/-3)
  The new bswap flags can implement the semantics exactly.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
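
  REVSH byte-swaps the low halfword and sign-extends the result, which the flagged op can express in one call (sketch):

      tcg_gen_bswap16_i32(dest, var, TCG_BSWAP_OS);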

* tcg: Add flags argument to tcg_gen_bswap16_*, tcg_gen_bswap32_i64 (Richard Henderson, 2021-06-29; 1 file, +1/-1)
  Implement the new semantics in the fallback expansion. Change all callers to supply the flags that keep the semantics unchanged locally.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
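
  For a caller that previously relied on the old zero-extending behaviour, the mechanical conversion looks like this (example, not a specific hunk from the patch):

      /* input already fits in 16 bits; keep the output zero-extended */
      tcg_gen_bswap16_i32(tmp, tmp, TCG_BSWAP_IZ | TCG_BSWAP_OZ);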

* target/arm: Add framework for MVE decode (Peter Maydell, 2021-06-16; 1 file, +1/-0)
  Add the framework for decoding MVE insns, with the necessary new files and the meson.build rules, but no actual content yet.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210614151007.4545-11-peter.maydell@linaro.org

* target/arm: Implement MVE LETP insn (Peter Maydell, 2021-06-16; 1 file, +96/-8)
  Implement the MVE LETP insn. This is like the existing LE loop-end insn, but it must perform an FPU-enabled check, and on loop-exit it resets LTPSIZE to 4.

  To accommodate the requirement to do something on loop-exit, we drop the use of condlabel and instead manage both the TB exits manually, in the same way we already do in trans_WLS().

  The other MVE-specific change to the LE insn is that we must raise an INVSTATE UsageFault if LTPSIZE is not 4.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210614151007.4545-10-peter.maydell@linaro.org

* target/arm: Implement MVE DLSTP (Peter Maydell, 2021-06-16; 1 file, +21/-2)
  Implement the MVE DLSTP insn; this is like the existing DLS insn, except that it must do an FPU access check and it sets LTPSIZE to the value specified in the insn.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210614151007.4545-9-peter.maydell@linaro.org

* target/arm: Implement MVE WLSTP insn (Peter Maydell, 2021-06-16; 1 file, +36/-1)
  Implement the MVE WLSTP insn; this is like the existing WLS insn, except that it specifies a size value which is used to set FPSCR.LTPSIZE.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210614151007.4545-8-peter.maydell@linaro.org

* target/arm: Implement MVE LCTP (Peter Maydell, 2021-06-16; 1 file, +24/-0)
  Implement the MVE LCTP instruction.

  We put its decode and implementation with the other low-overhead-branch insns because although it is only present if MVE is implemented it is logically in the same group as the other LOB insns.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210614151007.4545-7-peter.maydell@linaro.org

* target/arm: Add handling for PSR.ECI/ICI (Peter Maydell, 2021-06-16; 1 file, +106/-5)
  On A-profile, PSR bits [15:10][26:25] are always the IT state bits. On M-profile, some of the reserved encodings of the IT state are used to instead indicate partial progress through instructions that were interrupted partway through by an exception and can be resumed.

  These resumable instructions fall into two categories:
   (1) load/store multiple instructions, where these bits are called "ICI" and specify the register in the ldm/stm list where execution should resume. (Specifically: LDM, STM, VLDM, VSTM, VLLDM, VLSTM, CLRM, VSCCLRM.)
   (2) MVE instructions subject to beatwise execution, where these bits are called "ECI" and specify which beats in this and possibly also the following MVE insn have been executed.

  There are also a few insns (LE, LETP, and BKPT) which do not use the ICI/ECI bits but must leave them alone. Otherwise, we should raise an INVSTATE UsageFault for any attempt to execute an insn with non-zero ICI/ECI bits.

  So far we have been able to ignore ECI/ICI, because the architecture allows the IMPDEF choice of "always restart load/store multiple from the beginning regardless of ICI state", so the only thing we have been missing is that we don't raise the INVSTATE fault for bad guest code. However, MVE requires that we honour ECI bits and do not re-execute beats of an insn that have already been executed.

  Add the support in the decoder for handling ECI/ICI:
   * identify the ECI/ICI case in the CONDEXEC TB flags
   * when a load/store multiple insn succeeds, it updates the ECI/ICI state (both in DisasContext and in the CPU state), and sets a flag to say that the ECI/ICI state was handled
   * if we find that the insn we just decoded did not handle the ECI/ICI state, we delete all the code that we just generated for it and instead emit the code to raise the INVFAULT. This allows us to avoid having to update every non-MVE non-LDM/STM insn to make it check for "is ECI/ICI set?".

  We continue with our existing IMPDEF choice of not caring about the ICI state for the load/store multiples and simply restarting them from the beginning. Because we don't allow interrupts in the middle of an insn, the only way we would see this state is if the guest set ICI manually on return from an exception handler, so it's a corner case which doesn't merit optimisation.

  ICI update for LDM/STM is simple -- it always zeroes the state. ECI update for MVE beatwise insns will be a little more complex, since the ECI state may include information for the following insn.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210614151007.4545-5-peter.maydell@linaro.org

* target/arm: Make sure that commpage's tb->size != 0 (Ilya Leoshkevich, 2021-05-20; 1 file, +2/-0)
  tb_gen_code() assumes that tb->size must never be zero, otherwise it may produce spurious exceptions. For ARM this may happen when creating a translation block for the commpage.

  Fix by pretending that commpage translation blocks have at least one instruction.

  Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-Id: <20210416154939.32404-3-iii@linux.ibm.com>
  Signed-off-by: Cornelia Huck <cohuck@redhat.com>

* target/arm: Make translate-neon.c.inc its own compilation unit (Peter Maydell, 2021-05-10; 1 file, +0/-3)
  Switch translate-neon.c.inc from being #included into translate.c to being its own compilation unit.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210430132740.10391-14-peter.maydell@linaro.org

* target/arm: Make functions used by translate-neon global (Peter Maydell, 2021-05-10; 1 file, +2/-8)
  Make the remaining functions needed by the translate-neon code global.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210430132740.10391-13-peter.maydell@linaro.org

* target/arm: Move NeonGenThreeOpEnvFn typedef to translate.h (Peter Maydell, 2021-05-10; 1 file, +0/-3)
  Move the NeonGenThreeOpEnvFn typedef to translate.h together with the other similar typedefs.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Message-id: 20210430132740.10391-12-peter.maydell@linaro.org

* target/arm: Delete unused typedef (Peter Maydell, 2021-05-10; 1 file, +0/-2)
  The VFPGenFixPointFn typedef is unused; delete it.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Message-id: 20210430132740.10391-11-peter.maydell@linaro.org

* target/arm: Move vfp_reg_ptr() to translate-neon.c.inc (Peter Maydell, 2021-05-10; 1 file, +0/-7)
  The function vfp_reg_ptr() is used only in translate-neon.c.inc; move it there.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210430132740.10391-10-peter.maydell@linaro.org

* target/arm: Make translate-vfp.c.inc its own compilation unit (Peter Maydell, 2021-05-10; 1 file, +1/-2)
  Switch translate-vfp.c.inc from being #included into translate.c to being its own compilation unit.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210430132740.10391-9-peter.maydell@linaro.org

* target/arm: Make functions used by translate-vfp global (Peter Maydell, 2021-05-10; 1 file, +8/-17)
  Make the remaining functions which are needed by translate-vfp.c.inc global.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210430132740.10391-8-peter.maydell@linaro.org

* target/arm: Move vfp_{load, store}_reg{32, 64} to translate-vfp.c.inc (Peter Maydell, 2021-05-10; 1 file, +0/-20)
  The functions vfp_load_reg32(), vfp_load_reg64(), vfp_store_reg32() and vfp_store_reg64() are used only in translate-vfp.c.inc. Move them to that file.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210430132740.10391-7-peter.maydell@linaro.org

* target/arm: Move gen_aa32 functions to translate-a32.h (Peter Maydell, 2021-05-10; 1 file, +16/-35)
  Move the various gen_aa32* functions and macros out of translate.c and into translate-a32.h.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210430132740.10391-6-peter.maydell@linaro.org

* target/arm: Split m-nocp trans functions into their own file (Peter Maydell, 2021-05-10; 1 file, +0/-1)
  Currently the trans functions for m-nocp.decode all live in translate-vfp.inc.c; move them out into their own translation unit, translate-m-nocp.c.

  The trans_* functions here are pure code motion with no changes.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210430132740.10391-5-peter.maydell@linaro.org

* target/arm: Make functions used by m-nocp global (Peter Maydell, 2021-05-10; 1 file, +7/-32)
  We want to split out the .c.inc files which are currently included into translate.c so they are separate compilation units. To do this we need to make some functions which are currently file-local to translate.c have global scope; create a translate-a32.h paralleling the existing translate-a64.h as a place for these declarations to live, so that code moved into the new compilation units can call them.

  The functions made global here are those required by the m-nocp.decode functions, except that I have converted the whole family of {read,write}_neon_element* and also both the load_cpu and store_cpu functions for consistency, even though m-nocp only wants a few functions from each.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210430132740.10391-4-peter.maydell@linaro.org

* target/arm: Share unallocated_encoding() and gen_exception_insn() (Peter Maydell, 2021-05-10; 1 file, +9/-5)
  The unallocated_encoding() function is the same in both translate-a64.c and translate.c; make the translate.c function global and drop the translate-a64.c version. To do this we need to also share gen_exception_insn(), which currently exists in two slightly different versions for A32 and A64: merge those into a single function that can work for both.

  This will be useful for splitting up translate.c, which will require unallocated_encoding() to no longer be file-local. It's also hopefully less confusing to have only one version of the function rather than two.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210430132740.10391-3-peter.maydell@linaro.org

* target/arm: Move constant expanders to translate.h (Peter Maydell, 2021-05-10; 1 file, +0/-24)
  Some of the constant expanders defined in translate.c are generically useful and will be used by the separate C files for VFP and Neon once they are created; move the expander definitions to translate.h.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210430132740.10391-2-peter.maydell@linaro.org
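
  These expanders are the tiny helpers referenced by decodetree !function arguments; one plausible example of the kind of function being moved (shape assumed, not quoted from the patch):

      /* decodetree constant expander: negate an immediate field */
      static inline int negate(DisasContext *s, int x)
      {
          return -x;
      }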

* target/arm: Enforce alignment for VLDn (all lanes) (Richard Henderson, 2021-04-30; 1 file, +15/-0)
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210419202257.161730-23-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>