path: root/target/arm/vfp.decode
* target/arm: Implement VLDR/VSTR system register (Peter Maydell, 2020-12-10; 1 file, +14/-0)
  Implement the new-in-v8.1M VLDR/VSTR variants which directly read or write FP
  system registers to memory.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20201119215617.29887-10-peter.maydell@linaro.org
* arm tcg cpus: Fix Lesser GPL version number (Chetan Pant, 2020-11-15; 1 file, +1/-1)
  There is no "version 2" of the "Lesser" General Public License. It is either
  "GPL version 2.0" or "Lesser GPL version 2.1". This patch replaces all
  occurrences of "Lesser GPL version 2" with "Lesser GPL version 2.1" in the
  comment sections.
  Signed-off-by: Chetan Pant <chetan4windows@gmail.com>
  Message-Id: <20201023122913.19561-1-chetan4windows@gmail.com>
  Reviewed-by: Thomas Huth <thuth@redhat.com>
  Signed-off-by: Thomas Huth <thuth@redhat.com>
* target/arm: Implement VFP fp16 VMOV between gp and halfprec registers (Peter Maydell, 2020-09-01; 1 file, +1/-0)
  Implement the VFP fp16 variant of VMOV that transfers a 16-bit value between
  a general purpose register and a VFP register.

  Note that Rt == 15 is UNPREDICTABLE; since this insn is v8 and later only we
  have no need to replicate the old "updates CPSR.NZCV" behaviour that the
  singleprec version of this insn does.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20200828183354.27913-22-peter.maydell@linaro.org
* target/arm: Implement VFP fp16 VRINT* (Peter Maydell, 2020-09-01; 1 file, +3/-0)
  Implement the fp16 version of the VFP VRINT* insns.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20200828183354.27913-19-peter.maydell@linaro.org
* target/arm: Implement VFP fp16 VCVT between float and fixed-point (Peter Maydell, 2020-09-01; 1 file, +2/-0)
  Implement the fp16 versions of the VFP VCVT instruction forms which convert
  between floating point and fixed-point.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20200828183354.27913-16-peter.maydell@linaro.org
* target/arm: Implement VFP fp16 VCVT between float and integer (Peter Maydell, 2020-09-01; 1 file, +4/-0)
  Implement the fp16 versions of the VFP VCVT instruction forms which convert
  between floating point and integer.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20200828183354.27913-13-peter.maydell@linaro.org
* target/arm: Implement VFP fp16 VLDR and VSTR (Peter Maydell, 2020-09-01; 1 file, +1/-2)
  Implement the fp16 versions of the VFP VLDR/VSTR (immediate).
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20200828183354.27913-12-peter.maydell@linaro.org
* target/arm: Implement VFP fp16 VCMP (Peter Maydell, 2020-09-01; 1 file, +2/-0)
  Implement the fp16 version of VCMP.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20200828183354.27913-11-peter.maydell@linaro.org
* target/arm: Implement VFP fp16 for VMOV immediate (Peter Maydell, 2020-09-01; 1 file, +2/-0)
  Implement VFP fp16 support for the VMOV immediate insn.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20200828183354.27913-10-peter.maydell@linaro.org
* target/arm: Implement VFP fp16 for VABS, VNEG, VSQRT (Peter Maydell, 2020-09-01; 1 file, +3/-0)
  Implement VFP fp16 for VABS, VNEG and VSQRT. This is all the fp16 insns that
  use the DO_VFP_2OP macro, because there is no fp16 version of VMOV_reg.

  Notes:
  * the gen_helper_vfp_negh already exists as we needed to create it for the
    fp16 multiply-add insns
  * as usual we need to use the f16 version of the fp_status; this is only
    relevant for VSQRT

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20200828183354.27913-9-peter.maydell@linaro.org
* target/arm: Implement VFP fp16 for fused-multiply-add (Peter Maydell, 2020-09-01; 1 file, +5/-0)
  Implement VFP fp16 support for fused multiply-add insns VFNMA, VFNMS, VFMA,
  VFMS.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20200828183354.27913-7-peter.maydell@linaro.org
* target/arm: Implement VFP fp16 VMLA, VMLS, VNMLS, VNMLA, VNMUL (Peter Maydell, 2020-09-01; 1 file, +5/-0)
  Implement fp16 versions of the VFP VMLA, VMLS, VNMLS, VNMLA, VNMUL
  instructions. (These are all the remaining ones which we implement via
  do_vfp_3op_[hsd]p().)
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20200828183354.27913-5-peter.maydell@linaro.org
* target/arm: Implement VFP fp16 for VFP_BINOP operations (Peter Maydell, 2020-09-01; 1 file, +4/-0)
  Implement VFP fp16 support for simple binary-operator VFP insns VADD, VSUB,
  VMUL, VDIV, VMINNM and VMAXNM:
  * make the VFP_BINOP() macro generate float16 helpers as well as float32 and
    float64
  * implement a do_vfp_3op_hp() function similar to the existing
    do_vfp_3op_sp()
  * add decode for the half-precision insn patterns

  Note that the VFP_BINOP macro use creates a couple of unused helper functions
  vfp_maxh and vfp_minh, but they're small so it's not worth splitting the
  BINOP operations into "needs halfprec" and "no halfprec" groups.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20200828183354.27913-4-peter.maydell@linaro.org
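A minimal, self-contained sketch of the "one macro, one helper per precision"
pattern the entry above describes. This is not the QEMU code: the real
VFP_BINOP helpers operate on softfloat float16/float32/float64 values and take
a float_status pointer; plain C arithmetic stands in here purely to show the
shape of the macro expansion.

    #include <stdio.h>

    /* Sketch only: stamp out half/single/double variants of a binary op.
     * Real QEMU helpers use softfloat types and a float_status argument. */
    #define VFP_BINOP(name, op)                                               \
        static float  vfp_##name##h(float a, float b)   { return a op b; }    \
        static float  vfp_##name##s(float a, float b)   { return a op b; }    \
        static double vfp_##name##d(double a, double b) { return a op b; }

    VFP_BINOP(add, +)

    int main(void)
    {
        /* The "h" variant uses float as a stand-in for float16 here. */
        printf("%g %g %g\n", vfp_addh(1.5f, 2.25f), vfp_adds(1.5f, 2.25f),
               vfp_addd(3.0, 0.5));
        return 0;
    }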
* target/arm: Do M-profile NOCP checks early and via decodetree (Peter Maydell, 2020-08-24; 1 file, +0/-2)
  For M-profile CPUs, the architecture specifies that the NOCP exception when a
  coprocessor is not present or disabled should cover the entire wide range of
  coprocessor-space encodings, and should take precedence over UNDEF
  exceptions. (This is the opposite of A-profile, where checking for a disabled
  FPU has to happen last.)

  Implement this with decodetree patterns that cover the specified ranges of
  the encoding space. There are a few instructions (VLLDM, VLSTM, and in v8.1
  also VSCCLRM) which are in copro-space but must not be NOCP'd: these must be
  handled also in the new m-nocp.decode so they take precedence.

  This is a minor behaviour change: for unallocated insn patterns in the VFP
  area (cp=10,11) we will now NOCP rather than UNDEF when the FPU is disabled.

  As well as giving us the correct architectural behaviour for v8.1M and the
  recommended behaviour for v8.0M, this refactoring also removes the old NOCP
  handling from the remains of the 'legacy decoder' in disas_thumb2_insn(),
  paving the way for cleaning that up.

  Since we don't currently have a v8.1M feature bit or any v8.1M CPUs, the
  minor changes to this logic that we'll need for v8.1M are marked up with TODO
  comments.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20200803111849.13368-6-peter.maydell@linaro.org
* target/arm: Split VFM decode (Richard Henderson, 2020-02-28; 1 file, +9/-8)
  Passing the raw o1 and o2 fields from the manual is less instructive than it
  might be. Do the full decode and let the trans_* functions pass in booleans
  to a helper.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20200224222232.13807-17-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Add formats for some vfp 2 and 3-register insns (Richard Henderson, 2020-02-28; 1 file, +60/-90)
  Those vfp instructions without extra opcode fields can share a common @format
  for brevity.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20200224222232.13807-16-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Move VLLDM and VLSTM to vfp.decode (Richard Henderson, 2020-02-28; 1 file, +2/-0)
  Now that we no longer have an early check for ARM_FEATURE_VFP, we can use the
  proper ISA check in trans_VLLDM_VLSTM.
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Message-id: 20200224222232.13807-12-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Use vfp_expand_imm() for AArch32 VFP VMOV_imm (Peter Maydell, 2019-06-17; 1 file, +6/-4)
  The AArch32 VMOV (immediate) instruction uses the same VFP encoded immediate
  format we already handle in vfp_expand_imm(). Use that function rather than
  hand-decoding it.
  Suggested-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
  Message-id: 20190613163917.28589-3-peter.maydell@linaro.org
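For reference, the "VFP encoded immediate format" mentioned above is the Arm
VFPExpandImm() rule: for a single-precision result, the 8-bit immediate
abcdefgh gives sign a, exponent NOT(b) followed by five copies of b and then
cd, and a fraction of efgh padded with zero bits. The stand-alone sketch below
implements that rule for the 32-bit case only; it is illustrative and is not
QEMU's vfp_expand_imm() helper.

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch of VFPExpandImm() for a 32-bit result:
     *   sign     = a
     *   exponent = NOT(b) : b b b b b : cd          (8 bits)
     *   fraction = efgh followed by 19 zero bits    (23 bits)
     */
    static uint32_t vfp_expand_imm32(uint8_t imm8)
    {
        uint32_t a    = (imm8 >> 7) & 1;
        uint32_t b    = (imm8 >> 6) & 1;
        uint32_t cd   = (imm8 >> 4) & 3;
        uint32_t efgh = imm8 & 0xf;
        uint32_t exp  = ((b ^ 1) << 7) | (b ? 0x7c : 0) | cd;

        return (a << 31) | (exp << 23) | (efgh << 19);
    }

    int main(void)
    {
        /* 0x70 encodes 1.0f (0x3f800000); 0x00 encodes 2.0f (0x40000000). */
        printf("0x%08x 0x%08x\n", vfp_expand_imm32(0x70), vfp_expand_imm32(0x00));
        return 0;
    }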
* target/arm: Convert float-to-integer VCVT insns to decodetree (Peter Maydell, 2019-06-13; 1 file, +6/-0)
  Convert the float-to-integer VCVT instructions to decodetree. Since these are
  the last unconverted instructions, we can delete the old decoder structure
  entirely now.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Convert VCVT fp/fixed-point conversion insns to decodetree (Peter Maydell, 2019-06-13; 1 file, +10/-0)
  Convert the VCVT (between floating-point and fixed-point) instructions to
  decodetree.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Convert VJCVT to decodetree (Peter Maydell, 2019-06-13; 1 file, +4/-0)
  Convert the VJCVT instruction to decodetree.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Convert integer-to-float insns to decodetree (Peter Maydell, 2019-06-13; 1 file, +6/-0)
  Convert the VCVT integer-to-float instructions to decodetree.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Convert double-single precision conversion insns to decodetree (Peter Maydell, 2019-06-13; 1 file, +6/-0)
  Convert the VCVT double/single precision conversion insns to decodetree.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Convert VFP round insns to decodetree (Peter Maydell, 2019-06-13; 1 file, +15/-0)
  Convert the VFP round-to-integer instructions VRINTR, VRINTZ and VRINTX to
  decodetree.

  These instructions were only introduced as part of the "VFP misc" additions
  in v8A, so we check this. The old decoder's implementation was incorrectly
  providing them even for v7A CPUs.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Convert the VCVT-to-f16 insns to decodetree (Peter Maydell, 2019-06-13; 1 file, +6/-0)
  Convert the VCVTT and VCVTB instructions which convert from f32 and f64 to
  f16 to decodetree.

  Since we're no longer constrained to the old decoder's style using cpu_F0s
  and cpu_F0d we can perform a direct 16 bit store of the right half of the
  input single-precision register rather than doing a load/modify/store
  sequence on the full 32 bits.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Convert the VCVT-from-f16 insns to decodetree (Peter Maydell, 2019-06-13; 1 file, +6/-0)
  Convert the VCVTT, VCVTB instructions that deal with conversion from
  half-precision floats to f32 or f64 to decodetree.

  Since we're no longer constrained to the old decoder's style using cpu_F0s
  and cpu_F0d we can perform a direct 16 bit load of the right half of the
  input single-precision register rather than loading the full 32 bits and
  then doing a separate shift or sign-extension.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Convert VFP comparison insns to decodetree (Peter Maydell, 2019-06-13; 1 file, +5/-0)
  Convert the VFP comparison instructions to decodetree.

  Note that comparison instructions should not honour the VFP short-vector
  length and stride information: they are scalar-only operations. This applies
  to all the 2-operand instructions except for VMOV, VABS, VNEG and VSQRT.
  (In the old decoder this is implemented via the
  "if (op == 15 && rn > 3) { veclen = 0; }" check.)

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Convert VMOV (register) to decodetree (Peter Maydell, 2019-06-13; 1 file, +5/-0)
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Convert VSQRT to decodetree (Peter Maydell, 2019-06-13; 1 file, +5/-0)
  Convert the VSQRT instruction to decodetree.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Convert VNEG to decodetree (Peter Maydell, 2019-06-13; 1 file, +5/-0)
  Convert the VNEG instruction to decodetree.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Convert VABS to decodetree (Peter Maydell, 2019-06-13; 1 file, +5/-0)
  Convert the VFP VABS instruction to decodetree.

  Unlike the 3-op versions, we don't pass fpst to the VFPGen2OpSPFn or
  VFPGen2OpDPFn because none of the operations which use this format and
  support short vectors will need it.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Convert VMOV (imm) to decodetree (Peter Maydell, 2019-06-13; 1 file, +5/-0)
  Convert the VFP VMOV (immediate) instruction to decodetree.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Convert VFP fused multiply-add insns to decodetree (Peter Maydell, 2019-06-13; 1 file, +9/-0)
  Convert the VFP fused multiply-add instructions (VFNMA, VFNMS, VFMA, VFMS)
  to decodetree.

  Note that in the old decode structure we were implementing these to honour
  the VFP vector stride/length. These instructions were introduced in VFPv4,
  and in the v7A architecture they are UNPREDICTABLE if the vector stride or
  length are non-zero. In v8A they must UNDEF if stride or length are
  non-zero, like all VFP instructions; we choose to UNDEF always.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Convert VDIV to decodetree (Peter Maydell, 2019-06-13; 1 file, +5/-0)
  Convert the VDIV instruction to decodetree.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Convert VSUB to decodetree (Peter Maydell, 2019-06-13; 1 file, +5/-0)
  Convert the VSUB instruction to decodetree.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Convert VADD to decodetree (Peter Maydell, 2019-06-13; 1 file, +5/-0)
  Convert the VADD instruction to decodetree.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Convert VNMUL to decodetree (Peter Maydell, 2019-06-13; 1 file, +5/-0)
  Convert the VNMUL instruction to decodetree.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Convert VMUL to decodetree (Peter Maydell, 2019-06-13; 1 file, +5/-0)
  Convert the VMUL instruction to decodetree.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Convert VFP VNMLA to decodetree (Peter Maydell, 2019-06-13; 1 file, +5/-0)
  Convert the VFP VNMLA instruction to decodetree.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Convert VFP VNMLS to decodetree (Peter Maydell, 2019-06-13; 1 file, +5/-0)
  Convert the VFP VNMLS instruction to decodetree.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Convert VFP VMLS to decodetree (Peter Maydell, 2019-06-13; 1 file, +5/-0)
  Convert the VFP VMLS instruction to decodetree.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Convert VFP VMLA to decodetree (Peter Maydell, 2019-06-13; 1 file, +6/-0)
  Convert the VFP VMLA instruction to decodetree.

  This is the first of the VFP 3-operand data processing instructions, so we
  include in this patch the code which loops over the elements for an
  old-style VFP vector operation. The existing code to do this looping uses
  the deprecated cpu_F0s/F0d/F1s/F1d TCG globals; since we are going to be
  converting instructions one at a time anyway we can take the opportunity to
  make the new loop use TCG temporaries, which means we can do that conversion
  one operation at a time rather than needing to do it all in one go.

  We include an UNDEF check which was missing in the old code: short-vector
  operations (with stride or length non-zero) were deprecated in v7A and must
  UNDEF in v8A, so if the MVFR0 FPShVec field does not indicate that support
  for short vectors is present we UNDEF the operations that would use them.
  (This is a change of behaviour for Cortex-A7, Cortex-A15 and the v8 CPUs,
  which previously were all incorrectly allowing short-vector operations.)

  Note that the conversion fixes a bug in the old code for the case of VFP
  short-vector "mixed scalar/vector operations". These happen where the
  destination register is in a vector bank but the second operand is in a
  scalar bank. For example,
    vmla.f64 d10, d1, d16
  with length 2 stride 2 is equivalent to the pair of scalar operations
    vmla.f64 d10, d1, d16
    vmla.f64 d8, d3, d16
  where the destination and first input register cycle through their vector
  but the second input is scalar (d16). In the old decoder the
  gen_vfp_F1_mul() operation uses cpu_F1{s,d} as a temporary output for the
  multiply, which trashes the second input operand. For the fully-scalar case
  (where we never do a second iteration) and the fully-vector case (where the
  loop loads the new second input operand) this doesn't matter, but for the
  mixed scalar/vector case we will end up using the wrong value for later
  loop iterations. In the new code we use TCG temporaries and so avoid the
  bug. This bug is present for all the multiply-accumulate insns that operate
  on short vectors: VMLA, VMLS, VNMLA, VNMLS.

  Note 2: the expression used to calculate the next register number in the
  vector bank is not in fact correct; we leave this behaviour unchanged from
  the old decoder and will fix this bug later in the series.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
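The mixed scalar/vector failure mode described in that entry can be seen in a
self-contained toy model (plain doubles standing in for the FP registers,
register-bank wrap-around ignored, not QEMU code): writing the multiply result
into the same shared temporary that holds the scalar second operand, as the
old cpu_F1d-based code effectively did, corrupts that operand before the
second iteration, while a fresh temporary per iteration gives the expected
result.

    #include <stdio.h>

    static double regs[32];   /* toy stand-ins for d0..d31 */

    /* Old-decoder shape: the multiply writes its result into the same
     * temporary that holds the scalar second operand, so iteration 2 of a
     * mixed scalar/vector VMLA uses a corrupted operand. */
    static void vmla_shared_temp(int vd, int vn, int vm, int len, int stride)
    {
        double f1 = regs[vm];                 /* scalar operand, loaded once */
        for (int i = 0; i < len; i++) {
            f1 = regs[vn + i * stride] * f1;  /* overwrites the scalar! */
            regs[vd + i * stride] += f1;
        }
    }

    /* New-decoder shape: a fresh temporary each iteration, scalar stays intact. */
    static void vmla_fresh_temp(int vd, int vn, int vm, int len, int stride)
    {
        for (int i = 0; i < len; i++) {
            double tmp = regs[vn + i * stride] * regs[vm];
            regs[vd + i * stride] += tmp;
        }
    }

    int main(void)
    {
        regs[0] = 2.0; regs[2] = 3.0; regs[16] = 10.0;
        vmla_shared_temp(8, 0, 16, 2, 2);
        printf("shared temp: d8=%g d10=%g\n", regs[8], regs[10]); /* 20 and 60 */

        regs[8] = regs[10] = 0.0;
        vmla_fresh_temp(8, 0, 16, 2, 2);
        printf("fresh temp : d8=%g d10=%g\n", regs[8], regs[10]); /* 20 and 30 */
        return 0;
    }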
* target/arm: Convert the VFP load/store multiple insns to decodetree (Peter Maydell, 2019-06-13; 1 file, +18/-0)
  Convert the VFP load/store multiple insns to decodetree. This includes
  tightening up the UNDEF checking for pre-VFPv3 CPUs which only have D0-D15:
  they now UNDEF for any access to D16-D31, not merely when the smallest
  register in the transfer list is in D16-D31.

  This conversion does not try to share code between the single precision and
  the double precision versions; this looks a bit duplicative of code, but it
  leaves the door open for a future refactoring which gets rid of the use of
  the "F0" registers by inlining the various functions like gen_vfp_ld() and
  gen_mov_F0_reg() which are hiding "if (dp) { ... } else { ... }"
  conditionalisation.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
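A small illustration of the tightened check described above (illustrative
logic only, not the QEMU trans function): with vd as the first D register and
n the number of registers in the transfer list, the old behaviour only
rejected the access when the first register was in D16-D31, while the new
behaviour rejects it when any register in the list is.

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative only: UNDEF checks for a D-register load/store multiple
     * on a CPU that implements just D0-D15. */
    static bool old_check_ok(int vd, int n)
    {
        (void)n;              /* old check ignored the list length */
        return vd < 16;       /* rejected only if the first reg was D16+ */
    }

    static bool new_check_ok(int vd, int n)
    {
        return vd + n <= 16;  /* rejected if any reg in the list is D16+ */
    }

    int main(void)
    {
        /* A transfer of D14..D17 (vd=14, n=4): the old check accepted it. */
        printf("old: %d  new: %d\n", old_check_ok(14, 4), new_check_ok(14, 4));
        return 0;
    }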
* target/arm: Convert VFP VLDR and VSTR to decodetree (Peter Maydell, 2019-06-13; 1 file, +7/-0)
  Convert the VFP single load/store insns VLDR and VSTR to decodetree.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Convert VFP two-register transfer insns to decodetree (Peter Maydell, 2019-06-13; 1 file, +5/-0)
  Convert the VFP two-register transfer instructions to decodetree (in the v8
  Arm ARM these are the "Advanced SIMD and floating-point 64-bit move"
  encoding group).

  Again, we expand out the sequences involving gen_vfp_msr() and
  gen_msr_vfp().

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Convert "single-precision" register moves to decodetree (Peter Maydell, 2019-06-13; 1 file, +4/-0)
  Convert the "single-precision" register moves to decodetree:
  * VMSR
  * VMRS
  * VMOV between general purpose register and single precision

  Note that the VMSR/VMRS conversions make our handling of the "should this
  UNDEF?" checks consistent between the two instructions:
  * VMSR to MVFR0, MVFR1, MVFR2 now UNDEF from EL0 (previously was a nop)
  * VMSR to FPSID now UNDEFs from EL0 or if VFPv3 or better (previously was
    a nop)
  * VMSR to FPINST and FPINST2 now UNDEF if VFPv3 or better (previously would
    write to the register, which had no guest-visible effect because we
    always UNDEF reads)

  We also tighten up the decode: we were previously underdecoding some SBZ or
  SBO bits.

  The conversion of VMOV_single includes the expansion out of the
  gen_mov_F0_vreg()/gen_vfp_mrs() and gen_mov_vreg_F0()/gen_vfp_msr()
  sequences into the simpler direct load/store of the TCG temp via
  neon_{load,store}_reg32(): we know in the new function that we're always
  single-precision, we don't need to use the old-and-deprecated cpu_F0* TCG
  globals, and we don't happen to have the declaration of gen_vfp_msr() and
  gen_vfp_mrs() at the point in the file where the new function is.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Convert "double-precision" register moves to decodetreePeter Maydell2019-06-131-0/+36
| | | | | | | | | | | | | | | | | | | | | Convert the "double-precision" register moves to decodetree: this covers VMOV scalar-to-gpreg, VMOV gpreg-to-scalar and VDUP. Note that the conversion process has tightened up a few of the UNDEF encoding checks: we now correctly forbid: * VMOV-to-gpr with U:opc1:opc2 == 10x00 or x0x10 * VMOV-from-gpr with opc1:opc2 == 0x10 * VDUP with B:E == 11 * VDUP with Q == 1 and Vn<0> == 1 Signed-off-by: Peter Maydell <peter.maydell@linaro.org> --- The accesses of elements < 32 bits could be improved by doing direct ld/st of the right size rather than 32-bit read-and-shift or read-modify-write, but we leave this for later cleanup, since this series is generally trying to stick to fixing the decode. Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
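Of the tightened checks listed above, the two VDUP cases are simple enough to
express in isolation. The sketch below is illustrative only; the field names
follow the commit text rather than any particular decodetree argument struct.

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative only: reserved-encoding checks for VDUP (general purpose
     * register to all vector lanes), per the list in the entry above. */
    static bool vdup_encoding_valid(int b, int e, int q, int vn)
    {
        if (b && e) {          /* B:E == 11 is a reserved element size */
            return false;
        }
        if (q && (vn & 1)) {   /* Q == 1 requires an even-numbered Vn */
            return false;
        }
        return true;
    }

    int main(void)
    {
        printf("%d %d\n", vdup_encoding_valid(1, 1, 0, 0),  /* 0: B:E == 11 */
               vdup_encoding_valid(0, 0, 1, 1));            /* 0: Q with odd Vn */
        return 0;
    }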
* target/arm: Add stubs for AArch32 VFP decodetree (Peter Maydell, 2019-06-13; 1 file, +28/-0)
  Add the infrastructure for building and invoking a decodetree decoder for
  the AArch32 VFP encodings. At the moment the new decoder covers nothing, so
  we always fall back to the existing hand-written decode.

  We need to have one decoder for the unconditional insns and one for the
  conditional insns, as otherwise the patterns for conditional insns would
  incorrectly match against the unconditional ones too.

  Since translate.c is over 14,000 lines long and we're going to be touching
  pretty much every line of the VFP code as part of the decodetree
  conversion, we create a new translate-vfp.inc.c to hold the code which
  deals with VFP in the new scheme. It should be possible to convert this
  into a standalone translation unit eventually, but the conversion process
  will be much simpler if we simply #include it midway through translate.c to
  start with.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
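The dispatch shape this sets up can be pictured with a self-contained toy
(the function names are only meant to suggest the scheme, not to reproduce the
generated decoders): two generated entry points, one for unconditional
encodings (cond == 0xf) and one for conditional ones, each returning true if a
pattern matched, with everything unclaimed dropping through to the legacy
hand-written decode.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Toy stand-ins for the generated decodetree entry points; at the point
     * of this commit they match nothing, so every insn falls through. */
    static bool disas_vfp_uncond(uint32_t insn) { (void)insn; return false; }
    static bool disas_vfp(uint32_t insn)        { (void)insn; return false; }

    static void disas_legacy(uint32_t insn)
    {
        printf("0x%08x: handled by the legacy hand-written decoder\n", insn);
    }

    static void disas_vfp_insn(uint32_t insn)
    {
        bool handled = (insn >> 28) == 0xf ? disas_vfp_uncond(insn)
                                           : disas_vfp(insn);
        if (!handled) {
            disas_legacy(insn);
        }
    }

    int main(void)
    {
        disas_vfp_insn(0xeeb00a40);   /* some conditional (cond != 0xf) word */
        disas_vfp_insn(0xfe000a00);   /* some unconditional (cond == 0xf) word */
        return 0;
    }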