path: root/tcg/ppc/tcg-target.h
* tcg: Remove TCG_TARGET_SUPPORT_MIRROR  (Richard Henderson, 2021-01-07; 1 file, -1/+0)
    Now that all native tcg hosts support splitwx, remove the define. Replace the one use with a test for CONFIG_TCG_INTERPRETER.
    Reviewed-by: Joelle van Dyne <j@getutm.app>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/ppc: Support split-wx code generation  (Richard Henderson, 2021-01-07; 1 file, -1/+1)
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Add --accel tcg,split-wx property  (Richard Henderson, 2021-01-07; 1 file, -0/+1)
    Plumb the value through to alloc_code_gen_buffer. This is not supported by any os or tcg backend, so for now enabling it will result in an error.
    Reviewed-by: Joelle van Dyne <j@getutm.app>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
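Once backend support (the tcg/ppc commit above) is in place, the property is an ordinary -accel suboption; a usage sketch, with other options elided:

    qemu-system-ppc64 -accel tcg,split-wx=on [...]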
* tcg: Adjust tb_target_set_jmp_target for split-wx  (Richard Henderson, 2021-01-07; 1 file, -1/+1)
    Pass both rx and rw addresses to tb_target_set_jmp_target.
    Reviewed-by: Joelle van Dyne <j@getutm.app>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
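After this change the per-backend hook sees both views of the buffer: the rw address is the writable mapping that gets patched, the rx address is the executable mapping used when flushing the icache. A sketch of the resulting prototype (treat as an approximation; see the tree for the authoritative declaration):

    #include <stdint.h>

    /* Patch a direct-jump site: write via jmp_rw, flush/execute via jmp_rx. */
    void tb_target_set_jmp_target(uintptr_t tc_ptr, uintptr_t jmp_rx,
                                  uintptr_t jmp_rw, uintptr_t addr);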
* tcg: Introduce INDEX_op_qemu_st8_i32  (Richard Henderson, 2021-01-07; 1 file, -0/+1)
    Enable this on i386 to restrict the set of input registers for an 8-bit store, as required by the architecture. This removes the last use of scratch registers for user-only mode.
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* util: Extract flush_icache_range to cacheflush.c  (Richard Henderson, 2021-01-02; 1 file, -1/+0)
    This has been a tcg-specific function, but is also in use by hardware accelerators via physmem.c. This can cause link errors when tcg is disabled.
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Reviewed-by: Joelle van Dyne <j@getutm.app>
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Message-Id: <20201214140314.18544-3-richard.henderson@linaro.org>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
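The extracted interface itself is small; a sketch of the prototype (the exact declaration lives with the new cacheflush code):

    #include <stdint.h>

    /* Flush the instruction cache for a range of freshly generated code. */
    void flush_icache_range(uintptr_t start, uintptr_t stop);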
* tcg: Remove TCG_TARGET_HAS_cmp_vec  (Richard Henderson, 2020-10-08; 1 file, -1/+0)
    The cmp_vec opcode is mandatory; this symbol is unused.
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* target/ppc: add vmulld to INDEX_op_mul_vec case  (Lijun Pan, 2020-08-12; 1 file, -0/+2)
    Group vmuluwm and vmulld. Make vmulld-specific changes since it belongs to new ISA 3.1.
    Signed-off-by: Lijun Pan <ljp@linux.ibm.com>
    Message-Id: <20200724045845.89976-3-ljp@linux.ibm.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* tcg/ppc: Implement INDEX_op_rot[lr]v_vec  (Richard Henderson, 2020-06-02; 1 file, -1/+1)
    We already had support for rotlv, using a target-specific opcode; convert to use the generic opcode. Handle rotrv via simple negation.
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
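The "simple negation" works because rotating right by n is the same as rotating left by (width - n) mod width per element; a scalar sketch with 32-bit elements (illustrative, not the backend's emitted instruction sequence):

    #include <stdint.h>

    static uint32_t rotl32(uint32_t x, unsigned n)
    {
        n &= 31;
        return (x << n) | (x >> (-n & 31));
    }

    /* rotrv lowered as rotlv with the per-element counts negated. */
    static uint32_t rotr32(uint32_t x, unsigned n)
    {
        return rotl32(x, -n & 31);
    }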
* tcg: Implement gvec support for rotate by scalar  (Richard Henderson, 2020-06-02; 1 file, -0/+1)
    No host backend support yet, but the interfaces for rotls are in place. Only implement left-rotate for now, as the only known use of vector rotate by scalar is s390x, so any right-rotate would be unused and untestable.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Implement gvec support for rotate by vector  (Richard Henderson, 2020-06-02; 1 file, -0/+1)
    No host backend support yet, but the interfaces for rotlv and rotrv are in place.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    ---
    v3: Drop the generic expansion from rot to shift; we can do better for each backend, and then this code becomes unused.
* tcg: Implement gvec support for rotate by immediate  (Richard Henderson, 2020-06-02; 1 file, -0/+1)
    No host backend support yet, but the interfaces for rotli are in place. Canonicalize immediate rotate to the left, based on a survey of architectures, but provide both left and right shift interfaces to the translators.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
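The translator-facing entry points for immediate rotates follow the existing gvec shift pattern; a sketch of the prototypes (check tcg/tcg-op-gvec.h for the authoritative forms):

    #include <stdint.h>

    /* Per-element left/right rotate by a constant, gvec style. */
    void tcg_gen_gvec_rotli(unsigned vece, uint32_t dofs, uint32_t aofs,
                            int64_t shift, uint32_t oprsz, uint32_t maxsz);
    void tcg_gen_gvec_rotri(unsigned vece, uint32_t dofs, uint32_t aofs,
                            int64_t shift, uint32_t oprsz, uint32_t maxsz);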
* tcg/ppc: Update vector support for v3.00 Altivec  (Richard Henderson, 2019-10-14; 1 file, -1/+1)
    These new instructions are conditional only on MSR.VEC and are thus part of the Altivec instruction set, and not VSX. This includes negation and compare not equal.
    Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
    Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/ppc: Update vector support for v2.07 Altivec  (Richard Henderson, 2019-10-14; 1 file, -1/+3)
    These new instructions are conditional only on MSR.VEC and are thus part of the Altivec instruction set, and not VSX. This includes lots of double-word arithmetic and a few extra logical operations.
    Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
    Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/ppc: Update vector support for VSX  (Richard Henderson, 2019-10-14; 1 file, -2/+3)
    The VSX instruction set instructions include double-word loads and stores, double-word load and splat, double-word permute, and bit select, all of which require multiple operations in the Altivec instruction set.
    Because the VSX registers map %vsr32 to %vr0, and we have no current intention or need to use vector registers outside %vr0-%vr19, force on the {ax,bx,cx,tx} bits within the added VSX insns so that we don't have to otherwise modify the VR[TABC] macros.
    Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
    Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
* tcg/ppc: Support vector multiply  (Richard Henderson, 2019-10-14; 1 file, -1/+1)
    For Altivec, this is always an expansion.
    Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
    Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
* tcg/ppc: Support vector shift by immediate  (Richard Henderson, 2019-10-14; 1 file, -1/+1)
    For Altivec, this is done via vector shift by vector, and loading the immediate into a register.
    Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
    Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
* tcg/ppc: Add support for vector saturated add/subtract  (Richard Henderson, 2019-10-14; 1 file, -1/+1)
    Add support for vector saturated add/subtract using Altivec instructions: VADDSBS, VADDSHS, VADDSWS, VADDUBS, VADDUHS, VADDUWS, and VSUBSBS, VSUBSHS, VSUBSWS, VSUBUBS, VSUBUHS, VSUBUWS.
    Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
    Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
* tcg/ppc: Add support for vector maximum/minimum  (Richard Henderson, 2019-10-14; 1 file, -1/+1)
    Add support for vector maximum/minimum using Altivec instructions VMAXSB, VMAXSH, VMAXSW, VMAXUB, VMAXUH, VMAXUW, and VMINSB, VMINSH, VMINSW, VMINUB, VMINUH, VMINUW.
    Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
    Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
* tcg/ppc: Add support for load/store/logic/comparison  (Richard Henderson, 2019-10-14; 1 file, -3/+3)
    Add various bits and pieces related mostly to load and store operations. In that context, logic, compare, and splat Altivec instructions are used, and, therefore, the support for emitting them is included in this patch too.
    Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
    Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
* tcg/ppc: Enable tcg backend vector compilation  (Richard Henderson, 2019-10-14; 1 file, -0/+25)
    Introduce all of the flags required to enable tcg backend vector support, and a runtime flag to indicate the host supports Altivec instructions.
    For now, do not actually set have_isa_altivec to true, because we have not yet added all of the code to actually generate all of the required insns. However, we must define these flags in order to disable ifndefs that create stub versions of the functions added here.
    The change to tcg_out_movi works around a buglet in tcg.c wherein if we do not define tcg_out_dupi_vec we get a declared-but-not-defined Werror, but if we only declare it we get a defined-but-not-used Werror. We need this change to tcg_out_movi eventually anyway, so it's no biggie.
    Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
    Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
* tcg/ppc: Create TCGPowerISA and have_isa  (Richard Henderson, 2019-10-14; 1 file, -2/+10)
    Introduce an enum to hold base < 2.06 < 3.00. Use macros to preserve the existing have_isa_2_06 and have_isa_3_00 predicates.
    Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
    Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
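A sketch of the shape this takes in tcg-target.h (names per the commit; minor spelling of the enumerators may differ):

    typedef enum {
        tcg_isa_base,
        tcg_isa_2_06,
        tcg_isa_3_00,
    } TCGPowerISA;

    extern TCGPowerISA have_isa;

    /* Ordering of the enum lets the old predicates stay one comparison each. */
    #define have_isa_2_06  (have_isa >= tcg_isa_2_06)
    #define have_isa_3_00  (have_isa >= tcg_isa_3_00)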
* tcg/ppc: Introduce Altivec registers  (Richard Henderson, 2019-10-14; 1 file, -1/+10)
    Altivec supports 32 128-bit vector registers, whose names are by convention v0 through v31.
    Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
    Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
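In tcg-target.h this shows up as the register count doubling so the vector names fit into the TCGReg enum alongside the GPRs; a one-line sketch (value hedged; see the commit diff for the exact enumerators):

    #define TCG_TARGET_NB_REGS 64   /* 32 GPRs plus Altivec v0-v31 */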
* tcg: Add INDEX_op_extract2_{i32,i64}  (Richard Henderson, 2019-04-24; 1 file, -0/+2)
    This will let backends implement the double-word shift operation.
    Reviewed-by: David Hildenbrand <david@redhat.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Add TCG_TARGET_HAS_MEMORY_BSWAP  (Richard Henderson, 2018-12-17; 1 file, -0/+1)
    For now, defined universally as true, since we previously required backends to implement swapped memory operations. Future patches may now remove that support where it is onerous.
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
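For this backend the new flag is simply defined true in tcg-target.h, along the lines of:

    /* The backend can emit byte-swapped memory operations directly. */
    #define TCG_TARGET_HAS_MEMORY_BSWAP  1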
* tcg/ppc: Use constant pool for movi  (Richard Henderson, 2017-09-07; 1 file, -0/+1)
    Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Rearrange ldst label tracking  (Richard Henderson, 2017-09-07; 1 file, -0/+4)
    Dispense with TCGBackendData, as it has never been used for more than holding a single pointer. Use a define in the cpu/tcg-target.h to signal requirement for TCGLabelQemuLdst, so that we can drop the no-op tcg-be-null.h stubs. Rename tcg-be-ldst.h to tcg-ldst.inc.c.
    Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Richard Henderson <rth@twiddle.net>
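The "define in the cpu/tcg-target.h" is an opt-in macro; for a backend that uses slow-path load/store helpers it amounts to something like:

    /* Request the TCGLabelQemuLdst machinery from tcg-ldst.inc.c. */
    #define TCG_TARGET_NEED_LDST_LABELS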
* tcg: Move USE_DIRECT_JUMP discriminator to tcg/cpu/tcg-target.h  (Richard Henderson, 2017-09-07; 1 file, -0/+2)
    Replace the USE_DIRECT_JUMP ifdef with a TCG_TARGET_HAS_direct_jump boolean test. Replace the tb_set_jmp_target1 ifdef with an unconditional function tb_target_set_jmp_target.
    While we're touching all backends, add a parameter for tb->tc_ptr; we're going to need it shortly for some backends. Move tb_set_jmp_target and tb_add_jump from exec-all.h to cpu-exec.c.
    This opens the possibility for TCG_TARGET_HAS_direct_jump to be a runtime decision -- based on host cpu capabilities, the size of code_gen_buffer, or a future debugging switch.
    Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Add tcg target default memory ordering  (Pranith Kumar, 2017-09-05; 1 file, -0/+2)
    Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
    Message-Id: <20170829063313.10237-3-bobby.prani@gmail.com>
    [rth: Dropped ia64 hunk]
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/ppc: Implement goto_ptr  (Richard Henderson, 2017-06-05; 1 file, -1/+1)
    Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Introduce goto_ptr opcode and tcg_gen_lookup_and_goto_ptr  (Emilio G. Cota, 2017-06-05; 1 file, -0/+1)
    Instead of exporting goto_ptr directly to TCG frontends, export tcg_gen_lookup_and_goto_ptr(), which calls goto_ptr with the pointer returned by the lookup_tb_ptr() helper. This is the only use case we have for goto_ptr and lookup_tb_ptr, so having this function is very convenient.
    Furthermore, it trivially allows us to avoid calling the lookup helper if goto_ptr is not implemented by the backend.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Emilio G. Cota <cota@braap.org>
    Message-Id: <1493263764-18657-2-git-send-email-cota@braap.org>
    Message-Id: <1493263764-18657-3-git-send-email-cota@braap.org>
    Message-Id: <1493263764-18657-4-git-send-email-cota@braap.org>
    Message-Id: <1493263764-18657-5-git-send-email-cota@braap.org>
    [rth: Squashed 4 related commits.]
    Signed-off-by: Richard Henderson <rth@twiddle.net>
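Conceptually, the lookup helper maps the current guest PC to a host code pointer and goto_ptr branches there, falling back to the exit-to-main-loop epilogue on a miss; a hypothetical sketch (names and data structures here are illustrative, not QEMU's):

    #include <stdint.h>

    typedef struct TBStub { uint64_t pc; void *host_code; } TBStub;

    static TBStub tb_cache[16];     /* toy direct-mapped TB cache          */
    static char   epilogue_stub[1]; /* stand-in for the TCG epilogue entry */

    /* What the lookup helper computes; goto_ptr then jumps to the result. */
    static void *lookup_tb_ptr_sketch(uint64_t pc)
    {
        TBStub *tb = &tb_cache[pc & 15];
        return (tb->host_code && tb->pc == pc) ? tb->host_code
                                               : (void *)epilogue_stub;
    }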
* tcg/ppc: Handle ctpop opcode  (Richard Henderson, 2017-01-10; 1 file, -2/+3)
    Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Add opcode for ctpop  (Richard Henderson, 2017-01-10; 1 file, -0/+2)
    The number of actual invocations of ctpop itself does not warrant an opcode, but it is very helpful for POWER7 to use in generating an expansion for ctz.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <rth@twiddle.net>
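The ctz expansion this enables rests on a popcount identity such as ctz(x) = ctpop((x & -x) - 1), since (x & -x) - 1 sets exactly the bits below the lowest set bit; a scalar sketch (not necessarily the backend's literal instruction sequence):

    #include <stdint.h>

    static int ctpop32(uint32_t x)
    {
        int n = 0;
        for (; x; x &= x - 1) {   /* clears the lowest set bit each pass */
            n++;
        }
        return n;
    }

    static int ctz32_via_ctpop(uint32_t x)
    {
        return ctpop32((x & -x) - 1);   /* yields 32 for x == 0 */
    }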
* tcg/ppc: Handle ctz and clz opcodes  (Richard Henderson, 2017-01-10; 1 file, -4/+6)
    Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Add clz and ctz opcodes  (Richard Henderson, 2017-01-10; 1 file, -0/+4)
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg/ppc: Implement field extraction opcodes  (Richard Henderson, 2017-01-10; 1 file, -2/+2)
    Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
    Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Add field extraction primitives  (Richard Henderson, 2017-01-10; 1 file, -0/+4)
    Adds tcg_gen_extract_* and tcg_gen_sextract_* for extraction of fixed position bitfields, much like we already have for deposit.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <rth@twiddle.net>
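The semantics are the usual fixed-field extract: take len bits starting at bit pos, then zero- or sign-extend the result; a scalar sketch of the 32-bit forms (assuming 0 < len and pos + len <= 32, as the ops require):

    #include <stdint.h>

    static uint32_t extract32_sketch(uint32_t value, unsigned pos, unsigned len)
    {
        return (value >> pos) & (~0u >> (32 - len));
    }

    static int32_t sextract32_sketch(uint32_t value, unsigned pos, unsigned len)
    {
        /* Shift the field to the top, then arithmetic-shift it back down. */
        return (int32_t)(value << (32 - pos - len)) >> (32 - len);
    }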
* tcg: Clean up tcg-target.h header guards  (Markus Armbruster, 2016-07-12; 1 file, -2/+3)
    These use guard symbols like TCG_TARGET_$target. scripts/clean-header-guards.pl doesn't like them because they don't match their file name (they should, to make guard collisions less likely).
    Clean them up: use guard symbol $target_TCG_TARGET_H for tcg/$target/tcg-target.h.
    Signed-off-by: Markus Armbruster <armbru@redhat.com>
    Reviewed-by: Richard Henderson <rth@twiddle.net>
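Applied to this file, the convention means the guard in tcg/ppc/tcg-target.h becomes:

    #ifndef PPC_TCG_TARGET_H
    #define PPC_TCG_TARGET_H
    /* ... target definitions ... */
    #endif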
* tcg: Split trunc_shr_i32 opcode into extr[lh]_i64_i32  (Richard Henderson, 2015-08-24; 1 file, -1/+2)
    Rather than allow arbitrary shift+trunc, only concern ourselves with low and high parts. This is all that was being used anyway.
    Signed-off-by: Richard Henderson <rth@twiddle.net>
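The two replacement ops just take the low or the high half of a 64-bit value; in scalar terms:

    #include <stdint.h>

    /* extrl_i64_i32: low 32 bits; extrh_i64_i32: high 32 bits. */
    static uint32_t extrl_sketch(uint64_t x) { return (uint32_t)x; }
    static uint32_t extrh_sketch(uint64_t x) { return (uint32_t)(x >> 32); }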
* tcg: rename trunc_shr_i32 into trunc_shr_i64_i32  (Aurelien Jarno, 2015-08-24; 1 file, -1/+1)
    The op is sometimes named trunc_shr_i32 and sometimes trunc_shr_i64_i32, and the name in the README doesn't match the name offered to the frontends. Always use the long name to make it clear it is a size changing op.
    Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
    Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: add TCG_TARGET_TLB_DISPLACEMENT_BITS  (Paolo Bonzini, 2015-06-03; 1 file, -0/+1)
    This will be used to size the TLB when more than 8 MMU modes are used by the target. Limitations come from the limited size of the immediate fields (which sometimes, as in the case of AArch64, extend to instructions that shift the immediate).
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Message-Id: <1424436345-37924-2-git-send-email-pbonzini@redhat.com>
    Reviewed-by: Richard Henderson <rth@twiddle.net>
    Signed-off-by: Alexander Graf <agraf@suse.de>
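On ppc the limit comes from the 16-bit signed displacement of the D-form loads/stores used to probe the TLB, so the definition added here is on the order of (value hedged; see the commit diff):

    #define TCG_TARGET_TLB_DISPLACEMENT_BITS 16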
* tcg-ppc: Merge cache-utils into the backend  (Richard Henderson, 2014-06-23; 1 file, -0/+2)
    As a "utility", it only supported ppc, and in a way that other tcg backends provided directly in tcg-target.h. Removing this disparity is easier now that the two ppc backends are merged.
    Tested-by: Tom Musta <tommusta@gmail.com>
    Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-ppc: Rename the tcg/ppc64 backend  (Richard Henderson, 2014-06-23; 1 file, -0/+109)
    The other tcg backends that support 32- and 64-bit modes use the 32-bit name for the port. Follow suit.
    Tested-by: Tom Musta <tommusta@gmail.com>
    Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-ppc: Remove the backend  (Richard Henderson, 2014-06-23; 1 file, -109/+0)
    Vectoring the 32-bit build to the ppc64 directory.
    Tested-by: Tom Musta <tommusta@gmail.com>
    Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Remove TCG_TARGET_HAS_new_ldst  (Richard Henderson, 2014-06-04; 1 file, -2/+0)
    Since all backends have been converted, remove the compatibility code.
    Acked-by: Claudio Fontana <claudio.fontana@huawei.com>
    Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-ppc: Define TCG_TARGET_INSN_UNIT_SIZE  (Richard Henderson, 2014-05-12; 1 file, -0/+1)
    And use tcg pointer differencing functions as appropriate.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Use HOST_WORDS_BIGENDIAN  (Richard Henderson, 2014-04-19; 1 file, -1/+0)
    Instead of rolling a local TCG_TARGET_WORDS_BIGENDIAN.
    Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Relax requirement for mulu2_i32 on 32-bit hosts  (Richard Henderson, 2014-04-19; 1 file, -0/+1)
    Instead require either mulu2_i32 or muluh_i32. The code in tcg-op.h already supports looking for both.
    Signed-off-by: Richard Henderson <rth@twiddle.net>
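The two alternatives describe the same full 32x32->64 multiply: mulu2 produces both halves at once, while muluh yields only the high half; in scalar terms:

    #include <stdint.h>

    static void mulu2_i32_sketch(uint32_t a, uint32_t b,
                                 uint32_t *lo, uint32_t *hi)
    {
        uint64_t p = (uint64_t)a * b;
        *lo = (uint32_t)p;
        *hi = (uint32_t)(p >> 32);
    }

    static uint32_t muluh_i32_sketch(uint32_t a, uint32_t b)
    {
        return (uint32_t)(((uint64_t)a * b) >> 32);
    }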
* tcg-ppc: Support new ldst opcodes  (Richard Henderson, 2013-10-13; 1 file, -1/+1)
    Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Add qemu_ld_st_i32/64  (Richard Henderson, 2013-10-10; 1 file, -0/+2)
    Step two in the transition, adding the new ldst opcodes. Keep the old opcodes around until all backends support the new opcodes.
    Signed-off-by: Richard Henderson <rth@twiddle.net>