path: root/tcg/ppc
Commit message | Author | Age | Files | Lines
* tcg/ppc: Fully convert tcg_target_op_def | Richard Henderson | 2017-09-17 | 1 | -153/+168
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Remove tcg_regset_set32 | Richard Henderson | 2017-09-17 | 1 | -18/+19
    It's not even clear what the interface REG and VAL32 were supposed to mean. All uses had REG = 0 and VAL32 was the bitset assigned to the destination. Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
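    To illustrate the simplification, a hypothetical before/after of a representative call site (the field name is an assumption, not quoted from the patch):

        /* Before: an indirection whose REG argument was always 0. */
        tcg_regset_set32(ct->u.regs, 0, 0xffffffff);

        /* After: a plain assignment of the register bitset. */
        ct->u.regs = 0xffffffff;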
* tcg: Remove tcg_regset_clear | Richard Henderson | 2017-09-17 | 1 | -1/+1
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/ppc: disable atomic write check on ppc32 | Philippe Mathieu-Daudé | 2017-09-17 | 1 | -1/+3
    This fixes building for ppc64 on ppc32 (changed in 5964fca8a12c):
        tcg/ppc/tcg-target.inc.c: In function 'tb_target_set_jmp_target':
        include/qemu/compiler.h:86:30: error: static assertion failed: \
          "not expecting: sizeof(*(uint64_t *)jmp_addr) > ATOMIC_REG_SIZE"
             QEMU_BUILD_BUG_ON(sizeof(*ptr) > ATOMIC_REG_SIZE); \
                               ^
        tcg/ppc/tcg-target.inc.c:1377:9: note: in expansion of macro 'atomic_set'
             atomic_set((uint64_t *)jmp_addr, pair);
             ^
    Suggested-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-Id: <20170911204936.5020-1-f4bug@amsat.org> [rth: Added commentary requested by pmm.] Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/ppc: Use constant pool for movi | Richard Henderson | 2017-09-07 | 2 | -4/+31
    Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg/ppc: Look for shifted constants | Richard Henderson | 2017-09-07 | 1 | -10/+48
    Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg/ppc: Change TCG_REG_RA to TCG_REG_TB | Richard Henderson | 2017-09-07 | 1 | -151/+122
    At this point the conversion is a wash. Loading of TB+ofs is smaller, but the actual return address from exit_tb is larger. There are a few more insns required to transition between TBs. But the expectation is that accesses to the constant pool will on the whole be smaller. Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Rearrange ldst label trackingRichard Henderson2017-09-072-2/+6
| | | | | | | | | | Dispense with TCGBackendData, as it has never been used for more than holding a single pointer. Use a define in the cpu/tcg-target.h to signal requirement for TCGLabelQemuLdst, so that we can drop the no-op tcg-be-null.h stubs. Rename tcg-be-ldst.h to tcg-ldst.inc.c. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Move USE_DIRECT_JUMP discriminator to tcg/cpu/tcg-target.hRichard Henderson2017-09-072-2/+6
| | | | | | | | | | | | | | | | | Replace the USE_DIRECT_JUMP ifdef with a TCG_TARGET_HAS_direct_jump boolean test. Replace the tb_set_jmp_target1 ifdef with an unconditional function tb_target_set_jmp_target. While we're touching all backends, add a parameter for tb->tc_ptr; we're going to need it shortly for some backends. Move tb_set_jmp_target and tb_add_jump from exec-all.h to cpu-exec.c. This opens the possibility for TCG_TARGET_HAS_direct_jump to be a runtime decision -- based on host cpu capabilities, the size of code_gen_buffer, or a future debugging switch. Signed-off-by: Richard Henderson <rth@twiddle.net>
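    A minimal sketch of what the new interface looks like to a backend; only the boolean and the hook name come from the commit message above, the exact definitions should be checked against the tree:

        /* tcg/ppc/tcg-target.h: the capability the common code now tests. */
        #define TCG_TARGET_HAS_direct_jump  1

        /* Provided unconditionally by each backend; tc_ptr is the newly
         * added parameter mentioned above.  Shown as a prototype only. */
        void tb_target_set_jmp_target(uintptr_t tc_ptr, uintptr_t jmp_addr,
                                      uintptr_t addr);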
* tcg: Add tcg target default memory ordering | Pranith Kumar | 2017-09-05 | 1 | -0/+2
    Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Message-Id: <20170829063313.10237-3-bobby.prani@gmail.com> [rth: Dropped ia64 hunk] Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* util: add cacheinfo | Emilio G. Cota | 2017-06-19 | 1 | -69/+2
    Add helpers to gather cache info from the host at init-time. For now, only export the host's I/D cache line sizes, which we will use to improve cache locality to avoid false sharing. Suggested-by: Richard Henderson <rth@twiddle.net> Suggested-by: Geert Martin Ijewski <gm.ijewski@web.de> Tested-by: Geert Martin Ijewski <gm.ijewski@web.de> Signed-off-by: Emilio G. Cota <cota@braap.org> Message-Id: <1496794624-4083-1-git-send-email-cota@braap.org> [rth: Move all implementations from tcg/ppc/] Signed-off-by: Richard Henderson <rth@twiddle.net>
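    For a sense of what such a helper does on a Linux host, a minimal standalone sketch (not QEMU's util/cacheinfo.c, which also covers other OSes and auxv-based detection; the fallback of 64 bytes is an assumption):

        #include <stdio.h>
        #include <unistd.h>

        /* Query the host L1 data cache line size where sysconf exposes it,
         * falling back to a common default when it does not. */
        static long dcache_linesize(void)
        {
        #ifdef _SC_LEVEL1_DCACHE_LINESIZE
            long sz = sysconf(_SC_LEVEL1_DCACHE_LINESIZE);
            if (sz > 0) {
                return sz;
            }
        #endif
            return 64;  /* assumed default */
        }

        int main(void)
        {
            printf("dcache line: %ld bytes\n", dcache_linesize());
            return 0;
        }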
* tcg/ppc: Implement goto_ptr | Richard Henderson | 2017-06-05 | 2 | -1/+8
    Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Introduce goto_ptr opcode and tcg_gen_lookup_and_goto_ptr | Emilio G. Cota | 2017-06-05 | 1 | -0/+1
    Instead of exporting goto_ptr directly to TCG frontends, export tcg_gen_lookup_and_goto_ptr(), which calls goto_ptr with the pointer returned by the lookup_tb_ptr() helper. This is the only use case we have for goto_ptr and lookup_tb_ptr, so having this function is very convenient. Furthermore, it trivially allows us to avoid calling the lookup helper if goto_ptr is not implemented by the backend. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Message-Id: <1493263764-18657-2-git-send-email-cota@braap.org> Message-Id: <1493263764-18657-3-git-send-email-cota@braap.org> Message-Id: <1493263764-18657-4-git-send-email-cota@braap.org> Message-Id: <1493263764-18657-5-git-send-email-cota@braap.org> [rth: Squashed 4 related commits.] Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg/ppc: Handle ctpop opcode | Richard Henderson | 2017-01-10 | 2 | -3/+14
    Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Add opcode for ctpop | Richard Henderson | 2017-01-10 | 1 | -0/+2
    The number of actual invocations of ctpop itself does not warrant an opcode, but it is very helpful for POWER7 to use in generating an expansion for ctz. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
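    The kind of expansion meant here can be written in plain C (an illustrative identity, not the instruction sequence the backend emits):

        #include <stdint.h>

        /* ctz synthesized from ctpop: (x & -x) isolates the lowest set bit,
         * subtracting 1 turns every bit below it into a one, and the
         * population count of that mask is the number of trailing zeros
         * (64 for x == 0). */
        static inline int ctz64_via_ctpop(uint64_t x)
        {
            return __builtin_popcountll((x & -x) - 1);
        }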
* tcg/ppc: Handle ctz and clz opcodes | Richard Henderson | 2017-01-10 | 2 | -4/+73
    Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Add clz and ctz opcodes | Richard Henderson | 2017-01-10 | 1 | -0/+4
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Pass the opcode width to target_parse_constraint | Richard Henderson | 2017-01-10 | 1 | -9/+5
    This will let us choose how to interpret a given constraint depending on whether the opcode is 32- or 64-bit, which in turn lets us share more constraint combinations between opcodes. At the same time, change the interface to return the advanced pointer instead of passing it in/out by reference. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Transition flat op_defs array to a target callback | Richard Henderson | 2017-01-10 | 1 | -2/+12
    This will allow the target to tailor the constraints to the auto-detected ISA extensions. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg/ppc: Implement field extraction opcodes | Richard Henderson | 2017-01-10 | 2 | -2/+12
    Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Add field extraction primitives | Richard Henderson | 2017-01-10 | 1 | -0/+4
    Adds tcg_gen_extract_* and tcg_gen_sextract_* for extraction of fixed position bitfields, much like we already have for deposit. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
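    The semantics of the new primitives, written out as ordinary C on 32-bit values (reference behaviour only, assuming 0 < len and pos + len <= 32; the generated ops are not necessarily implemented this way on a given host):

        #include <stdint.h>

        /* Unsigned extract: the len-bit field at bit pos, zero-extended. */
        static inline uint32_t extract32_ref(uint32_t value, unsigned pos, unsigned len)
        {
            return (value >> pos) & (~0u >> (32 - len));
        }

        /* Signed extract: the same field, sign-extended from its top bit. */
        static inline int32_t sextract32_ref(uint32_t value, unsigned pos, unsigned len)
        {
            return (int32_t)(value << (32 - pos - len)) >> (32 - len);
        }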
* tcg/ppc: Add support for fence | Pranith Kumar | 2016-09-16 | 1 | -0/+21
    Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Message-Id: <20160714202026.9727-8-bobby.prani@gmail.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Support arbitrary size + alignment | Richard Henderson | 2016-09-16 | 1 | -27/+31
    Previously we allowed fully unaligned operations, but not operations that are aligned but with less alignment than the operation size. In addition, arm32, ia64, mips, and sparc had been omitted from the previous overalignment patch, which would have led to that alignment being enforced. Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Clean up tcg-target.h header guards | Markus Armbruster | 2016-07-12 | 1 | -2/+3
    These use guard symbols like TCG_TARGET_$target. scripts/clean-header-guards.pl doesn't like them because they don't match their file name (they should, to make guard collisions less likely). Clean them up: use guard symbol $target_TCG_TARGET_H for tcg/$target/tcg-target.h. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
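    Applied to this directory, the rule above gives something like the following for tcg/ppc/tcg-target.h (guard name inferred from the convention, not quoted from the patch):

        #ifndef PPC_TCG_TARGET_H
        #define PPC_TCG_TARGET_H

        /* ... target definitions ... */

        #endif /* PPC_TCG_TARGET_H */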
* tcg: Improve the alignment check infrastructure | Sergey Sorokin | 2016-07-06 | 1 | -5/+9
    Some architectures (e.g. ARMv8) need the address to be aligned to a size larger than the size of the memory access. The current cost-free alignment check implementation in QEMU is enough to support such a check, but we need a way to specify the alignment size. Signed-off-by: Sergey Sorokin <afarallax@yandex.ru> Message-Id: <1466705806-679898-1-git-send-email-afarallax@yandex.ru> Signed-off-by: Richard Henderson <rth@twiddle.net> [rth: Assert in tcg_canonicalize_memop. Leave get_alignment_bits available for, though unused by, user-mode. Retain logging difference based on ALIGNED_ONLY.]
* tcg: Optimize spills of constants | Richard Henderson | 2016-07-06 | 1 | -0/+6
    While we can store constants via constraints on INDEX_op_st_i32 et al, we weren't able to spill constants to backing store. Add a new backend interface, tcg_out_sti, which may store the constant (and is allowed to fail). Rearrange the temp_* helpers so that we only attempt to directly store a constant when the temp is becoming dead/free. Signed-off-by: Richard Henderson <rth@twiddle.net>
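    The shape of the new hook, as a hedged sketch (the prototype below is from memory; tcg.c holds the authoritative declaration): a backend may decline by returning false, in which case the register allocator materialises the constant in a register and stores that instead.

        /* Try to store constant 'val' of the given type to base+ofs directly.
         * Return true on success, false if this backend cannot encode it. */
        static bool tcg_out_sti(TCGContext *s, TCGType type, TCGArg val,
                                TCGReg base, intptr_t ofs);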
* tcg: Clean up direct block chaining data fields | Sergey Fedorov | 2016-05-13 | 1 | -3/+3
    Briefly describe in a comment how direct block chaining is done. It should help in understanding the following data fields. Rename some fields in TranslationBlock and TCGContext structures to better reflect their purpose (dropping excessive 'tb_' prefix in TranslationBlock but keeping it in TCGContext):
        tb_next_offset => jmp_reset_offset
        tb_jmp_offset  => jmp_insn_offset
        tb_next        => jmp_target_addr
        jmp_next       => jmp_list_next
        jmp_first      => jmp_list_first
    Avoid using a magic constant as an invalid offset which is used to indicate that there's no n-th jump generated. Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg/ppc: Make direct jump patching thread-safe | Sergey Fedorov | 2016-05-13 | 1 | -4/+18
    Ensure direct jump patching in PPC is atomic by:
        * limiting translation buffer size in 32-bit mode to be addressable by Branch I-form instruction;
        * using atomic_read()/atomic_set() for code patching.
    Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <1461341333-19646-5-git-send-email-sergey.fedorov@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: check for CONFIG_DEBUG_TCG instead of NDEBUG | Aurelien Jarno | 2016-04-21 | 1 | -1/+1
    Check for CONFIG_DEBUG_TCG instead of NDEBUG, drop now useless code. Cc: Richard Henderson <rth@twiddle.net> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net> Message-id: 1461228530-14852-2-git-send-email-aurelien@aurel32.net Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* tcg: use tcg_debug_assert instead of assert (fix performance regression) | Aurelien Jarno | 2016-04-21 | 1 | -11/+11
    The TCG code is quite performance sensitive, but at the same time can also be quite tricky. That is why it has asserts that can be enabled with the --enable-debug-tcg configure option. This used to work the following way:
        | #include "config.h"
        |
        | ...
        |
        | #if !defined(CONFIG_DEBUG_TCG) && !defined(NDEBUG)
        | /* define it to suppress various consistency checks (faster) */
        | #define NDEBUG
        | #endif
        |
        | ...
        |
        | #include <assert.h>
    Since commit 757e725b (tcg: Clean up includes) "config.h" has been replaced by "qemu/osdep.h", which itself includes <assert.h>. As a consequence the assertions are always enabled, even when using --disable-debug-tcg, causing a performance regression, especially on targets with many registers. For instance on qemu-system-ppc the speed difference is about 15%. tcg_debug_assert is controlled directly by CONFIG_DEBUG_TCG and is already used in some places. This patch replaces all the calls to assert with calls to tcg_debug_assert. Cc: Peter Maydell <peter.maydell@linaro.org> Cc: Richard Henderson <rth@twiddle.net> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net> Message-id: 1461228530-14852-1-git-send-email-aurelien@aurel32.net Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
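    For reference, the gist of tcg_debug_assert, paraphrased from memory rather than quoted from tcg.h: it only expands to a real assert under CONFIG_DEBUG_TCG, while still letting the compiler assume the condition in release builds.

        /* assert() itself arrives via qemu/osdep.h in the real tree. */
        #ifdef CONFIG_DEBUG_TCG
        # define tcg_debug_assert(X)  do { assert(X); } while (0)
        #else
        # define tcg_debug_assert(X) \
            do { if (!(X)) { __builtin_unreachable(); } } while (0)
        #endif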
* tcg: Remove unnecessary osdep.h includes from tcg-target.inc.c | Peter Maydell | 2016-02-23 | 1 | -1/+0
    Commit 757e725b58c57d added a number of #include "qemu/osdep.h" lines to the tcg-target.c files (as they were named at the time). These are unnecessary because these files are not standalone C files, and the tcg/tcg.c file which includes them will have already included osdep.h on their behalf. Remove the unneeded include directives. Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-Id: <1456238983-10160-4-git-send-email-peter.maydell@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Rename tcg-target.c to tcg-target.inc.c | Peter Maydell | 2016-02-23 | 1 | -0/+0
    Rename the per-architecture tcg-target.c files to tcg-target.inc.c. This makes it clearer that they are not intended to be standalone C files, but are instead #included into another source file. Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-Id: <1456238983-10160-2-git-send-email-peter.maydell@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Clean up includes | Peter Maydell | 2016-01-29 | 1 | -7/+1
    Clean up includes so that osdep.h is included first and headers which it implies are not included manually. This commit was created with scripts/clean-includes. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1453832250-766-16-git-send-email-peter.maydell@linaro.org
* tcg/ppc: Prefer mask over andi. | Richard Henderson | 2015-10-19 | 1 | -10/+10
    Prefer the instruction that isn't required to modify cr0. Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg/ppc: Revise goto_tb implementation | Richard Henderson | 2015-10-19 | 1 | -11/+38
    Restrict the size of code_gen_buffer to 2GB on ppc64, which lets us assert that everything is reachable with addis+addi from tb_ret_addr. This lets us use a max of 4 insns for goto_tb instead of 7. Emit the indirect branch portion of goto_tb up front, which means we only have to update two insns to update any link. With a 64-bit store, we can update the link atomically, which may be required in future. Signed-off-by: Richard Henderson <rth@twiddle.net>
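    A standalone illustration of the 64-bit-store idea (a sketch of the technique, not the backend code; the names and the endianness handling here are simplifying assumptions):

        #include <stdint.h>

        /* Rewrite two adjacent 32-bit instructions with one aligned 64-bit
         * store, so a thread concurrently executing the translated code
         * never observes a half-updated pair.  Which word holds which
         * instruction depends on host endianness; the real backend handles
         * that explicitly. */
        static void patch_insn_pair(uint64_t *jmp_addr, uint32_t first, uint32_t second)
        {
            uint64_t pair = ((uint64_t)second << 32) | first;
            __atomic_store_n(jmp_addr, pair, __ATOMIC_RELAXED);
            /* ...followed by an instruction-cache flush of these 8 bytes. */
        }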
* tcg/ppc: Adjust exit_tb for change in prologue placement | Richard Henderson | 2015-10-19 | 1 | -6/+4
    Moving the prologue to the beginning of the code_gen_buffer changes the direction of the "return" branch, so the logic needs to change to match. Signed-off-by: Richard Henderson <rth@twiddle.net>
* linux-user: remove useless macros GUEST_BASE and RESERVED_VA | Laurent Vivier | 2015-08-24 | 1 | -8/+4
    As we have removed CONFIG_USE_GUEST_BASE, we always use a guest base and the macros GUEST_BASE and RESERVED_VA become useless: replace them by their values. Reviewed-by: Alexander Graf <agraf@suse.de> Signed-off-by: Laurent Vivier <laurent@vivier.eu> Message-Id: <1440420834-8388-1-git-send-email-laurent@vivier.eu> Signed-off-by: Richard Henderson <rth@twiddle.net>
* linux-user: remove --enable-guest-base/--disable-guest-base | Laurent Vivier | 2015-08-24 | 1 | -4/+2
    All tcg host architectures now support the guest base, and as there is no real performance loss, it can always be enabled. In any case, use of the guest base can effectively be disabled by setting it to 0. CONFIG_USE_GUEST_BASE is defined as (USE_GUEST_BASE && USER_ONLY); it would normally be replaced by CONFIG_USER_ONLY in the non-CONFIG_USER_ONLY parts, but as some other parts already use !CONFIG_SOFTMMU, I have chosen to use !CONFIG_SOFTMMU instead. Reviewed-by: Alexander Graf <agraf@suse.de> Signed-off-by: Laurent Vivier <laurent@vivier.eu> Message-Id: <1440373328-9788-2-git-send-email-laurent@vivier.eu> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg/ppc: Improve unaligned load/store handling on 64-bit backend | Benjamin Herrenschmidt | 2015-08-24 | 1 | -10/+31
    Currently, we get to the slow path for any unaligned access in the backend, because we effectively preserve the bottom address bits below the alignment requirement when comparing with the TLB entry, so any non-0 bit there will cause the compare to fail. For the same number of instructions, we can instead add the access size - 1 to the address and stick to clearing all the bottom bits. That means that normal unaligned accesses will not fall back (the HW will handle them fine). Only when crossing a page boundary will we end up with a mismatch, because we'll end up pointing to the next page, which cannot possibly be in that same TLB entry. Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Message-Id: <1437455978.5809.2.camel@kernel.crashing.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
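    The comparison value before and after, written as plain C (a sketch of the idea assuming a page-aligned TLB tag; the names are illustrative, not the generated ppc sequence):

        #include <stdint.h>

        #define PAGE_BITS 12
        #define PAGE_MASK (~((uint64_t)(1 << PAGE_BITS) - 1))

        /* Old scheme: keep the low bits that must be zero for an aligned
         * access, so any unaligned address mismatches the page-aligned
         * TLB tag and takes the slow path. */
        static uint64_t tlb_cmp_old(uint64_t addr, unsigned size)
        {
            return addr & (PAGE_MASK | (size - 1));
        }

        /* New scheme: round the last byte of the access up into the page
         * field and clear all low bits.  Unaligned accesses within one page
         * still match; only accesses crossing into the next page miss. */
        static uint64_t tlb_cmp_new(uint64_t addr, unsigned size)
        {
            return (addr + size - 1) & PAGE_MASK;
        }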
* tcg: Split trunc_shr_i32 opcode into extr[lh]_i64_i32 | Richard Henderson | 2015-08-24 | 1 | -1/+2
    Rather than allow arbitrary shift+trunc, only concern ourselves with low and high parts. This is all that was being used anyway. Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: implement real ext_i32_i64 and extu_i32_i64 ops | Aurelien Jarno | 2015-08-24 | 1 | -0/+6
    Implement real ext_i32_i64 and extu_i32_i64 ops. They ensure that a 32-bit value is always converted to a 64-bit value and not propagated through the register allocator or the optimizer. Cc: Andrzej Zaborowski <balrogg@gmail.com> Cc: Alexander Graf <agraf@suse.de> Cc: Blue Swirl <blauwirbel@gmail.com> Cc: Stefan Weil <sw@weilnetz.de> Acked-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: rename trunc_shr_i32 into trunc_shr_i64_i32 | Aurelien Jarno | 2015-08-24 | 1 | -1/+1
    The op is sometimes named trunc_shr_i32 and sometimes trunc_shr_i64_i32, and the name in the README doesn't match the name offered to the frontends. Always use the long name to make it clear it is a size changing op. Signed-off-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Mask TCGMemOp appropriately for indexing | Richard Henderson | 2015-06-09 | 1 | -4/+4
    The addition of MO_AMASK means that places that used inverted masks need to be changed to use positive masks, and places that failed to mask the intended bits need updating. Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com> Tested-by: Yongbok Kim <yongbok.kim@imgtec.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: add TCG_TARGET_TLB_DISPLACEMENT_BITS | Paolo Bonzini | 2015-06-03 | 1 | -0/+1
    This will be used to size the TLB when more than 8 MMU modes are used by the target. Limitations come from the limited size of the immediate fields (which sometimes, as in the case of Aarch64, extend to instructions that shift the immediate). Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <1424436345-37924-2-git-send-email-pbonzini@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Alexander Graf <agraf@suse.de>
* tcg: Push merged memop+mmu_idx parameter to softmmu routines | Richard Henderson | 2015-05-14 | 1 | -13/+13
    The extra information is not yet used but it is now available. This requires minor changes through all of the tcg backends. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Merge memop and mmu_idx parameters to qemu_ld/st | Richard Henderson | 2015-05-14 | 1 | -4/+8
    At the tcg opcode level, not at the tcg-op.h generator level. This requires minor changes through all of the tcg backends, but none of the cpu translators. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
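    How the two values travel together as one operand can be illustrated with a small packing scheme of the kind QEMU uses (the shift width and helper names below are stated from memory and should be checked against tcg/tcg.h):

        #include <stdint.h>

        typedef unsigned TCGMemOp;       /* the MO_* flags */
        typedef uint32_t TCGMemOpIdx;    /* memop and mmu index packed together */

        /* Low 4 bits carry the mmu index, the rest the TCGMemOp. */
        static inline TCGMemOpIdx make_memop_idx(TCGMemOp op, unsigned idx)
        {
            return (op << 4) | idx;
        }

        static inline TCGMemOp get_memop(TCGMemOpIdx oi)  { return oi >> 4; }
        static inline unsigned get_mmuidx(TCGMemOpIdx oi) { return oi & 15; }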
* tcg: Change generator-side labels to a pointer | Richard Henderson | 2015-03-13 | 1 | -11/+9
    This is less about improved type checking than enabling a subsequent change to the representation of labels. Acked-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com> Cc: Andrzej Zaborowski <balrogg@gmail.com> Cc: Peter Maydell <peter.maydell@linaro.org> Cc: Aurelien Jarno <aurelien@aurel32.net> Cc: Blue Swirl <blauwirbel@gmail.com> Cc: Stefan Weil <sw@weilnetz.de> Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg/ppc: Fix support for 64-bit PPC MacOSX hosts | Peter Maydell | 2014-06-29 | 1 | -3/+3
    Add back in the support for 64-bit PPC MacOSX hosts that was broken in the recent merge of the 32-bit and 64-bit TCG backends. Reported-by: Andreas Färber <andreas.faerber@web.de> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Tested-by: Andreas Färber <andreas.faerber@web.de>
* tcg/ppc: Fix failure in tcg_out_mem_long | Richard Henderson | 2014-06-27 | 1 | -1/+4
    With rt != r0 on loads, we use rt for scratch. If we need an index register different from base, we can't use rt, but r0 is usable. Signed-off-by: Richard Henderson <rth@twiddle.net> Message-id: 1403843160-30332-1-git-send-email-rth@twiddle.net Tested-by: Cédric Le Goater <clg@fr.ibm.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* tcg-ppc: Use the return address as a base pointer | Richard Henderson | 2014-06-23 | 1 | -12/+93
    This can significantly reduce code size for generation of (some) 64-bit constants. With the side effect that we know for a fact that exit_tb can use the register to good effect. Tested-by: Tom Musta <tommusta@gmail.com> Signed-off-by: Richard Henderson <rth@twiddle.net>