path: root/tcg/optimize.c
Commit history (most recent first; each entry lists author, date, and diffstat):
* tcg: Add opcodes for vector nand, nor, eqv
  Richard Henderson, 2022-03-04 (1 file changed, -6/+6)
  We've had placeholders for these opcodes for a while, and should have
  support on ppc, s390x and avx512 hosts.
  Tested-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: only read val after const check
  Alex Bennée, 2022-03-04 (1 file changed, -4/+4)
  valgrind pointed out that arg_info()->val can be undefined, which will
  be the case if the arguments are not constant. The ordering of the
  checks will have ensured we never relied on an undefined value, but
  for the sake of completeness re-order the code to be clear.
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Message-Id: <20220209112142.3367525-1-alex.bennee@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Fix folding of vector ops
  Richard Henderson, 2022-01-05 (1 file changed, -11/+38)
  Bitwise operations are easy to fold, because the operation is
  identical regardless of element size. But add and sub need extra
  element size info that is not currently propagated.
  Fixes: 2f9f08ba43d
  Cc: qemu-stable@nongnu.org
  Resolves: https://gitlab.com/qemu-project/qemu/-/issues/799
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

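  A minimal sketch of the distinction (illustration only, not the
  committed fix, which threads the element size through the existing
  constant-folding helper): x & y is correct for any element size, but
  add must keep carries from crossing lane boundaries.

      /* Lane-wise add of two constant vector operands, ebits per lane. */
      static uint64_t fold_add_lanes(uint64_t x, uint64_t y, unsigned ebits)
      {
          if (ebits == 64) {
              return x + y;
          }
          uint64_t lane_mask = ~(uint64_t)0 >> (64 - ebits);
          uint64_t r = 0;
          for (unsigned i = 0; i < 64; i += ebits) {
              /* Mask off each lane's carry-out before merging. */
              r |= (((x >> i) + (y >> i)) & lane_mask) << i;
          }
          return r;
      }
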
* tcg/optimize: Add an extra cast to fold_extract2
  Richard Henderson, 2021-11-11 (1 file changed, -1/+1)
  There is no bug, but silence a warning about computation in int32_t
  being assigned to a uint64_t.
  Reported-by: Coverity CID 1465220
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Propagate sign info for shifting
  Richard Henderson, 2021-10-29 (1 file changed, -3/+47)
  For constant shifts, we can simply shift the s_mask. For variable
  shifts, we know that sar does not reduce the s_mask, which helps for
  sequences like

      ext32s_i64  t, in
      sar_i64     t, t, v
      ext32s_i64  out, t

  allowing the final extend to be eliminated.
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

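  The shape of the rule, as a sketch (accessor names as used elsewhere
  in this series; the committed code also re-canonicalizes the
  resulting mask):

      uint64_t s_mask = arg_info(op->args[1])->s_mask;
      if (arg_is_const(op->args[2])) {
          /* A constant shift transforms the mask with the very same
             operation: sar shifts the sign copies right, making more. */
          ctx->s_mask = do_constant_folding(op->opc, ctx->type, s_mask,
                                            arg_info(op->args[2])->val);
      } else if (op->opc == INDEX_op_sar_i32 ||
                 op->opc == INDEX_op_sar_i64) {
          /* A variable sar never reduces the sign repetitions. */
          ctx->s_mask = s_mask;
      }
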
* tcg/optimize: Propagate sign info for bit counting
  Richard Henderson, 2021-10-29 (1 file changed, -1/+2)
  The results are generally 6-bit unsigned values, though the
  count-leading/count-trailing ops may produce any value for a zero
  input.
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

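  Sketched as mask assignments for the 64-bit case (clz/ctz take a
  fallback value in args[2] that is returned for a zero input):

      case INDEX_op_clz_i64:
      case INDEX_op_ctz_i64:
          /* 0..63, except a zero input yields the fallback args[2]. */
          ctx->z_mask = arg_info(op->args[2])->z_mask | 63;
          ctx->s_mask = smask_from_zmask(ctx->z_mask);
          break;
      case INDEX_op_ctpop_i64:
          /* 0..64 inclusive, a 7-bit unsigned result. */
          ctx->z_mask = 127;
          ctx->s_mask = smask_from_zmask(127);
          break;
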
* tcg/optimize: Propagate sign info for setcond
  Richard Henderson, 2021-10-29 (1 file changed, -0/+2)
  The result is either 0 or 1, which means that we have a 2-bit signed
  result, and thus 62 bits of sign. For clarity, use the
  smask_from_zmask function.
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

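  A sketch of smask_from_zmask consistent with that arithmetic (z_mask:
  bits that may be nonzero; s_mask: left-aligned mask of bits known to
  repeat the sign bit):

      static uint64_t smask_from_zmask(uint64_t z_mask)
      {
          /* If the top N bits cannot be nonzero, the sign bit is 0 and
             the N-1 bits below the msb are known copies of it. */
          int rep = clz64(z_mask);
          if (rep == 0) {
              return 0;
          }
          return ~(~0ull >> (rep - 1));
      }

  For setcond, z_mask is 1, and smask_from_zmask(1) marks the top 62
  bits as sign copies.
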
* tcg/optimize: Propagate sign info for logical operations
  Richard Henderson, 2021-10-29 (1 file changed, -0/+29)
  Sign repetitions are perforce all identical, whether they are 1 or 0.
  Bitwise operations preserve the relative quantity of the repetitions.
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

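  As a sketch, the propagation is an intersection of the operand masks:

      /* Bits that repeat the sign in both inputs still repeat it in
         the output of and/or/xor/andc/orc/eqv/nand/nor. */
      ctx->s_mask = arg_info(op->args[1])->s_mask
                  & arg_info(op->args[2])->s_mask;
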
* tcg/optimize: Optimize sign extensions
  Richard Henderson, 2021-10-29 (1 file changed, -21/+102)
  Certain targets, like riscv, produce signed 32-bit results. This can
  lead to lots of redundant extensions as values are manipulated. Begin
  by tracking only the obvious sign-extensions, and converting them to
  simple copies when possible.
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

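  A sketch of the copy conversion, assuming the s_mask definition above
  (left-aligned mask of bits known identical to the sign bit):

      /* ext32s_i64 is a no-op if at least 32 sign repetitions are
         already known, i.e. bits 63..31 are all identical. */
      if (op->opc == INDEX_op_ext32s_i64 &&
          (arg_info(op->args[1])->s_mask & 0xffffffff00000000ull)
              == 0xffffffff00000000ull) {
          return tcg_opt_gen_mov(ctx, op, op->args[0], op->args[1]);
      }
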
* tcg/optimize: Use fold_xx_to_i for rem
  Richard Henderson, 2021-10-29 (1 file changed, -1/+5)
  Recognize the constant function for remainder.
  Suggested-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Use fold_xi_to_x for div
  Richard Henderson, 2021-10-29 (1 file changed, -1/+5)
  Recognize the identity function for division.
  Suggested-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Use fold_xi_to_x for mul
  Richard Henderson, 2021-10-29 (1 file changed, -1/+2)
  Recognize the identity function for low-part multiply.
  Suggested-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Use fold_xx_to_i for orc
  Richard Henderson, 2021-10-29 (1 file changed, -0/+1)
  Recognize the constant function for or-complement.
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Stop forcing z_mask to "garbage" for 32-bit values
  Richard Henderson, 2021-10-29 (1 file changed, -19/+16)
  This "garbage" setting pre-dates the addition of the type-changing
  opcodes INDEX_op_ext_i32_i64, INDEX_op_extu_i32_i64, and
  INDEX_op_extr{l,h}_i64_i32. So now we have definitive points at which
  to adjust z_mask to eliminate such bits from the 32-bit operands.
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

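  For instance, at the unsigned widening opcode the high half becomes
  known exactly (a sketch, not the committed hunk):

      case INDEX_op_extu_i32_i64:
          /* The high 32 bits are known zero here, so the 32-bit
             operand's z_mask can be taken at face value rather than
             being forced to "garbage". */
          ctx->z_mask = (uint32_t)arg_info(op->args[1])->z_mask;
          break;
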
* tcg/optimize: Sink commutative operand swapping into fold functions
  Richard Henderson, 2021-10-28 (1 file changed, -72/+70)
  Most of these are handled by creating a fold_const2_commutative to
  handle all of the binary operators. The rest were already handled on
  a case-by-case basis in the switch, and have their own fold function
  in which to place the call. We now have only one major switch on
  TCGOpcode. Introduce NO_DEST and a block comment for swap_commutative
  in order to make the handling of brcond and movcond opcodes cleaner.
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

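  The shared helper is presumably along these lines (swap_commutative
  canonicalizes a constant into the second operand; NO_DEST stands in
  for the missing destination of brcond):

      static bool fold_const2_commutative(OptContext *ctx, TCGOp *op)
      {
          swap_commutative(op->args[0], &op->args[1], &op->args[2]);
          return fold_const2(ctx, op);
      }
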
* tcg/optimize: Expand fold_addsub2_i32 to 64-bit ops
  Richard Henderson, 2021-10-28 (1 file changed, -21/+44)
  Rename to fold_addsub2. Use Int128 to implement the wider operation.
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

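  A sketch of the widened arithmetic, assuming the four input halves
  (al/ah, bl/bh) and the is_add flag are hypothetical locals holding
  known constants:

      Int128 a = int128_make128(al, ah);
      Int128 b = int128_make128(bl, bh);
      Int128 r = is_add ? int128_add(a, b) : int128_sub(a, b);
      /* Write int128_getlo(r)/int128_gethi(r) back to the two output
         temps; the carry between halves is now exact for 64-bit ops. */
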
* tcg/optimize: Expand fold_mulu2_i32 to all 4-arg multiplies
  Richard Henderson, 2021-10-28 (1 file changed, -9/+35)
  Rename to fold_multiply2, and handle muls2_i32, mulu2_i64, and
  muls2_i64.
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Split out fold_masks
  Richard Henderson, 2021-10-28 (1 file changed, -251/+294)
  Move all of the known-zero optimizations into the per-opcode
  functions. Use fold_masks when there is a possibility of the result
  being determined, and simply set ctx->z_mask otherwise.
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

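  Sketch of the determined-result case (abbreviated; the committed
  function does more, e.g. truncating the masks for 32-bit ops):

      static bool fold_masks(OptContext *ctx, TCGOp *op)
      {
          if (ctx->z_mask == 0) {
              /* Every output bit is known zero: fold to movi 0. */
              return tcg_opt_gen_movi(ctx, op, op->args[0], 0);
          }
          return false;
      }
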
* tcg/optimize: Split out fold_ix_to_i
  Richard Henderson, 2021-10-28 (1 file changed, -18/+10)
  Pull the "op r, 0, b => movi r, 0" optimization into a function, and
  use it in fold_shift.
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Split out fold_xi_to_x
  Richard Henderson, 2021-10-28 (1 file changed, -35/+26)
  Pull the "op r, a, i => mov r, a" optimization into a function, and
  use it in the outer-most logical operations.
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

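  Presumably shaped like the sketch below, where i is the identity
  constant for the op (0 for add/or/xor, -1 for and, and so on):

      static bool fold_xi_to_x(OptContext *ctx, TCGOp *op, uint64_t i)
      {
          if (arg_is_const(op->args[2]) &&
              arg_info(op->args[2])->val == i) {
              return tcg_opt_gen_mov(ctx, op, op->args[0], op->args[1]);
          }
          return false;
      }
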
* tcg/optimize: Split out fold_sub_to_neg
  Richard Henderson, 2021-10-28 (1 file changed, -42/+47)
  Even though there is only one user, place this more complex
  conversion into its own helper.
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

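  The conversion in question is "sub r, 0, b => neg r, b". A sketch of
  the core rewrite (the committed helper also checks that a matching
  neg opcode exists for the operation's type):

      if (arg_is_const(op->args[1]) && arg_info(op->args[1])->val == 0) {
          op->opc = INDEX_op_neg_i64;   /* or the i32/vector variant */
          op->args[1] = op->args[2];
          return fold_neg(ctx, op);     /* re-fold as the new opcode */
      }
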
* tcg/optimize: Split out fold_to_not
  Richard Henderson, 2021-10-28 (1 file changed, -72/+86)
  Split out the conditional conversion from a more complex logical
  operation to a simple NOT. Create a couple more helpers to make this
  easy for the outer-most logical operations.
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

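  The conversions are of this form (a sketch; fold_to_not_sketch and
  its idx parameter are illustrative names):

      /* xor  r, a, -1 => not r, a
         andc r, -1, b => not r, b
         orc  r, 0,  b => not r, b */
      static bool fold_to_not_sketch(OptContext *ctx, TCGOp *op, int idx)
      {
          op->opc = INDEX_op_not_i64;   /* matching type, if supported */
          op->args[1] = op->args[idx];  /* keep the non-constant input */
          return fold_not(ctx, op);
      }
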
* tcg/optimize: Add type to OptContext
  Richard Henderson, 2021-10-28 (1 file changed, -59/+88)
  Compute the type of the operation early. There were at least 4 places
  that used a def->flags ladder to determine the type of the operation
  being optimized. There were two places that assumed !TCG_OPF_64BIT
  means TCG_TYPE_I32, and so could potentially compute incorrect
  results for vector operations.
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

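  Computing the type once, up front, looks like this sketch (the vector
  types V64/V128/V256 are consecutive, so adding the vector length
  selects among them):

      if (def->flags & TCG_OPF_VECTOR) {
          ctx.type = TCG_TYPE_V64 + TCGOP_VECL(op);
      } else if (def->flags & TCG_OPF_64BIT) {
          ctx.type = TCG_TYPE_I64;
      } else {
          ctx.type = TCG_TYPE_I32;
      }
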
* tcg/optimize: Split out fold_xi_to_i
  Richard Henderson, 2021-10-28 (1 file changed, -18/+20)
  Pull the "op r, a, 0 => movi r, 0" optimization into a function, and
  use it in the outer opcode fold functions.
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Split out fold_xx_to_x
  Richard Henderson, 2021-10-28 (1 file changed, -15/+24)
  Pull the "op r, a, a => mov r, a" optimization into a function, and
  use it in the outer opcode fold functions.
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Split out fold_xx_to_i
  Richard Henderson, 2021-10-28 (1 file changed, -17/+24)
  Pull the "op r, a, a => movi r, 0" optimization into a function, and
  use it in the outer opcode fold functions.
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

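  A sketch of the helper (args_are_copies is this file's test for
  "same value"; e.g. xor x,x and sub x,x both fold to 0):

      static bool fold_xx_to_i(OptContext *ctx, TCGOp *op, uint64_t i)
      {
          if (args_are_copies(op->args[1], op->args[2])) {
              return tcg_opt_gen_movi(ctx, op, op->args[0], i);
          }
          return false;
      }
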
* tcg/optimize: Split out fold_mov
  Richard Henderson, 2021-10-28 (1 file changed, -13/+14)
  This is the final entry in the main switch that was in a different
  form. After this, we have the option to convert the switch into a
  function dispatch table.
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Split out fold_dup, fold_dup2
  Richard Henderson, 2021-10-28 (1 file changed, -22/+31)
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Split out fold_bswap
  Richard Henderson, 2021-10-28 (1 file changed, -11/+16)
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Split out fold_count_zeros
  Richard Henderson, 2021-10-28 (1 file changed, -14/+18)
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Split out fold_deposit
  Richard Henderson, 2021-10-28 (1 file changed, -10/+15)
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Split out fold_extract, fold_sextract
  Richard Henderson, 2021-10-28 (1 file changed, -18/+30)
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Split out fold_extract2
  Richard Henderson, 2021-10-28 (1 file changed, -17/+22)
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Split out fold_movcond
  Richard Henderson, 2021-10-28 (1 file changed, -25/+31)
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Split out fold_addsub2_i32
  Richard Henderson, 2021-10-28 (1 file changed, -26/+44)
  Add two additional helpers, fold_add2_i32 and fold_sub2_i32, which
  will not be simple wrappers forever.
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Split out fold_mulu2_i32
  Richard Henderson, 2021-10-28 (1 file changed, -16/+21)
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Split out fold_setcond
  Richard Henderson, 2021-10-28 (1 file changed, -9/+14)
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Split out fold_brcond
  Richard Henderson, 2021-10-28 (1 file changed, -14/+19)
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Split out fold_brcond2
  Richard Henderson, 2021-10-28 (1 file changed, -78/+81)
  Reduce some code duplication by folding the NE and EQ cases.
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Split out fold_setcond2
  Richard Henderson, 2021-10-28 (1 file changed, -73/+72)
  Reduce some code duplication by folding the NE and EQ cases.
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Split out fold_const{1,2}
  Richard Henderson, 2021-10-28 (1 file changed, -52/+219)
  Split out a whole bunch of placeholder functions, which are currently
  identical. That won't last as more code gets moved. Use
  CASE_32_64_VEC for some logical operators that previously missed the
  addition of vectors.
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Split out fold_mb, fold_qemu_{ld,st}
  Richard Henderson, 2021-10-28 (1 file changed, -38/+51)
  This puts the separate mb optimization into the same framework as the
  others. While fold_qemu_{ld,st} are currently identical, that won't
  last as more code gets moved.
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Use a boolean to avoid a mass of continues
  Richard Henderson, 2021-10-28 (1 file changed, -3/+6)
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Split out finish_folding
  Richard Henderson, 2021-10-28 (1 file changed, -16/+33)
  Copy z_mask into OptContext, for writeback to the first output within
  the new function.
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Return true from tcg_opt_gen_{mov,movi}
  Richard Henderson, 2021-10-28 (1 file changed, -4/+5)
  This will allow callers to tail call to these functions and return
  true indicating processing complete.
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Change fail return for do_constant_folding_cond*
  Richard Henderson, 2021-10-28 (1 file changed, -71/+74)
  Return -1 instead of 2 for failure, so that we can use comparisons
  against 0 for all cases.
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

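  The caller pattern this enables, sketched for setcond:

      int i = do_constant_folding_cond(op->opc, op->args[1],
                                       op->args[2], op->args[3]);
      if (i >= 0) {
          /* Comparison result is known: 0 or 1. */
          return tcg_opt_gen_movi(ctx, op, op->args[0], i);
      }
      /* i == -1: not known at compile time; try other folds. */
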
* tcg/optimize: Drop nb_oargs, nb_iargs locals
  Richard Henderson, 2021-10-28 (1 file changed, -10/+4)
  Rather than try to keep these up-to-date across folding, re-read
  nb_oargs at the end, after re-reading the opcode. A couple of asserts
  need dropping, but that will take care of itself as we split the
  function further.
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Split out fold_call
  Richard Henderson, 2021-10-28 (1 file changed, -22/+41)
  Calls are special in that they have a variable number of arguments,
  and need to be able to clobber globals.
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Split out copy_propagate
  Richard Henderson, 2021-10-28 (1 file changed, -8/+14)
  Continue splitting tcg_optimize.
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg/optimize: Split out init_arguments
  Richard Henderson, 2021-10-28 (1 file changed, -14/+11)
  There was no real reason for calls to have separate code here. Unify
  init for calls vs non-calls using the call path, which handles
  TCG_CALL_DUMMY_ARG.
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>