path: root/target
...
| * target/nios2: Honour -semihosting-config userspace=on  (Peter Maydell, 2022-09-13; 1 file, -1/+2)

    Honour the commandline -semihosting-config userspace=on option, instead of always permitting userspace semihosting calls in system emulation mode, by passing the correct value to the is_userspace argument of semihosting_enabled().

    Note that this is a behaviour change: if the user wants to do semihosting calls from userspace they must now specifically enable them on the command line. nios2 semihosting is not implemented for linux-user builds.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <20220822141230.3658237-6-peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

| * target/mips: Honour -semihosting-config userspace=on  (Peter Maydell, 2022-09-13; 4 files, -10/+11)

    Honour the commandline -semihosting-config userspace=on option, instead of always permitting userspace semihosting calls in system emulation mode, by passing the correct value to the is_userspace argument of semihosting_enabled().

    Note that this is a behaviour change: if the user wants to do semihosting calls from userspace they must now specifically enable them on the command line. MIPS semihosting is not implemented for linux-user builds.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <20220822141230.3658237-5-peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

| * target/m68k: Honour -semihosting-config userspace=on  (Peter Maydell, 2022-09-13; 1 file, -2/+1)

    Honour the commandline -semihosting-config userspace=on option, instead of never permitting userspace semihosting calls in system emulation mode, by passing the correct value to the is_userspace argument of semihosting_enabled(), instead of manually checking and always forbidding semihosting if the guest is in userspace.

    (Note that target/m68k doesn't support semihosting at all in the linux-user build.)

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Laurent Vivier <laurent@vivier.eu>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <20220822141230.3658237-4-peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

| * target/arm: Honour -semihosting-config userspace=on  (Peter Maydell, 2022-09-13; 2 files, -23/+5)

    Honour the commandline -semihosting-config userspace=on option, instead of never permitting userspace semihosting calls in system emulation mode, by passing the correct value to the is_userspace argument of semihosting_enabled(), instead of manually checking and always forbidding semihosting if the guest is in userspace and this isn't the linux-user build.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <20220822141230.3658237-3-peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

| * semihosting: Allow optional use of semihosting from userspace  (Peter Maydell, 2022-09-13; 5 files, -9/+9)

    Currently our semihosting implementations generally prohibit use of semihosting calls in system emulation from the guest userspace. This is a very long standing behaviour justified originally "to provide some semblance of security" (since code with access to the semihosting ABI can do things like read and write arbitrary files on the host system).

    However, it is sometimes useful to be able to run trusted guest code which performs semihosting calls from guest userspace, notably for test code. Add a command line suboption to the existing semihosting-config option group so that you can explicitly opt in to semihosting from guest userspace with -semihosting-config userspace=on. (There is no equivalent option for the user-mode emulator, because there by definition all code runs in userspace and has access to semihosting already.)

    This commit adds the infrastructure for the command line option and adds a bool 'is_user' parameter to the function semihosting_enabled() that target code can use to check whether it should be permitting the semihosting call for userspace. It mechanically makes all the callsites pass 'false', so they continue checking "is semihosting enabled in general". Subsequent commits will make each target that implements semihosting honour the userspace=on option by passing the correct value and removing whatever "don't do this for userspace" checking they were doing by hand.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Acked-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <20220822141230.3658237-2-peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

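    As an illustrative sketch (not code from this patch) of the call pattern described above, a target's semihosting trap path can simply defer the userspace decision to the common option. Here cpu_in_userspace() and CPUEnv are hypothetical stand-ins for each target's own privilege check and state type; only semihosting_enabled() is the function named in the commit message.

        #include <stdbool.h>

        /* Hypothetical stand-ins for target state and its privilege check. */
        typedef struct CPUEnv CPUEnv;
        bool cpu_in_userspace(const CPUEnv *env);

        /* The common query named in the commit message. */
        bool semihosting_enabled(bool is_user);

        static bool allow_semihosting_trap(const CPUEnv *env)
        {
            /* Previously targets hard-coded "never from userspace"; now the
             * decision honours -semihosting-config userspace=on. */
            return semihosting_enabled(cpu_in_userspace(env));
        }
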
| * target/m68k: Convert semihosting errno to gdb remote errno  (Richard Henderson, 2022-09-13; 1 file, -2/+31)

    The semihosting abi used by m68k uses the gdb remote protocol filesys errnos.

    Acked-by: Laurent Vivier <laurent@vivier.eu>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

| * target/m68k: Use semihosting/syscalls.h  (Richard Henderson, 2022-09-13; 1 file, -232/+49)

    This separates guest file descriptors from host file descriptors, and utilizes shared infrastructure for integration with gdbstub.

    Acked-by: Laurent Vivier <laurent@vivier.eu>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

| * target/nios2: Convert semihosting errno to gdb remote errno  (Richard Henderson, 2022-09-13; 1 file, -2/+31)

    The semihosting abi used by nios2 uses the gdb remote protocol filesys errnos.

    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

| * target/nios2: Use semihosting/syscalls.h  (Richard Henderson, 2022-09-13; 1 file, -246/+50)

    This separates guest file descriptors from host file descriptors, and utilizes shared infrastructure for integration with gdbstub.

    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* | target/arm: Report FEAT_PMUv3p5 for TCG '-cpu max'  (Peter Maydell, 2022-09-14; 2 files, -2/+2)

    Update the ID registers for TCG's '-cpu max' to report a FEAT_PMUv3p5 compliant PMU.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <20220822132358.3524971-11-peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* | target/arm: Support 64-bit event counters for FEAT_PMUv3p5  (Peter Maydell, 2022-09-14; 3 files, -9/+57)

    With FEAT_PMUv3p5, the event counters are now 64 bit, rather than 32 bit. (Previously, only the cycle counter could be 64 bit, and other event counters were always 32 bits). For any given event counter, whether the overflow event is noted for overflow from bit 31 or from bit 63 is controlled by a combination of PMCR.LP, MDCR_EL2.HLP and MDCR_EL2.HPMN.

    Implement the 64-bit event counter handling. We choose to make our counters always 64 bits, and mask out the top 32 bits on read or write of PMXEVCNTR for CPUs which don't have FEAT_PMUv3p5.

    (Note that the changes to pmevcntr_op_start() and pmevcntr_op_finish() bring their logic closer into line with that of pmccntr_op_start() and pmccntr_op_finish(), which already had to cope with the overflow being either at 32 or 64 bits.)

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <20220822132358.3524971-10-peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

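    A minimal sketch of the masking idea described above, with illustrative names rather than the actual QEMU helpers: the stored counter is always 64 bits wide, and only the low 32 bits are exposed when FEAT_PMUv3p5 is absent.

        #include <stdbool.h>
        #include <stdint.h>

        /*
         * Counters are stored as 64-bit values; CPUs without FEAT_PMUv3p5
         * architecturally expose only bits [31:0] through PMXEVCNTR.
         * (Whether overflow is signalled from bit 31 or bit 63 is a separate,
         * per-counter choice controlled by PMCR.LP / MDCR_EL2.HLP / HPMN.)
         */
        static uint64_t pmxevcntr_visible(uint64_t counter, bool have_pmuv3p5)
        {
            return have_pmuv3p5 ? counter : (counter & 0xffffffffULL);
        }
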
* | target/arm: Implement FEAT_PMUv3p5 cycle counter disable bits  (Peter Maydell, 2022-09-14; 2 files, -4/+37)

    FEAT_PMUv3p5 introduces new bits which disable the cycle counter from counting:
     * MDCR_EL2.HCCD disables the counter when in EL2
     * MDCR_EL3.SCCD disables the counter when Secure

    Add the code to support these bits.

    (Note that there is a third documented counter-disable bit, MDCR_EL3.MCCD, which disables the counter when in EL3. This is not present until FEAT_PMUv3p7, so is out of scope for now.)

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <20220822132358.3524971-9-peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* | target/arm: Rename pmu_8_n feature test functions  (Peter Maydell, 2022-09-14; 2 files, -17/+17)

    Our feature test functions that check the PMU version are named isar_feature_{aa32,aa64,any}_pmu_8_{1,4}. This doesn't match the current Arm ARM official feature names, which are FEAT_PMUv3p1 and FEAT_PMUv3p4. Rename these functions to _pmuv3p1 and _pmuv3p4.

    This commit was created with:
      sed -i -e 's/pmu_8_/pmuv3p/g' target/arm/*.[ch]

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <20220822132358.3524971-8-peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* | target/arm: Detect overflow when calculating next PMU interrupt  (Peter Maydell, 2022-09-14; 1 file, -8/+14)

    In pmccntr_op_finish() and pmevcntr_op_finish() we calculate the next point at which we will get an overflow and need to fire the PMU interrupt or set the overflow flag. We do this by calculating the number of nanoseconds to the overflow event and then adding it to qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL). However, we don't check whether that signed addition overflows, which can happen if the next PMU interrupt would happen massively far in the future (250 years or more).

    Since QEMU assumes that "when the QEMU_CLOCK_VIRTUAL rolls over" is "never", the sensible behaviour in this situation is simply to not try to set the timer if it would be beyond that point. Detect the overflow, and skip setting the timer in that case.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <20220822132358.3524971-7-peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

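    The arithmetic being guarded is simple enough to show standalone. This is an illustrative sketch (not the patch itself), assuming, as QEMU does, a non-negative signed 64-bit nanosecond clock; the names are made up for the example.

        #include <stdbool.h>
        #include <stdint.h>

        /*
         * Compute "now + delta" as a timer deadline, but report failure instead
         * of overflowing: if the next counter overflow is further away than
         * INT64_MAX nanoseconds, simply don't arm the timer ("never happens").
         */
        static bool pmu_next_deadline(int64_t now_ns, uint64_t delta_ns,
                                      int64_t *deadline_ns)
        {
            if (delta_ns > (uint64_t)(INT64_MAX - now_ns)) {
                return false;          /* would overflow: skip setting the timer */
            }
            *deadline_ns = now_ns + (int64_t)delta_ns;
            return true;
        }
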
* | target/arm: Honour MDCR_EL2.HPMD in Secure EL2  (Peter Maydell, 2022-09-14; 1 file, -10/+7)

    The logic in pmu_counter_enabled() for handling the 'prohibit event counting' bits MDCR_EL2.HPMD and MDCR_EL3.SPME is written in a way that assumes that EL2 is never Secure. This used to be true, but the architecture now permits Secure EL2, and QEMU can emulate this.

    Refactor the prohibit logic so that we effectively OR together the various prohibit bits when they apply, rather than trying to construct an if-else ladder where any particular state of the CPU ends up in exactly one branch of the ladder. This fixes the Secure EL2 case and also is a better structure for adding the PMUv8.5 bits MDCR_EL2.HCCD and MDCR_EL3.SCCD.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <20220822132358.3524971-6-peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

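    A deliberately simplified sketch of that restructuring, not the real pmu_counter_enabled() (which also considers HPMN partitioning, the counter index, and whether EL2/EL3 exist): each prohibit condition is ORed in when it applies, so Secure EL2 naturally picks up both the EL2 and the Secure conditions.

        #include <stdbool.h>

        /*
         * Simplified model:
         *  - MDCR_EL2.HPMD set prohibits event counting at EL2
         *  - MDCR_EL3.SPME clear prohibits event counting in Secure state
         */
        static bool event_counting_prohibited(bool at_el2, bool secure,
                                              bool mdcr_el2_hpmd,
                                              bool mdcr_el3_spme)
        {
            bool prohibited = false;

            prohibited = prohibited || (at_el2 && mdcr_el2_hpmd);
            prohibited = prohibited || (secure && !mdcr_el3_spme);

            return prohibited;
        }
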
* | target/arm: Ignore PMCR.D when PMCR.LC is set  (Peter Maydell, 2022-09-14; 1 file, -4/+13)

    The architecture requires that if PMCR.LC is set (for a 64-bit cycle counter) then PMCR.D (which enables the clock divider so the counter ticks every 64 cycles rather than every cycle) should be ignored. We were always honouring PMCR.D; fix the bug so we correctly ignore it in this situation.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <20220822132358.3524971-5-peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

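    The corrected rule fits in a few lines; a sketch using the architectural PMCR bit positions (D is bit 3, LC is bit 6), with an illustrative helper name rather than QEMU's actual code.

        #include <stdint.h>

        #define PMCR_D  (1u << 3)   /* clock divider: count once per 64 cycles */
        #define PMCR_LC (1u << 6)   /* 64-bit cycle counter enable */

        /* PMCR.D only has an effect when PMCR.LC is clear. */
        static uint64_t cycles_per_cycle_counter_tick(uint32_t pmcr)
        {
            return ((pmcr & PMCR_D) && !(pmcr & PMCR_LC)) ? 64 : 1;
        }
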
* | target/arm: Don't mishandle count when enabling or disabling PMU counters  (Peter Maydell, 2022-09-14; 1 file, -0/+45)

    The PMU cycle and event counter infrastructure design requires that operations on the PMU register fields are wrapped in pmu_op_start() and pmu_op_finish() calls (or their more specific pmccntr and pmevcntr equivalents). This includes any changes to registers which affect whether the counter should be enabled or disabled, but we forgot to do this.

    The effect of this bug is that in sequences like:
     * disable the cycle counter (PMCCNTR) using the PMCNTEN register
     * write a value such as 0xfffff000 to the PMCCNTR
     * restart the counter by writing to PMCNTEN
    the value written to the cycle counter is corrupted, and it starts counting from the wrong place. (Essentially, we fail to record that the QEMU_CLOCK_VIRTUAL timestamp when the counter should be considered to have started counting is the point when PMCNTEN is written to enable the counter.)

    Add the necessary bracketing calls, so that updates to the various registers which affect whether the PMU is counting are handled correctly.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <20220822132358.3524971-4-peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

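    A sketch of the bracketing pattern described above. pmu_op_start()/pmu_op_finish() are the helpers named in the commit message; the PMUState type and the register-write body are illustrative stand-ins, not QEMU's real CPUARMState code.

        #include <stdint.h>

        /* Hypothetical stand-in for the PMU-related CPU state. */
        typedef struct PMUState { uint64_t pmcnten; } PMUState;

        void pmu_op_start(PMUState *s);   /* fold elapsed time into stored counts */
        void pmu_op_finish(PMUState *s);  /* restart accounting from "now" */

        static void pmcntenset_write(PMUState *s, uint64_t value)
        {
            pmu_op_start(s);      /* snapshot counters before the enable state changes */
            s->pmcnten |= value;  /* enable the requested counters */
            pmu_op_finish(s);     /* timestamp the point counting (re)starts */
        }
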
* | target/arm: Correct value returned by pmu_counter_mask()  (Peter Maydell, 2022-09-14; 1 file, -1/+1)

    pmu_counter_mask() accidentally returns a value with bits [63:32] set, because the expression it returns is evaluated as a signed value that gets sign-extended to 64 bits. Force the whole expression to be evaluated with 64-bit arithmetic with ULL suffixes.

    The main effect of this bug was that a guest could write to the bits in the high half of registers like PMCNTENSET_EL0 that are supposed to be RES0.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <20220822132358.3524971-3-peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

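    The failure mode is plain C integer promotion and can be demonstrated standalone; the mask value below is illustrative, not the exact pmu_counter_mask() expression.

        #include <inttypes.h>
        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            /* A 32-bit signed value with bit 31 set... */
            int32_t mask32 = (int32_t)0x8000000f;

            uint64_t bad  = mask32;          /* sign-extends: 0xffffffff8000000f */
            uint64_t good = 0x8000000fULL;   /* 64-bit arithmetic: 0x000000008000000f */

            printf("bad  = 0x%016" PRIx64 "\n", bad);
            printf("good = 0x%016" PRIx64 "\n", good);
            return 0;
        }
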
* | target/arm: Don't corrupt high half of PMOVSR when cycle counter overflows  (Peter Maydell, 2022-09-14; 1 file, -1/+1)

    When the cycle counter overflows, we are intended to set bit 31 in PMOVSR to indicate this. However a missing ULL suffix means that we end up setting all of bits 63-31. Fix the bug.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <20220822132358.3524971-2-peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* | target/arm: Add missing space in comment  (Peter Maydell, 2022-09-14; 1 file, -1/+1)

    Fix a missing space before a comment terminator.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <20220819110052.2942289-7-peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* | target/arm: Advertise FEAT_ETS for '-cpu max'  (Peter Maydell, 2022-09-14; 2 files, -0/+5)

    The architectural feature FEAT_ETS (Enhanced Translation Synchronization) is a set of tightened guarantees about memory ordering involving translation table walks:
     * if memory access RW1 is ordered-before memory access RW2 then it is also ordered-before any translation table walk generated by RW2 that generates a translation fault, address size fault or access fault
     * TLB maintenance on non-exec-permission translations is guaranteed complete after a DSB (ie it does not need the context synchronization event that you have to have if you don't have FEAT_ETS)

    For QEMU's implementation we don't reorder translation table walk accesses, and we guarantee to finish the TLB maintenance as soon as the TLB op is done (the tlb_flush functions will complete at the end of the TLB operation, and TLB ops always end the TB because they're sysreg writes). So we're already compliant and all we need to do is say so in the ID registers for the 'max' CPU.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <20220819110052.2942289-6-peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* | target/arm: Implement ID_DFR1  (Peter Maydell, 2022-09-14; 3 files, -2/+5)

    In Armv8.6, a new AArch32 ID register ID_DFR1 is defined; implement it. We don't have any CPUs with features that they need to advertise here yet, but plumbing in the ID register gives it the right name when debugging and will help in future when we do add a CPU that has non-zero ID_DFR1 fields.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <20220819110052.2942289-5-peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* | target/arm: Implement ID_MMFR5  (Peter Maydell, 2022-09-14; 3 files, -2/+5)

    In Armv8.6 a new AArch32 ID register ID_MMFR5 is defined. Implement this; we want to be able to use it to report to the guest that we implement FEAT_ETS.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <20220819110052.2942289-4-peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* | target/arm: Sort KVM reads of AArch32 ID registers into encoding order  (Peter Maydell, 2022-09-14; 1 file, -2/+2)

    The code that reads the AArch32 ID registers from KVM in kvm_arm_get_host_cpu_features() does so almost but not quite in encoding order. Move the read of ID_PFR2 down so it's really in encoding order.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <20220819110052.2942289-3-peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* | target/arm: Make cpregs 0, c0, c{3-15}, {0-7} correctly RAZ in v8  (Peter Maydell, 2022-09-14; 1 file, -5/+60)

    In the AArch32 ID register scheme, coprocessor registers with encoding cp15, 0, c0, c{0-7}, {0-7} are all in the space covered by what in v6 and v7 was called the "CPUID scheme", and are supposed to RAZ if they're not allocated to a specific ID register. For our pre-v8 CPUs we get this right, because the regdefs in id_pre_v8_midr_cp_reginfo[] cover these RAZ requirements.

    However for v8 we failed to put in the necessary patterns to cover this, so we end up UNDEFing on everything we didn't have an ID register for. This is a problem because in Armv8 some encodings in 0, c0, c3, {0-7} are now being used for new ID registers, and guests might thus start trying to read them. (We already have one of these: ID_PFR2.)

    For v8 CPUs, we already have regdefs for 0, c0, c{0-2}, {0-7} (that is, the space is completely allocated with no reserved spaces). Add entries to v8_idregs[] covering 0, c0, c3, {0-7}:
     * c3, {0-2} is the reserved AArch32 space corresponding to the AArch64 MVFR[012]_EL1
     * c3, {3,5,6,7} are reserved RAZ for both AArch32 and AArch64 (in fact some of these are given defined meanings in Armv8.6, but we don't implement them yet)
     * c3, 4 is ID_PFR2 (already defined)

    We then programmatically add RAZ patterns for AArch32 for 0, c0, c{4..15}, {0-7}:
     * c4-c7 are unused, and not shared with AArch64 (these are the encodings corresponding to where the AArch64 specific ID registers live in the system register space)
     * c8-c15 weren't required to RAZ in v6/v7, but v8 extends the AArch32 reserved-should-RAZ space to cover these; the equivalent area of the AArch64 sysreg space is not defined as must-RAZ

    Note that the architecture allows some registers in this space to return an UNKNOWN value; we always return 0.

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <20220819110052.2942289-2-peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

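    For flavour, a hedged sketch of what one reserved-RAZ AArch32 pattern looks like in QEMU's cpregs style. The name, encoding and placement here are illustrative only; see the actual v8_idregs[] additions in the patch for the real definitions.

        /* Illustrative ARMCPRegInfo entry (not copied from the commit): a
         * constant-zero, read-only AArch32 register wildcarded over opc2. */
        { .name = "RES0_C0_C4", .state = ARM_CP_STATE_AA32,
          .cp = 15, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = CP_ANY,
          .access = PL1_R, .type = ARM_CP_CONST, .resetvalue = 0 },
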
* | target/arm: Add cortex-a35  (Hao Wu, 2022-09-14; 1 file, -0/+80)

    Add the Cortex-A35 core and enable it for the virt board.

    Signed-off-by: Hao Wu <wuhaotsh@google.com>
    Reviewed-by: Joe Komlodi <komlodi@google.com>
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Message-Id: <20220819002015.1663247-1-wuhaotsh@google.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* target/riscv: Update the privilege field for sscofpmf CSRs  (Atish Patra, 2022-09-07; 1 file, -30/+60)

    The sscofpmf extension was ratified as a part of priv spec v1.12. Mark the csr_ops accordingly.

    Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Signed-off-by: Atish Patra <atishp@rivosinc.com>
    Message-Id: <20220824221701.41932-6-atishp@rivosinc.com>
    Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

* hw/riscv: virt: Add PMU DT node to the device tree  (Atish Patra, 2022-09-07; 2 files, -0/+58)

    The QEMU virt machine can support a few cache events and cycle/instret counters. It also supports counter overflow for these events. Add a DT node so that OpenSBI and the Linux kernel are aware of the virt machine capabilities. There are some dummy nodes added for testing as well.

    Acked-by: Alistair Francis <alistair.francis@wdc.com>
    Signed-off-by: Atish Patra <atish.patra@wdc.com>
    Signed-off-by: Atish Patra <atishp@rivosinc.com>
    Message-Id: <20220824221701.41932-5-atishp@rivosinc.com>
    Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

* target/riscv: Add few cache related PMU events  (Atish Patra, 2022-09-07; 1 file, -0/+25)

    QEMU can monitor the following cache related PMU events through the tlb_fill functions:
     1. DTLB load/store miss
     3. ITLB prefetch miss
    Increment the PMU counter in the tlb_fill function.

    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Tested-by: Heiko Stuebner <heiko@sntech.de>
    Signed-off-by: Atish Patra <atish.patra@wdc.com>
    Signed-off-by: Atish Patra <atishp@rivosinc.com>
    Message-Id: <20220824221701.41932-4-atishp@rivosinc.com>
    Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

* target/riscv: Simplify counter predicate function  (Atish Patra, 2022-09-07; 1 file, -101/+9)

    All the hpmcounters and the fixed counters (CY, IR, TM) can be represented as a unified counter. Thus, the predicate function doesn't need to handle each case separately. Simplify the predicate function so that we just handle things differently between RV32/RV64 and S/HS mode.

    Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
    Acked-by: Alistair Francis <alistair.francis@wdc.com>
    Signed-off-by: Atish Patra <atishp@rivosinc.com>
    Message-Id: <20220824221701.41932-3-atishp@rivosinc.com>
    Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

* target/riscv: Add sscofpmf extension support  (Atish Patra, 2022-09-07; 7 files, -11/+623)

    The Sscofpmf ('Ss' for Privileged arch and Supervisor-level extensions, and 'cofpmf' for Count OverFlow and Privilege Mode Filtering) extension allows perf to handle overflow interrupts and filtering support. This patch provides a framework for programmable counters to leverage the extension. As the extension doesn't have any provision for the overflow bit for fixed counters, the fixed events can also be monitored using programmable counters. The underlying counters for cycle and instruction counters are always running. Thus, a separate timer device is programmed to handle the overflow.

    Tested-by: Heiko Stuebner <heiko@sntech.de>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Signed-off-by: Atish Patra <atish.patra@wdc.com>
    Signed-off-by: Atish Patra <atishp@rivosinc.com>
    Message-Id: <20220824221701.41932-2-atishp@rivosinc.com>
    Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

* target/riscv: Add vstimecmp support  (Atish Patra, 2022-09-07; 6 files, -6/+118)

    The vstimecmp CSR allows the guest OS to program the next guest timer interrupt directly. Thus, the hypervisor no longer needs to inject the timer interrupt into the guest if vstimecmp is used. This was ratified as a part of the Sstc extension.

    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Signed-off-by: Atish Patra <atishp@rivosinc.com>
    Message-Id: <20220824221357.41070-4-atishp@rivosinc.com>
    Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

* target/riscv: Add stimecmp support  (Atish Patra, 2022-09-07; 8 files, -1/+235)

    stimecmp allows the supervisor mode to update the stimecmp CSR directly to program the next timer interrupt. This CSR is part of the Sstc extension which was ratified recently.

    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Signed-off-by: Atish Patra <atishp@rivosinc.com>
    Message-Id: <20220824221357.41070-3-atishp@rivosinc.com>
    Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

* hw/intc: Move mtimer/mtimecmp to aclint  (Atish Patra, 2022-09-07; 2 files, -5/+2)

    Historically, the mtime/mtimecmp registers have been part of the CPU because they are per hart entities. However, they actually belong to the ACLINT, which is an MMIO device. Move them to the ACLINT device. This also emulates the real hardware more closely.

    Reviewed-by: Anup Patel <anup@brainfault.org>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
    Signed-off-by: Atish Patra <atishp@rivosinc.com>
    Message-Id: <20220824221357.41070-2-atishp@rivosinc.com>
    Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

* target/riscv: Use official extension names for AIA CSRs  (Anup Patel, 2022-09-07; 4 files, -14/+26)

    The arch review of the AIA spec is complete and we now have official extension names for AIA: Smaia (M-mode AIA CSRs) and Ssaia (S-mode AIA CSRs). Refer to section 1.6 of the latest AIA v0.3.1 stable specification at https://github.com/riscv/riscv-aia/releases/download/0.3.1-draft.32/riscv-interrupts-032.pdf

    Based on the above, we update QEMU RISC-V to:
     1) Have separate config options for the Smaia and Ssaia extensions which replace RISCV_FEATURE_AIA in CPU features
     2) Not generate the AIA INTC compatible string in the virt machine

    Signed-off-by: Anup Patel <apatel@ventanamicro.com>
    Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Message-id: 20220820042958.377018-1-apatel@ventanamicro.com
    Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

* target/riscv: Add xicondops in ISA entry  (Rahul Pathak, 2022-09-07; 1 file, -0/+1)

    XVentanaCondOps is a Ventana custom extension. Add its extension entry in the ISA Ext array.

    Signed-off-by: Rahul Pathak <rpathak@ventanamicro.com>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Message-id: 20220816045408.1231135-1-rpathak@ventanamicro.com
    Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

* target/riscv: Remove additional priv version check for mcountinhibit  (Atish Patra, 2022-09-07; 1 file, -8/+0)

    With .min_priv_version, an additional priv version check is unnecessary for the mcountinhibit read/write functions.

    Reviewed-by: Heiko Stuebner <heiko@sntech.de>
    Tested-by: Heiko Stuebner <heiko@sntech.de>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Signed-off-by: Atish Patra <atishp@rivosinc.com>
    Message-Id: <20220816232321.558250-7-atishp@rivosinc.com>
    Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

* target/riscv: Fix priority of csr related check in riscv_csrrw_check  (Weiwei Li, 2022-09-07; 1 file, -19/+25)

    Normally, riscv_csrrw_check is called when executing Zicsr instructions, and we can only do access control for CSRs that exist. So the priority of the CSR related checks, from highest to lowest, should be as follows:
     1) check whether Zicsr is supported: raise RISCV_EXCP_ILLEGAL_INST if not
     2) check whether the CSR exists: raise RISCV_EXCP_ILLEGAL_INST if not
     3) do access control: raise RISCV_EXCP_ILLEGAL_INST or RISCV_EXCP_VIRT_INSTRUCTION_FAULT if not allowed

    The predicates contain parts of the function of both 2) and 3), so they need to be placed in the middle of riscv_csrrw_check.

    Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
    Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Message-Id: <20220803123652.3700-1-liweiwei@iscas.ac.cn>
    Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

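    A minimal sketch of that ordering with illustrative types and callbacks (not QEMU's riscv_csrrw_check() signature), shown only to make the three-step priority concrete.

        #include <stdbool.h>

        typedef enum { EXCP_NONE, EXCP_ILLEGAL_INST, EXCP_VIRT_INSTRUCTION_FAULT } Excp;

        typedef struct {
            bool ext_zicsr;                    /* 1) is Zicsr implemented at all? */
            bool (*csr_exists)(int csrno);     /* 2) is this CSR defined?         */
            Excp (*csr_predicate)(int csrno);  /* 3) access control (predicate)   */
        } CsrCheckCtx;

        static Excp csrrw_check(const CsrCheckCtx *ctx, int csrno)
        {
            if (!ctx->ext_zicsr) {
                return EXCP_ILLEGAL_INST;      /* no Zicsr: always illegal        */
            }
            if (!ctx->csr_exists(csrno)) {
                return EXCP_ILLEGAL_INST;      /* unimplemented CSR               */
            }
            return ctx->csr_predicate(csrno);  /* may be illegal or virt fault    */
        }
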
* target/riscv: Add Zihintpause support  (Dao Lu, 2022-09-07; 4 files, -1/+25)

    Added support for the RISC-V PAUSE instruction from the Zihintpause extension, enabled by default.

    Tested-by: Heiko Stuebner <heiko@sntech.de>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Signed-off-by: Dao Lu <daolu@rivosinc.com>
    Message-Id: <20220725034728.2620750-2-daolu@rivosinc.com>
    Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

* target/riscv: rvv: Add option 'rvv_ma_all_1s' to enable optional mask agnostic behavior  (eopXD, 2022-09-07; 1 file, -0/+1)

    According to the v-spec, mask agnostic behavior can either be kept as undisturbed or set the elements' bits to all 1s. To distinguish the difference of mask policies, QEMU should be able to simulate the mask agnostic behavior as "set mask elements' bits to all 1s".

    There are multiple possibilities for agnostic elements according to the v-spec. The main intent of this patch set is to add an option that can distinguish between mask policies. Setting agnostic elements to all 1s allows QEMU to express this.

    This commit adds the option 'rvv_ma_all_1s' to enable the behavior; it is disabled by default.

    Signed-off-by: eop Chen <eop.chen@sifive.com>
    Reviewed-by: Frank Chang <frank.chang@sifive.com>
    Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Message-Id: <165570784143.17634.35095816584573691-10@git.sr.ht>
    Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

* target/riscv: rvv: Add mask agnostic for vector permutation instructions  (Yueh-Ting (eop) Chen, 2022-09-07; 2 files, -2/+25)

    Signed-off-by: eop Chen <eop.chen@sifive.com>
    Reviewed-by: Frank Chang <frank.chang@sifive.com>
    Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
    Acked-by: Alistair Francis <alistair.francis@wdc.com>
    Message-Id: <165570784143.17634.35095816584573691-9@git.sr.ht>
    Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

* target/riscv: rvv: Add mask agnostic for vector mask instructions  (Yueh-Ting (eop) Chen, 2022-09-07; 2 files, -0/+14)

    Signed-off-by: eop Chen <eop.chen@sifive.com>
    Reviewed-by: Frank Chang <frank.chang@sifive.com>
    Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
    Acked-by: Alistair Francis <alistair.francis@wdc.com>
    Message-Id: <165570784143.17634.35095816584573691-8@git.sr.ht>
    Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

* target/riscv: rvv: Add mask agnostic for vector floating-point instructions  (Yueh-Ting (eop) Chen, 2022-09-07; 2 files, -0/+38)

    Signed-off-by: eop Chen <eop.chen@sifive.com>
    Reviewed-by: Frank Chang <frank.chang@sifive.com>
    Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
    Acked-by: Alistair Francis <alistair.francis@wdc.com>
    Message-Id: <165570784143.17634.35095816584573691-7@git.sr.ht>
    Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

* target/riscv: rvv: Add mask agnostic for vector fix-point arithmetic instructions  (Yueh-Ting (eop) Chen, 2022-09-07; 1 file, -10/+16)

    Signed-off-by: eop Chen <eop.chen@sifive.com>
    Reviewed-by: Frank Chang <frank.chang@sifive.com>
    Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
    Acked-by: Alistair Francis <alistair.francis@wdc.com>
    Message-Id: <165570784143.17634.35095816584573691-6@git.sr.ht>
    Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

* target/riscv: rvv: Add mask agnostic for vector integer comparison instructions  (Yueh-Ting (eop) Chen, 2022-09-07; 2 files, -0/+11)

    Signed-off-by: eop Chen <eop.chen@sifive.com>
    Reviewed-by: Frank Chang <frank.chang@sifive.com>
    Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
    Acked-by: Alistair Francis <alistair.francis@wdc.com>
    Message-Id: <165570784143.17634.35095816584573691-5@git.sr.ht>
    Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

* target/riscv: rvv: Add mask agnostic for vector integer shift instructions  (Yueh-Ting (eop) Chen, 2022-09-07; 2 files, -0/+8)

    Signed-off-by: eop Chen <eop.chen@sifive.com>
    Reviewed-by: Frank Chang <frank.chang@sifive.com>
    Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
    Acked-by: Alistair Francis <alistair.francis@wdc.com>
    Message-Id: <165570784143.17634.35095816584573691-4@git.sr.ht>
    Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

* target/riscv: rvv: Add mask agnostic for vx instructions  (Yueh-Ting (eop) Chen, 2022-09-07; 2 files, -0/+5)

    Signed-off-by: eop Chen <eop.chen@sifive.com>
    Reviewed-by: Frank Chang <frank.chang@sifive.com>
    Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
    Acked-by: Alistair Francis <alistair.francis@wdc.com>
    Message-Id: <165570784143.17634.35095816584573691-3@git.sr.ht>
    Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

* target/riscv: rvv: Add mask agnostic for vector load / store instructions  (Yueh-Ting (eop) Chen, 2022-09-07; 2 files, -11/+29)

    Signed-off-by: eop Chen <eop.chen@sifive.com>
    Reviewed-by: Frank Chang <frank.chang@sifive.com>
    Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
    Acked-by: Alistair Francis <alistair.francis@wdc.com>
    Message-Id: <165570784143.17634.35095816584573691-2@git.sr.ht>
    Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

* target/riscv: rvv: Add mask agnostic for vv instructions  (Yueh-Ting (eop) Chen, 2022-09-07; 6 files, -2/+20)

    According to the v-spec, mask agnostic behavior can either be kept as undisturbed or set the elements' bits to all 1s. To distinguish the difference of mask policies, QEMU should be able to simulate the mask agnostic behavior as "set mask elements' bits to all 1s".

    There are multiple possibilities for agnostic elements according to the v-spec. The main intent of this patch set is to add an option that can distinguish between mask policies. Setting agnostic elements to all 1s allows QEMU to express this.

    This is the first commit regarding the optional mask agnostic behavior. Follow-up commits will add this optional behavior for all rvv instructions.

    Signed-off-by: eop Chen <eop.chen@sifive.com>
    Reviewed-by: Frank Chang <frank.chang@sifive.com>
    Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Message-Id: <165570784143.17634.35095816584573691-1@git.sr.ht>
    Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

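    A minimal sketch of the "set masked-off elements to all 1s" policy described above, using a flat byte-per-element layout purely for illustration; QEMU's real vector register layout, helpers, and option plumbing differ.

        #include <stdbool.h>
        #include <stdint.h>

        /*
         * After a masked operation with vma set, elements whose mask bit is
         * clear may be written with all 1s; with the option disabled they
         * are left undisturbed. vl is the vector length in elements.
         */
        static void mask_agnostic_fill(uint8_t *vd, const uint8_t *v0_mask,
                                       uint32_t vl, bool vma, bool rvv_ma_all_1s)
        {
            if (!vma || !rvv_ma_all_1s) {
                return;                               /* keep undisturbed */
            }
            for (uint32_t i = 0; i < vl; i++) {
                bool active = v0_mask[i / 8] & (1u << (i % 8));
                if (!active) {
                    vd[i] = 0xff;                     /* agnostic: all 1s */
                }
            }
        }
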
* target/riscv: Fix typo and restore Pointer Masking functionality for RISC-V  (Alexey Baturo, 2022-09-07; 1 file, -1/+1)

    Fixes: 4302bef9e178 ("target/riscv: Calculate address according to XLEN")

    Signed-off-by: Alexey Baturo <baturo.alexey@gmail.com>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Message-Id: <20220717101543.478533-2-space.monkey.delivers@gmail.com>
    Signed-off-by: Alistair Francis <alistair.francis@wdc.com>