path: root/target/ppc/cpu.h
Commit message | Author | Date | Files | Lines
* target/ppc/cpu.h: Clean up comments in the struct CPUPPCState definition | BALATON Zoltan | 2020-02-20 | 1 | -91/+54
  The cpu env struct is quite complex, but the comments that are supposed to explain it in its definition just make it harder to read. Reformat and reword some comments to make it clearer and more readable.
  Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
  Message-Id: <8707144ab1ccf9c5c89a39c2d7a0b02307ca25d4.1581888834.git.balaton@eik.bme.hu>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* target/ppc/cpu.h: Move fpu related members closer in cpu env | BALATON Zoltan | 2020-02-20 | 1 | -5/+4
  Move fp_status and fpscr closer to other floating point and vector related members in cpu env definition so they are in one group.
  Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
  Message-Id: <5b50e9e7eec2c383ae878b397d0b2927efc9ea43.1581888834.git.balaton@eik.bme.hu>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* target/ppc/cpu.h: Remove duplicate includes | BALATON Zoltan | 2020-02-20 | 1 | -2/+0
  Commit 74433bf083b added some includes but added them twice. Since these are guarded against multiple inclusion, including them once is enough.
  Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
  Message-Id: <20200212223207.5A37574637F@zero.eik.bme.hu>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* target/ppc/cpu.h: Put macro parameter in parentheses | BALATON Zoltan | 2020-02-03 | 1 | -1/+1
  Fix PPC_INPUT macro to work with more complex expressions by protecting its argument with parentheses.
  Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
  Message-Id: <20200130021619.65FAB747871@zero.eik.bme.hu>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
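  For illustration, the fix amounts to parenthesising the macro parameter, roughly as below (a sketch; the field accessed follows the PPC_INPUT definition in cpu.h):

      /* Without the inner parentheses, an invocation such as
       * PPC_INPUT(flag ? env_a : env_b) would expand to
       * (flag ? env_a : env_b->bus_model), which parses the wrong way.
       * Wrapping the parameter keeps any argument expression intact. */
      #define PPC_INPUT(env) ((env)->bus_model)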
* target/ppc: add support for Hypervisor Facility Unavailable Exception | Cédric Le Goater | 2020-02-02 | 1 | -0/+6
  The privileged message send and clear instructions (msgsndp & msgclrp) will generate a hypervisor facility unavailable exception if they are not enabled in the HFSCR and are executed in privileged non-hypervisor state. Add checks when accessing the DPDES register and when using the msgsndp and msgclrp instructions.
  Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
  Signed-off-by: Cédric Le Goater <clg@kaod.org>
  Message-Id: <20200120104935.24449-3-clg@kaod.org>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* target/ppc: Use cpu_*_mmuidx_ra instead of MMU_MODE*_SUFFIX | Richard Henderson | 2020-01-16 | 1 | -2/+0
  There are only two uses. Within dcbz_common, the local variable mmu_idx already contains the epid computation, and we can avoid repeating it for the store. Within helper_icbiep, the usage is trivially expanded using PPC_TLB_EPID_LOAD.
  Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
  Acked-by: David Gibson <david@gibson.dropbear.id.au>
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
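  As a hedged sketch of the replacement API (the helper name and loop below are illustrative, not the actual dcbz_common() code), the cpu_*_mmuidx_ra() accessors take the MMU index and return address explicitly:

      #include "qemu/osdep.h"
      #include "cpu.h"
      #include "exec/cpu_ldst.h"

      /* Zero a cache line through an explicit MMU index, in the style the
       * commit describes for the dcbz store path. */
      static void zero_line_mmuidx(CPUPPCState *env, target_ulong addr,
                                   int line_size, int mmu_idx, uintptr_t ra)
      {
          for (int i = 0; i < line_size; i++) {
              cpu_stb_mmuidx_ra(env, addr + i, 0, mmu_idx, ra);
          }
      }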
* target/ppc: Add SPR TBU40 | Suraj Jitindar Singh | 2019-12-17 | 1 | -0/+1
  The spr TBU40 is used to set the upper 40 bits of the timebase register, present on POWER5+ and later processors. This register can only be written by the hypervisor, and cannot be read.
  Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
  Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
  Signed-off-by: Cédric Le Goater <clg@kaod.org>
  Message-Id: <20191128134700.16091-5-clg@kaod.org>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* target/ppc: Add SPR ASDR | Suraj Jitindar Singh | 2019-12-17 | 1 | -0/+1
  The Access Segment Descriptor Register (ASDR) provides information about the storage element when taking a hypervisor storage interrupt. When performing nested radix address translation, this is normally the guest real address. This register is present on POWER9 processors and later. Implement the ASDR; note that read and write access is limited to the hypervisor.
  Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
  Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
  Signed-off-by: Cédric Le Goater <clg@kaod.org>
  Message-Id: <20191128134700.16091-4-clg@kaod.org>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* target/ppc: Work [S]PURR implementation and add HV support | Suraj Jitindar Singh | 2019-12-17 | 1 | -0/+1
  The Processor Utilisation of Resources Register (PURR) and Scaled Processor Utilisation of Resources Register (SPURR) provide an estimate of the resources used by the thread, present on POWER7 and later processors. Currently the [S]PURR registers simply count at the rate of the timebase. Preserve this behaviour but rework the implementation to store an offset like the timebase rather than doing the calculation manually. Also allow hypervisor write access to the register along with the currently available read access.
  Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
  Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
  [ clg: rebased on current ppc tree ]
  Signed-off-by: Cédric Le Goater <clg@kaod.org>
  Message-Id: <20191128134700.16091-3-clg@kaod.org>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* target/ppc: Implement the VTB for HV access | Suraj Jitindar Singh | 2019-12-17 | 1 | -0/+2
  The virtual timebase register (VTB) is a 64-bit register which increments at the same rate as the timebase register, present on POWER8 and later processors. The register is able to be read/written by the hypervisor and read by the supervisor. All other accesses are illegal. Currently the VTB is just an alias for the timebase (TB) register. Implement the VTB so that it can be read/written independently of the TB. Make use of the existing method for accessing timebase facilities, whereby the compensation is stored and used to compute the value on reads and is updated on writes.
  Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
  [ clg: rebased on current ppc tree ]
  Signed-off-by: Cédric Le Goater <clg@kaod.org>
  Message-Id: <20191128134700.16091-2-clg@kaod.org>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
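  The compensation scheme mentioned above can be pictured with a minimal sketch (generic C, not the actual QEMU timebase helpers): only an offset against the free-running timebase is stored, so the register keeps ticking on its own.

      #include <stdint.h>

      /* Reads add the stored offset to the current timebase; writes
       * recompute the offset from the requested value. */
      typedef struct { int64_t vtb_offset; } TimebaseState;

      static uint64_t vtb_read(const TimebaseState *s, uint64_t tb_now)
      {
          return tb_now + s->vtb_offset;
      }

      static void vtb_write(TimebaseState *s, uint64_t tb_now, uint64_t value)
      {
          s->vtb_offset = value - tb_now;
      }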
* target/ppc: Add POWER10 DD1.0 model information | Cédric Le Goater | 2019-12-17 | 1 | -0/+1
  This adds to QEMU a new CPU model for the POWER10 processor with the same capabilities as a POWER9 processor. The model will be extended when support is completed.
  Signed-off-by: Cédric Le Goater <clg@kaod.org>
  Message-Id: <20191205184454.10722-2-clg@kaod.org>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* ppc: Make PPCVirtualHypervisor an incomplete type | Greg Kurz | 2019-12-17 | 1 | -4/+0
  PPCVirtualHypervisor is an interface instance. It should never be dereferenced. Drop the dummy type definition for extra safety, which is the common practice with QOM interfaces.
  Signed-off-by: Greg Kurz <groug@kaod.org>
  Message-Id: <157589808041.21182.18121655959115011353.stgit@bahia.lan>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* ppc: Don't use CPUPPCState::irq_input_state with modern Book3s CPU models | Greg Kurz | 2019-12-17 | 1 | -1/+3
  The power7_set_irq() and power9_set_irq() functions set this but it is never actually used. Modern Book3s compatible CPUs are only supported by the pnv and spapr machines. They have an interrupt controller, XICS for POWER7/8 and XIVE for POWER9, whose models don't require tracking IRQ input states at the CPU level. Drop these lines to avoid confusion.
  Signed-off-by: Greg Kurz <groug@kaod.org>
  Message-Id: <157548862861.3650476.16622818876928044450.stgit@bahia.lan>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* target/ppc: update {get,set}_dfp{64,128}() helper functions to read/write DFP numbers correctly | Mark Cave-Ayland | 2019-10-04 | 1 | -0/+1
  Since commit ef96e3ae96 "target/ppc: move FP and VMX registers into aligned vsr register array" FP registers are no longer stored consecutively in memory and so the current method of combining FP register pairs into DFP numbers is incorrect. Firstly update the definition of the dh_*_fprp defines in helper.h to reflect that FP registers are now stored as part of an array of ppc_vsr_t elements rather than plain uint64_t elements, and then introduce a new ppc_fprp_t type which conceptually represents a DFP even-odd register pair to be consumed by the DFP helper functions. Finally update the new DFP {get,set}_dfp{64,128}() helper functions to convert between DFP numbers and DFP even-odd register pairs correctly, making use of the existing VsrD() macro to access the correct elements regardless of host endian.
  Fixes: ef96e3ae96 "target/ppc: move FP and VMX registers into aligned vsr register array"
  Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-Id: <20190926185801.11176-4-mark.cave-ayland@ilande.co.uk>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* ppc: Add support for 'mffscrn','mffscrni' instructions | Paul A. Clarke | 2019-10-04 | 1 | -1/+8
  ISA 3.0B added a set of Floating-Point Status and Control Register (FPSCR) instructions: mffsce, mffscdrn, mffscdrni, mffscrn, mffscrni, mffsl. This patch adds support for 'mffscrn' and 'mffscrni' instructions. 'mffscrn' and 'mffscrni' are similar to 'mffsl', except they do not return the status bits (FI, FR, FPRF) and they also set the rounding mode in the FPSCR. On CPUs without support for 'mffscrn'/'mffscrni' (below ISA 3.0), the instructions will execute identically to 'mffs'.
  Signed-off-by: Paul A. Clarke <pc@us.ibm.com>
  Message-Id: <1568817081-1345-1-git-send-email-pc@us.ibm.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* ppc: Add support for 'mffsl' instruction | Paul A. Clarke | 2019-08-21 | 1 | -5/+10
  ISA 3.0B added a set of Floating-Point Status and Control Register (FPSCR) instructions: mffsce, mffscdrn, mffscdrni, mffscrn, mffscrni, mffsl. This patch adds support for 'mffsl'. 'mffsl' is identical to 'mffs', except it only returns mode, status, and enable bits from the FPSCR. On CPUs without support for 'mffsl' (below ISA 3.0), the 'mffsl' instruction will execute identically to 'mffs'.
  Note: I renamed FPSCR_RN to FPSCR_RN0 so I could create an FPSCR_RN mask which is both bits of the FPSCR rounding mode, as defined in the ISA. I also fixed a typo in the definition of FPSCR_FR.
  Signed-off-by: Paul A. Clarke <pc@us.ibm.com>
  v4:
   - nit: added some braces to resolve a checkpatch complaint.
  v3:
   - Changed tcg_gen_and_i64 to tcg_gen_andi_i64, eliminating the need for a temporary, per review from Richard Henderson.
  v2:
   - I found that I copied too much of the 'mffs' implementation. The 'Rc' condition code bits are not needed for 'mffsl'. Removed.
   - I now free the (renamed) 'tmask' temporary.
   - I now bail early for older ISA to the original 'mffs' implementation.
  Message-Id: <1565982203-11048-1-git-send-email-pc@us.ibm.com>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
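  The rename described in the note can be sketched as follows (bit positions and the exact mask expression are illustrative; the authoritative definitions live in target/ppc/cpu.h):

      #define FPSCR_RN1   1                     /* rounding-mode bit 1 */
      #define FPSCR_RN0   0                     /* rounding-mode bit 0, formerly FPSCR_RN */
      #define FPSCR_RN    (0x3ull << FPSCR_RN0) /* mask of both rounding-mode bits */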
* target/ppc: Add Directed Privileged Door-bell Exception State (DPDES) SPR | Alexey Kardashevskiy | 2019-08-21 | 1 | -0/+1
  DPDES stores a status of a doorbell message and if it is lost in migration, the destination CPU won't receive it. This does not hit us much as IPIs complete too quickly to catch a pending one, and even if we missed one, broadcasts happen often enough to wake that CPU. This defines DPDES and registers it with KVM for migration.
  Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
  Message-Id: <20190816061733.53572-1-aik@ozlabs.ru>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* spapr: Implement dispatch tracking for tcg | Nicholas Piggin | 2019-08-21 | 1 | -0/+4
  Implement cpu_exec_enter/exit on ppc, which call into new methods of the same name in PPCVirtualHypervisorClass. These are used by spapr to implement the splpar VPA dispatch counter initially.
  Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
  Message-Id: <20190718034214.14948-2-npiggin@gmail.com>
  [dwg: Removed unnecessary CONFIG_USER_ONLY checks as suggested by gkurz]
  Reviewed-by: Greg Kurz <groug@kaod.org>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* target/ppc: move opcode decode tables to PowerPCCPU | Alex Bennée | 2019-08-21 | 1 | -4/+4
  The opcode decode tables aren't really part of the CPUPPCState but an internal implementation detail for the translator. This can cause problems with memcpy in cpu_copy as any table created during ppc_cpu_realize gets written over, causing a memory leak. To avoid this, move the tables into PowerPCCPU which is better suited to hold internal implementation details.
  Attempts to fix: https://bugs.launchpad.net/qemu/+bug/1836558
  Cc: 1836558@bugs.launchpad.net
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Message-Id: <20190716121352.302-1-alex.bennee@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* migration: Move the VMStateDescription typedef to typedefs.h | Markus Armbruster | 2019-08-16 | 1 | -1/+1
  We declare incomplete struct VMStateDescription in a couple of places so we don't have to include migration/vmstate.h for the typedef. That's fine with me. However, the next commit will drop migration/vmstate.h from a massive number of compiles. Move the typedef to qemu/typedefs.h now, so I don't have to insert struct in front of VMStateDescription all over the place then.
  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
  Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
  Message-Id: <20190812052359.30071-15-armbru@redhat.com>
* Include qemu-common.h exactly where needed | Markus Armbruster | 2019-06-12 | 1 | -1/+0
  No header includes qemu-common.h after this commit, as prescribed by qemu-common.h's file comment.
  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Message-Id: <20190523143508.25387-5-armbru@redhat.com>
  [Rebased with conflicts resolved automatically, except for include/hw/arm/xlnx-zynqmp.h hw/arm/nrf51_soc.c hw/arm/msf2-soc.c block/qcow2-refcount.c block/qcow2-cluster.c block/qcow2-cache.c target/arm/cpu.h target/lm32/cpu.h target/m68k/cpu.h target/mips/cpu.h target/moxie/cpu.h target/nios2/cpu.h target/openrisc/cpu.h target/riscv/cpu.h target/tilegx/cpu.h target/tricore/cpu.h target/unicore32/cpu.h target/xtensa/cpu.h; bsd-user/main.c and net/tap-bsd.c fixed up]
* cpu: Remove CPU_COMMON | Richard Henderson | 2019-06-10 | 1 | -2/+0
  This macro is now always empty, so remove it. This leaves the entire contents of CPUArchState under the control of the guest architecture.
  Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cpu: Introduce CPUNegativeOffsetState | Richard Henderson | 2019-06-10 | 1 | -0/+2
  Nothing in there so far, but all of the plumbing done within the target ArchCPU state.
  Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cpu: Move ENV_OFFSET to exec/gen-icount.h | Richard Henderson | 2019-06-10 | 1 | -1/+0
  Now that we have ArchCPU, we can define this generically, in the one place that needs it.
  Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* target/ppc: Use env_cpu, env_archcpu | Richard Henderson | 2019-06-10 | 1 | -6/+1
  Cleanup in the boilerplate that each target must define. Replace ppc_env_get_cpu with env_archcpu. The combination CPU(ppc_env_get_cpu) should have used ENV_GET_CPU to begin; use env_cpu now.
  Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cpu: Replace ENV_GET_CPU with env_cpu | Richard Henderson | 2019-06-10 | 1 | -2/+0
  Now that we have both ArchCPU and CPUArchState, we can define this generically instead of via macro in each target's cpu.h.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Acked-by: Alistair Francis <alistair.francis@wdc.com>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cpu: Define ArchCPU | Richard Henderson | 2019-06-10 | 1 | -0/+1
  For all targets, do this just before including exec/cpu-all.h.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Acked-by: Alistair Francis <alistair.francis@wdc.com>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cpu: Define CPUArchState with typedef | Richard Henderson | 2019-06-10 | 1 | -2/+2
  For all targets, do this just before including exec/cpu-all.h.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Acked-by: Alistair Francis <alistair.francis@wdc.com>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Split out target/arch/cpu-param.h | Richard Henderson | 2019-06-10 | 1 | -38/+4
  For all targets, into this new file move TARGET_LONG_BITS, TARGET_PAGE_BITS, TARGET_PHYS_ADDR_SPACE_BITS, TARGET_VIRT_ADDR_SPACE_BITS, and NB_MMU_MODES. Include this new file from exec/cpu-defs.h. This now removes the somewhat odd requirement that target/arch/cpu.h defines TARGET_LONG_BITS before including exec/cpu-defs.h, so push the bulk of the includes within target/arch/cpu.h to the top.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Acked-by: Alistair Francis <alistair.francis@wdc.com>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
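  After the split, a target's cpu-param.h reduces to a handful of constants; an illustrative sketch of such a header follows (the guard name and values are examples for a 64-bit target, not a copy of target/ppc/cpu-param.h):

      #ifndef PPC_CPU_PARAM_H
      #define PPC_CPU_PARAM_H

      #define TARGET_LONG_BITS             64
      #define TARGET_PAGE_BITS             12
      #define TARGET_PHYS_ADDR_SPACE_BITS  64
      #define TARGET_VIRT_ADDR_SPACE_BITS  64
      #define NB_MMU_MODES                 8

      #endif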
* target/ppc: Convert to CPUClass::tlb_fill | Richard Henderson | 2019-05-10 | 1 | -4/+3
  Cc: qemu-ppc@nongnu.org
  Acked-by: David Gibson <david@gibson.dropbear.id.au>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* ppc/hash64: Rework R and C bit updates | Benjamin Herrenschmidt | 2019-04-26 | 1 | -2/+2
  With MT-TCG, we are now running translation in a racy way, thus we need to mimic hardware when it comes to updating the R and C bits, by doing byte stores. The current "store_hpte" abstraction is ill suited for this, so we replace it with two separate callbacks for setting R and C.
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Signed-off-by: Cédric Le Goater <clg@kaod.org>
  Message-Id: <20190411080004.8690-4-clg@kaod.org>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* target/ppc: Style fixes for cpu.[ch] | David Gibson | 2019-04-26 | 1 | -108/+129
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
  Reviewed-by: Cédric Le Goater <clg@kaod.org>
  Reviewed-by: Greg Kurz <groug@kaod.org>
* qom/cpu: Simplify how CPUClass::dump_state() prints | Markus Armbruster | 2019-04-18 | 1 | -2/+1
  CPUClass method dump_state() takes an fprintf()-like callback and a FILE * to pass to it. Most callers pass fprintf() and stderr. log_cpu_state() passes fprintf() and qemu_log_file. hmp_info_registers() passes monitor_fprintf() and the current monitor cast to FILE *. monitor_fprintf() casts it right back, and is otherwise identical to monitor_printf(). The callback gets passed around a lot, which is tiresome. The type-punning around monitor_fprintf() is ugly. Drop the callback, and call qemu_fprintf() instead. Also gets rid of the type-punning, since qemu_fprintf() takes NULL instead of the current monitor cast to FILE *.
  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Message-Id: <20190417191805.28198-15-armbru@redhat.com>
* qom/cpu: Simplify how CPUClass::dump_statistics() prints | Markus Armbruster | 2019-04-18 | 1 | -2/+1
  CPUClass method dump_statistics() takes an fprintf()-like callback and a FILE * to pass to it. Its only caller hmp_info_cpustats() (via cpu_dump_statistics()) passes monitor_fprintf() and the current monitor cast to FILE *. monitor_fprintf() casts it right back, and is otherwise identical to monitor_printf(). The type-punning is ugly. Drop the callback, and call qemu_printf() instead.
  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Message-Id: <20190417191805.28198-13-armbru@redhat.com>
* target: Clean up how the dump_mmu() print | Markus Armbruster | 2019-04-18 | 1 | -1/+1
  The various dump_mmu() take an fprintf()-like callback and a FILE * to pass to it, and so do their helper functions. Passing around callback and argument is rather tiresome. Most dump_mmu() are called only by the target's hmp_info_tlb(). These all pass monitor_printf() cast to fprintf_function and the current monitor cast to FILE *. SPARC's dump_mmu() gets also called from target/sparc/ldst_helper.c a few times #ifdef DEBUG_MMU. These calls pass fprintf() and stdout. The type-punning is technically undefined behaviour, but works in practice. Clean up: drop the callback, and call qemu_printf() instead.
  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Message-Id: <20190417191805.28198-11-armbru@redhat.com>
* target: Simplify how the TARGET_cpu_list() print | Markus Armbruster | 2019-04-18 | 1 | -1/+1
  The various TARGET_cpu_list() take an fprintf()-like callback and a FILE * to pass to it. Their callers (vl.c's main() via list_cpus(), bsd-user/main.c's main(), linux-user/main.c's main()) all pass fprintf() and stdout. Thus, the flexibility provided by the (rather tiresome) indirection isn't actually used. Drop the callback, and call qemu_printf() instead. Calling printf() would also work, but would make the code unsuitable for monitor context without making it simpler.
  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Message-Id: <20190417191805.28198-10-armbru@redhat.com>
  Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
* target/ppc: Consolidate 64-bit server processor detection in a helper | Greg Kurz | 2019-03-29 | 1 | -0/+6
  We use PPC_SEGMENT_64B in various places to guard code that is specific to 64-bit server processors compliant with arch 2.x. Consolidate the logic in a helper macro with an explicit name.
  Signed-off-by: Greg Kurz <groug@kaod.org>
  Message-Id: <155327783157.1283071.3747129891004927299.stgit@bahia.lan>
  Tested-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
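  A sketch of the helper the commit describes, written from the description above (the exact name and form in cpu.h may differ):

      /* One named test instead of open-coded PPC_SEGMENT_64B checks;
       * assumes the CPUPPCState and PPC_SEGMENT_64B definitions from cpu.h. */
      static inline bool is_book3s_arch2x(const CPUPPCState *env)
      {
          return !!(env->insns_flags & PPC_SEGMENT_64B);
      }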
* target/ppc: introduce vsr64_offset() to simplify get_cpu_vsr{l,h}() and set_cpu_vsr{l,h}() | Mark Cave-Ayland | 2019-03-12 | 1 | -10/+10
  Now that all VSX registers are stored in host endian order, there is no need to go via different accessors depending upon the register number. Instead we introduce vsr64_offset() and use it directly from within get_cpu_vsr{l,h}() and set_cpu_vsr{l,h}(). This also allows us to rewrite avr64_offset() and fpr_offset() in terms of the new vsr64_offset() function to more clearly express the relationship between the VSX, FPR and VMX registers, and also remove vsrl_offset() which is no longer required.
  Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Message-Id: <20190307180520.13868-8-mark.cave-ayland@ilande.co.uk>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
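  A hedged sketch of the new accessor and how the translator helpers collapse onto it, reconstructed from the description (details may differ from the in-tree code):

      /* One offset helper for either 64-bit half of a VSX register,
       * endian-corrected through VsrD(). */
      static inline long vsr64_offset(int i, bool high)
      {
          return offsetof(CPUPPCState, vsr[i].VsrD(high ? 0 : 1));
      }

      static inline void get_cpu_vsrh(TCGv_i64 dst, int n)
      {
          tcg_gen_ld_i64(dst, cpu_env, vsr64_offset(n, true));
      }

      static inline void get_cpu_vsrl(TCGv_i64 dst, int n)
      {
          tcg_gen_ld_i64(dst, cpu_env, vsr64_offset(n, false));
      }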
* target/ppc: switch fpr/vsrl registers so all VSX registers are in host endian order | Mark Cave-Ayland | 2019-03-12 | 1 | -2/+2
  When VSX support was initially added, the fpr registers were added at offset 0 of the VSR register and the vsrl registers were added at offset 1. This is in contrast to the VMX registers (the last 32 VSX registers) which are stored in host-endian order. Switch the fpr/vsrl registers so that the lower 32 VSX registers are now also stored in host endian order to match the VMX registers. This ensures that TCG vector operations involving mixed VMX and VSX registers will function correctly.
  Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-Id: <20190307180520.13868-7-mark.cave-ayland@ilande.co.uk>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* target/ppc: improve avr64_offset() and use it to simplify get_avr64()/set_avr64() | Mark Cave-Ayland | 2019-03-12 | 1 | -0/+5
  By using the VsrD macro in avr64_offset() the same offset calculation can be used regardless of the host endian. This allows get_avr64() and set_avr64() to be simplified accordingly.
  Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Message-Id: <20190307180520.13868-6-mark.cave-ayland@ilande.co.uk>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* target/ppc: introduce avr_full_offset() function | Mark Cave-Ayland | 2019-03-12 | 1 | -1/+11
  All TCG vector operations require pointers to the base address of the vector rather than separate access to the top and bottom 64-bits. Convert the VMX TCG instructions to use a new avr_full_offset() function instead of avr64_offset() which can then itself be written as a simple wrapper onto vsr_full_offset(). This same function can also be reused in cpu_avr_ptr() to avoid having more than one copy of the offset calculation logic.
  Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Message-Id: <20190307180520.13868-5-mark.cave-ayland@ilande.co.uk>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
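  The wrapper relationship can be sketched as follows (a reconstruction from the description, assuming the 64-entry vsr[] array layout mentioned elsewhere in this history):

      static inline long vsr_full_offset(int i)
      {
          return offsetof(CPUPPCState, vsr[i].u64[0]);
      }

      /* The 32 VMX (Altivec) registers occupy the upper half of vsr[]. */
      static inline long avr_full_offset(int i)
      {
          return vsr_full_offset(i + 32);
      }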
* target/ppc: move Vsr* macros from internal.h to cpu.h | Mark Cave-Ayland | 2019-03-12 | 1 | -0/+20
  It isn't possible to include internal.h from cpu.h so move the Vsr* macros into cpu.h alongside the other VMX/VSX register access functions.
  Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Message-Id: <20190307180520.13868-4-mark.cave-ayland@ilande.co.uk>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
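  For context, the Vsr* accessors index the ppc_vsr_t union so that element 0 is always the architecturally highest element regardless of host byte order; roughly (a sketch of their shape, not a verbatim copy of cpu.h):

      #if defined(HOST_WORDS_BIGENDIAN)
      #define VsrB(i) u8[i]
      #define VsrH(i) u16[i]
      #define VsrW(i) u32[i]
      #define VsrD(i) u64[i]
      #else
      #define VsrB(i) u8[15 - (i)]
      #define VsrH(i) u16[7 - (i)]
      #define VsrW(i) u32[3 - (i)]
      #define VsrD(i) u64[1 - (i)]
      #endif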
* target/ppc: introduce single vsrl_offset() function | Mark Cave-Ayland | 2019-03-12 | 1 | -1/+6
  Instead of having multiple copies of the offset calculation logic, move it to a single vsrl_offset() function. This commit also renames the existing get_vsr()/set_vsr() functions to get_vsrl()/set_vsrl() which better describes their purpose.
  Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-Id: <20190307180520.13868-3-mark.cave-ayland@ilande.co.uk>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* target/ppc: introduce single fpr_offset() function | Mark Cave-Ayland | 2019-03-12 | 1 | -1/+6
  Instead of having multiple copies of the offset calculation logic, move it to a single fpr_offset() function.
  Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-Id: <20190307180520.13868-2-mark.cave-ayland@ilande.co.uk>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* target/ppc: Implement large decrementer support for TCG | Suraj Jitindar Singh | 2019-03-12 | 1 | -4/+4
  Prior to POWER9 the decrementer was a 32-bit register which decremented with each tick of the timebase. From POWER9 onwards the decrementer can be set to operate in a mode called large decrementer where it acts as an n-bit decrementing register which is visible as a 64-bit register, that is the value of the decrementer is sign extended to 64 bits (where n is implementation dependent). The mode in which the decrementer operates is controlled by the LPCR_LD bit in the logical partition control register (LPCR).
  From POWER9 onwards the HDEC (hypervisor decrementer) was enlarged to h bits, also sign extended to 64 bits (where h is implementation dependent). Note this isn't configurable and is always enabled.
  On POWER9 the large decrementer and hdec are both 56 bits, as represented by the lrg_decr_bits cpu class property. Since they are the same size we only add one property for now, which could be extended in the case they ever differ in the future. We also add the lrg_decr_bits property for POWER5+/7/8 since it is used to determine the size of the hdec, which is only generated on the POWER5+ processor and later. On these processors it is 32 bits.
  Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
  Signed-off-by: Cédric Le Goater <clg@kaod.org>
  Message-Id: <20190301024317.22137-2-sjitindarsingh@gmail.com>
  [dwg: Small style fixes]
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
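  The sign extension described above (an n-bit value presented as a 64-bit register) is the usual shift pair; a minimal sketch, not the actual QEMU helper:

      #include <stdint.h>

      /* Replicate bit n-1 of an n-bit decrementer value up to bit 63,
       * e.g. n = 56 for the POWER9 large decrementer per the text above. */
      static int64_t decr_sign_extend(uint64_t val, int nr_bits)
      {
          return (int64_t)(val << (64 - nr_bits)) >> (64 - nr_bits);
      }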
* target/ppc: Rename PATB/PATBE -> PATE | Benjamin Herrenschmidt | 2019-02-25 | 1 | -1/+5
  That "b" means "base address" and thus shouldn't be in the name of actual entries and related constants. This patch keeps the synthetic patb_entry field of the spapr virtual hypervisor unchanged until I figure out if that has an impact on the migration stream.
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Signed-off-by: Cédric Le Goater <clg@kaod.org>
  Message-Id: <20190215170029.15641-11-clg@kaod.org>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* target/ppc/spapr: Set LPCR:HR when using Radix mode | Benjamin Herrenschmidt | 2019-02-25 | 1 | -0/+1
  The HW relies on LPCR:HR along with the PATE to determine whether to use Radix or Hash mode. In fact it uses LPCR:HR more commonly than the PATE. For us, it's also more efficient to do so, especially since unlike the HW we do not maintain a cache of the current PATE and HV PATE in a generic place. Prepare the grounds for that by ensuring that LPCR:HR is set properly on SPAPR machines. Another option would have been to use a callback to get the PATE, but this gets messy when implementing bare metal support; it's much simpler (and faster) to use LPCR. Since existing migration streams may not have it, fix it up in spapr_post_load() as well based on the pseudo-PATE entry that we keep.
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Signed-off-by: Cédric Le Goater <clg@kaod.org>
  Message-Id: <20190215170029.15641-2-clg@kaod.org>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* target/ppc: Add POWER9 external interrupt model | Benjamin Herrenschmidt | 2019-02-25 | 1 | -0/+7
  Adds support for Hypervisor-directed interrupts in addition to the OS ones.
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  [clg: - modified the icp_realize() and xive_tctx_realize() to take into account explicitly the POWER9 interrupt model
        - introduced a specific power9_set_irq for POWER9 ]
  Signed-off-by: Cédric Le Goater <clg@kaod.org>
  Message-Id: <20190215161648.9600-10-clg@kaod.org>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* target/ppc: Add Hypervisor Virtualization Interrupt on POWER9 | Benjamin Herrenschmidt | 2019-02-25 | 1 | -1/+4
  This adds support for delivering that exception.
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Signed-off-by: Cédric Le Goater <clg@kaod.org>
  Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
  Message-Id: <20190215161648.9600-9-clg@kaod.org>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* target/ppc: Rename "in_pm_state" to "resume_as_sreset" | Benjamin Herrenschmidt | 2019-02-25 | 1 | -3/+3
  To better reflect what this does, as it's specific to some of the P7/P8/P9 PM states, not generic.
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Signed-off-by: Cédric Le Goater <clg@kaod.org>
  Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
  Message-Id: <20190215161648.9600-6-clg@kaod.org>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>