path: root/include/exec/cpu-all.h
Commit message | Author | Age | Files | Lines
* cpu-timers, icount: new modules (Claudio Fontana, 2020-10-05; 1 file, -0/+4)

Refactoring of cpus.c continues with cpu timer state extraction.

cpu-timers: responsible for the softmmu cpu timers state, including cpu clocks and ticks.

icount: counts the TCG instructions executed. As such it is specific to the TCG accelerator. Therefore, it is built only under CONFIG_TCG.

One complication is due to qtest, which uses an icount field to warp time as part of qtest (qtest_clock_warp). In order to solve this problem, provide a separate counter for qtest. This requires fixing assumptions scattered in the code that qtest_enabled() implies icount_enabled(), checking each specific case.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
[remove redundant initialization with qemu_spice_init]
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
[fix lingering calls to icount_get]
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* meson: rename .inc.h files to .h.inc (Paolo Bonzini, 2020-08-21; 1 file, -5/+5)

Make it consistent with '.c.inc' and '.rst.inc'.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* osdep: Make MIN/MAX evaluate arguments only once (Eric Blake, 2020-06-26; 1 file, -5/+3)

I'm not aware of any immediate bugs in qemu where a second runtime evaluation of the arguments to MIN() or MAX() causes a problem, but proactively preventing such abuse is easier than falling prey to an unintended case down the road. At any rate, here's the conversation that sparked the current patch:
https://lists.gnu.org/archive/html/qemu-devel/2018-12/msg05718.html

Update the MIN/MAX macros to only evaluate their argument once at runtime; this uses typeof(1 ? (a) : (b)) to ensure that we are promoting the temporaries to the same type as the final comparison (we have to trigger type promotion, as typeof(bitfield) won't compile; and we can't use typeof((a) + (b)) or even typeof((a) + 0), as some of our uses of MAX are on void* pointers where such addition is undefined).

However, we are unable to work around gcc refusing to compile ({}) in a constant context (such as the array length of a static variable), even when only used in the dead branch of a __builtin_choose_expr(), so we have to provide a second macro pair MIN_CONST and MAX_CONST for use when both arguments are known to be compile-time constants and where the result must also be usable as a constant; this second form evaluates arguments multiple times but that doesn't matter for constants. By using a void expression as the expansion if a non-constant is presented to this second form, we can enlist the compiler to ensure the double evaluation is not attempted on non-constants.

Alas, as both macros now rely on compiler intrinsics, they are no longer usable in preprocessor #if conditions; those will just have to be open-coded or the logic rewritten into #define or runtime 'if' conditions (but where the compiler dead-code-elimination will probably still apply).

I tested that both gcc 10.1.1 and clang 10.0.0 produce errors for all forms of macro mis-use. As the errors can sometimes be cryptic, I'm demonstrating the gcc output.

Use of MIN when MIN_CONST is needed:

    In file included from /home/eblake/qemu/qemu-img.c:25:
    /home/eblake/qemu/include/qemu/osdep.h:249:5: error: braced-group within expression allowed only inside a function
      249 |     ({ \
          |     ^
    /home/eblake/qemu/qemu-img.c:92:12: note: in expansion of macro ‘MIN’
       92 | char array[MIN(1, 2)] = "";
          |            ^~~

Use of MIN_CONST when MIN is needed:

    /home/eblake/qemu/qemu-img.c: In function ‘is_allocated_sectors’:
    /home/eblake/qemu/qemu-img.c:1225:15: error: void value not ignored as it ought to be
     1225 |     i = MIN_CONST(i, n);
          |               ^

Use of MIN in the preprocessor:

    In file included from /home/eblake/qemu/accel/tcg/translate-all.c:20:
    /home/eblake/qemu/accel/tcg/translate-all.c: In function ‘page_check_range’:
    /home/eblake/qemu/include/qemu/osdep.h:249:6: error: token "{" is not valid in preprocessor expressions
      249 |     ({ \
          |      ^

Fix the resulting callsites that used #if or computed a compile-time constant min or max to use the new macros. cpu-defs.h is interesting, as CPU_TLB_DYN_MAX_BITS is sometimes used as a constant and sometimes dynamic.

It may be worth improving glib's MIN/MAX definitions to be saner, but that is a task for another day.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200625162602.700741-1-eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
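For reference, a sketch of the two macro forms the message describes, reconstructed from the description above rather than copied verbatim from osdep.h:

    /* Runtime form: each argument expands exactly once into a temporary.
     * typeof(1 ? (a) : (b)) triggers the usual arithmetic conversions,
     * so both temporaries get the type of the final comparison. */
    #define MIN(a, b)                                   \
        ({                                              \
            typeof(1 ? (a) : (b)) _a = (a), _b = (b);   \
            _a < _b ? _a : _b;                          \
        })

    /* Constant form: usable where a constant expression is required,
     * e.g. an array length. The ((void)0) branch turns any use on
     * non-constant arguments into a compile error. */
    #define MIN_CONST(a, b)                                     \
        __builtin_choose_expr(                                  \
            __builtin_constant_p(a) && __builtin_constant_p(b), \
            (a) < (b) ? (a) : (b),                              \
            ((void)0))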
* exec: Propagate cpu_memory_rw_debug() error (Philippe Mathieu-Daudé, 2020-06-10; 1 file, -0/+1)

Do not ignore the MemTxResult error type returned by the address_space_rw() API.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* accel/tcg: Relax va restrictions on 64-bit guests (Richard Henderson, 2020-05-15; 1 file, -4/+19)

We cannot at present limit a 64-bit guest to a virtual address space smaller than the host. It will mostly work to ignore this limitation, except if the guest uses high bits of the address space for tags. But it will certainly work better, as presently we can wind up failing to allocate the guest stack.

Widen our user-only page tree to the host or abi pointer width. Remove the workaround for this problem from target/alpha. Always validate guest addresses vs reserved_va, as there we control allocation ourselves.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20200513175134.19619-7-alex.bennee@linaro.org>
* exec/cpu-all: Use bool for have_guest_base (Richard Henderson, 2020-05-15; 1 file, -1/+1)

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20200513175134.19619-6-alex.bennee@linaro.org>
* accel/tcg: Add probe_access_flags (Richard Henderson, 2020-05-11; 1 file, -1/+12)

This new interface will allow targets to probe for a page and then handle watchpoints themselves. This will be most useful for vector predicated memory operations, where one page lookup can be used for many operations, and one test can avoid many watchpoint checks.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200508154359.7494-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
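A sketch of the declaration as it can be reconstructed from this description (the exact prototype has shifted across QEMU versions):

    /* Probe the page containing addr without performing an access.
     * Returns the TLB flags for the page (e.g. TLB_WATCHPOINT) and
     * stores the host address in *phost; with nonfault set, a failed
     * lookup reports TLB_INVALID_MASK instead of raising a fault. */
    int probe_access_flags(CPUArchState *env, target_ulong addr,
                           MMUAccessType access_type, int mmu_idx,
                           bool nonfault, void **phost, uintptr_t retaddr);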
* exec: Let cpu_[physical]_memory API use a boolean 'is_write' argument (Philippe Mathieu-Daudé, 2020-02-20; 1 file, -1/+1)

The 'is_write' argument is either 0 or 1. Convert it to a boolean type.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
* exec: Let the cpu_[physical]_memory API use void pointer arguments (Philippe Mathieu-Daudé, 2020-02-20; 1 file, -1/+1)

As we are only dealing with a blob buffer, use a void pointer argument. This will let us simplify other APIs.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
* exec: Cache TARGET_PAGE_MASK for TARGET_PAGE_BITS_VARY (Richard Henderson, 2019-10-28; 1 file, -2/+6)

This eliminates a set of runtime shifts. It turns out that we require TARGET_PAGE_MASK more often than TARGET_PAGE_SIZE, so redefine TARGET_PAGE_SIZE based on TARGET_PAGE_MASK instead of the other way around.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
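A sketch of the resulting arrangement, assuming the const-alias target_page variable from the commit a few entries below (field names illustrative):

    #ifdef TARGET_PAGE_BITS_VARY
    extern const struct { int bits; target_long mask; } target_page;
    # define TARGET_PAGE_BITS  target_page.bits
    # define TARGET_PAGE_MASK  target_page.mask  /* cached, no shift */
    #else
    # define TARGET_PAGE_MASK  ((target_long)-1 << TARGET_PAGE_BITS)
    #endif
    /* SIZE is now derived from MASK rather than the other way around */
    #define TARGET_PAGE_SIZE   (-(int)TARGET_PAGE_MASK)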
* exec: Promote TARGET_PAGE_MASK to target_long (Richard Henderson, 2019-10-28; 1 file, -1/+1)

There are some uint64_t uses that expect TARGET_PAGE_MASK to sign-extend when widened from a 32-bit value, so this must continue to be a signed type. Define it based on TARGET_PAGE_BITS, not TARGET_PAGE_SIZE; this will make a following patch clearer. This should not have a functional effect so far.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* exec: Restrict TARGET_PAGE_BITS_VARY assert to CONFIG_DEBUG_TCG (Richard Henderson, 2019-10-28; 1 file, -0/+4)

This reduces the size of a release build by about 10k, noticeably within the tlb miss helpers.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* exec: Use const alias for TARGET_PAGE_BITS_VARY (Richard Henderson, 2019-10-28; 1 file, -4/+10)

Using a variable that is declared "const" for this tells the compiler that it may read the value once and assume that it does not change across function calls.

For target_page_size, this means we have only one assert per function, and one read of the variable. This reduces the size of qemu-system-aarch64 by 8k.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cpu: use ROUND_UP() to define xxx_PAGE_ALIGN (Wei Yang, 2019-10-28; 1 file, -4/+3)

Use ROUND_UP() for the definition; it is a little easier to read.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20191013021145.16011-2-richardw.yang@linux.intel.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
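The change amounts to replacing the open-coded add-and-mask with the osdep.h helper; a before/after sketch:

    /* before: intent hidden in the arithmetic */
    #define TARGET_PAGE_ALIGN(addr) \
        (((addr) + TARGET_PAGE_SIZE - 1) & TARGET_PAGE_MASK)

    /* after: visibly rounds addr up to the next page boundary */
    #define TARGET_PAGE_ALIGN(addr) ROUND_UP((addr), TARGET_PAGE_SIZE)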
* cputlb: Move ROM handling from I/O path to TLB path (Richard Henderson, 2019-09-25; 1 file, -1/+4)

It does not require going through the whole I/O path in order to discard a write.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cputlb: Introduce TLB_BSWAP (Richard Henderson, 2019-09-25; 1 file, -1/+3)

Handle bswap on ram directly in load/store_helper. This fixes a bug with the previous implementation in that one cannot use the I/O path for RAM.

Fixes: a26fc6f5152b47f1
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* exec: Use TARGET_PAGE_BITS_MIN for TLB flags (Richard Henderson, 2019-09-25; 1 file, -6/+10)

These bits do not need to vary with the actual page size used by the guest.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
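A sketch of the layout this produces: the flags occupy low bits that sit below the smallest supported page size, so they can never collide with the page-aligned address stored in a TLB entry (the exact flag set varies by QEMU version):

    #define TLB_INVALID_MASK  (1 << (TARGET_PAGE_BITS_MIN - 1))
    #define TLB_NOTDIRTY      (1 << (TARGET_PAGE_BITS_MIN - 2))
    #define TLB_MMIO          (1 << (TARGET_PAGE_BITS_MIN - 3))
    #define TLB_WATCHPOINT    (1 << (TARGET_PAGE_BITS_MIN - 4))
    #define TLB_BSWAP         (1 << (TARGET_PAGE_BITS_MIN - 5))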
* cputlb: Handle watchpoints via TLB_WATCHPOINT (Richard Henderson, 2019-09-03; 1 file, -1/+4)

The raising of exceptions from check_watchpoint, buried inside of the I/O subsystem, is fundamentally broken. We do not have the helper return address with which we can unwind guest state.

Replace PHYS_SECTION_WATCH and io_mem_watch with TLB_WATCHPOINT. Move the call to cpu_check_watchpoint into the cputlb helpers where we do have the helper return address.

This allows watchpoints on RAM to bypass the full i/o access path.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cputlb: Fold TLB_RECHECK into TLB_INVALID_MASK (Richard Henderson, 2019-09-03; 1 file, -4/+1)

We had two different mechanisms to force a recheck of the tlb.

Before TLB_RECHECK was introduced, we had a PAGE_WRITE_INV bit that would immediately set TLB_INVALID_MASK, which automatically means that a second check of the tlb entry fails.

We can use the same mechanism to handle small pages. Conserve TLB_* bits by removing TLB_RECHECK.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* hw/core: Move cpu.c, cpu.h from qom/ to hw/core/ (Markus Armbruster, 2019-08-21; 1 file, -1/+1)

Suggested-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190709152053.16670-2-armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
[Rebased onto merge commit 95a9457fd44; missed instances of qom/cpu.h in comments replaced]
* Include qemu-common.h exactly where needed (Markus Armbruster, 2019-06-12; 1 file, -1/+0)

No header includes qemu-common.h after this commit, as prescribed by qemu-common.h's file comment.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190523143508.25387-5-armbru@redhat.com>
[Rebased with conflicts resolved automatically, except for
include/hw/arm/xlnx-zynqmp.h hw/arm/nrf51_soc.c hw/arm/msf2-soc.c
block/qcow2-refcount.c block/qcow2-cluster.c block/qcow2-cache.c
target/arm/cpu.h target/lm32/cpu.h target/m68k/cpu.h target/mips/cpu.h
target/moxie/cpu.h target/nios2/cpu.h target/openrisc/cpu.h
target/riscv/cpu.h target/tilegx/cpu.h target/tricore/cpu.h
target/unicore32/cpu.h target/xtensa/cpu.h;
bsd-user/main.c and net/tap-bsd.c fixed up]
* cpu: Move the softmmu tlb to CPUNegativeOffsetState (Richard Henderson, 2019-06-10; 1 file, -0/+11)

We have for some time had code within the tcg backends to handle large positive offsets from env. This move makes sure that need not happen. Indeed, we are able to assert at build time that simple offsets suffice for all hosts.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cpu: Move icount_decr to CPUNegativeOffsetState (Richard Henderson, 2019-06-10; 1 file, -0/+1)

Amusingly, we had already ignored the comment to keep this value at the end of CPUState. This restores the minimum negative offset from TCG_AREG0 for code generation.

For the couple of uses within qom/cpu.c, without NEED_CPU_H, add a pointer from the CPUState object to the IcountDecr object within CPUNegativeOffsetState.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cpu: Introduce CPUNegativeOffsetState (Richard Henderson, 2019-06-10; 1 file, -0/+24)

Nothing in there so far, but all of the plumbing done within the target ArchCPU state.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
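A sketch of where this series ends up, per this commit and the two commits above it (the struct is still empty at this exact commit):

    /* State placed at small negative offsets from env, so TCG-generated
     * code can reach it with short displacements from the env pointer. */
    typedef struct CPUNegativeOffsetState {
        CPUTLB tlb;              /* moved here by the tlb commit above */
        IcountDecr icount_decr;  /* moved here by the icount commit above */
    } CPUNegativeOffsetState;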
* cpu: Introduce cpu_set_cpustate_pointers (Richard Henderson, 2019-06-10; 1 file, -0/+11)

Consolidate some boilerplate from foo_cpu_initfn.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
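A sketch of the consolidated helper, assuming ArchCPU embeds CPUState as parent_obj and the per-target CPUArchState as env, as this series establishes:

    static inline void cpu_set_cpustate_pointers(ArchCPU *cpu)
    {
        cpu->parent_obj.env_ptr = &cpu->env;
    }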
* cpu: Introduce env_archcpu (Richard Henderson, 2019-06-10; 1 file, -2/+12)

This will replace foo_env_get_cpu with a generic definition. No changes to the target specific code so far.

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
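The generic definition is a container_of() walk from the embedded env back to its ArchCPU; a sketch per the description above:

    static inline ArchCPU *env_archcpu(CPUArchState *env)
    {
        return container_of(env, ArchCPU, env);
    }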
* cpu: Replace ENV_GET_CPU with env_cpu (Richard Henderson, 2019-06-10; 1 file, -0/+12)

Now that we have both ArchCPU and CPUArchState, we can define this generically instead of via macro in each target's cpu.h.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
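Building on env_archcpu() above, a sketch of the generic replacement for ENV_GET_CPU:

    static inline CPUState *env_cpu(CPUArchState *env)
    {
        /* every ArchCPU embeds CPUState as its first member */
        return &env_archcpu(env)->parent_obj;
    }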
* tcg: Simplify how dump_exec_info() prints (Markus Armbruster, 2019-04-18; 1 file, -1/+1)

dump_exec_info() takes an fprintf()-like callback and a FILE * to pass to it.

Its only caller hmp_info_jit() passes monitor_fprintf() and the current monitor cast to FILE *. monitor_fprintf() casts it right back, and is otherwise identical to monitor_printf(). The type-punning is ugly.

Drop the callback, and call qemu_printf() instead.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190417191805.28198-5-armbru@redhat.com>
* tcg: Simplify how dump_opcount_info() prints (Markus Armbruster, 2019-04-18; 1 file, -1/+1)

dump_opcount_info() takes an fprintf()-like callback and a FILE * to pass to it.

Its only caller hmp_info_opcount() passes monitor_fprintf() and the current monitor cast to FILE *. monitor_fprintf() casts it right back, and is otherwise identical to monitor_printf(). The type-punning is ugly.

Drop the callback, and call qemu_printf() instead.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190417191805.28198-4-armbru@redhat.com>
* unify len and addr type for memory/address APIs (Li Zhijian, 2019-02-05; 1 file, -1/+1)

Some address/memory APIs have differing types between 'hwaddr/target_ulong addr' and 'int len'. This is very unsafe, especially as some APIs will be passed a non-int len by a caller, which can overflow quietly. Below is a potential overflow case:

    dma_memory_read(uint32_t len)
      -> dma_memory_rw(uint32_t len)
        -> dma_memory_rw_relaxed(uint32_t len)
          -> address_space_rw(int len)  /* len overflows */

CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Peter Crosthwaite <crosthwaite.peter@gmail.com>
CC: Richard Henderson <rth@twiddle.net>
CC: Peter Maydell <peter.maydell@linaro.org>
CC: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
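A standalone illustration of the hazard (api_taking_int is a hypothetical function, not a QEMU API):

    #include <stdint.h>
    #include <stdio.h>

    static void api_taking_int(int len)
    {
        printf("callee sees len = %d\n", len);  /* prints -1073741824 */
    }

    int main(void)
    {
        uint32_t len = 3U * 1024 * 1024 * 1024;  /* 3 GiB */
        /* exceeds INT_MAX: implementation-defined conversion,
         * wraps negative on common ABIs */
        api_taking_int(len);
        return 0;
    }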
* tcg: Define and use new tlb_hit() and tlb_hit_page() functions (Peter Maydell, 2018-07-02; 1 file, -0/+23)

The condition to check whether an address has hit against a particular TLB entry is not completely trivial. We do this in various places, and in fact in one place (get_page_addr_code()) we have got the condition wrong.

Abstract it out into new tlb_hit() and tlb_hit_page() inline functions (one for a known-page-aligned address and one for an arbitrary address), and use them in all the places where we had the condition correct.

This is a no-behaviour-change patch; we leave fixing the buggy code in get_page_addr_code() to a subsequent patch.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20180629162122.19376-2-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
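A sketch of the two helpers, essentially as the commit introduces them (types as in that era's headers):

    /* A set TLB_INVALID_MASK bit in tlb_addr guarantees a mismatch,
     * so invalidated entries never hit. */
    static inline bool tlb_hit_page(target_ulong tlb_addr, target_ulong addr)
    {
        return addr == (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK));
    }

    static inline bool tlb_hit(target_ulong tlb_addr, target_ulong addr)
    {
        return tlb_hit_page(tlb_addr, addr & TARGET_PAGE_MASK);
    }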
* tcg: Support MMU protection regions smaller than TARGET_PAGE_SIZE (Peter Maydell, 2018-06-26; 1 file, -1/+4)

Add support for MMU protection regions that are smaller than TARGET_PAGE_SIZE. We do this by marking the TLB entry for those pages with a flag TLB_RECHECK. This flag causes us to always take the slow-path for accesses. In the slow path we can then special case them to always call tlb_fill() again, so we have the correct information for the exact address being accessed.

This change allows us to handle reading and writing from small regions; we cannot deal with execution from the small region.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180620130619.11362-2-peter.maydell@linaro.org
* bswap: Add new stn_*_p() and ldn_*_p() memory access functions (Peter Maydell, 2018-06-15; 1 file, -0/+4)

There's a common pattern in QEMU where a function needs to perform a data load or store of an N byte integer in a particular endianness. At the moment this is handled by doing a switch() on the size and calling the appropriate ld*_p or st*_p function for each size.

Provide a new family of functions ldn_*_p() and stn_*_p() which take the size as an argument and do the switch() themselves.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180611171007.4165-2-peter.maydell@linaro.org
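One family member, sketched after the pattern the message describes (the real header generates all endianness variants from a macro):

    static inline uint64_t ldn_le_p(const void *ptr, int sz)
    {
        switch (sz) {
        case 1:
            return ldub_p(ptr);
        case 2:
            return lduw_le_p(ptr);
        case 4:
            return (uint32_t)ldl_le_p(ptr);
        case 8:
            return ldq_le_p(ptr);
        default:
            g_assert_not_reached();
        }
    }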
* exec: reintroduce MemoryRegion caching (Paolo Bonzini, 2018-05-09; 1 file, -1/+5)

MemoryRegionCache was reverted to "normal" address_space_* operations for 2.9, due to lack of support for IOMMUs. Reinstate the optimizations: the IOMMU translation is cached once at address_space_cache_init, while the IOMMU lookup and target AddressSpace translation are not cached. Now that MemoryRegionCache supports IOMMUs, it becomes more widely applicable too.

The inlined fast path is defined in memory_ldst_cached.inc.h, while the slow path uses memory_ldst.inc.c as before. The smaller fast path causes a little code size reduction in MemoryRegionCache users:

    hw/virtio/virtio.o text size before: 32373
    hw/virtio/virtio.o text size after: 31941

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* exec: move memory access declarations to a common header, inline *_phys functions (Paolo Bonzini, 2018-05-09; 1 file, -45/+30)

For now, this reduces the text size very slightly due to the newly-added inlining:

    text size before: 9301965
    text size after: 9300645

Later, however, the declarations in include/exec/memory_ldst.inc.h will be reused for the MemoryRegionCache slow path functions.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* linux-user: fix mmap/munmap/mprotect/mremap/shmat (Max Filippov, 2018-03-09; 1 file, -1/+5)

In linux-user QEMU that runs for a target with TARGET_ABI_BITS bigger than L1_MAP_ADDR_SPACE_BITS, an assertion in page_set_flags fires when mmap, munmap, mprotect, mremap or shmat is called for an address outside the guest address space. mmap and mprotect should return ENOMEM in such a case.

Change the definition of GUEST_ADDR_MAX to always be the last valid guest address. Account for this change in open_self_maps. Add a macro guest_addr_valid that verifies whether a guest address is valid. Add a function guest_range_valid that verifies whether an address range is within the guest address space and does not wrap around. Use them in mmap/munmap/mprotect/mremap/shmat for error checking.

Cc: qemu-stable@nongnu.org
Cc: Riku Voipio <riku.voipio@iki.fi>
Cc: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20180307215010.30706-1-jcmvbkbc@gmail.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
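A sketch of the two checks the message names, assuming GUEST_ADDR_MAX is now the last valid guest address (types illustrative):

    #define guest_addr_valid(x) ((x) <= GUEST_ADDR_MAX)

    /* nonzero len assumed; rejects ranges that leave the guest space
     * or wrap around the end of the address space */
    static inline bool guest_range_valid(unsigned long start, unsigned long len)
    {
        return len - 1 <= GUEST_ADDR_MAX && start <= GUEST_ADDR_MAX - len + 1;
    }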
* accel/tcg: allow to invalidate a write TLB entry immediately (David Hildenbrand, 2017-10-20; 1 file, -0/+3)

Background: s390x implements Low-Address Protection (LAP). If LAP is enabled, writing to effective addresses (before any translation) 0-511 and 4096-4607 triggers a protection exception.

So we have subpage protection on the first two pages of every address space (where the lowcore, the CPU private data, resides).

By immediately invalidating the write entry but allowing the caller to continue, we force every write access to these first two pages into the slow path. We will get a tlb fault with the specific accessed addresses and can then evaluate whether protection applies or not. We have to make sure to ignore the invalid bit if tlb_fill() succeeds.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20171016202358.3633-2-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
* util: move qemu_real_host_page_size/mask to osdep.h (Emilio G. Cota, 2017-10-10; 1 file, -2/+0)

These only depend on the host and therefore belong in the common osdep, not in a target-dependent object.

While at it, query the host during an init constructor, which guarantees the page size will be well-defined throughout the execution of the program.

Suggested-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* exec: introduce MemoryRegionCache (Paolo Bonzini, 2016-12-22; 1 file, -0/+23)

Device models often have to perform multiple accesses to a single memory region that is known in advance, but would like to use "DMA-style" functions instead of address_space_map/unmap. This can happen for example when the data has to undergo endianness conversion. Introduce a new data structure to cache the result of address_space_translate without forcing usage of a host address like address_space_map does.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
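A hedged usage sketch: address_space_cache_init/destroy come from this commit, while the cached accessor name (ldl_le_phys_cached) follows the pattern of later QEMU trees and may differ by version; as, desc_pa and desc_len are placeholder variables:

    MemoryRegionCache cache = MEMORY_REGION_CACHE_INVALID;

    if (address_space_cache_init(&cache, as, desc_pa, desc_len, false) >= 0) {
        /* translation performed once at init; loads below reuse it */
        uint32_t w0 = ldl_le_phys_cached(&cache, 0);
        uint32_t w1 = ldl_le_phys_cached(&cache, 4);
        /* ... use w0/w1 ... */
        address_space_cache_destroy(&cache);
    }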
* tcg: Add EXCP_ATOMIC (Richard Henderson, 2016-10-26; 1 file, -0/+1)

When we cannot emulate an atomic operation within a parallel context, this exception allows us to stop the world and try again in a serial context.

Reviewed-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
* cpu: Support a target CPU having a variable page size (Peter Maydell, 2016-10-24; 1 file, -0/+9)

Support target CPUs having a page size which isn't known at compile time. To use this, the CPU implementation should:

 * define TARGET_PAGE_BITS_VARY
 * not define TARGET_PAGE_BITS
 * define TARGET_PAGE_BITS_MIN to the smallest value it might possibly want for TARGET_PAGE_BITS
 * call set_preferred_target_page_bits() in its realize function to indicate the actual preferred target page size for the CPU (and report any error from it)

In CONFIG_USER_ONLY, the CPU implementation should continue to define TARGET_PAGE_BITS appropriately for the guest OS page size.

Machines which want to take advantage of having the page size something larger than TARGET_PAGE_BITS_MIN must set the MachineClass minimum_page_bits field to a value which they guarantee will be no greater than the preferred page size for any CPU they create. Note that changing the target page size by setting minimum_page_bits is a migration compatibility break for that machine.

For debugging purposes, attempts to use TARGET_PAGE_SIZE before it has been finally confirmed will assert.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
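A sketch of the header-side mechanics behind the assert mentioned above (close to the original patch; later commits higher up in this log reshape it):

    #ifdef TARGET_PAGE_BITS_VARY
    extern bool target_page_bits_decided;
    extern int target_page_bits;
    # define TARGET_PAGE_BITS ({               \
            assert(target_page_bits_decided);  \
            target_page_bits;                  \
        })
    #endif
    #define TARGET_PAGE_SIZE (1 << TARGET_PAGE_BITS)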
* tcg: Improve the alignment check infrastructure (Sergey Sorokin, 2016-07-06; 1 file, -4/+12)

Some architectures (e.g. ARMv8) need the address to be aligned to a size larger than the size of the memory access itself. QEMU's current costless alignment check is enough to support such a check, but we need to support specifying an alignment size.

Signed-off-by: Sergey Sorokin <afarallax@yandex.ru>
Message-Id: <1466705806-679898-1-git-send-email-afarallax@yandex.ru>
Signed-off-by: Richard Henderson <rth@twiddle.net>
[rth: Assert in tcg_canonicalize_memop. Leave get_alignment_bits available for, though unused by, user-mode. Retain logging difference based on ALIGNED_ONLY.]
* target-*: Don't redefine cpu_exec() (Peter Crosthwaite, 2016-06-29; 1 file, -0/+2)

This function needs to be converted to a QOM hook and virtualised for multi-arch. This rename interferes, as cpu-qom will not have access to the renaming, causing name divergence. The rename doesn't really do anything anyway, so just delete it.

Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Message-Id: <69bd25a8678b8b31b91cd9760c777bed1aafb44e.1437212383.git.crosthwaite.peter@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Peter Crosthwaite <crosthwaitepeter@gmail.com>
* qemu-common.h: Drop WORDS_ALIGNED define (Peter Maydell, 2016-06-07; 1 file, -5/+0)

The WORDS_ALIGNED #define is not used anywhere, and hasn't been since 2013 when commit 612d590ebc6cef rewrote the various ld<type>_<endian>_p functions to not use it.

Remove the #define and the comment describing it. Also remove the line in the comment about TARGET_WORDS_ALIGNED, since it has never actually existed.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
* cpu: move endian-dependent load/store functions to cpu-all.h (Paolo Bonzini, 2016-05-19; 1 file, -0/+25)

Disentangle cpu-common.h and memory.h from NEED_CPU_H. Prototypes are not defined for !NEED_CPU_H, so remove them from poison.h too. Only macros need poisoning.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* include: Clean up includes (Peter Maydell, 2016-02-23; 1 file, -1/+0)

Clean up includes so that osdep.h is included first and headers which it implies are not included manually.

This commit was created with scripts/clean-includes.

NB: If this commit breaks compilation for your out-of-tree patchseries or fork, then you need to make sure you add #include "qemu/osdep.h" to any new .c files that you have.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
* translate-all: ensure host page mask is always extended with 1's (Paolo Bonzini, 2015-12-02; 1 file, -3/+5)

Anthony reported that >4GB guests on Xen with 32bit QEMU broke after commit 4ed023c ("Round up RAMBlock sizes to host page sizes", 2015-11-05). In that patch sizes are masked against qemu_host_page_size/mask which are uintptr_t, and thus 32bit on a 32bit QEMU, even though the ram space might be bigger than 4GB on Xen.

Since ram_addr_t is not available on user-mode emulation targets, ensure that we get a sign extension when masking away the low bits of the address. Remove the ~10 year old scary comment that the type of these variables is probably wrong, with another equally scary comment. The new comment however does not have "???" in it, which is arguably an improvement.

For completeness use the alignment macros in linux-user and bsd-user instead of manually doing an &. linux-user and bsd-user are not affected by the Xen issue, however.

Reviewed-by: Juan Quintela <quintela@redhat.com>
Reported-by: Anthony PERARD <anthony.perard@citrix.com>
Fixes: 4ed023ce2a39ab5812d33cf4d819def168965a7f
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
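A standalone illustration of the masking pitfall (example values only, not QEMU code):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t addr = 0x123456789ULL;     /* ram offset above 4GB */
        uint32_t umask = ~(uint32_t)0xfff;  /* unsigned 32-bit page mask */
        int32_t  smask = ~(int32_t)0xfff;   /* signed 32-bit page mask */

        /* unsigned mask zero-extends: the high 32 bits are cleared */
        assert((addr & umask) == 0x23456000ULL);
        /* signed mask sign-extends: the high 32 bits survive */
        assert((addr & smask) == 0x123456000ULL);
        return 0;
    }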
* Move RAMBlock and ram_list to ram_addr.h (Dr. David Alan Gilbert, 2015-09-09; 1 file, -41/+0)

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <1439547914-18249-1-git-send-email-dgilbert@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* linux-user: remove useless macros GUEST_BASE and RESERVED_VA (Laurent Vivier, 2015-08-24; 1 file, -3/+1)

As we have removed CONFIG_USE_GUEST_BASE, we always use a guest base and the macros GUEST_BASE and RESERVED_VA become useless: replace them by their values.

Reviewed-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <1440420834-8388-1-git-send-email-laurent@vivier.eu>
Signed-off-by: Richard Henderson <rth@twiddle.net>
* linux-user: remove --enable-guest-base/--disable-guest-base (Laurent Vivier, 2015-08-24; 1 file, -5/+0)

All tcg host architectures now support the guest base, and as there is no real performance loss, it can always be enabled. In any case, use of the guest base can effectively be disabled by setting it to 0.

CONFIG_USE_GUEST_BASE was defined as (USE_GUEST_BASE && USER_ONLY). It would normally be replaced by CONFIG_USER_ONLY in non-CONFIG_USER_ONLY parts, but as some other parts already use !CONFIG_SOFTMMU, I have chosen !CONFIG_SOFTMMU instead.

Reviewed-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <1440373328-9788-2-git-send-email-laurent@vivier.eu>
Signed-off-by: Richard Henderson <rth@twiddle.net>