path: root/include/qemu/atomic.h
* qemu/atomic: Drop special case for unsupported compiler  (Philippe Mathieu-Daudé, 2020-12-15; 1 file, -17/+0)

Since commit efc6c070aca ("configure: Add a test for the minimum compiler version") the minimum compiler version required for GCC is 4.8, which has the GCC BZ#36793 bug fixed. We can safely remove the special case introduced in commit a281ebc11a6 ("virtio: add missing mb() on notification"). With clang 3.4, __ATOMIC_RELAXED is defined, so the chunk to remove (which is x86-specific) isn't reached either.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20201210134752.780923-2-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* qemu/atomic.h: rename atomic_ to qatomic_  (Stefan Hajnoczi, 2020-09-23; 1 file, -127/+131)

clang's C11 atomic_fetch_*() functions only take a C11 atomic type pointer argument. QEMU uses direct types (int, etc) and this causes a compiler error when QEMU code calls these functions in a source file that also included <stdatomic.h> via a system header file:

    $ CC=clang CXX=clang++ ./configure ... && make
    ../util/async.c:79:17: error: address argument to atomic operation must be a pointer to _Atomic type ('unsigned int *' invalid)

Avoid using atomic_*() names in QEMU's atomic.h since that namespace is used by <stdatomic.h>. Prefix QEMU's APIs with 'q' so that atomic.h and <stdatomic.h> can co-exist. I checked /usr/include on my machine and searched GitHub for existing "qatomic_" users but there seem to be none.

This patch was generated using:

    $ git grep -h -o '\<atomic\(64\)\?_[a-z0-9_]\+' include/qemu/atomic.h | \
          sort -u >/tmp/changed_identifiers
    $ for identifier in $(</tmp/changed_identifiers); do
          sed -i "s%\<$identifier\>%q$identifier%g" \
              $(git grep -I -l "\<$identifier\>")
      done

I manually fixed line-wrap issues and misaligned rST tables.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20200923105646.47864-1-stefanha@redhat.com>

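The effect of the rename is easiest to see next to a C11 include. A minimal sketch, assuming the post-rename relaxed accessors and omitting the size checks the real header performs:

    #include <stdatomic.h>   /* e.g. dragged in indirectly by a system header */

    #define qatomic_read(ptr)       __atomic_load_n(ptr, __ATOMIC_RELAXED)
    #define qatomic_set(ptr, val)   __atomic_store_n(ptr, val, __ATOMIC_RELAXED)

    static unsigned counter;

    static unsigned peek(void)
    {
        return qatomic_read(&counter);   /* no clash with the C11 atomic_* names */
    }
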
* qemu/atomic.h: add #ifdef guards for stdatomic.h  (Alex Bennée, 2020-03-27; 1 file, -0/+6)

Deep inside the FreeBSD netmap headers we end up including stdatomic.h, which clashes with QEMU's atomic functions, which are modelled along the C11 standard. To avoid a massive rename let's just ifdef around the problem.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20200326170121.13045-1-alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

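One plausible shape of such a guard, shown as an assumption rather than the exact hunk: skip QEMU's definition when <stdatomic.h> has already claimed the name.

    #ifndef atomic_fetch_add
    #define atomic_fetch_add(ptr, n) \
        ((void) __atomic_fetch_add(ptr, n, __ATOMIC_SEQ_CST))
    #endif
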
* include/qemu/atomic.h: Add signal_barrier  (Richard Henderson, 2019-07-14; 1 file, -0/+11)

We have some potential race conditions vs our user-exec signal handler that will be solved with this barrier.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

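A sketch of what such a barrier can look like, assuming the __atomic builtins are available: a signal barrier only has to stop the compiler from reordering accesses around a handler that runs on the same thread, so a signal fence (with no CPU barrier) suffices.

    #define signal_barrier()    __atomic_signal_fence(__ATOMIC_SEQ_CST)
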
* atomics: Set ATOMIC_REG_SIZE=8 for MIPS n32  (Paul Burton, 2019-01-03; 1 file, -2/+3)

ATOMIC_REG_SIZE is currently defined as the default sizeof(void *) for all MIPS host builds, including those using the n32 ABI. n32 is the MIPS64 ILP32 ABI and as such tcg/mips/tcg-target.h defines TCG_TARGET_REG_BITS as 64 for n32 builds.

If we attempt to build QEMU for an n32 host with support for a 64b target architecture then TCG_OVERSIZED_GUEST is 0 and accel/tcg/cputlb.c attempts to use atomic_* functions. This fails because ATOMIC_REG_SIZE is 4, causing the calls to QEMU_BUILD_BUG_ON(sizeof(*ptr) > ATOMIC_REG_SIZE) in the various atomic_* functions to generate errors.

Fix this by defining ATOMIC_REG_SIZE as 8 for all MIPS64 builds, which will cover both n32 (ILP32) & n64 (LP64) ABIs in much the same way as we already do for x86_64/x32.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Signed-off-by: Paul Burton <pburton@wavecomp.com>

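A sketch of the idea (the exact predefined macros tested here are assumptions): ILP32 ABIs running on 64-bit CPUs, such as x32 and MIPS n32, can still perform 64-bit atomics, so ATOMIC_REG_SIZE is widened there rather than derived from the pointer size.

    #if defined(__x86_64__) || defined(_ARCH_PPC64) || defined(__mips64)
    # define ATOMIC_REG_SIZE  8
    #else
    # define ATOMIC_REG_SIZE  sizeof(void *)
    #endif
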
* util: add atomic64  (Emilio G. Cota, 2018-10-02; 1 file, -0/+34)

This introduces read/set accessors for int64_t and uint64_t.

Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <20180910232752.31565-3-cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

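The accessors plausibly look like the following declarations (names and signatures are assumptions); making them functions rather than macros lets 32-bit hosts without native 64-bit atomics fall back to a lock-based helper in util/.

    #include <stdint.h>

    int64_t  atomic_read_i64(const int64_t *ptr);
    void     atomic_set_i64(int64_t *ptr, int64_t val);
    uint64_t atomic_read_u64(const uint64_t *ptr);
    void     atomic_set_u64(uint64_t *ptr, uint64_t val);
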
* atomic: fix comment s/x64_64/x86_64/  (Emilio G. Cota, 2018-10-02; 1 file, -1/+1)

Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <20180903171831.15446-4-cota@braap.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* atomic.h: Work around gcc spurious "unused value" warning  (Peter Maydell, 2018-05-10; 1 file, -1/+1)

Some versions of gcc produce a spurious warning if the result of __atomic_compare_exchange_n() is not used and the type involved is a signed 8 bit value:

    error: value computed is not used [-Werror=unused-value]

This has been seen on at least gcc (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609.

Work around this by using an explicit cast to void to indicate that we don't care about the return value. We don't currently use our atomic_cmpxchg() macro on any signed 8 bit types, but the upcoming support for the Arm v8.1-Atomics will require it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

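A sketch of the resulting wrapper (assumed, simplified form): the cast to void tells the compiler the discarded result is intentional.

    #include <stdbool.h>

    #define atomic_cmpxchg(ptr, old, new)    ({                                  \
        typeof(*(ptr)) _old = (old);                                             \
        (void)__atomic_compare_exchange_n(ptr, &_old, new, false,                \
                                          __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);   \
        _old;                                                                    \
    })
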
* memory: avoid "resurrection" of dead FlatViewsPaolo Bonzini2017-09-211-0/+8
| | | | | | | | | | | | | | | | | | | | | | | | | | | | It's possible for address_space_get_flatview() as it currently stands to cause a use-after-free for the returned FlatView, if the reference count is incremented after the FlatView has been replaced by a writer: thread 1 thread 2 RCU thread ------------------------------------------------------------- rcu_read_lock read as->current_map set as->current_map flatview_unref '--> call_rcu flatview_ref [ref=1] rcu_read_unlock flatview_destroy <badness> Since FlatViews are not updated very often, we can just detect the situation using a new atomic op atomic_fetch_inc_nonzero, similar to Linux's atomic_inc_not_zero, which performs the refcount increment only if it hasn't already hit zero. This is similar to Linux commit de09a9771a53 ("CRED: Fix get_task_cred() and task_state() to not resurrect dead credentials", 2010-07-29). Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
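The technique in a standalone sketch (QEMU's version is a macro built on its own cmpxchg helper): increment the counter only while it is still non-zero, so a reader can never revive an object whose last reference has already been dropped.

    /* Returns the value seen before the increment; 0 means the object is dead. */
    static inline unsigned fetch_inc_nonzero(unsigned *ptr)
    {
        unsigned old = __atomic_load_n(ptr, __ATOMIC_SEQ_CST);

        /* On failure, __atomic_compare_exchange_n reloads 'old' with the
         * current value, so the loop re-tests it against zero. */
        while (old != 0 &&
               !__atomic_compare_exchange_n(ptr, &old, old + 1, 0,
                                            __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST)) {
        }
        return old;
    }
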
* docs: fix broken paths to docs/devel/atomics.txt  (Philippe Mathieu-Daudé, 2017-07-31; 1 file, -2/+2)

With the move of some docs/ to docs/devel/ on ac06724a71, a couple of references were not updated.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

* qemu/atomic: Loosen restrictions for 64-bit ILP32 hosts  (Richard Henderson, 2017-06-05; 1 file, -8/+26)

We need to coordinate with the TCG_OVERSIZED_GUEST test in cputlb.c, and allow 64-bit atomics even though sizeof(void *) == 4.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>

* atomics: Add __nocheck atomic operations  (Richard Henderson, 2016-10-26; 1 file, -9/+27)

While the check against sizeof(void *) is appropriate for normal usage within qemu, there are places in which we want wider operations and have checked for their existence.

Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>

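A sketch of the assumed split (using the header's QEMU_BUILD_BUG_ON and ATOMIC_REG_SIZE): the checked macro layers the size assertion on top of an unchecked variant that callers with wider, pre-verified atomics can use directly.

    #define atomic_read__nocheck(ptr) \
        __atomic_load_n(ptr, __ATOMIC_RELAXED)

    #define atomic_read(ptr) ({                                 \
        QEMU_BUILD_BUG_ON(sizeof(*(ptr)) > ATOMIC_REG_SIZE);    \
        atomic_read__nocheck(ptr);                              \
    })
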
* atomics: add atomic_op_fetch variants  (Emilio G. Cota, 2016-10-26; 1 file, -0/+17)

This paves the way for upcoming work.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1467054136-10430-9-git-send-email-cota@braap.org>

* atomics: add atomic_xor  (Emilio G. Cota, 2016-10-26; 1 file, -0/+4)

This paves the way for upcoming work.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1467054136-10430-8-git-send-email-cota@braap.org>

* atomics: Add parameters to macros  (Richard Henderson, 2016-10-26; 1 file, -5/+5)

Making these functional rather than object macros will prevent later problems with complex macro expansion.

Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>

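The object-like versus function-like distinction, using smp_wmb purely as an illustration (the commit converts several such macros; barrier() stands for the header's compiler barrier):

    #define barrier()   asm volatile("" ::: "memory")

    /* Object-like (old): substituted wherever the bare name appears, which
     * interferes with more complex macro expansion later on.
     *   #define smp_wmb    ({ barrier(); __atomic_thread_fence(__ATOMIC_RELEASE); barrier(); })
     * Function-like (new): expands only when written as a call, smp_wmb(). */
    #define smp_wmb()   ({ barrier(); __atomic_thread_fence(__ATOMIC_RELEASE); barrier(); })
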
* atomic: base mb_read/mb_set on load-acquire and store-release  (Paolo Bonzini, 2016-10-24; 1 file, -62/+33)

This introduces load-acquire and store-release operations in QEMU. For now, just use them as an implementation detail of atomic_mb_read and atomic_mb_set. Since docs/atomics.txt documents that atomic_mb_read only synchronizes with an atomic_mb_set of the same variable, we can use the new implementation everywhere instead of seq-cst loads and stores.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

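A simplified sketch of the resulting structure (the real header keeps architecture-specific refinements; smp_mb() is its full barrier):

    #define atomic_load_acquire(ptr)        __atomic_load_n(ptr, __ATOMIC_ACQUIRE)
    #define atomic_store_release(ptr, val)  __atomic_store_n(ptr, val, __ATOMIC_RELEASE)

    #define atomic_mb_read(ptr)      atomic_load_acquire(ptr)
    #define atomic_mb_set(ptr, val)  do {        \
        atomic_store_release(ptr, val);          \
        smp_mb();                                \
    } while (0)
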
* atomic: introduce smp_mb_acquire and smp_mb_release  (Paolo Bonzini, 2016-10-24; 1 file, -20/+30)

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* atomic.h: comment on use of atomic_read/set  (Alex Bennée, 2016-10-04; 1 file, -0/+6)

Add some notes on the use of the relaxed atomic access helpers and their importance for defined behaviour in C11's multi-threaded memory model.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20160930213106.20186-3-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* atomic.h: fix __SANITIZE_THREAD__ build  (Alex Bennée, 2016-10-04; 1 file, -1/+1)

Only very modern GCCs actually set this define when building with the ThreadSanitizer, so this little typo slipped through.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20160930213106.20186-2-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* atomics: Use __atomic_*_n() variant primitives  (Pranith Kumar, 2016-09-13; 1 file, -16/+8)

Use the __atomic_*_n() primitives which take the value as argument. It is not necessary to store the value locally before calling the primitive, hence saving us a stack store and load.

Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
Message-Id: <20160829171701.14025-1-bobby.prani@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

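The difference in a small standalone example (illustrative only):

    static inline void store_relaxed(int *ptr, int val)
    {
        int tmp = val;
        __atomic_store(ptr, &tmp, __ATOMIC_RELAXED);    /* old form: value passed via a temporary */
        __atomic_store_n(ptr, val, __ATOMIC_RELAXED);   /* _n form: value passed directly */
    }
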
* atomics: Remove redundant barrier()'s  (Pranith Kumar, 2016-09-13; 1 file, -4/+4)

Remove the redundant barrier() after the fence as agreed in previous discussion here: https://lists.gnu.org/archive/html/qemu-devel/2016-04/msg00489.html

Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
Message-Id: <20160824204424.14041-3-bobby.prani@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* atomic: strip "const" from variables declared with typeofPaolo Bonzini2016-08-091-6/+48
| | | | | | | | | | | | | | | | | | | | With the latest clang, we have the following warning: /home/pranith/devops/code/qemu/include/qemu/seqlock.h:62:21: warning: passing 'typeof (*&sl->sequence) *' (aka 'const unsigned int *') to parameter of type 'unsigned int *' discards qualifiers [-Wincompatible-pointer-types-discards-qualifiers] return unlikely(atomic_read(&sl->sequence) != start); ^~~~~~~~~~~~~~~~~~~~~~~~~~ /home/pranith/devops/code/qemu/include/qemu/atomic.h:58:25: note: expanded from macro 'atomic_read' __atomic_load(ptr, &_val, __ATOMIC_RELAXED); \ ^~~~~ Stripping const is a bit tricky due to promotions, but it is doable with either C11 _Generic or GCC extensions. Use the latter. Reported-by: Pranith Kumar <bobby.prani@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> [pranith: Add conversion for bool type] Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* Clean up ill-advised or unusual header guards  (Markus Armbruster, 2016-07-12; 1 file, -5/+3)

Cleaned up with scripts/clean-header-guards.pl.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>

* atomics: do not emit consume barrier for atomic_rcu_read  (Emilio G. Cota, 2016-05-29; 1 file, -2/+12)

Currently we emit a consume-load in atomic_rcu_read. Because of limitations in current compilers, this is overkill for non-Alpha hosts and it is only useful to make Thread Sanitizer work.

This patch leaves the consume-load in atomic_rcu_read when compiling with Thread Sanitizer enabled, and resorts to a relaxed load + smp_read_barrier_depends otherwise.

On an RMO host architecture, such as aarch64, the performance improvement of this change is easily measurable. For instance, qht-bench performs an atomic_rcu_read on every lookup. Performance before and after applying this patch:

    $ tests/qht-bench -d 5 -n 1
    Before: 9.78 MT/s
    After:  10.96 MT/s

Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1464120374-8950-4-git-send-email-cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

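A simplified sketch of the non-TSan path after this change (the real macro also checks the access size and strips qualifiers; smp_read_barrier_depends() is the header's dependency barrier):

    #define atomic_rcu_read(ptr)   ({                       \
        typeof(*(ptr)) _val;                                \
        __atomic_load(ptr, &_val, __ATOMIC_RELAXED);        \
        smp_read_barrier_depends();                         \
        _val;                                               \
    })
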
* atomics: emit an smp_read_barrier_depends() barrier only for Alpha and Thread Sanitizer  (Emilio G. Cota, 2016-05-29; 1 file, -0/+11)

For correctness, smp_read_barrier_depends() is only required to emit a barrier on Alpha hosts. However, we are currently emitting a consume fence unconditionally, and most compilers currently treat consume and acquire fences as equivalent.

Fix it by keeping the consume fence if we're compiling with Thread Sanitizer, since this might help prevent false warnings. Otherwise, only emit the barrier for Alpha hosts. Note that we still guarantee that smp_read_barrier_depends() is a compiler barrier.

Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1464120374-8950-3-git-send-email-cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

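The resulting ladder plausibly looks like this (an assumed form; barrier() is the header's compiler barrier):

    #if defined(__SANITIZE_THREAD__)
    # define smp_read_barrier_depends()  ({ barrier(); __atomic_thread_fence(__ATOMIC_CONSUME); })
    #elif defined(__alpha__)
    # define smp_read_barrier_depends()  asm volatile("mb" ::: "memory")
    #else
    # define smp_read_barrier_depends()  barrier()
    #endif
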
* include/qemu/atomic: add compile time asserts  (Alex Bennée, 2016-04-05; 1 file, -24/+34)

To be safely portable no atomic access should be trying to do more than the natural word width of the host. The most common abuse is trying to atomically access 64 bit values on a 32 bit host.

This patch adds some QEMU_BUILD_BUG_ON to the __atomic intrinsic paths to create a build failure if (sizeof(*ptr) > sizeof(void *)).

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <1459780549-12942-3-git-send-email-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

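The shape of the check, with QEMU_BUILD_BUG_ON shown here in an assumed, simplified _Static_assert form: an oversized access now fails the build instead of silently turning into a non-atomic library call.

    #define QEMU_BUILD_BUG_ON(x)  _Static_assert(!(x), "rejected by QEMU_BUILD_BUG_ON")

    #define atomic_set(ptr, val)  do {                          \
        QEMU_BUILD_BUG_ON(sizeof(*(ptr)) > sizeof(void *));     \
        __atomic_store_n(ptr, val, __ATOMIC_RELAXED);           \
    } while (0)
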
* include: Clean up includes  (Peter Maydell, 2016-02-23; 1 file, -1/+0)

Clean up includes so that osdep.h is included first and headers which it implies are not included manually. This commit was created with scripts/clean-includes.

NB: If this commit breaks compilation for your out-of-tree patchseries or fork, then you need to make sure you add #include "qemu/osdep.h" to any new .c files that you have.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Blake <eblake@redhat.com>

* include/qemu/atomic.h: default to __atomic functions  (Alex Bennée, 2016-02-09; 1 file, -61/+131)

The __atomic primitives have been available since GCC 4.7 and provide a richer interface for describing memory ordering requirements. As a bonus, using the primitives instead of hand-rolled functions lets us use tools such as ThreadSanitizer, which needs well defined APIs for its analysis.

If we have __ATOMIC defines, we exclusively use the __atomic primitives for all our atomic access. Otherwise we fall back to the mixture of __sync and hand-rolled barrier cases.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <1453976119-24372-4-git-send-email-alex.bennee@linaro.org>
[Use __ATOMIC_SEQ_CST for atomic_mb_read/atomic_mb_set on !POWER. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

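A sketch of the dispatch this creates (assumed shape, shown for a single operation):

    #ifdef __ATOMIC_RELAXED
    #define atomic_fetch_add(ptr, n)  __atomic_fetch_add(ptr, n, __ATOMIC_SEQ_CST)
    #else
    #define atomic_fetch_add(ptr, n)  __sync_fetch_and_add(ptr, n)
    #endif
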
* atomics: add explicit compiler fence in __atomic memory barriers  (Paolo Bonzini, 2015-06-05; 1 file, -3/+9)

__atomic_thread_fence does not include a compiler barrier; in the C++11 memory model, fences take effect in combination with other atomic operations. GCC implements this by making __atomic_load and __atomic_store access memory as if the pointer was volatile, and leaves no trace whatsoever of acquire and release fences in the compiler's intermediate representation.

In QEMU, we want memory barriers to act on all memory, but at the same time we would like to use __atomic_thread_fence for portability reasons. Add compiler barriers manually around the __atomic_thread_fence.

Message-Id: <1433334080-14912-1-git-send-email-pbonzini@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

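The pattern in a sketch (assumed final form): the empty asm with a "memory" clobber pins all memory accesses on either side of the fence, which the bare builtin alone does not guarantee at the compiler level.

    #define barrier()   asm volatile("" ::: "memory")

    #define smp_mb()    ({ barrier(); __atomic_thread_fence(__ATOMIC_SEQ_CST); barrier(); })
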
* rcu: add rcu library  (Paolo Bonzini, 2015-02-02; 1 file, -0/+61)

This includes a (mangled) copy of the liburcu code. The main changes are:

1) removing dependencies on many other header files in liburcu;
2) removing for simplicity the tentative busy waiting in synchronize_rcu, which has limited performance effects;
3) replacing futexes in synchronize_rcu with QemuEvents for Win32 portability.

The API is the same as liburcu, so it should be possible in the future to require liburcu on POSIX systems for example and use our copy only on Windows. Among the various versions available I chose urcu-mb, which is the least invasive implementation even though it does not have the fastest rcu_read_{lock,unlock} implementation. The urcu flavor can be changed later, after benchmarking.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* atomic: fix position of volatile qualifier  (Paolo Bonzini, 2014-12-23; 1 file, -2/+2)

What needs to be volatile is not the pointer, but the pointed-to value!

Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

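The distinction in one line each (illustrative declarations only):

    volatile int *p_value_volatile;    /* pointer to volatile int: the pointed-to value is volatile */
    int *volatile p_pointer_volatile;  /* volatile pointer to plain int: only the pointer itself is */
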
* atomic.h: Fix build with clang  (Peter Maydell, 2013-11-21; 1 file, -3/+3)

clang defines __ATOMIC_SEQ_CST but its implementation of the __atomic_exchange() builtin differs from that of gcc. Move the __clang__ branch of the ifdef ladder to the top and fix its implementation (there is no such builtin as __sync_exchange), so we can compile with clang again.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1382435921-18438-1-git-send-email-peter.maydell@linaro.org
Signed-off-by: Anthony Liguori <aliguori@amazon.com>

* add a header file for atomic operations  (Paolo Bonzini, 2013-07-04; 1 file, -32/+166)

We're already using them in several places, but __sync builtins are just too ugly to type, and do not provide seqcst load/store operations.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* block-migration: add lock  (Paolo Bonzini, 2013-03-11; 1 file, -0/+1)

Some state is shared between the block migration code and its AIO callbacks. Once block migration runs outside the iothread, the block migration code and the AIO callbacks will be able to run concurrently. Protect the critical sections with a separate lock. Do the same for completed_sectors, which can be used from the monitor.

Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

* misc: move include files to include/qemu/  (Paolo Bonzini, 2012-12-19; 1 file, -0/+67)

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>