clang's C11 atomic_fetch_*() functions only take a C11 atomic type
pointer argument. QEMU uses direct types (int, etc) and this causes a
compiler error when QEMU code calls these functions in a source file
that also includes <stdatomic.h> via a system header file:

  $ CC=clang CXX=clang++ ./configure ... && make
  ../util/async.c:79:17: error: address argument to atomic operation must be a pointer to _Atomic type ('unsigned int *' invalid)

Avoid using atomic_*() names in QEMU's atomic.h since that namespace is
used by <stdatomic.h>. Prefix QEMU's APIs with 'q' so that atomic.h
and <stdatomic.h> can co-exist. I checked /usr/include on my machine and
searched GitHub for existing "qatomic_" users but there seem to be none.

This patch was generated using:

  $ git grep -h -o '\<atomic\(64\)\?_[a-z0-9_]\+' include/qemu/atomic.h | \
    sort -u >/tmp/changed_identifiers
  $ for identifier in $(</tmp/changed_identifiers); do
        sed -i "s%\<$identifier\>%q$identifier%g" \
            $(git grep -I -l "\<$identifier\>")
    done

I manually fixed line-wrap issues and misaligned rST tables.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20200923105646.47864-1-stefanha@redhat.com>
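For illustration, a minimal C sketch of the clash described above (the variable
and function names are invented; only <stdatomic.h> and its generic
atomic_fetch_add() are real):

    #include <stdatomic.h>

    static unsigned int plain_counter;            /* plain type, as QEMU uses */
    static _Atomic unsigned int atomic_counter;   /* C11 _Atomic type */

    void example(void)
    {
        atomic_fetch_add(&atomic_counter, 1);     /* OK: pointer to an _Atomic type */
    #if 0
        atomic_fetch_add(&plain_counter, 1);      /* clang: "address argument to atomic
                                                   * operation must be a pointer to
                                                   * _Atomic type" */
    #endif
    }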
|
Signed-off-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20200212130311.127515-3-ysato@users.sourceforge.jp>
Message-Id: <20200225124710.14152-14-alex.bennee@linaro.org>
|
Fix the problems with kernel-doc/sphinx syntax in the
doc comments for the shuffle and unshuffle functions:
 * a mismatch between the comment and the prototype for an argument name
 * the inline bit patterns need to be marked up so they
   are processed properly and rendered as monospace
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20190521122519.12573-6-peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
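As an illustration of the markup being fixed, a hedged sketch of a kernel-doc
comment (the function and argument names here are only examples, not the exact
comments that were changed):

    /**
     * half_shuffle32:
     * @x: 32-bit value to shuffle (the @name must match the prototype's argument)
     *
     * Spread the low 16 bits out into every other bit of the word. Marking
     * inline bit patterns as literals, e.g. ``ABCD EFGH IJKL MNOP``, lets
     * Sphinx process them properly and render them in monospace.
     */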
|
Introduce cpu properties to give fine control over SVE vector lengths.
We introduce a property for each valid length up to the current
maximum supported, which is 2048 bits. The properties are named, e.g.
sve128, sve256, sve384, sve512, ..., where the number is the number of
bits. See the updates to docs/arm-cpu-features.rst for a description
of the semantics and for example uses.

Note, as sve-max-vq is still present and we'd like to be able to
support qmp_query_cpu_model_expansion with guests launched with e.g.
-cpu max,sve-max-vq=8 on their command lines, we do allow
sve-max-vq and sve<N> properties to be provided at the same time, but
this is not recommended, and is why sve-max-vq is not mentioned in the
document. If sve-max-vq is provided then it enables all lengths smaller
than and including the max and disables all lengths larger. It also has
the side-effect that no larger lengths may be enabled and that the max
itself cannot be disabled. Smaller non-power-of-two lengths may,
however, be disabled, e.g. -cpu max,sve-max-vq=4,sve384=off provides a
guest the vector lengths 128, 256, and 512 bits.

This patch has been co-authored with Richard Henderson, who reworked
the target/arm/cpu64.c changes in order to push all the validation and
auto-enabling/disabling steps into the finalizer, resulting in a nice
LOC reduction.
Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
Reviewed-by: Beata Michalska <beata.michalska@linaro.org>
Message-id: 20191031142734.8590-5-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
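A hedged sketch of the sve-max-vq semantics described above (illustrative only,
not the actual target/arm/cpu64.c code; vq counts 128-bit units, so the maximum
is 2048 / 128 = 16):

    #include <stdbool.h>

    #define SVE_MAX_VQ 16   /* 2048-bit maximum, in 128-bit units (vq) */

    static bool sve_vq_enabled[SVE_MAX_VQ + 1];   /* index 1..16 == sve128..sve2048 */

    /* sve-max-vq=N enables every length up to N*128 bits and disables all
     * larger ones; individual sve<bits>=off properties may still disable
     * non-power-of-two lengths below the max afterwards. */
    static void apply_sve_max_vq(unsigned int max_vq)
    {
        for (unsigned int vq = 1; vq <= SVE_MAX_VQ; vq++) {
            sve_vq_enabled[vq] = (vq <= max_vq);
        }
    }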
|
ctpopl() has a better implementation than hweight_long(), and since ui/vnc.c
was the last user of hweight_long(), we can simply remove it.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1489415605-13105-1-git-send-email-clg@kaod.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
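For reference, a minimal sketch of the substitution being described, with the
compiler builtin standing in for QEMU's ctpopl():

    /* Count the set bits in a long; this replaces the removed hweight_long(). */
    static inline int popcount_sketch(unsigned long value)
    {
        return __builtin_popcountl(value);
    }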
|
All the rol/ror variants have a bug when the shift is 0.
For example, rol32 would generate:

  return (word << 0) | (word >> 32);

Although this happens to work, the shift by 32 is undefined behaviour and
is flagged as a runtime error by clang's sanitizer.
Suggested-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
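One common way to express the fix, sketched here for the 32-bit case (a sketch
of the pattern, not necessarily the exact change that was merged):

    #include <stdint.h>

    static inline uint32_t rol32_sketch(uint32_t word, unsigned int shift)
    {
        /* (32 - shift) & 31 is 0 when shift == 0, so neither shift count can
         * reach 32 and the undefined behaviour is avoided. */
        return (word << shift) | (word >> ((32 - shift) & 31));
    }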
|
Add a macro that creates a 64-bit value containing 'length' ones shifted
left by 'shift' bits.
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 9773244aa1c8c26b8b82cb261d8f5dd4b7b9fcf9.1467053537.git.alistair.francis@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
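A hedged sketch of such a macro (valid for length values of 1 to 64; the merged
definition may differ):

    /* 'length' ones shifted left by 'shift' bits, e.g.
     * MAKE_64BIT_MASK_SKETCH(4, 8) == 0x0000000000000ff0ULL. */
    #define MAKE_64BIT_MASK_SKETCH(shift, length) \
        ((~0ULL >> (64 - (length))) << (shift))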
|
A half-shuffle operation takes a word with zeros in the high half:
    0000 0000 0000 0000 ABCD EFGH IJKL MNOP
and spreads the bits out so they are in every other bit of the word:
    0A0B 0C0D 0E0F 0G0H 0I0J 0K0L 0M0N 0O0P
A half-unshuffle performs the reverse operation.
Provide functions in bitops.h which implement these operations
for 32-bit and 64-bit inputs, and add tests for them.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Shannon Zhao <shannon.zhao@linaro.org>
Tested-by: Shannon Zhao <shannon.zhao@linaro.org>
Message-id: 1465915112-29272-3-git-send-email-peter.maydell@linaro.org
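A sketch of the standard bit-twiddling approach for the 32-bit case (the classic
"interleave with zeros" sequence; the functions actually added to bitops.h may
differ in detail):

    #include <stdint.h>

    static inline uint32_t half_shuffle32_sketch(uint32_t x)
    {
        /* Successively spread bytes, nibbles, bit pairs and single bits apart. */
        x = ((x & 0xFF00) << 8) | (x & 0x00FF);   /* split the two bytes 16 bits apart */
        x = ((x << 4) | x) & 0x0F0F0F0F;          /* nibbles into alternate nibbles     */
        x = ((x << 2) | x) & 0x33333333;          /* bit pairs within each nibble       */
        x = ((x << 1) | x) & 0x55555555;          /* single bits into even positions    */
        return x;
    }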
|
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.
This commit was created with scripts/clean-includes.
NB: If this commit breaks compilation for your out-of-tree
patch series or fork, then you need to make sure you add
#include "qemu/osdep.h" to any new .c files that you have.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
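The resulting convention, sketched with an arbitrary second header as an
example:

    /* qemu/osdep.h must come first in every .c file; it pulls in the common
     * system headers, so they are not included again by hand. */
    #include "qemu/osdep.h"
    #include "qemu/bitops.h"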
|
Use atomic_or() for atomic bitmaps where several threads may set bits at
the same time. This avoids the race condition between threads loading
an element, bitwise ORing, and then storing the element.
When setting all bits in a word we can avoid atomic ops and instead just
use an smp_mb() at the end.
Most bitmap users don't need atomicity so introduce new functions.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <1417519399-3166-2-git-send-email-stefanha@redhat.com>
[Avoid barrier in the single word case, use full barrier instead of write.
- Paolo]
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
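A hedged sketch of the idea (not the actual functions added by this commit):
partial words use an atomic OR so concurrent setters cannot lose each other's
bits, whole words use plain stores, and a single full barrier is issued at the
end.

    #define BITS_PER_LONG (8 * sizeof(unsigned long))

    static void bitmap_set_atomic_sketch(unsigned long *map, long start, long nr)
    {
        unsigned long *p = map + start / BITS_PER_LONG;
        long bits = start % BITS_PER_LONG;

        while (nr > 0) {
            long chunk = BITS_PER_LONG - bits;
            if (chunk > nr) {
                chunk = nr;
            }
            if (chunk == (long)BITS_PER_LONG) {
                *p = ~0UL;                                      /* whole word: plain store */
            } else {
                unsigned long mask = ((1UL << chunk) - 1) << bits;
                __atomic_fetch_or(p, mask, __ATOMIC_SEQ_CST);   /* partial word */
            }
            p++;
            nr -= chunk;
            bits = 0;
        }
        __atomic_thread_fence(__ATOMIC_SEQ_CST);   /* one barrier instead of per-word atomics */
    }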
|
Don't mix tabs and spaces. The rule is 4 spaces.
Signed-off-by: Chih-Min Chao <cmchao@gmail.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
The documentation for sextract64() claims that the return type is
an int64_t, but the code itself disagrees. Fix the return type to
conform to the documentation and to bring it into line with
sextract32(), which returns int32_t.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1423231328-15662-1-git-send-email-peter.maydell@linaro.org
|
This removes the following circular dependency:
bitops.h -> qemu-common.h -> target-i386/cpu.h -> target-i386/cpu-qom.h ->
qom/cpu.h -> qdev-core.h -> bitmap.h -> bitops.h.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
find_first_bit has started to be used heavily in TCG code. The current
implementation based on find_next_bit is not optimal and can't be
optimized by the compiler if the bit array has a fixed size, which is
the case most of the time.
This new implementation does not use find_next_bit and is yet small
enough to be inlined.
Cc: Corentin Chary <corentin.chary@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
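A hedged sketch of the shape such an inlineable implementation can take, with
the compiler's ctz builtin standing in for QEMU's helper (not necessarily the
code that was merged):

    #define BITS_PER_LONG (8 * sizeof(unsigned long))

    /* Return the index of the first set bit, or 'size' if no bit is set. */
    static inline unsigned long find_first_bit_sketch(const unsigned long *addr,
                                                      unsigned long size)
    {
        unsigned long result, tmp;

        for (result = 0; result < size; result += BITS_PER_LONG) {
            tmp = *addr++;
            if (tmp) {
                result += __builtin_ctzl(tmp);
                return result < size ? result : size;
            }
        }
        return size;
    }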
|
Change the index and size fields from int to long. We need that for
migration. long is 64 bits on sane architectures, and 32 bits should
be enough on all the 32-bit architectures.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
|
These functions were copied from include/linux/bitops.h.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Richard Henderson <rth@twiddle.net>
|
A common operation in instruction decoding is to take a field
from an instruction that represents a signed integer in some
arbitrary number of bits, and sign extend it into a C signed
integer type for manipulation. Provide new functions sextract32()
and sextract64() which perform this operation; they are like
the existing extract32() and extract64() except that the field
is sign-extended into the returned result.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1372419632-5521-2-git-send-email-peter.maydell@linaro.org
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
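A hedged sketch of the sign-extending variant for 32 bits, using the usual
shift-up/arithmetic-shift-down idiom (treat the exact expression as
illustrative):

    #include <stdint.h>

    /* Extract 'length' bits starting at bit 'start' and sign-extend them,
     * so e.g. a 3-bit field containing 0b111 yields -1. */
    static inline int32_t sextract32_sketch(uint32_t value, int start, int length)
    {
        /* Shift the field to the top of the word, then shift it back down
         * arithmetically so the field's sign bit is replicated. */
        return ((int32_t)(value << (32 - length - start))) >> (32 - length);
    }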
|
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
We had two copies of an ffs function for longs with subtly different
semantics and, for the one in bitops.h, a confusing name: the result
was off-by-one compared to the library function ffsl.
Unify the functions into one, and solve the name problem by calling
the 0-based functions "bitops_ctzl" and "bitops_ctol" respectively.
This also fixes the build on platforms with ffsl, including Mac OS X
and Windows.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Tested-by: Andreas Färber <afaerber@suse.de>
Tested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
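A hedged sketch of the 0-based convention mentioned above and how it relates to
the 1-based library ffsl(), shown for non-zero inputs only (the zero-input
behaviour of the real helpers is not asserted here):

    #include <assert.h>

    static void ctz_vs_ffs_example(unsigned long val)
    {
        if (val != 0) {
            /* A 0-based count of trailing zeros is one less than the 1-based
             * ffs-style result. */
            assert(__builtin_ctzl(val) == __builtin_ffsl((long)val) - 1);
        }
    }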
|
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>