path: root/cpus-common.c
* Use g_new() & friends where that makes obvious sense (Markus Armbruster, 2022-03-21, 1 file, -2/+2)

    g_new(T, n) is neater than g_malloc(sizeof(T) * n). It's also safer,
    for two reasons. One, it catches multiplication overflowing size_t.
    Two, it returns T * rather than void *, which lets the compiler catch
    more type errors.

    This commit only touches allocations with size arguments of the form
    sizeof(T). Patch created mechanically with:

        $ spatch --in-place --sp-file scripts/coccinelle/use-g_new-etc.cocci \
                 --macro-file scripts/cocci-macro-file.h FILES...

    Signed-off-by: Markus Armbruster <armbru@redhat.com>
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Reviewed-by: Cédric Le Goater <clg@kaod.org>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Acked-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
    Message-Id: <20220315144156.1595462-4-armbru@redhat.com>
    Reviewed-by: Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>

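For illustration, a minimal standalone sketch of the transformation this commit applies (the Foo type is made up for the example):

    #include <glib.h>

    typedef struct Foo {
        int x;
        double y;
    } Foo;

    int main(void)
    {
        /* Old style: the multiplication can overflow size_t, and
         * g_malloc() returns void *, so a mismatched cast still compiles. */
        Foo *a = g_malloc(sizeof(Foo) * 16);

        /* New style: g_new() checks the multiplication for overflow and
         * returns Foo *, so type mismatches are caught at compile time. */
        Foo *b = g_new(Foo, 16);

        g_free(a);
        g_free(b);
        return 0;
    }
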
* overall/alpha tcg cpus|hppa: Fix Lesser GPL version number (Chetan Pant, 2020-11-15, 1 file, -1/+1)

    There is no "version 2" of the "Lesser" General Public License. It is
    either "GPL version 2.0" or "Lesser GPL version 2.1". This patch
    replaces all occurrences of "Lesser GPL version 2" with "Lesser GPL
    version 2.1" in comment sections.

    Signed-off-by: Chetan Pant <chetan4windows@gmail.com>
    Message-Id: <20201023123353.19796-1-chetan4windows@gmail.com>
    Reviewed-by: Thomas Huth <thuth@redhat.com>
    Signed-off-by: Thomas Huth <thuth@redhat.com>

* qemu/atomic.h: rename atomic_ to qatomic_ (Stefan Hajnoczi, 2020-09-23, 1 file, -13/+13)

    clang's C11 atomic_fetch_*() functions only take a C11 atomic type
    pointer argument. QEMU uses direct types (int, etc.) and this causes a
    compiler error when QEMU code calls these functions in a source file
    that also included <stdatomic.h> via a system header file:

        $ CC=clang CXX=clang++ ./configure ... && make
        ../util/async.c:79:17: error: address argument to atomic operation must be
        a pointer to _Atomic type ('unsigned int *' invalid)

    Avoid using atomic_*() names in QEMU's atomic.h since that namespace
    is used by <stdatomic.h>. Prefix QEMU's APIs with 'q' so that atomic.h
    and <stdatomic.h> can co-exist. I checked /usr/include on my machine
    and searched GitHub for existing "qatomic_" users but there seem to be
    none.

    This patch was generated using:

        $ git grep -h -o '\<atomic\(64\)\?_[a-z0-9_]\+' include/qemu/atomic.h | \
          sort -u >/tmp/changed_identifiers
        $ for identifier in $(</tmp/changed_identifiers); do
              sed -i "s%\<$identifier\>%q$identifier%g" \
                  $(git grep -I -l "\<$identifier\>")
          done

    I manually fixed line-wrap issues and misaligned rST tables.

    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Acked-by: Paolo Bonzini <pbonzini@redhat.com>
    Message-Id: <20200923105646.47864-1-stefanha@redhat.com>

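A before/after sketch of what the rename means for callers, assuming QEMU's include/qemu/atomic.h (the counter variable is illustrative):

    #include "qemu/osdep.h"
    #include "qemu/atomic.h"

    static unsigned int counter;

    static void bump(void)
    {
        /* Before this commit this was atomic_fetch_inc(&counter), a name
         * that clashes with <stdatomic.h>, whose functions require
         * _Atomic-qualified types. */
        qatomic_fetch_inc(&counter);

        /* Plain atomic loads and stores follow the same qatomic_ prefix. */
        unsigned int v = qatomic_read(&counter);
        qatomic_set(&counter, v + 1);
    }
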
* cpus: Move CPU code from exec.c to cpus-common.c (Philippe Mathieu-Daudé, 2020-07-11, 1 file, -0/+18)

    This code was introduced with SMP support in commit 6a00d60127; later,
    commit 267f685b8b moved CPU list management to common code but forgot
    this code. Move it now and simplify the ifdef'ry.

    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Message-Id: <20200702104017.14057-1-philmd@redhat.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* cpu: convert queued work to a QSIMPLEQ (Emilio G. Cota, 2020-06-16, 1 file, -17/+8)

    We convert queued work to a QSIMPLEQ, instead of open-coding it. While
    at it, make sure that all accesses to the list are performed while
    holding the list's lock.

    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Signed-off-by: Emilio G. Cota <cota@braap.org>
    Signed-off-by: Robert Foley <robert.foley@linaro.org>
    Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
    Message-Id: <20200609200738.445-3-robert.foley@linaro.org>
    Message-Id: <20200612190237.30436-6-alex.bennee@linaro.org>

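A rough sketch of the resulting pattern, assuming QEMU's "qemu/queue.h" and "qemu/thread.h"; the struct and field names here are illustrative, not the actual CPUState members:

    #include "qemu/osdep.h"
    #include "qemu/queue.h"
    #include "qemu/thread.h"

    struct qemu_work_item {
        QSIMPLEQ_ENTRY(qemu_work_item) node;  /* replaces a hand-rolled next pointer */
        void (*func)(void *data);
        void *data;
    };

    typedef struct WorkQueue {
        QemuMutex lock;
        QSIMPLEQ_HEAD(, qemu_work_item) list;
    } WorkQueue;

    static void work_queue_init(WorkQueue *q)
    {
        qemu_mutex_init(&q->lock);
        QSIMPLEQ_INIT(&q->list);
    }

    /* Every access to the list happens with the list's lock held. */
    static void queue_work(WorkQueue *q, struct qemu_work_item *wi)
    {
        qemu_mutex_lock(&q->lock);
        QSIMPLEQ_INSERT_TAIL(&q->list, wi, node);
        qemu_mutex_unlock(&q->lock);
    }

    static struct qemu_work_item *dequeue_work(WorkQueue *q)
    {
        struct qemu_work_item *wi;

        qemu_mutex_lock(&q->lock);
        wi = QSIMPLEQ_FIRST(&q->list);
        if (wi) {
            QSIMPLEQ_REMOVE_HEAD(&q->list, node);
        }
        qemu_mutex_unlock(&q->lock);
        return wi;
    }
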
* cpus-common: ensure auto-assigned cpu_indexes don't clash (Alex Bennée, 2020-05-27, 1 file, -5/+5)

    Basing the cpu_index on the number of currently allocated vCPUs fails
    when vCPUs aren't removed in a LIFO manner. This is especially true
    when we are allocating a cpu_index for each guest thread in
    linux-user, where there is no ordering constraint on their allocation
    and de-allocation.

    [I've dropped the assert which is there to guard against out-of-order
    removal, as this should probably be caught higher up the stack. Maybe
    we could just ifdef CONFIG_SOFTMMU it?]

    Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Acked-by: Igor Mammedov <imammedo@redhat.com>
    Cc: Nikolay Igotti <igotti@gmail.com>
    Cc: Paolo Bonzini <pbonzini@redhat.com>
    Cc: Eduardo Habkost <ehabkost@redhat.com>
    Message-Id: <20200520140541.30256-13-alex.bennee@linaro.org>

* lockable: replaced locks with lock guard macros where appropriate (Daniel Brodsky, 2020-05-04, 1 file, -9/+5)

    - ran regexp "qemu_mutex_lock\(.*\).*\n.*if" to find targets
    - replaced result with QEMU_LOCK_GUARD if all unlocks at function end
    - replaced result with WITH_QEMU_LOCK_GUARD if unlock not at end

    Signed-off-by: Daniel Brodsky <dnbrdsky@gmail.com>
    Reviewed-by: Juan Quintela <quintela@redhat.com>
    Message-id: 20200404042108.389635-3-dnbrdsky@gmail.com
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

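A minimal sketch of the two macros, assuming QEMU's "qemu/lockable.h" (the counter is illustrative):

    #include "qemu/osdep.h"
    #include "qemu/thread.h"
    #include "qemu/lockable.h"

    static QemuMutex lock;
    static int shared_counter;

    /* All unlocks were at function end, so the lock can cover the whole
     * function: QEMU_LOCK_GUARD releases it on every return path. */
    static int read_counter(void)
    {
        QEMU_LOCK_GUARD(&lock);
        return shared_counter;
    }

    /* The unlock was not at the end, so only a block is guarded:
     * WITH_QEMU_LOCK_GUARD releases the lock when the block is left. */
    static void bump_counter(void)
    {
        WITH_QEMU_LOCK_GUARD(&lock) {
            shared_counter++;
        }
        /* lock is no longer held here */
    }
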
* cpu: introduce cpu_in_exclusive_context() (Emilio G. Cota, 2019-10-28, 1 file, -0/+4)

    Suggested-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Emilio G. Cota <cota@braap.org>
    [AJB: moved inside start/end_exclusive fns + cleanup]
    Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

* Merge remote-tracking branch 'remotes/armbru/tags/pull-monitor-2019-08-21' into staging (Peter Maydell, 2019-08-22, 1 file, -1/+1)

    Monitor patches for 2019-08-21

    # gpg: Signature made Wed 21 Aug 2019 16:35:07 BST
    # gpg:                using RSA key 354BC8B3D7EB2A6B68674E5F3870B400EB918653
    # gpg:                issuer "armbru@redhat.com"
    # gpg: Good signature from "Markus Armbruster <armbru@redhat.com>" [full]
    # gpg:                 aka "Markus Armbruster <armbru@pond.sub.org>" [full]
    # Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653

    * remotes/armbru/tags/pull-monitor-2019-08-21:
      monitor/qmp: Update comment for commit 4eaca8de268
      qdev: Collect HMP command handlers in qdev-monitor.c
      qapi: Move query-target from misc.json to machine.json
      hw/core: Move cpu.c, cpu.h from qom/ to hw/core/

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

  * hw/core: Move cpu.c, cpu.h from qom/ to hw/core/ (Markus Armbruster, 2019-08-21, 1 file, -1/+1)

      Suggested-by: Daniel P. Berrangé <berrange@redhat.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Message-Id: <20190709152053.16670-2-armbru@redhat.com>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      [Rebased onto merge commit 95a9457fd44; missed instances of qom/cpu.h
      in comments replaced]

* cpus-common: nuke finish_safe_work (Roman Kagan, 2019-08-20, 1 file, -8/+0)

    It was introduced in commit ab129972c8b41e15b0521895a46fd9c752b68a5e,
    with the following motivation:

        Because start_exclusive uses CPU_FOREACH, merge exclusive_lock
        with qemu_cpu_list_lock: together with a call to exclusive_idle
        (via cpu_exec_start/end) in cpu_list_add, this protects exclusive
        work against concurrent CPU addition and removal.

    However, it seems to be redundant, because the cpu-exclusive
    infrastructure provides sufficient protection against the newly added
    CPU starting execution while the cpu-exclusive work is running, and
    the aforementioned traversal of the cpu list is protected by
    qemu_cpu_list_lock.

    Besides, this appears to be the only place where the cpu-exclusive
    section is entered with the BQL taken, which has been found to trigger
    an AB-BA deadlock as follows:

        vCPU thread                          main thread
        -----------                          -----------
        async_safe_run_on_cpu(self,
                              async_synic_update)
        ...                                  [cpu hot-add]
        process_queued_cpu_work()
        qemu_mutex_unlock_iothread()
                                             [grab BQL]
        start_exclusive()                    cpu_list_add()
        async_synic_update()                 finish_safe_work()
        qemu_mutex_lock_iothread()           cpu_exec_start()

    So remove it. This paves the way to establishing a strict nesting rule
    of never entering the exclusive section with the BQL taken.

    Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
    Message-Id: <20190523105440.27045-2-rkagan@virtuozzo.com>

* qemu/queue.h: simplify reverse access to QTAILQ (Paolo Bonzini, 2019-01-11, 1 file, -1/+1)

    The new definition of QTAILQ does not require passing the headname;
    remove it.

    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* qom: convert the CPU list to RCU (Emilio G. Cota, 2018-08-23, 1 file, -2/+2)

    Iterating over the list without using atomics is undefined behaviour,
    since the list can be modified concurrently by other threads (e.g.
    every time a new thread is created in user-mode).

    Fix it by implementing the CPU list as an RCU QTAILQ. This requires a
    little bit of extra work to traverse the list in reverse order (see
    previous patch), but other than that the conversion is trivial.

    Signed-off-by: Emilio G. Cota <cota@braap.org>
    Message-Id: <20180819091335.22863-12-cota@braap.org>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

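A sketch of what an RCU-safe read-side traversal looks like after this change; it assumes the current tree layout ("hw/core/cpu.h") and is illustrative rather than code from the patch:

    #include "qemu/osdep.h"
    #include "qemu/rcu.h"
    #include "hw/core/cpu.h"

    static CPUState *find_cpu_by_index(int index)
    {
        CPUState *cpu, *found = NULL;

        /* Readers hold the RCU read lock instead of a mutex, so the list
         * may be modified concurrently (e.g. by thread creation in
         * user-mode) without undefined behaviour. */
        rcu_read_lock();
        CPU_FOREACH(cpu) {
            if (cpu->cpu_index == index) {
                found = cpu;
                break;
            }
        }
        rcu_read_unlock();
        return found;
    }
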
* *_run_on_cpu: introduce run_on_cpu_data type (Paolo Bonzini, 2016-10-31, 1 file, -4/+5)

    This changes the *_run_on_cpu APIs (and helpers) to pass data in a
    run_on_cpu_data type instead of a plain void *. This is because we
    sometimes want to pass a target address (target_ulong) and this fails
    on 32-bit hosts emulating 64-bit guests.

    Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
    Message-Id: <20161027151030.20863-24-alex.bennee@linaro.org>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

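A sketch of how the union is used, assuming QEMU's "hw/core/cpu.h" in the current layout (the work function and its purpose are made up):

    #include "qemu/osdep.h"
    #include "hw/core/cpu.h"

    /* The payload arrives as run_on_cpu_data, a union whose target_ptr
     * member is wide enough for a 64-bit guest address even when the
     * host's void * is only 32 bits. */
    static void dump_addr_work(CPUState *cpu, run_on_cpu_data data)
    {
        vaddr guest_addr = data.target_ptr;

        printf("CPU %d: guest address 0x%" VADDR_PRIx "\n",
               cpu->cpu_index, guest_addr);
    }

    static void schedule_dump(CPUState *cpu, vaddr guest_addr)
    {
        /* RUN_ON_CPU_TARGET_PTR() wraps the value into the union;
         * RUN_ON_CPU_HOST_PTR() and RUN_ON_CPU_NULL exist as well. */
        async_run_on_cpu(cpu, dump_addr_work,
                         RUN_ON_CPU_TARGET_PTR(guest_addr));
    }
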
* cpus-common: lock-free fast path for cpu_exec_start/end (Paolo Bonzini, 2016-09-27, 1 file, -15/+80)

    Set cpu->running without taking the cpu_list lock, only requiring it
    if there is a concurrent exclusive section. This requires adding a new
    field to CPUState, which records whether a running CPU is being
    counted in pending_cpus.

    When an exclusive section is started concurrently with cpu_exec_start,
    cpu_exec_start can use the new field to determine if it has to wait
    for the end of the exclusive section. Likewise, cpu_exec_end can use
    it to see if start_exclusive is waiting for that CPU.

    This is a separate patch for easier bisection of issues.

    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* cpus-common: Introduce async_safe_run_on_cpu() (Paolo Bonzini, 2016-09-27, 1 file, -2/+31)

    Reviewed-by: Richard Henderson <rth@twiddle.net>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* cpus-common: simplify locking for start_exclusive/end_exclusive (Paolo Bonzini, 2016-09-27, 1 file, -3/+8)

    It is not necessary to hold qemu_cpu_list_mutex throughout the
    exclusive section, because no other exclusive section can run while
    pending_cpus != 0.

    exclusive_idle() is called in cpu_exec_start(), and that prevents any
    CPUs created after start_exclusive() from entering cpu_exec() during
    an exclusive section.

    Reviewed-by: Richard Henderson <rth@twiddle.net>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* cpus-common: remove redundant call to exclusive_idle() (Paolo Bonzini, 2016-09-27, 1 file, -1/+0)

    No need to call exclusive_idle() from cpu_exec_end, since it is done
    immediately afterwards in cpu_exec_start. Any exclusive section could
    run as soon as cpu_exec_end leaves, because cpu->running is false and
    the mutex is not taken, so the call does not add any protection
    either.

    Reviewed-by: Richard Henderson <rth@twiddle.net>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* cpus-common: always defer async_run_on_cpu work items (Paolo Bonzini, 2016-09-27, 1 file, -5/+0)

    async_run_on_cpu is only called from the I/O thread, not from CPU
    threads, so it doesn't make any difference. It will make a difference,
    however, for async_safe_run_on_cpu.

    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Richard Henderson <rth@twiddle.net>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* cpus-common: move exclusive work infrastructure from linux-user (Paolo Bonzini, 2016-09-27, 1 file, -0/+82)

    This will serve as the base for async_safe_run_on_cpu.

    Because start_exclusive uses CPU_FOREACH, merge exclusive_lock with
    qemu_cpu_list_lock: together with a call to exclusive_idle (via
    cpu_exec_start/end) in cpu_list_add, this protects exclusive work
    against concurrent CPU addition and removal.

    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Richard Henderson <rth@twiddle.net>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

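A sketch of how the moved API pairs up, assuming QEMU's "hw/core/cpu.h" in the current layout; the bodies of the sections are illustrative:

    #include "qemu/osdep.h"
    #include "hw/core/cpu.h"

    /* CPU threads bracket guest code execution so that exclusive work
     * can wait for them. */
    static void cpu_step(CPUState *cpu)
    {
        cpu_exec_start(cpu);
        /* ... run guest code ... */
        cpu_exec_end(cpu);
    }

    /* Everything between start_exclusive() and end_exclusive() runs
     * while all other CPUs are outside their cpu_exec_start/end region. */
    static void do_global_invalidate(void)
    {
        start_exclusive();
        /* ... mutate state no CPU may observe half-updated ... */
        end_exclusive();
    }
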
* cpus-common: fix uninitialized variable use in run_on_cpu (Paolo Bonzini, 2016-09-27, 1 file, -2/+2)

    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Richard Henderson <rth@twiddle.net>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* cpus-common: move CPU work item management to common code (Sergey Fedorov, 2016-09-27, 1 file, -0/+94)

    Make CPU work core functions common between system and user-mode
    emulation. User-mode does not use run_on_cpu, so do not implement it.

    Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
    Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
    Message-Id: <1470158864-17651-10-git-send-email-alex.bennee@linaro.org>
    Reviewed-by: Richard Henderson <rth@twiddle.net>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* cpus-common: move CPU list management to common code (Paolo Bonzini, 2016-09-27, 1 file, -0/+83)

    Add a mutex for the CPU list to system emulation, as it will be used
    to manage safe work. Abstract manipulation of the CPU list in new
    functions cpu_list_add and cpu_list_remove.

    Reviewed-by: Richard Henderson <rth@twiddle.net>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

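A condensed sketch of the shape of the new helpers; this is a paraphrase under the current tree layout, not the exact code of the commit:

    #include "qemu/osdep.h"
    #include "qemu/queue.h"
    #include "qemu/thread.h"
    #include "hw/core/cpu.h"

    static QemuMutex qemu_cpu_list_lock;
    static QTAILQ_HEAD(, CPUState) cpus = QTAILQ_HEAD_INITIALIZER(cpus);

    /* Assumed helper: picks the next unused index (a static function
     * in cpus-common.c). */
    static int cpu_get_free_index(void);

    void cpu_list_add(CPUState *cpu)
    {
        qemu_mutex_lock(&qemu_cpu_list_lock);
        if (cpu->cpu_index == UNASSIGNED_CPU_INDEX) {
            cpu->cpu_index = cpu_get_free_index();
        }
        QTAILQ_INSERT_TAIL(&cpus, cpu, node);
        qemu_mutex_unlock(&qemu_cpu_list_lock);
    }

    void cpu_list_remove(CPUState *cpu)
    {
        qemu_mutex_lock(&qemu_cpu_list_lock);
        QTAILQ_REMOVE(&cpus, cpu, node);
        cpu->cpu_index = UNASSIGNED_CPU_INDEX;
        qemu_mutex_unlock(&qemu_cpu_list_lock);
    }
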