path: root/cpus-common.c
* qemu/queue.h: simplify reverse access to QTAILQ (Paolo Bonzini, 2019-01-11; 1 file, -1/+1)
  The new definition of QTAILQ does not require passing the headname; remove it.

  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
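  For illustration, here is the shape of the change at call sites. This is a sketch against QEMU's queue.h, not buildable standalone; the cpus/CPUTailQ/node names match the CPU list, but treat the exact macro arguments as assumptions:

      /* Old form: reverse traversal needed the name of the head struct. */
      QTAILQ_FOREACH_REVERSE(cpu, &cpus, CPUTailQ, node) {
          /* visit each CPU, last to first */
      }

      /* New form: the headname argument is gone. */
      QTAILQ_FOREACH_REVERSE(cpu, &cpus, node) {
          /* visit each CPU, last to first */
      }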
* qom: convert the CPU list to RCU (Emilio G. Cota, 2018-08-23; 1 file, -2/+2)
  Iterating over the list without using atomics is undefined behaviour, since the list can be modified concurrently by other threads (e.g. every time a new thread is created in user-mode). Fix it by implementing the CPU list as an RCU QTAILQ. This requires a little extra work to traverse the list in reverse order (see the previous patch), but other than that the conversion is trivial.

  Signed-off-by: Emilio G. Cota <cota@braap.org>
  Message-Id: <20180819091335.22863-12-cota@braap.org>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
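  A minimal sketch of what the read side looks like after the conversion, assuming QEMU's rcu_read_lock()/rcu_read_unlock() and the QTAILQ_FOREACH_RCU macro from qemu/rcu_queue.h (QEMU-internal, not buildable standalone):

      CPUState *cpu;

      rcu_read_lock();
      QTAILQ_FOREACH_RCU(cpu, &cpus, node) {
          /* Within the read-side critical section, the list cannot be
           * freed under us even if another thread is concurrently
           * adding or removing CPUs. */
      }
      rcu_read_unlock();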
* *_run_on_cpu: introduce run_on_cpu_data type (Paolo Bonzini, 2016-10-31; 1 file, -4/+5)
  This changes the *_run_on_cpu APIs (and helpers) to pass data in a run_on_cpu_data type instead of a plain void *. This is because we sometimes want to pass a target address (target_ulong), and this fails on 32-bit hosts emulating 64-bit guests.

  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Message-Id: <20161027151030.20863-24-alex.bennee@linaro.org>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
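  A hedged sketch of the shape of such a type and its helper macros; the field and macro names follow what QEMU ended up with, but treat the details as assumptions:

      typedef union {
          void *host_ptr;
          unsigned long host_ulong;
          vaddr target_ptr;   /* wide enough for a target address; this is
                               * the case that broke plain void * on 32-bit
                               * hosts */
          uint64_t host_int;
      } run_on_cpu_data;

      #define RUN_ON_CPU_HOST_PTR(p)   ((run_on_cpu_data){.host_ptr = (p)})
      #define RUN_ON_CPU_TARGET_PTR(v) ((run_on_cpu_data){.target_ptr = (v)})
      #define RUN_ON_CPU_NULL          RUN_ON_CPU_HOST_PTR(NULL)

  With this, a 64-bit target address no longer has to be squeezed through a 32-bit host pointer.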
* cpus-common: lock-free fast path for cpu_exec_start/end (Paolo Bonzini, 2016-09-27; 1 file, -15/+80)
  Set cpu->running without taking the cpu_list lock, only requiring it if there is a concurrent exclusive section. This requires adding a new field to CPUState, which records whether a running CPU is being counted in pending_cpus.

  When an exclusive section is started concurrently with cpu_exec_start, cpu_exec_start can use the new field to determine whether it has to wait for the end of the exclusive section. Likewise, cpu_exec_end can use it to see if start_exclusive is waiting for that CPU.

  This is a separate patch for easier bisection of issues.

  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
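  A simplified sketch of the fast path described above (assumption-level detail; the merged code also handles the has_waiter handshake on the slow path):

      void cpu_exec_start(CPUState *cpu)
      {
          atomic_set(&cpu->running, true);
          /* Order the store to cpu->running against the read of
           * pending_cpus below; start_exclusive performs the
           * mirror-image accesses. */
          smp_mb();
          if (atomic_read(&pending_cpus)) {
              /* Slow path only: an exclusive section is in progress,
               * so fall back to the lock and sort out who waits. */
              qemu_mutex_lock(&qemu_cpu_list_lock);
              /* ... wait for the exclusive section, or proceed if this
               * CPU is already counted in pending_cpus ... */
              qemu_mutex_unlock(&qemu_cpu_list_lock);
          }
      }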
* cpus-common: Introduce async_safe_run_on_cpu() (Paolo Bonzini, 2016-09-27; 1 file, -2/+31)
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
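  The body carries only sign-offs; for context, a hedged usage sketch of the new API. Note the run_on_cpu_data signature shown here comes from a later patch in this history (at introduction the payload was a plain void *):

      static void do_safe_work(CPUState *cpu, run_on_cpu_data data)
      {
          /* Runs only once every other CPU has left its execution
           * loop, so shared structures such as translation state can
           * be mutated without racing. */
      }

      /* Queue the item and return immediately; the target CPU runs
       * it at the next safe point. */
      async_safe_run_on_cpu(cpu, do_safe_work, RUN_ON_CPU_NULL);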
* cpus-common: simplify locking for start_exclusive/end_exclusive (Paolo Bonzini, 2016-09-27; 1 file, -3/+8)
  It is not necessary to hold qemu_cpu_list_mutex throughout the exclusive section, because no other exclusive section can run while pending_cpus != 0.

  exclusive_idle() is called in cpu_exec_start(), and that prevents any CPUs created after start_exclusive() from entering cpu_exec() during an exclusive section.

  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
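  A sketch of the resulting locking discipline (simplified; treat the details as assumptions):

      void start_exclusive(void)
      {
          qemu_mutex_lock(&qemu_cpu_list_lock);
          exclusive_idle();              /* wait out a previous section */
          atomic_set(&pending_cpus, 1);  /* claim the exclusive slot */
          /* ... count running CPUs and wait for them to stop ... */
          qemu_mutex_unlock(&qemu_cpu_list_lock);
          /* The exclusive work itself runs with the lock dropped: no
           * other section can begin while pending_cpus != 0. */
      }

      void end_exclusive(void)
      {
          qemu_mutex_lock(&qemu_cpu_list_lock);
          atomic_set(&pending_cpus, 0);
          qemu_cond_broadcast(&exclusive_resume);
          qemu_mutex_unlock(&qemu_cpu_list_lock);
      }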
* cpus-common: remove redundant call to exclusive_idle() (Paolo Bonzini, 2016-09-27; 1 file, -1/+0)
  No need to call exclusive_idle() from cpu_exec_end, since it is done immediately afterwards in cpu_exec_start. Any exclusive section could run as soon as cpu_exec_end leaves, because cpu->running is false and the mutex is not taken, so the call does not add any protection either.

  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* cpus-common: always defer async_run_on_cpu work items (Paolo Bonzini, 2016-09-27; 1 file, -5/+0)
  async_run_on_cpu is only called from the I/O thread, not from CPU threads, so always deferring makes no difference there. It will, however, make a difference for async_safe_run_on_cpu.

  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
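  Illustratively, "always defer" means dropping a run-it-inline shortcut. The following is an assumption about the removed lines, inferred from the commit title and diffstat, not a quote of the patch:

      void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data)
      {
          /* Previously: if called on the target CPU's own thread, run
           * the function immediately.  Removed so that every item goes
           * through the queue:
           *
           *     if (qemu_cpu_is_self(cpu)) {
           *         func(cpu, data);
           *         return;
           *     }
           */
          /* ... allocate a work item and queue it unconditionally ... */
      }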
* cpus-common: move exclusive work infrastructure from linux-user (Paolo Bonzini, 2016-09-27; 1 file, -0/+82)
  This will serve as the base for async_safe_run_on_cpu. Because start_exclusive uses CPU_FOREACH, merge exclusive_lock with qemu_cpu_list_lock: together with a call to exclusive_idle (via cpu_exec_start/end) in cpu_list_add, this protects exclusive work against concurrent CPU addition and removal.

  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
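  The caller-facing pattern this infrastructure provides is simply (a sketch; in linux-user this shape was used, for example, to emulate exclusive load/store pairs):

      start_exclusive();   /* returns once every other CPU is out of cpu_exec() */
      /* ... mutate state that no CPU may observe mid-update ... */
      end_exclusive();     /* let the stopped CPUs resume */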
* cpus-common: fix uninitialized variable use in run_on_cpu (Paolo Bonzini, 2016-09-27; 1 file, -2/+2)
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* cpus-common: move CPU work item management to common code (Sergey Fedorov, 2016-09-27; 1 file, -0/+94)
  Make the core CPU work functions common between system and user-mode emulation. User-mode does not use run_on_cpu, so do not implement it.

  Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
  Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Message-Id: <1470158864-17651-10-git-send-email-alex.bennee@linaro.org>
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
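  A sketch of the work-item plumbing this makes common (struct layout and helper shape are assumptions; the void * payload predates the run_on_cpu_data patch above):

      struct qemu_work_item {
          struct qemu_work_item *next;
          run_on_cpu_func func;   /* void (*)(CPUState *, void *) at the time */
          void *data;
          bool free;              /* owned by the queue, freed after running */
          bool done;              /* signalled back to a synchronous caller */
      };

      static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
      {
          qemu_mutex_lock(&cpu->work_mutex);
          /* ... append wi to cpu->queued_work ... */
          qemu_mutex_unlock(&cpu->work_mutex);

          qemu_cpu_kick(cpu);   /* wake the CPU thread so it drains its queue */
      }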
* cpus-common: move CPU list management to common code (Paolo Bonzini, 2016-09-27; 1 file, -0/+83)
  Add a mutex for the CPU list to system emulation, as it will be used to manage safe work. Abstract manipulation of the CPU list into new functions cpu_list_add and cpu_list_remove.

  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
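  A minimal sketch of the two new helpers (simplified; cpu_index assignment and user-mode details omitted, and the cpus/node names are assumptions):

      void cpu_list_add(CPUState *cpu)
      {
          qemu_mutex_lock(&qemu_cpu_list_lock);
          QTAILQ_INSERT_TAIL(&cpus, cpu, node);
          qemu_mutex_unlock(&qemu_cpu_list_lock);
      }

      void cpu_list_remove(CPUState *cpu)
      {
          qemu_mutex_lock(&qemu_cpu_list_lock);
          QTAILQ_REMOVE(&cpus, cpu, node);
          qemu_mutex_unlock(&qemu_cpu_list_lock);
      }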