path: root/include/qom
Commit log (subject — author, date, files changed, lines -/+):
...
* trace: Allocate cpu->trace_dstate in place (Lluís Vilanova, 2017-07-17; 1 file, -6/+3)
  There's little point in dynamically allocating the bitmap if we know at compile-time
  the max number of events we want to support. Thus, make room in the struct for the
  bitmap, which will make things easier later: this paves the way for upcoming changes,
  in which we'll use a u32 to fully capture cpu->trace_dstate. This change also increases
  performance by saving a dereference and improving locality; note that this is important
  since upcoming work makes reading this bitmap fairly common.
  Signed-off-by: Emilio G. Cota <cota@braap.org>
  Reviewed-by: Lluís Vilanova <vilanova@ac.upc.edu>
  Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
  Message-id: 149915725977.6295.15069969323605305641.stgit@frigg.lan
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
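  Based on the description, the change presumably replaces a heap-allocated bitmap with
  one embedded in the struct; a minimal sketch, assuming illustrative names and a size
  macro that are not taken from the patch itself:

      /* Sketch: embed the per-vCPU tracing bitmap in CPUState instead of
       * allocating it at runtime. Names and size are illustrative. */
      #include "qemu/bitmap.h"

      #define CPU_TRACE_DSTATE_MAX_EVENTS 32    /* small enough for a u32 */

      struct CPUState {
          /* ... */
          /* was: unsigned long *trace_dstate;  (allocated per vCPU) */
          DECLARE_BITMAP(trace_dstate, CPU_TRACE_DSTATE_MAX_EVENTS);
          /* ... */
      };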
* cpu: Convert to DEFINE_PROP_LINK (Fam Zheng, 2017-07-14; 1 file, -0/+1)
  Signed-off-by: Fam Zheng <famz@redhat.com>
  Message-Id: <20170714021509.23681-20-famz@redhat.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* qom: enforce readonly nature of link's check callback (Igor Mammedov, 2017-07-14; 1 file, -3/+3)
  A link's check callback is supposed to verify/permit setting the link; however,
  currently nothing restricts the callback from misusing it and modifying the target
  object from within. Make sure that the read-only semantics are checked by the
  compiler to prevent the callback's misuse.
  Signed-off-by: Igor Mammedov <imammedo@redhat.com>
  Signed-off-by: Fam Zheng <famz@redhat.com>
  Message-Id: <20170714021509.23681-2-famz@redhat.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
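  Assuming the patch achieves this by adding const to the callback's object argument,
  a conforming check callback would look roughly like this (the device check below is
  a made-up example, not from the patch):

      /* Sketch: with a const Object *, the callback can inspect but not
       * mutate the object while validating the link target. */
      static void my_link_check(const Object *obj, const char *name,
                                Object *val, Error **errp)
      {
          if (!object_dynamic_cast(val, TYPE_DEVICE)) {
              error_setg(errp, "property '%s' must link to a device", name);
          }
          /* any write through 'obj' is now rejected at compile time */
      }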
* qom/cpu: remove host_tid field (Alex Bennée, 2017-07-14; 1 file, -2/+0)
  This was only used by the gdbstub, and even then was only being set for subsequent
  threads. Rather than continue duplicating the number, just make the gdbstub get the
  information from the TaskState structure. Now that the tid is correctly reported for
  all threads, the bug I was seeing with "vCont;C04:0;c" packets is fixed, as the
  correct tid is reported to gdb. I moved cpu_gdb_index into the gdbstub to facilitate
  easy access to the TaskState, which is used elsewhere in the gdbstub. To prevent BSD
  failing to build I've included ts_tid into its TaskState but not populated it, which
  was the same state as the old cpu->host_tid. I'll leave it up to the BSD maintainers
  to actually populate this properly if they want a working gdbstub with user-threads.
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Greg Kurz <groug@kaod.org>
  Reviewed-by: Claudio Imbrenda <imbrenda@linux.vnet.ibm.com>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Reviewed-by: Laurent Vivier <laurent@vivier.eu>
  Message-Id: <20170712105216.747-4-alex.bennee@linaro.org>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* include/exec/poison: Mark CONFIG_SOFTMMU as poisoned (Thomas Huth, 2017-07-04; 1 file, -0/+8)
  CONFIG_SOFTMMU should never be used in common code, so mark it as poisoned, too.
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Thomas Huth <thuth@redhat.com>
  Message-Id: <1498454578-18709-6-git-send-email-thuth@redhat.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* vcpu_dirty: share the same field in CPUState for all accelerators (Sergio Andres Gomez Del Real, 2017-07-04; 1 file, -2/+3)
  This patch simply replaces the separate boolean field in CPUState that kvm, hax
  (and upcoming hvf) have for keeping track of vcpu dirtiness with a single shared field.
  Signed-off-by: Sergio Andres Gomez Del Real <Sergio.G.DelReal@gmail.com>
  Message-Id: <20170618191101.3457-1-Sergio.G.DelReal@gmail.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* tcg: consistently access cpu->tb_jmp_cache atomically (Emilio G. Cota, 2017-06-30; 1 file, -1/+10)
  Some code paths can lead to atomic accesses racing with memset() on
  cpu->tb_jmp_cache, which can result in torn reads/writes and is undefined
  behaviour in C11. These torn accesses are unlikely to show up as bugs, but from
  code inspection they seem possible. For example, tb_phys_invalidate does:

      /* remove the TB from the hash list */
      h = tb_jmp_cache_hash_func(tb->pc);
      CPU_FOREACH(cpu) {
          if (atomic_read(&cpu->tb_jmp_cache[h]) == tb) {
              atomic_set(&cpu->tb_jmp_cache[h], NULL);
          }
      }

  Here atomic_set might race with a concurrent memset (such as the ones scheduled
  via "unsafe" async work, e.g. tlb_flush_page) and therefore we might end up with
  a torn pointer (or who knows what, because we are under undefined behaviour).
  This patch converts parallel accesses to cpu->tb_jmp_cache to use atomic
  primitives, thereby bringing these accesses back to defined behaviour. The price
  to pay is to potentially execute more instructions when clearing
  cpu->tb_jmp_cache, but given how infrequently they happen and the small size of
  the cache, the performance impact I have measured is within noise range when
  booting debian-arm. Note that under "safe async" work (e.g. do_tb_flush) we could
  use memset because no other vcpus are running. However I'm keeping these accesses
  atomic as well to keep things simple and to avoid confusing analysis tools such
  as ThreadSanitizer.
  Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Emilio G. Cota <cota@braap.org>
  Message-Id: <1497486973-25845-1-git-send-email-cota@braap.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
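  The "more instructions when clearing" cost comes from replacing a single memset()
  with a loop of atomic stores; a sketch of what that clearing side looks like
  (function name is illustrative):

      /* Sketch: clear the jump cache with per-entry atomic stores so a
       * concurrent atomic_read() can never observe a torn pointer. */
      static void tb_jmp_cache_clear(CPUState *cpu)
      {
          unsigned int i;

          for (i = 0; i < TB_JMP_CACHE_SIZE; i++) {
              atomic_set(&cpu->tb_jmp_cache[i], NULL);
          }
      }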
* object: add uint property setter/getter (Marc-André Lureau, 2017-06-20; 1 file, -0/+23)
  Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
  Reviewed-by: Markus Armbruster <armbru@redhat.com>
  Message-Id: <20170607163635.17635-13-marcandre.lureau@redhat.com>
  Signed-off-by: Markus Armbruster <armbru@redhat.com>
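  Presumably these mirror the existing object_property_{set,get}_int() helpers with a
  uint64_t value; a hedged usage sketch (the signatures and argument order are
  assumptions, and "mmio-base"/dev are made up for illustration):

      /* Sketch: assumed uint accessors following the int accessors of
       * this era (value before property name). "mmio-base" is made up. */
      Error *err = NULL;

      object_property_set_uint(OBJECT(dev), 0xfe000000, "mmio-base", &err);
      uint64_t base = object_property_get_uint(OBJECT(dev), "mmio-base", &err);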
* numa: move numa_node from CPUState into target specific classes (Igor Mammedov, 2017-06-05; 1 file, -2/+0)
  Move the vcpu's associated numa_node field out of the generic CPUState into the
  inherited classes that actually care about cpu<->numa mapping, i.e. ARMCPU,
  PowerPCCPU, X86CPU.
  Signed-off-by: Igor Mammedov <imammedo@redhat.com>
  Message-Id: <1496161442-96665-6-git-send-email-imammedo@redhat.com>
  [ehabkost: s/CPU is belonging to/CPU belongs to/ on comments]
  Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
* spapr: add node-id property to sPAPR core (Igor Mammedov, 2017-05-11; 1 file, -0/+2)
  It will allow switching from cpu_index-based to core-based numa mapping in
  follow-up patches.
  Signed-off-by: Igor Mammedov <imammedo@redhat.com>
  Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
  Message-Id: <1494415802-227633-3-git-send-email-imammedo@redhat.com>
  Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
* cpus: don't credit executed instructions before they have run (Alex Bennée, 2017-04-10; 1 file, -0/+1)
  Outside of the vCPU thread, icount time will only be tracked against
  timers_state.qemu_icount. We no longer credit cycles until they have completed the
  run. Inside the vCPU thread we adjust for the passage of time by looking at how
  many instructions have run so far; this is only valid inside the vCPU thread while
  it is running.
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
* Merge branch 'icount-update' into HEAD (Paolo Bonzini, 2017-03-03; 1 file, -8/+7)
  Merge the original development branch due to breakage caused by the MTTCG merge.

  Conflicts:
      cpu-exec.c
      translate-common.c

  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* cpu-exec: unify icount_decr and tcg_exit_req (Paolo Bonzini, 2017-02-22; 1 file, -8/+7) [merged via 'icount-update']
  The icount interrupt flag and tcg_exit_req serve almost the same purpose; let's
  make them completely the same. The former TB_EXIT_REQUESTED and
  TB_EXIT_ICOUNT_EXPIRED cases are unified, since we can distinguish them from the
  value of the interrupt flag.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* cputlb: add tlb_flush_by_mmuidx async routines (Alex Bennée, 2017-02-24; 1 file, -1/+1)
  This converts the remaining TLB flush routines to use async work when detecting a
  cross-vCPU flush. The only minor complication is having to serialise the var_list
  of MMU indexes into a form that can be punted to an asynchronous job. The
  pending_tlb_flush field on QOM's CPU structure also becomes a bitfield rather than
  a boolean.
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Richard Henderson <rth@twiddle.net>
* cputlb: introduce tlb_flush_* async work (KONRAD Frederic, 2017-02-24; 1 file, -0/+6)
  Some architectures allow flushing the TLB of other VCPUs. This is not a problem
  when we have only one thread for all VCPUs, but it definitely needs to be
  asynchronous work when we are truly multithreaded. We take the tb_lock() when
  doing this to avoid racing with other threads which may be invalidating TBs at
  the same time. The alternative would be to use proper atomic primitives to clear
  the tlb entries en masse. This patch doesn't do anything to protect other cputlb
  functions called in MTTCG mode from making cross-vCPU changes.
  Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
  [AJB: remove need for g_malloc on defer, make check fixes, tb_lock]
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Richard Henderson <rth@twiddle.net>
* tcg: drop global lock during TCG code execution (Jan Kiszka, 2017-02-24; 1 file, -0/+1)
  This finally allows TCG to benefit from the iothread introduction: drop the global
  mutex while running pure TCG CPU code, and reacquire the lock when entering MMIO
  or PIO emulation, or when leaving the TCG loop. We have to revert a few
  optimizations for the current TCG threading model, namely kicking the TCG thread
  in qemu_mutex_lock_iothread and not kicking it in qemu_cpu_kick. We also need to
  disable RAM block reordering until we have a more efficient locking mechanism at
  hand. Still, a Linux x86 UP guest and my Musicpal ARM model boot fine here. These
  numbers demonstrate where we gain something:

      20338 jan  20  0  331m  75m  6904 R  99  0.9  0:50.95 qemu-system-arm
      20337 jan  20  0  331m  75m  6904 S  20  0.9  0:26.50 qemu-system-arm

  The guest CPU was fully loaded, but the iothread could still run mostly
  independently on a second core. Without the patch we don't get beyond:

      32206 jan  20  0  330m  73m  7036 R  82  0.9  1:06.00 qemu-system-arm
      32204 jan  20  0  330m  73m  7036 S  21  0.9  0:17.03 qemu-system-arm

  We don't benefit significantly, though, when the guest is not fully loading a
  host CPU.
  Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
  Message-Id: <1439220437-23957-10-git-send-email-fred.konrad@greensocs.com>
  [FK: Rebase, fix qemu_devices_reset deadlock, rm address_space_* mutex]
  Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
  [EGC: fixed iothread lock for cpu-exec IRQ handling]
  Signed-off-by: Emilio G. Cota <cota@braap.org>
  [AJB: -smp single-threaded fix, clean commit msg, BQL fixes]
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
  [PM: target-arm changes]
  Acked-by: Peter Maydell <peter.maydell@linaro.org>
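  The locking pattern described above, as an illustrative sketch (do_device_read is a
  hypothetical dispatch helper, not a QEMU API):

      /* Sketch: TCG runs without the BQL; the lock is taken only around
       * device (MMIO/PIO) emulation and dropped before re-entering TCG. */
      static uint64_t emulate_mmio_read(CPUState *cpu, hwaddr addr)
      {
          uint64_t val;

          qemu_mutex_lock_iothread();     /* enter device code under BQL */
          val = do_device_read(addr);     /* hypothetical device dispatch */
          qemu_mutex_unlock_iothread();   /* back to lock-free TCG */
          return val;
      }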
* tcg: add options for enabling MTTCG (KONRAD Frederic, 2017-02-24; 1 file, -0/+9)
  We know there will be cases where MTTCG won't work until additional work is done
  in the front/back ends to support it. It will however be useful to be able to
  turn it on. As a result, MTTCG will default to off unless the combination is
  supported; however, the user can turn it on for the sake of testing.
  Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
  [AJB: move to -accel tcg,thread=multi|single, defaults]
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Richard Henderson <rth@twiddle.net>
* report guest crash information in GUEST_PANICKED event (Anton Nefedov, 2017-02-16; 1 file, -0/+10)
  It's not very convenient to use the crash-information property interface, so
  provide a CPU class callback to get the guest crash information, and pass that
  information in the event.
  Signed-off-by: Anton Nefedov <anton.nefedov@virtuozzo.com>
  Signed-off-by: Denis V. Lunev <den@openvz.org>
  Message-Id: <1487053524-18674-3-git-send-email-den@openvz.org>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* arm: Correctly handle watchpoints for BE32 CPUs (Julian Brown, 2017-02-07; 1 file, -0/+3)
  In BE32 mode, sub-word size watchpoints can fail to trigger because the address of
  the access is adjusted in the opcode helpers before being compared with the
  watchpoint registers. This patch reverses the address adjustment before performing
  the comparison with the help of a new CPUClass hook. This version of the patch
  augments and tidies up comments a little.
  Signed-off-by: Julian Brown <julian@codesourcery.com>
  Message-id: caaf64ffc72f6ae183015337b7afdbd4b8989cb6.1484929304.git.julian@codesourcery.com
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* object: make some funcs static (Marc-André Lureau, 2017-01-24; 1 file, -24/+0)
  There is no need to have those functions as public API.
  Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
  Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
* object.h: spelling fix (Marc-André Lureau, 2017-01-24; 1 file, -1/+1)
  Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
  Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
* Plumb the HAXM-based hardware acceleration support (Vincent Palatin, 2017-01-19; 1 file, -0/+5)
  Use Intel HAX, a kernel-based hardware acceleration module for Windows (similar to
  KVM on Linux). Based on the "target/i386: Add Intel HAX to android emulator" patch
  from David Chou <david.j.chou@intel.com>.
  Signed-off-by: Vincent Palatin <vpalatin@chromium.org>
  Message-Id: <7b9cae28a0c379ab459c7a8545c9a39762bd394f.1484045952.git.vpalatin@chromium.org>
  [Drop hax_populate_ram stub. - Paolo]
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* monitor: reuse user_creatable_add_opts() instead of user_creatable_add() (Igor Mammedov, 2017-01-12; 1 file, -17/+0)
  Simplify code by dropping ~57 LOC, merging user_creatable_add() into
  user_creatable_add_opts() and using the latter from the monitor. Along with it,
  allocate opts_visitor_new() once in user_creatable_add_opts(). As a result we have
  one less API function, and a more readable/simple user_creatable_add_opts() vs
  user_creatable_add().
  Signed-off-by: Igor Mammedov <imammedo@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <1484052795-158195-3-git-send-email-imammedo@redhat.com>
  Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
* *_run_on_cpu: introduce run_on_cpu_data type (Paolo Bonzini, 2016-10-31; 1 file, -5/+23)
  This changes the *_run_on_cpu APIs (and helpers) to pass data in a run_on_cpu_data
  type instead of a plain void *. This is because we sometimes want to pass a target
  address (target_ulong) and this fails on 32-bit hosts emulating 64-bit guests.
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Message-Id: <20161027151030.20863-24-alex.bennee@linaro.org>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
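  Sketched from the description, the type is roughly a union wide enough for either a
  host pointer or a full guest virtual address, plus constructor macros (the exact
  member set is an assumption):

      /* Sketch: a union that can carry a host pointer or a 64-bit-capable
       * guest address (vaddr), so 32-bit hosts don't truncate it. */
      typedef union {
          int           host_int;
          unsigned long host_ulong;
          void         *host_ptr;
          vaddr         target_ptr;
      } run_on_cpu_data;

      #define RUN_ON_CPU_HOST_PTR(p)   ((run_on_cpu_data){.host_ptr = (p)})
      #define RUN_ON_CPU_TARGET_PTR(v) ((run_on_cpu_data){.target_ptr = (v)})
      #define RUN_ON_CPU_NULL          RUN_ON_CPU_HOST_PTR(NULL)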
* tcg: comment on which functions have to be called with tb_lock held (Paolo Bonzini, 2016-10-31; 1 file, -0/+3)
  softmmu requires more functions to be thread-safe, because translation blocks can
  be invalidated from e.g. notdirty callbacks. Probably the same holds for user-mode
  emulation; it's just that no one has ever tried to produce coherent locking there.
  This patch will guide the introduction of more tb_lock and tb_unlock calls for
  system emulation. Note that after this patch some (most) of the mentioned functions
  are still called outside tb_lock/tb_unlock. The next one will rectify this.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Message-Id: <20161027151030.20863-7-alex.bennee@linaro.org>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* exec: call cpu_exec_exit() from a CPU unrealize common function (Laurent Vivier, 2016-10-24; 1 file, -1/+1)
  As cpu_exec_exit() mirrors cpu_exec_realizefn(), rename it to cpu_exec_unrealizefn().
  Create and register a cpu_common_unrealizefn() function for the CPU device class and
  call cpu_exec_unrealizefn() from this function. Remove cpu_exec_exit() from
  cpu_common_finalize() (which mirrors init, not realize), and, as x86_cpu_unrealizefn()
  and ppc_cpu_unrealizefn() overwrite the device class unrealize function, add a call
  to a parent_unrealize pointer.
  Signed-off-by: Laurent Vivier <lvivier@redhat.com>
  Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
  Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
* exec: move cpu_exec_init() calls to realize functions (Laurent Vivier, 2016-10-24; 1 file, -0/+1)
  Modify all CPUs to call it from the XXX_cpu_realizefn() function. Remove all the
  cannot_destroy_with_object_finalize_yet flags, as unsafe references have been moved
  to cpu_exec_realizefn(). (Tested with the QOM command provided by commit 4c315c27.)
  For arm: setting of cpu->mp_affinity is moved from arm_cpu_initfn() to
  arm_cpu_realizefn(), as setting of cpu_index is now done in cpu_exec_realizefn().
  To avoid overwriting a user-defined value, we set it to an invalid value by default,
  and update it in the realize function only if the value is still invalid.
  Signed-off-by: Laurent Vivier <lvivier@redhat.com>
  Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
  Reviewed-by: Igor Mammedov <imammedo@redhat.com>
  Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
  Reviewed-by: Andrew Jones <drjones@redhat.com>
  Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
* exec: split cpu_exec_init() (Laurent Vivier, 2016-10-24; 1 file, -0/+1)
  Put in cpu_exec_initfn() what initializes the CPU, and leave in cpu_exec_init()
  what adds it to the environment. As cpu_exec_initfn() is called by all
  XX_cpu_initfn(), call it directly in cpu_common_initfn(). cpu_exec_init() is now a
  realize function; it will be renamed to cpu_exec_realizefn() and moved to the
  XX_cpu_realizefn() function in a following patch.
  Signed-off-by: Laurent Vivier <lvivier@redhat.com>
  Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
  Reviewed-by: Igor Mammedov <imammedo@redhat.com>
  Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
  Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
* trace: dynamically allocate trace_dstate in CPUState (Daniel P. Berrange, 2016-10-12; 1 file, -3/+6)
  The CPUState struct has a bitmap tracking which VCPU events are currently active.
  This is indexed based on the event ID values, and sized according to the maximum
  TraceEventVCPUID enum value. When we start dynamically assigning IDs at runtime,
  we can't statically declare a bitmap without making an assumption about the max
  event count. This problem can be solved by dynamically allocating the per-CPU
  dstate bitmap.
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
  Reviewed-by: Lluís Vilanova <vilanova@ac.upc.edu>
  Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
  Message-id: 1475588159-30598-15-git-send-email-berrange@redhat.com
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* cpus-common: lock-free fast path for cpu_exec_start/end (Paolo Bonzini, 2016-09-27; 1 file, -2/+3)
  Set cpu->running without taking the cpu_list lock, only requiring it if there is a
  concurrent exclusive section. This requires adding a new field to CPUState, which
  records whether a running CPU is being counted in pending_cpus. When an exclusive
  section is started concurrently with cpu_exec_start, cpu_exec_start can use the new
  field to determine if it has to wait for the end of the exclusive section. Likewise,
  cpu_exec_end can use it to see if start_exclusive is waiting for that CPU. This is
  a separate patch for easier bisection of issues.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* tcg: Make tb_flush() thread safe (Sergey Fedorov, 2016-09-27; 1 file, -2/+0)
  Use async_safe_run_on_cpu() to make tb_flush() thread safe. This is possible now
  that code generation does not happen in the middle of execution. It can happen
  that multiple threads schedule safe work to flush the translation buffer. To keep
  statistics and debugging output sane, always check if the translation buffer has
  already been flushed.
  Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
  Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
  [AJB: minor re-base fixes]
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Message-Id: <1470158864-17651-13-git-send-email-alex.bennee@linaro.org>
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* cpus-common: Introduce async_safe_run_on_cpu() (Paolo Bonzini, 2016-09-27; 1 file, -0/+14)
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
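  A hedged usage sketch, matching the tb_flush() conversion above (the helper name is
  illustrative, and the data argument is shown with the run_on_cpu_data type that a
  later commit in this log introduces; at this point it was still a plain pointer):

      /* Sketch: queue work that runs while every other vCPU is outside
       * its execution loop, so it cannot race with running translations. */
      static void do_safe_tb_flush(CPUState *cpu, run_on_cpu_data data)
      {
          tb_flush(cpu);    /* all other vCPUs are quiescent here */
      }

      async_safe_run_on_cpu(cpu, do_safe_tb_flush, RUN_ON_CPU_NULL);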
* cpus-common: simplify locking for start_exclusive/end_exclusive (Paolo Bonzini, 2016-09-27; 1 file, -4/+0)
  It is not necessary to hold qemu_cpu_list_mutex throughout the exclusive section,
  because no other exclusive section can run while pending_cpus != 0.
  exclusive_idle() is called in cpu_exec_start(), and that prevents any CPUs created
  after start_exclusive() from entering cpu_exec() during an exclusive section.
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* cpus-common: move exclusive work infrastructure from linux-user (Paolo Bonzini, 2016-09-27; 1 file, -1/+43)
  This will serve as the base for async_safe_run_on_cpu. Because start_exclusive uses
  CPU_FOREACH, merge exclusive_lock with qemu_cpu_list_lock: together with a call to
  exclusive_idle (via cpu_exec_start/end) in cpu_list_add, this protects exclusive
  work against concurrent CPU addition and removal.
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* cpus-common: move CPU work item management to common code (Sergey Fedorov, 2016-09-27; 1 file, -8/+19)
  Make the CPU work core functions common between system and user-mode emulation.
  User-mode does not use run_on_cpu, so do not implement it.
  Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
  Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Message-Id: <1470158864-17651-10-git-send-email-alex.bennee@linaro.org>
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* cpus-common: move CPU list management to common code (Paolo Bonzini, 2016-09-27; 1 file, -0/+12)
  Add a mutex for the CPU list to system emulation, as it will be used to manage safe
  work. Abstract manipulation of the CPU list into new functions cpu_list_add and
  cpu_list_remove.
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* cpus: pass CPUState to run_on_cpu helpers (Alex Bennée, 2016-09-27; 1 file, -3/+5)
  CPUState is a fairly common pointer to pass to these helpers. This means if you
  need other arguments for the async_run_on_cpu case you end up having to do a
  g_malloc to stuff additional data into the routine. For the current users this
  isn't a massive deal, but for MTTCG this gets cumbersome when the only other
  parameter is often an address. This adds the typedef run_on_cpu_func for helper
  functions, which has an explicit CPUState * passed as the first parameter. All the
  users of run_on_cpu and async_run_on_cpu have had their helpers updated to use
  CPUState where available.
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  [Sergey Fedorov:
   - eliminate more CPUState in user data;
   - remove unnecessary user data passing;
   - fix target-s390x/kvm.c and target-s390x/misc_helper.c]
  Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
  Acked-by: David Gibson <david@gibson.dropbear.id.au> (ppc parts)
  Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> (s390 parts)
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Message-Id: <1470158864-17651-3-git-send-email-alex.bennee@linaro.org>
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* exec: Set cpu_index only if it's not been explicitly set (Igor Mammedov, 2016-07-26; 1 file, -0/+2)
  It keeps the legacy behavior for all users that don't care about a stable cpu_index
  value, but would allow boards that support device_add/device_del to set a stable
  cpu_index that won't depend on the order in which cpus are created/destroyed. While
  at it, simplify cpu_get_free_index(), as the cpu_index generated by the USER_ONLY
  and softmmu variants is the same, since none of the users support cpu removal so
  far, except for the not yet released spapr/x86 device_add/device_del, which will be
  altered by follow-up patches to set a stable cpu_index manually.
  Signed-off-by: Igor Mammedov <imammedo@redhat.com>
  Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
  Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
* trace: Add per-vCPU tracing states for events with the 'vcpu' property (Lluís Vilanova, 2016-07-18; 1 file, -0/+6)
  Each vCPU gets a 'trace_dstate' bitmap to control the per-vCPU dynamic tracing
  state of events with the 'vcpu' property.
  Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* Fix confusing argument names in some common functions (Sergey Sorokin, 2016-07-12; 1 file, -4/+11)
  The functions tlb_fill(), cpu_unaligned_access() and do_unaligned_access() are
  called with access type and mmu index arguments, but these arguments are named
  'is_write' and 'is_user' in their declarations. The patch fixes the argument names
  to avoid confusion.
  Signed-off-by: Sergey Sorokin <afarallax@yandex.ru>
  Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
  Acked-by: David Gibson <david@gibson.dropbear.id.au>
  Message-id: 1465907177-1399402-1-git-send-email-afarallax@yandex.ru
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
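  Judging by the diffstat (+11 lines in this header), the rename presumably comes with
  an access-type enum used by these declarations; roughly (the exact declarations are
  assumed from the description, not quoted from the patch):

      /* Sketch of the access-type enum replacing the old 'is_write' int. */
      typedef enum MMUAccessType {
          MMU_DATA_LOAD  = 0,
          MMU_DATA_STORE = 1,
          MMU_INST_FETCH = 2
      } MMUAccessType;

      /* e.g. tlb_fill() would then read: */
      void tlb_fill(CPUState *cpu, target_ulong addr,
                    MMUAccessType access_type, int mmu_idx, uintptr_t retaddr);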
* cpu: Use CPUClass->parse_features() as converter to global properties (Igor Mammedov, 2016-07-07; 1 file, -1/+1)
  Currently CPUClass->parse_features() is used to parse the -cpu features string and
  set properties on created CPU instances. But considering that features specified by
  -cpu apply to every created CPU instance, it doesn't make sense to parse the same
  features string for every CPU created. It also makes every target that cares about
  parsing the features string explicitly call the CPUClass->parse_features() parser,
  which gets in the way if we consider using generic device_add for CPU hotplug, as
  device_add has no clue about CPU-specific hooks. It turns out we can use the global
  properties mechanism to set properties on every created CPU instance for a given
  type. That way it's possible to convert CPU features into a set of global
  properties for the CPU type specified by -cpu cpu_model, and the common
  Device.device_post_init() will apply them to CPUs of the given type automatically,
  regardless of whether it's a manually created CPU or a CPU created with the help
  of device_add.
  Signed-off-by: Igor Mammedov <imammedo@redhat.com>
  Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
  Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
* qom: Fix comment typo (Changlong Xie, 2016-07-06; 1 file, -1/+1)
  It's qom_unref, not qdef_unref.
  Signed-off-by: Changlong Xie <xiecl.fnst@cn.fujitsu.com>
  Reviewed-by: Laurent Vivier <lvivier@redhat.com>
  Signed-off-by: Andreas Färber <afaerber@suse.de>
* qom: API to get instance_size of a type (Bharata B Rao, 2016-06-17; 1 file, -1/+7)
  Add an API object_type_get_size(const char *typename) that returns the
  instance_size of the given typename.
  Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
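  A short usage sketch: sizing a type before initializing an instance in caller-owned
  memory is one plausible use (TYPE_FOO is a made-up concrete QOM type, not a real one):

      /* Sketch: query the instance size of a type, then place an
       * instance in memory we allocated ourselves. */
      size_t size = object_type_get_size(TYPE_FOO);
      void *mem = g_malloc0(size);

      object_initialize(mem, size, TYPE_FOO);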
* all: Remove unnecessary glib.h includes (Peter Maydell, 2016-06-07; 1 file, -1/+0)
  Remove glib.h includes, as it is provided by osdep.h. This commit was created with
  scripts/clean-includes.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Tested-by: Eric Blake <eblake@redhat.com>
  Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
* cpu: Add a sync version of cpu_remove() (Bharata B Rao, 2016-05-30; 1 file, -0/+8)
  This sync API will be used by the CPU hotplug code to wait for the CPU to
  completely get removed before flagging the failure to the device_add command. A
  sync version of this call is needed to correctly recover from CPU realization
  failures when the ->plug() handler fails.
  Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
  Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
  Acked-by: Paolo Bonzini <pbonzini@redhat.com>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
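  A hedged sketch of the kind of caller described above (the handler name and shape
  are illustrative, not a specific QEMU function):

      /* Sketch: an unplug path that must not return until the vCPU thread
       * has really exited, so a failure can be reported synchronously. */
      static void my_core_unplug(HotplugHandler *hotplug_dev,
                                 DeviceState *dev, Error **errp)
      {
          CPUState *cs = CPU(dev);

          cpu_remove_sync(cs);            /* blocks until the thread is gone */
          object_unparent(OBJECT(dev));   /* then tear down the QOM object */
      }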
* cpu: Reclaim vCPU objects (Gu Zheng, 2016-05-30; 1 file, -0/+10)
  In order to deal well with KVM vcpus (which cannot be removed without any
  protection), we do not close the KVM vcpu fd; instead we record it in a list and
  mark it as stopped, so that we can reuse it for a subsequent cpu hot-add request
  if possible. This is also the approach that the kvm developers suggested:
  https://www.mail-archive.com/kvm@vger.kernel.org/msg102839.html
  Signed-off-by: Chen Fan <chen.fan.fnst@cn.fujitsu.com>
  Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
  Signed-off-by: Zhu Guihua <zhugh.fnst@cn.fujitsu.com>
  Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
  [- Explicit CPU_REMOVE() from qemu_kvm/tcg_destroy_vcpu() isn't needed as it is
     done from cpu_exec_exit()
   - Use iothread mutex instead of global mutex during destroy
   - Don't cleanup vCPU object from vCPU thread context but leave it to the callers
     (device_add/device_del)]
  Reviewed-by: Thomas Huth <thuth@redhat.com>
  Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* cpu: move exec-all.h inclusion out of cpu.h (Paolo Bonzini, 2016-05-19; 1 file, -0/+10)
  exec-all.h contains TCG-specific definitions. It is not needed outside TCG-specific
  files such as translate.c, exec.c or *helper.c. One generic function had snuck into
  include/exec/exec-all.h; move it to include/qom/cpu.h.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* include: move CPU-related definitions out of qemu-common.h (Paolo Bonzini, 2016-05-19; 1 file, -0/+9)
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* tcg: Remove needless CPUState::current_tb (Sergey Fedorov, 2016-05-13; 1 file, -2/+0)
  This field was used for telling cpu_interrupt() to unlink a chain of TBs being
  executed when it worked that way. Now cpu_interrupt() doesn't do this anymore, so
  we don't need this field anymore.
  Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
  Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
  Message-Id: <1462273462-14036-1-git-send-email-sergey.fedorov@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Rework tb_invalidated_flag (Sergey Fedorov, 2016-05-13; 1 file, -0/+2)
  'tb_invalidated_flag' was meant to catch two events:
   * some TB has been invalidated by tb_phys_invalidate();
   * the whole translation buffer has been flushed by tb_flush().

  Then it was checked:
   * in cpu_exec() to ensure that the last executed TB can be safely linked to
     directly call the next one;
   * in cpu_exec_nocache() to decide if the original TB should be provided for
     further possible invalidation along with the temporarily generated TB.

  It is always safe to patch an invalidated TB since it is not going to be used
  anyway. It is also safe to call tb_phys_invalidate() for an already invalidated
  TB. Thus, setting this flag in tb_phys_invalidate() is simply unnecessary.
  Moreover, it can prevent perfectly proper linking of TBs, if any arbitrary TB has
  been invalidated. So just don't touch it in tb_phys_invalidate().

  If this flag is only used to catch whether tb_flush() has been called, then rename
  it to 'tb_flushed'. Declare it as 'bool' and stick to using only 'true' and
  'false' to set its value. Also, instead of setting it in tb_gen_code(), just after
  tb_flush() has been called, do it right inside of tb_flush().

  In cpu_exec(), this flag is used to track if tb_flush() has been called and has
  made 'next_tb' (a reference to the last executed TB) invalid for linking it to
  directly call the next TB. tb_flush() can be called during the CPU execution loop
  from tb_gen_code(), during TB execution, or by another thread while 'tb_lock' is
  released. Catch translation buffer flushes reliably by resetting this flag once
  before the first TB lookup and each time we find it set before trying to add a
  direct jump. Don't touch it in tb_find_physical().

  Each vCPU has its own execution loop in multithreaded mode and thus should have
  its own copy of the flag to be able to reset it with its own 'next_tb', and not
  affect any other vCPU execution thread. So make this flag per-vCPU and move it to
  CPUState.

  In cpu_exec_nocache(), we only need to check if tb_flush() has been called from
  tb_gen_code() called by cpu_exec_nocache() itself. To do this reliably, preserve
  the old value of the flag, reset it before calling tb_gen_code(), check
  afterwards, and combine the saved value back into the flag.

  This patch is based on the patch "tcg: move tb_invalidated_flag to CPUState" from
  Paolo Bonzini <pbonzini@redhat.com>.
  Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
  Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>