path: root/include/sysemu/iothread.h
* include: Make headers more self-contained (Markus Armbruster, 2019-08-16, 1 file, +1/-0)
Back in 2016, we discussed[1] rules for headers, and these were
generally liked:

1. Have a carefully curated header that's included everywhere first.
   We got that already thanks to Peter: osdep.h.

2. Headers should normally include everything they need beyond osdep.h.
   If exceptions are needed for some reason, they must be documented in
   the header. If all that's needed from a header is typedefs, put
   those into qemu/typedefs.h instead of including the header.

3. Cyclic inclusion is forbidden.

This patch gets include/ closer to obeying 2. It's actually extracted
from my "[RFC] Baby steps towards saner headers" series[2], which
demonstrates a possible path towards checking 2 automatically. It
passes the RFC test there.

[1] Message-ID: <87h9g8j57d.fsf@blackfin.pond.sub.org>
    https://lists.nongnu.org/archive/html/qemu-devel/2016-03/msg03345.html
[2] Message-Id: <20190711122827.18970-1-armbru@redhat.com>
    https://lists.nongnu.org/archive/html/qemu-devel/2019-07/msg02715.html

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20190812052359.30071-2-armbru@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
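For illustration, a minimal sketch of how rules 1 and 2 look in practice
(the file names and contents here are hypothetical, not taken from the
patch):

    /* some_header.h (sketch): per rule 2, the header includes what it
     * needs beyond osdep.h -- the QEMU headers providing the types it
     * uses directly. */
    #ifndef SOME_HEADER_H
    #define SOME_HEADER_H

    #include "block/aio.h"       /* AioContext */
    #include "qemu/thread.h"     /* QemuThread */

    AioContext *some_helper_get_context(void);

    #endif

    /* some_file.c (sketch): per rule 1, qemu/osdep.h always comes first. */
    #include "qemu/osdep.h"
    #include "some_header.h"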
* iothread: create the gcontext unconditionally (Peter Xu, 2019-03-08, 1 file, +1/-1)
The existing code creates the gcontext dynamically on the caller's first
access. That adds complexity and potential races when using iothreads.
Since the context itself is not a big resource, and we won't have
millions of iothreads, simply create the gcontext unconditionally.

This also prepares for moving the thread context push operation earlier
than before (currently it is only pushed right before we start running
the gmainloop).

Remove the g_once, which is no longer necessary, and introduce a new
run_gcontext boolean that records whether we want to run the gcontext.

Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-id: 20190306115532.23025-3-peterx@redhat.com
Message-Id: <20190306115532.23025-3-peterx@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
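A minimal sketch of the idea, assuming the worker_context, main_loop and
run_gcontext fields used by the surrounding iothread code (this is not
the exact patch):

    static void iothread_init_gcontext(IOThread *iothread)
    {
        /* Create the gcontext unconditionally: it is a small resource,
         * and doing it up front avoids the complexity and races of lazy
         * g_once creation. */
        iothread->worker_context = g_main_context_new();
        iothread->main_loop = g_main_loop_new(iothread->worker_context, TRUE);

        /* Only actually run the gmainloop once somebody asks for it. */
        iothread->run_gcontext = false;
    }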
* iothread: replace init_done_cond with a semaphore (Peter Xu, 2019-03-08, 1 file, +1/-2)
Sending an init-done message using lock+cond is overkill; replace it
with a simpler semaphore.

Meanwhile, initialize the semaphore unconditionally, so that we can also
destroy it unconditionally in finalize, which is cleaner.

Signed-off-by: Peter Xu <peterx@redhat.com>
Message-id: 20190306115532.23025-2-peterx@redhat.com
Message-Id: <20190306115532.23025-2-peterx@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
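A hedged sketch of the resulting handshake (simplified; init_done_sem
and the other field names follow the surrounding iothread code):

    /* IOThread side: publish the thread id, then signal the creator. */
    static void *iothread_run(void *opaque)
    {
        IOThread *iothread = opaque;

        iothread->thread_id = qemu_get_thread_id();
        qemu_sem_post(&iothread->init_done_sem);   /* init done */

        /* ... run the event loop until asked to stop ... */
        return NULL;
    }

    /* Creator side: init the semaphore unconditionally, start the
     * thread, then block until initialization has finished. */
    qemu_sem_init(&iothread->init_done_sem, 0);
    qemu_thread_create(&iothread->thread, "iothread", iothread_run,
                       iothread, QEMU_THREAD_JOINABLE);
    qemu_sem_wait(&iothread->init_done_sem);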
* vl: introduce vm_shutdown() (Stefan Hajnoczi, 2018-03-08, 1 file, +0/-1)
Commit 00d09fdbbae5f7864ce754913efc84c12fdf9f1a ("vl: pause vcpus before
stopping iothreads") and commit dce8921b2baaf95974af8176406881872067adfa
("iothread: Stop threads before main() quits") tried to work around the
fact that emulation was still active during termination by stopping
iothreads. They suffer from race conditions:

1. virtio_scsi_handle_cmd_vq() racing with iothread_stop_all() hits the
   virtio_scsi_ctx_check() assertion failure because the BDS AioContext
   has been modified by iothread_stop_all().

2. Guest vq kick racing with main loop termination leaves a readable
   ioeventfd that is handled by the next aio_poll() when external
   clients are enabled again, resulting in unwanted emulation activity.

This patch obsoletes those commits by fully disabling emulation activity
when vcpus are stopped.

Use the new vm_shutdown() function instead of pause_all_vcpus() so that
vm change state handlers are invoked too. Virtio devices will now stop
their ioeventfds, preventing further emulation activity after vm_stop().

Note that vm_stop(RUN_STATE_SHUTDOWN) cannot be used because it emits a
QMP STOP event that may affect existing clients.

It is no longer necessary to call replay_disable_events() directly since
vm_shutdown() does so already.

Drop iothread_stop_all() since it is no longer used.

Cc: Fam Zheng <famz@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20180307144205.20619-5-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
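A hedged sketch of the resulting teardown order in main() (simplified;
the surrounding vl.c code is omitted):

    /* Quiesce all emulation first; only then is it safe to tear down
     * the block layer. */
    vm_shutdown();      /* stops vcpus and runs vm change state handlers,
                         * so virtio devices stop their ioeventfds */
    bdrv_close_all();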
* iothread: fix iothread_stop() race condition (Stefan Hajnoczi, 2017-12-19, 1 file, +2/-1)
There is a small chance that iothread_stop() hangs as follows:

  Thread 3 (Thread 0x7f63eba5f700 (LWP 16105)):
  #0  0x00007f64012c09b6 in ppoll () at /lib64/libc.so.6
  #1  0x000055959992eac9 in ppoll (__ss=0x0, __timeout=0x0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/bits/poll2.h:77
  #2  0x000055959992eac9 in qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>) at util/qemu-timer.c:322
  #3  0x0000559599930711 in aio_poll (ctx=0x55959bdb83c0, blocking=blocking@entry=true) at util/aio-posix.c:629
  #4  0x00005595996806fe in iothread_run (opaque=0x55959bd78400) at iothread.c:59
  #5  0x00007f640159f609 in start_thread () at /lib64/libpthread.so.0
  #6  0x00007f64012cce6f in clone () at /lib64/libc.so.6

  Thread 1 (Thread 0x7f640b45b280 (LWP 16103)):
  #0  0x00007f64015a0b6d in pthread_join () at /lib64/libpthread.so.0
  #1  0x00005595999332ef in qemu_thread_join (thread=<optimized out>) at util/qemu-thread-posix.c:547
  #2  0x00005595996808ae in iothread_stop (iothread=<optimized out>) at iothread.c:91
  #3  0x000055959968094d in iothread_stop_iter (object=<optimized out>, opaque=<optimized out>) at iothread.c:102
  #4  0x0000559599857d97 in do_object_child_foreach (obj=obj@entry=0x55959bdb8100, fn=fn@entry=0x559599680930 <iothread_stop_iter>, opaque=opaque@entry=0x0, recurse=recurse@entry=false) at qom/object.c:852
  #5  0x0000559599859477 in object_child_foreach (obj=obj@entry=0x55959bdb8100, fn=fn@entry=0x559599680930 <iothread_stop_iter>, opaque=opaque@entry=0x0) at qom/object.c:867
  #6  0x0000559599680a6e in iothread_stop_all () at iothread.c:341
  #7  0x000055959955b1d5 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4913

The relevant code from iothread_run() is:

  while (!atomic_read(&iothread->stopping)) {
      aio_poll(iothread->ctx, true);

and iothread_stop():

  iothread->stopping = true;
  aio_notify(iothread->ctx);
  ...
  qemu_thread_join(&iothread->thread);

The following scenario can occur:

1. IOThread:
     while (!atomic_read(&iothread->stopping)) -> stopping=false

2. Main loop:
     iothread->stopping = true;
     aio_notify(iothread->ctx);

3. IOThread:
     aio_poll(iothread->ctx, true); -> hang

The bug is explained by the AioContext->notify_me doc comments:

  "If this field is 0, everything (file descriptors, bottom halves,
  timers) will be re-evaluated before the next blocking poll(), thus
  the event_notifier_set call can be skipped."

The problem is that "everything" does not include checking
iothread->stopping. This means iothread_run() will block in aio_poll()
if aio_notify() was called just before aio_poll().

This patch fixes the hang by replacing aio_notify() with
aio_bh_schedule_oneshot(). This makes aio_poll() or g_main_loop_run()
return.

Implementing this properly required a new bool running flag. The new
flag prevents races that are tricky if we try to use iothread->stopping.
Now iothread->stopping is purely for iothread_stop() and
iothread->running is purely for the iothread_run() thread.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20171207201320.19284-6-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
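A hedged sketch of the fix described above (simplified; not the verbatim
patch):

    /* The stop request is now delivered as an event that the IOThread's
     * loop must process, so aio_poll()/g_main_loop_run() cannot sleep
     * through it. */
    static void iothread_stop_bh(void *opaque)
    {
        IOThread *iothread = opaque;

        iothread->running = false;   /* only this BH clears the flag */
    }

    void iothread_stop(IOThread *iothread)
    {
        iothread->stopping = true;
        aio_bh_schedule_oneshot(iothread->ctx, iothread_stop_bh, iothread);
        qemu_thread_join(&iothread->thread);
    }

    /* iothread_run() now loops on iothread->running instead of
     * !iothread->stopping. */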
* iothread: add iothread_by_id() API (Stefan Hajnoczi, 2017-12-19, 1 file, +1/-0)
Encapsulate IOThread QOM object lookup so that callers don't need to
know how and where IOThread objects live.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20171206144550.22295-8-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
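A hedged usage sketch (the id "iothread0" is just an example of an
-object id):

    /* Look up an IOThread by id without knowing where it lives in the
     * QOM tree, then grab its AioContext. */
    IOThread *iothread = iothread_by_id("iothread0");
    if (iothread) {
        AioContext *ctx = iothread_get_aio_context(iothread);
        /* ... hand work to ctx ... */
    }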
* iothread: export iothread_stop() (Peter Xu, 2017-10-03, 1 file, +1/-0)
Export iothread_stop() so that internal iothread users can explicitly
stop an iothread without destroying it.

While at it, fix iothread_stop() to allow it to be called multiple
times. Before this patch we could call iothread_stop() more than once
on a single iothread, which is not correct since qemu_thread_join()
must not run twice. From the pthread_join() manual:

  Joining with a thread that has previously been joined results in
  undefined behavior.

Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-id: 20170928025958.1420-4-peterx@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
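A hedged sketch of the guard that makes repeated calls safe (simplified
from the description above):

    void iothread_stop(IOThread *iothread)
    {
        /* Never started, or already asked to stop: nothing to join, and
         * qemu_thread_join() must not run twice on the same thread. */
        if (!iothread->ctx || iothread->stopping) {
            return;
        }
        iothread->stopping = true;
        aio_notify(iothread->ctx);
        qemu_thread_join(&iothread->thread);
    }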
* iothread: provide helpers for internal use (Peter Xu, 2017-10-03, 1 file, +8/-0)
IOThread is a general framework that wraps an I/O event loop environment
and the real thread running it. It is also useful internally inside
QEMU.

Provide helpers for creating iothreads to be used internally, and put
all internally used iothreads into the internal object container.

Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-id: 20170928025958.1420-3-peterx@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
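A hedged usage sketch of the new helpers (the id string is illustrative,
and error handling is reduced to &error_abort):

    /* Create a private iothread, use its event loop, destroy it when
     * done. */
    IOThread *iothread = iothread_create("my-internal-iothread", &error_abort);
    AioContext *ctx = iothread_get_aio_context(iothread);
    /* ... schedule BHs, timers or fd handlers on ctx ... */
    iothread_destroy(iothread);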
* qemu-iothread: IOThread supports the GMainContext event loop (Wang Yong, 2017-09-08, 1 file, +4/-0)
IOThread uses an AioContext event loop and does not run a GMainContext.
Therefore a chardev cannot work in an IOThread, for example a chardev
used for colo-compare packet reception.

This patch makes the IOThread run the GMainContext event loop as well,
so that chardev and IOThread can work together.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Wang Yong <wang.yong155@zte.com.cn>
Signed-off-by: Wang Guang <wang.guang55@zte.com.cn>
Signed-off-by: Jason Wang <jasowang@redhat.com>
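A hedged usage sketch of the new accessor (iothread and source are
assumed to exist in the surrounding code):

    /* The IOThread now also runs a GMainContext, so GSource-based users
     * such as chardevs can be attached to it. */
    GMainContext *gctx = iothread_get_g_main_context(iothread);
    g_source_attach(source, gctx);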
* iothread: add poll-grow and poll-shrink parameters (Stefan Hajnoczi, 2017-01-03, 1 file, +2/-0)
These parameters control the poll time self-tuning algorithm. They are
optional and will default to sane values if omitted.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20161201192652.9509-14-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
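A hedged illustration of how such knobs typically interact (a simplified
sketch, not the actual algorithm in util/aio-posix.c): the poll window
grows multiplicatively by poll-grow, shrinks by dividing by poll-shrink,
and is capped at poll-max-ns.

    /* Simplified self-tuning sketch: apply the knobs to the current
     * polling window; the caller decides whether to grow or shrink. */
    static int64_t tune_poll_ns(int64_t poll_ns, bool grow_window,
                                int64_t poll_max_ns, int64_t poll_grow,
                                int64_t poll_shrink)
    {
        if (grow_window) {
            /* Polling looked too short: enlarge the window, capped at max. */
            poll_ns = poll_ns ? poll_ns * poll_grow : poll_max_ns / 8;
            return poll_ns < poll_max_ns ? poll_ns : poll_max_ns;
        }
        /* Polling looked wasteful: shrink the window (0 disables polling). */
        return poll_shrink ? poll_ns / poll_shrink : 0;
    }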
* iothread: add polling parameters (Stefan Hajnoczi, 2017-01-03, 1 file, +3/-0)
Poll mode can be configured with -object iothread,poll-max-ns=NUM.
Polling is disabled with a value of 0 nanoseconds.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20161201192652.9509-7-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* iothread: Stop threads before main() quits (Fam Zheng, 2016-09-13, 1 file, +1/-0)
Right after main_loop ends, we release various things but keep the
iothread alive. The latter is not prepared for the sudden change of
resources.

Specifically, after bdrv_close_all(), virtio-scsi dataplane gets a
surprise at the empty BlockBackend:

  (gdb) bt
      at /usr/src/debug/qemu-2.6.0/hw/scsi/virtio-scsi.c:543
      at /usr/src/debug/qemu-2.6.0/hw/scsi/virtio-scsi.c:577

It is because d->conf.blk->root is set to NULL, so blk_get_aio_context()
returns qemu_aio_context, whereas s->ctx is still pointing to the
iothread:

  hw/scsi/virtio-scsi.c:543:

    if (s->dataplane_started) {
        assert(blk_get_aio_context(d->conf.blk) == s->ctx);
    }

To fix this, let's stop iothreads before doing bdrv_close_all().

Cc: qemu-stable@nongnu.org
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1473326931-9699-1-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
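A hedged sketch of the resulting order in main() (simplified; this
arrangement was later superseded by the vm_shutdown() change above):

    /* Stop iothread event loops before the block layer is torn down, so
     * dataplane code never runs against an empty BlockBackend. */
    iothread_stop_all();
    bdrv_close_all();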
* Remove various unused functions (Thomas Huth, 2015-05-08, 1 file, +0/-1)
The functions tpm_backend_thread_tpm_reset() and iothread_find() are
completely unused, let's remove them.

Signed-off-by: Thomas Huth <huth@tuxfamily.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
* iothread: make IOThread struct definition public (Stefan Hajnoczi, 2014-04-04, 1 file, +11/-1)
Make the IOThread struct definition public so objects can be embedded in
parent structs.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
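A hedged sketch of what the public definition enables (MyDevice,
MY_DEVICE and my_device_instance_init are illustrative, not from the
patch):

    /* With the struct definition public, an owner can embed the
     * IOThread object directly instead of only holding a pointer. */
    typedef struct MyDevice {
        DeviceState parent_obj;
        IOThread iothread;          /* embedded QOM object */
    } MyDevice;

    static void my_device_instance_init(Object *obj)
    {
        MyDevice *d = MY_DEVICE(obj);

        object_initialize(&d->iothread, sizeof(d->iothread), TYPE_IOTHREAD);
    }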
* iothread: add I/O thread object (Stefan Hajnoczi, 2014-03-13, 1 file, +30/-0)
This is a stand-in for Michael Roth's QContext. I expect this to be
replaced once QContext is completed.

The IOThread object is an AioContext event loop thread. This patch adds
the concept of multiple event loop threads, allowing users to define
them.

When SMP guests run on SMP hosts it makes sense to instantiate multiple
IOThreads. This spreads event loop processing across multiple cores.

Note that additional patches are required to actually bind a device to
an IOThread.

[Andreas Färber <afaerber@suse.de> pointed out that the embedded parent
object instance should be called "parent_obj" and have a newline
afterwards. This patch has been changed to reflect this.
-- Stefan]

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
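A hedged sketch of the core idea (heavily simplified; the real loop also
handles locking and, in the later commits above, polling and a
GMainContext):

    /* Each IOThread runs its own AioContext event loop in a dedicated
     * host thread, separate from the main loop. */
    static void *iothread_run(void *opaque)
    {
        IOThread *iothread = opaque;

        while (!iothread->stopping) {
            aio_poll(iothread->ctx, true);  /* one blocking event loop pass */
        }
        return NULL;
    }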