Commit messages
|
Clean up includes so that osdep.h is included first, and headers
that it already pulls in are not included manually.
This commit was created with scripts/clean-includes.
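The resulting convention, illustrated (qemu/osdep.h already pulls in
common system headers such as <stdint.h> and <stdbool.h>, so they are
not repeated):

    #include "qemu/osdep.h"   /* always first */
    #include "qemu/thread.h"  /* other headers follow; nothing that
                               * osdep.h already provides is repeated */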
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1454089805-5470-16-git-send-email-peter.maydell@linaro.org
|
The coroutine files are currently referenced by the block-obj-y
variable. The coroutine functionality, though, is already used by more
than just the block code; e.g. the migration code uses coroutine
yield, and in the future the I/O channel code will also use it. Since
the coroutine code is nicely self-contained, it can easily be built as
part of the libqemuutil.a library, making it widely available.

The headers are also moved into include/qemu, instead of the
include/block directory, since they are now part of the util codebase,
and the implementation was never in the block/ directory either.
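Non-block users can now pull the API in from its new location; a
minimal sketch (the worker function is hypothetical):

    #include "qemu/osdep.h"
    #include "qemu/coroutine.h"   /* was include/block/coroutine.h */

    /* Hypothetical non-block user of the coroutine API. */
    static void coroutine_fn my_co_worker(void *opaque)
    {
        /* ... do a slice of work ... */
        qemu_coroutine_yield();   /* resumed later by its owner */
        /* ... continue after being re-entered ... */
    }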
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
|
This patch simplifies thread_pool_completion_bh().

The function first checks elem->state:

    if (elem->state != THREAD_DONE) {
        continue;
    }

It then goes on to check elem->state == THREAD_DONE, although we
already know this must be the case.

The QLIST_REMOVE() is duplicated down both branches of an if-else
statement, so it can be lifted out as well.
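For illustration, a rough sketch of the loop shape after the
simplification (not the verbatim patch; locking, tracing, and
rescheduling details are omitted):

    QLIST_FOREACH_SAFE(elem, &pool->head, all, next) {
        if (elem->state != THREAD_DONE) {
            continue;
        }
        /* elem->state == THREAD_DONE is guaranteed from here on, so
         * no second check is needed. */
        QLIST_REMOVE(elem, all);   /* hoisted out of both branches */
        if (elem->common.cb) {
            elem->common.cb(elem->common.opaque, elem->ret);
        }
        qemu_aio_unref(elem);
    }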
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1427992762-10126-1-git-send-email-stefanha@redhat.com
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
I'll use it with block backends shortly, and the name is going to fit
badly there. It's a block layer thing anyway, not just a block driver
thing.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
I'll use BlockDriverAIOCB with block backends shortly, and the name is
going to fit badly there. It's a block layer thing anyway, not just a
block driver thing.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
Suggested-by: BenoƮt Canet <benoit.canet@irqsave.net>
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
The .cancel_async implementation shares its first half with .cancel:
try to steal the request if it is not submitted yet. In this case, set
the elem to THREAD_DONE state and ret to -ECANCELED, which means
thread_pool_completion_bh will call the cb with -ECANCELED. If the
request is already submitted, do nothing, as we know the normal
completion will happen in the future.

Testing code update:

Before, done_cb was only called if the request had already been
submitted by the thread pool. Now done_cb is always called, even
before the request is submitted, because we emulate bdrv_aio_cancel
with bdrv_aio_cancel_async. So update the test criteria accordingly.
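A rough sketch of the cancel_async flow described above (names are
assumed; the real patch also dequeues the stolen element from the
request list):

    static void thread_pool_cancel_async(ThreadPoolElement *elem)
    {
        ThreadPool *pool = elem->pool;

        qemu_mutex_lock(&pool->lock);
        if (elem->state == THREAD_QUEUED) {
            /* Not picked up by a worker yet: steal it.  The
             * completion BH will invoke the cb with -ECANCELED. */
            elem->state = THREAD_DONE;
            elem->ret = -ECANCELED;
            qemu_bh_schedule(pool->completion_bh);
        }
        /* THREAD_ACTIVE or later: let normal completion happen. */
        qemu_mutex_unlock(&pool->lock);
    }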
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
Dragging block_int.h into a header is *not* nice. Fortunately, this
is the only offender.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
The thread pool has a race condition if two elements complete before
thread_pool_completion_bh() runs:
If element A's callback waits for element B using aio_poll(), it will
deadlock, since pool->completion_bh is not marked as scheduled when
the nested aio_poll() runs.
Fix this by marking the BH scheduled while thread_pool_completion_bh()
is executing. This way any nested aio_poll() loops will enter
thread_pool_completion_bh() and complete the remaining elements.
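A sketch of the resulting shape (assumed, not the verbatim patch):

    static void thread_pool_completion_bh(void *opaque)
    {
        ThreadPool *pool = opaque;
        ThreadPoolElement *elem, *next;

        QLIST_FOREACH_SAFE(elem, &pool->head, all, next) {
            if (elem->state != THREAD_DONE) {
                continue;
            }
            QLIST_REMOVE(elem, all);
            if (elem->common.cb) {
                /* The fix: mark the BH scheduled before invoking the
                 * callback, so a nested aio_poll() inside the
                 * callback re-enters this handler and completes the
                 * remaining elements instead of deadlocking. */
                qemu_bh_schedule(pool->completion_bh);
                elem->common.cb(elem->common.opaque, elem->ret);
            }
        }
    }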
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
EventNotifier is implemented using an eventfd or pipe. It therefore
consumes file descriptors, which are limited by rlimits and should be
used sparingly.
Switch from EventNotifier to QEMUBH in thread-pool.c. Originally
EventNotifier was used because qemu_bh_schedule() was not thread-safe
yet.
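The gist of the conversion, sketched (field names are assumed):

    /* Before: one eventfd/pipe per thread pool. */
    event_notifier_init(&pool->notifier, false);

    /* After: a bottom half consumes no file descriptor. */
    pool->completion_bh = aio_bh_new(pool->ctx,
                                     thread_pool_completion_bh, pool);

    /* Worker threads wake the main loop with: */
    qemu_bh_schedule(pool->completion_bh);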
Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
The current flow of canceling a thread from the THREAD_ACTIVE state
is:

1) Caller wants to cancel a request, so it calls thread_pool_cancel.
2) thread_pool_cancel waits on the condition variable
   elem->check_cancel.
3) The worker thread changes state to THREAD_DONE once the task is
   done, notifies elem->check_cancel to allow thread_pool_cancel to
   continue execution, and signals the notifier (pool->notifier) to
   allow the callback function to be called later. But because of the
   global mutex, the notifier won't get processed until steps 4) and
   5) are done.
4) thread_pool_cancel continues, leaving the notifier signaled, and
   just returns to the caller.
5) The caller thinks the request is already canceled successfully, so
   it releases any related data, such as freeing elem->common.opaque.
6) In the next main loop iteration, the notifier handler,
   event_notifier_ready, is called. It finds the canceled thread in
   THREAD_DONE state, so it calls elem->common.cb with a (likely)
   dangling opaque pointer. This is a use-after-free.

Fix it by calling event_notifier_ready before leaving
thread_pool_cancel.
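Sketched, the fix changes thread_pool_cancel along these lines (names
taken from the description above; details omitted):

    static void thread_pool_cancel(BlockDriverAIOCB *acb)
    {
        ThreadPoolElement *elem = (ThreadPoolElement *)acb;
        ThreadPool *pool = elem->pool;

        qemu_mutex_lock(&pool->lock);
        /* ... steal the request, or wait on elem->check_cancel until
         * the worker reaches THREAD_DONE ... */
        qemu_mutex_unlock(&pool->lock);

        /* The fix: process the signaled notifier before returning,
         * so the callback runs while elem->common.opaque is still
         * valid, not in a later main loop iteration. */
        event_notifier_ready(&pool->notifier);
    }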
Test case update: this change lets cancel complete earlier than
test-thread-pool.c expects, so update the code to check for this
case: if the request is already done, done_cb sets .aiocb to NULL,
and we skip calling bdrv_aio_cancel on it.
Reported-by: Ulrich Obergfell <uobergfe@redhat.com>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
If enabled, set the thread name at creation (on GNU systems with
pthread_setname_np).

Fix up all the callers with a thread name.
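With this change, qemu_thread_create() takes the name as an extra
argument; a caller looks roughly like this (worker_thread_fn and arg
are placeholders):

    QemuThread thread;

    qemu_thread_create(&thread, "worker", worker_thread_fn, arg,
                       QEMU_THREAD_DETACHED);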
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
|
include/qemu/timer.h has no need to include main-loop.h, and doing so
causes an issue for the next patch. Unfortunately, various files
assume that including timer.h will pull in main-loop.h. Untangle this
mess.
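Files that relied on the transitive include now pull main-loop.h in
directly; illustrative:

    #include "qemu/main-loop.h"   /* no longer pulled in via timer.h */
    #include "qemu/timer.h"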
Signed-off-by: Alex Bligh <alex@alex.org.uk>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
The .io_flush() handler no longer exists and has no users. Drop the
io_flush argument to aio_set_fd_handler() and related functions.
The AioFlushEventNotifierHandler and AioFlushHandler typedefs are no
longer used and are dropped too.
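The signature change, roughly (argument lists sketched from the
description above):

    /* before */
    aio_set_fd_handler(ctx, fd, io_read, io_write, io_flush, opaque);
    /* after */
    aio_set_fd_handler(ctx, fd, io_read, io_write, opaque);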
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
.io_flush() is no longer called, so drop thread_pool_active(). The
block layer is the only thread-pool.c user, and it already tracks
in-flight requests; therefore we do not need thread_pool_active().
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
Now that each AioContext has a ThreadPool and the main loop AioContext
can be fetched with bdrv_get_aio_context(), we can eliminate the concept
of a global thread pool from thread-pool.c.
The submit functions must take a ThreadPool* argument.
block/raw-posix.c and block/raw-win32.c use
aio_get_thread_pool(bdrv_get_aio_context(bs)) to fetch the main loop's
ThreadPool.
tests/test-thread-pool.c must be updated to reflect the new
thread_pool_submit() function prototypes.
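Callers now fetch the pool explicitly; roughly (aio_worker stands in
for the driver's worker function):

    ThreadPool *pool = aio_get_thread_pool(bdrv_get_aio_context(bs));

    acb = thread_pool_submit_aio(pool, aio_worker, acb_arg,
                                 cb, opaque);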
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
|
ThreadPool is tied to an AioContext through its event notifier, which
dictates in which AioContext the work item's callback function will be
invoked.
In order to support multiple AioContexts we need to support multiple
ThreadPool instances.
This patch adds the new/free functions. The free function deserves
special attention because it quiesces remaining worker threads. This
requires a new condition variable and a "stopping" flag to let workers
know they should terminate once idle.
We never needed to do this before, since the global thread pool was
not explicitly destroyed until process termination.
Also stash the AioContext pointer in ThreadPool so that we can call
aio_set_event_notifier() in thread_pool_free(). We didn't need to hold
onto AioContext previously since there was no free function.
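The quiescing logic in thread_pool_free(), sketched (field names are
assumed; the real function also stops the thread-spawning BH first):

    qemu_mutex_lock(&pool->lock);
    pool->stopping = true;              /* tell workers to terminate */
    while (pool->cur_threads > 0) {
        qemu_sem_post(&pool->sem);      /* wake an idle worker */
        qemu_cond_wait(&pool->worker_stopped, &pool->lock);
    }
    qemu_mutex_unlock(&pool->lock);

    /* ...and detach the notifier from the stashed AioContext (the
     * io_flush argument still existed at this point): */
    aio_set_event_notifier(pool->ctx, &pool->notifier, NULL, NULL);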
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
|
Move global variables into a struct so multiple thread pools can be
supported in the future.
This patch does not change thread-pool.h interfaces. There is still a
global thread pool and it is not yet possible to create/destroy
individual thread pools. Moving the variables into a struct first makes
later patches easier to review.
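The move, sketched with an abridged field list:

    typedef struct ThreadPool {
        EventNotifier notifier;          /* formerly a global */
        QemuMutex lock;
        QemuSemaphore sem;
        int max_threads;
        QLIST_HEAD(, ThreadPoolElement) head;
        QTAILQ_HEAD(, ThreadPoolElement) request_list;
        /* ... remaining former globals ... */
    } ThreadPool;

    static ThreadPool global_pool;       /* still a single instance */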
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
|
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
Now that AIOPool no longer keeps a freelist, it isn't really a "pool"
anymore. Rename it to AIOCBInfo and make it const since it no longer
needs to be modified.
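After the rename, an AIOCB descriptor is a simple const table; e.g.
the thread pool's (illustrative):

    static const AIOCBInfo thread_pool_aiocb_info = {
        .aiocb_size = sizeof(ThreadPoolElement),
        .cancel     = thread_pool_cancel,
    };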
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
The ordering is:

    worker thread                consumer thread
    ---------------------------------------------------------------
    write ret                    event_notifier_test_and_clear
    wmb()                        read state
    write state                  rmb()
    event_notifier_set           read ret
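In code, the pairing looks roughly like this (a sketch; field names
are assumed):

    /* worker thread: publish ret before state */
    elem->ret = ret;
    smp_wmb();                    /* pairs with the consumer's rmb */
    elem->state = THREAD_DONE;
    event_notifier_set(&pool->notifier);

    /* consumer thread */
    event_notifier_test_and_clear(&pool->notifier);
    if (elem->state == THREAD_DONE) {
        smp_rmb();                /* read state before ret */
        ret = elem->ret;
    }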
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
Add a generic thread pool. The code is roughly based on
posix-aio-compat.c, with some changes, especially the following:

- use QemuSemaphore instead of QemuCond;
- separate the state of the thread from the return code of the worker
  function. The return code is totally opaque to the thread pool;
- do not busy wait when doing cancellation.

A more generic thread pool (but still specific to I/O, so that in the
future it can use special scheduling classes or PI mutexes) can have
many uses: it allows more flexibility in raw-posix.c, can more easily
be extended to Win32, and will also be used to do an msync of the
persistent bitmap.
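Usage, sketched against the interface as of this commit (the pool is
still global; function names are placeholders):

    static int do_blocking_work(void *arg)
    {
        /* Runs in a worker thread; the return code is opaque to the
         * pool and is simply handed to the completion callback. */
        return 0;
    }

    static void work_done(void *opaque, int ret)
    {
        /* Runs back in the main loop once the worker finishes. */
    }

    BlockDriverAIOCB *acb;

    acb = thread_pool_submit_aio(do_blocking_work, arg,
                                 work_done, arg);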
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>