path: root/block/io.c
Commit log (most recent first); each entry shows the commit message, author, date, and the diffstat for this file.
* block: Allow graph changes in bdrv_drain_all_begin/end sections (Kevin Wolf, 2018-06-18; 1 file, -14/+46)

bdrv_drain_all_*() used bdrv_next() to iterate over all root nodes and
did a subtree drain for each of them. This works fine as long as the
graph is static, but sadly, reality looks different. If the graph
changes so that root nodes are added or removed, we would have to
compensate for this. bdrv_next() returns each root node only once even
if it's the root node for multiple BlockBackends or for a monitor-owned
block driver tree, which would only complicate things.

The much easier and more obviously correct way is to fundamentally
change the way the functions work: Iterate over all BlockDriverStates,
no matter who owns them, and drain them individually. Compensation is
only necessary when a new BDS is created inside a drain_all section.
Removal of a BDS doesn't require any action because it's gone
afterwards anyway.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
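The shape of the new approach, as a minimal sketch; every type and helper below is a hypothetical stand-in, not QEMU's real machinery:

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical stand-in for a BlockDriverState: an in-flight request
     * counter plus a link in a global list of all nodes. */
    typedef struct Node {
        int in_flight;
        struct Node *next;
    } Node;

    static Node *all_nodes;                      /* every node, any owner */

    static void quiesce_node(Node *n)   { (void)n; /* stop new requests */ }
    static void unquiesce_node(Node *n) { (void)n; /* resume */ }
    static void event_loop_poll(void)   { /* run request completions */ }

    /* Quiesce every node individually, then wait in one polling phase
     * until all are idle.  A node removed during the section needs no
     * compensation; one created inside it must be quiesced on creation. */
    static void drain_all_begin(void)
    {
        bool busy = true;

        for (Node *n = all_nodes; n; n = n->next) {
            quiesce_node(n);
        }
        while (busy) {
            busy = false;
            for (Node *n = all_nodes; n; n = n->next) {
                busy |= n->in_flight > 0;
            }
            if (busy) {
                event_loop_poll();
            }
        }
    }

    static void drain_all_end(void)
    {
        for (Node *n = all_nodes; n; n = n->next) {
            unquiesce_node(n);
        }
    }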
* block: ignore_bds_parents parameter for drain functions (Kevin Wolf, 2018-06-18; 1 file, -35/+53)

In the future, bdrv_drain_all_begin/end() will drain all individual
nodes separately rather than whole subtrees. This means that we don't
want to propagate the drain to all parents any more: If the parent is a
BDS, it will already be drained separately. Recursing to all parents is
unnecessary work and would make it an O(n²) operation.

Prepare the drain function for the changed drain_all by adding an
ignore_bds_parents parameter to the internal implementation that
prevents the propagation of the drain to BDS parents. We still (have
to) propagate it to non-BDS parents like BlockBackends or Jobs because
those are not drained separately.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Move bdrv_drain_all_begin() out of coroutine context (Kevin Wolf, 2018-06-18; 1 file, -5/+17)

Before we can introduce a single polling loop for all nodes in
bdrv_drain_all_begin(), we must make sure to run it outside of
coroutine context like we already do for bdrv_do_drained_begin().

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Defer .bdrv_drain_begin callback to polling phase (Kevin Wolf, 2018-06-18; 1 file, -5/+23)

We cannot allow aio_poll() in bdrv_drain_invoke(begin=true) until we're
done with propagating the drain through the graph and are doing the
single final BDRV_POLL_WHILE().

Just schedule the coroutine with the callback and increase
bs->in_flight to make sure that the polling phase will wait for it.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Don't poll in parent drain callbacks (Kevin Wolf, 2018-06-18; 1 file, -8/+16)

bdrv_do_drained_begin() is only safe if we have a single
BDRV_POLL_WHILE() after quiescing all affected nodes. We cannot allow
parent callbacks to introduce a nested polling loop that could cause
graph changes while we're traversing the graph.

Split off bdrv_do_drained_begin_quiesce(), which only quiesces a single
node without waiting for its requests to complete. These requests will
be waited for in the BDRV_POLL_WHILE() call down the call chain.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Drain recursively with a single BDRV_POLL_WHILE() (Kevin Wolf, 2018-06-18; 1 file, -18/+45)

Anything can happen inside BDRV_POLL_WHILE(), including graph changes
that may interfere with its callers (e.g. child list iteration in
recursive callers of bdrv_do_drained_begin).

Switch to a single BDRV_POLL_WHILE() call for the whole subtree at the
end of bdrv_do_drained_begin() to avoid such effects. The recursion now
happens inside the loop condition. As the graph can only change between
bdrv_drain_poll() calls, but not inside of one, doing the recursion
here is safe.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
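The recursion-in-the-loop-condition idea, sketched with hypothetical types (the real code polls via bdrv_drain_poll() inside BDRV_POLL_WHILE()):

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct Node {
        int in_flight;
        struct Node *child;            /* first child */
        struct Node *sibling;          /* next sibling */
    } Node;

    /* May complete requests and, as a side effect, change the graph. */
    static void event_loop_poll(void) { }

    /* The recursion lives in the poll condition: the graph may change
     * between two evaluations, but not during one, so walking the child
     * list here is safe. */
    static bool subtree_busy(Node *n)
    {
        bool busy = n->in_flight > 0;

        for (Node *c = n->child; c; c = c->sibling) {
            busy |= subtree_busy(c);
        }
        return busy;
    }

    static void drained_begin(Node *root)
    {
        /* ... quiesce the whole subtree first (not shown) ... */
        while (subtree_busy(root)) {
            event_loop_poll();
        }
    }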
* block: Remove bdrv_drain_recurse() (Kevin Wolf, 2018-06-18; 1 file, -33/+3)

For bdrv_drain(), recursively waiting for child node requests is
pointless because we didn't quiesce their parents, so new requests
could come in anyway. Letting the function work only on a single node
makes it more consistent.

For subtree drains and drain_all, we already have the recursion in
bdrv_do_drained_begin(), so the extra recursion doesn't add anything
either.

Remove the useless code.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
* block: Really pause block jobs on drain (Kevin Wolf, 2018-06-18; 1 file, -6/+34)

We already requested that block jobs be paused in .bdrv_drained_begin,
but no guarantee was made that the job was actually inactive at the
point where bdrv_drained_begin() returned.

This introduces a new callback BdrvChildRole.bdrv_drained_poll() and
uses it to make bdrv_drain_poll() consider block jobs using the node to
be drained.

For the test case to work as expected, we have to switch from
block_job_sleep_ns() to qemu_co_sleep_ns() so that the test job is even
considered active and must be waited for when draining the node.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
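The shape of such a poll callback, as a hedged sketch; the struct and field names below are illustrative, not QEMU's:

    #include <stdbool.h>

    typedef struct Job {
        bool paused;   /* the job has acknowledged the pause request */
        bool busy;     /* the job is between yield points, still running */
    } Job;

    /* A parent's drained-poll hook: return true while the drain still
     * has to wait.  Requesting the pause is not enough; the node only
     * counts as drained once the job has actually stopped. */
    static bool job_drained_poll(Job *job)
    {
        return !job->paused || job->busy;
    }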
* block: Avoid unnecessary aio_poll() in AIO_WAIT_WHILE() (Kevin Wolf, 2018-06-18; 1 file, -1/+10)

Commit 91af091f923 added an additional aio_poll() to BDRV_POLL_WHILE()
in order to make sure that all pending BHs are executed on drain. This
was the wrong place to make the fix, as it is useless overhead for all
other users of the macro and unnecessarily complicates the mechanism.

This patch effectively reverts said commit (the context has changed a
bit and the code has moved to AIO_WAIT_WHILE()) and instead polls in
the loop condition for drain.

The effect is probably hard to measure in any real-world use case
because actual I/O will dominate, but if I run only the initialisation
part of 'qemu-img convert' where it calls bdrv_block_status() for the
whole image to find out how much data there is to copy, this phase
actually needs only roughly half the time after this patch.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
* block: Don't manually poll in bdrv_drain_all() (Kevin Wolf, 2018-06-18; 1 file, -29/+12)

All involved nodes are already idle, we called bdrv_do_drain_begin() on
them.

The comment in the code suggested that this was not correct because the
completion of a request on one node could spawn a new request on a
different node (which might have been drained before, so we wouldn't
drain the new request). In reality, new requests to different nodes
aren't spawned out of nothing, but only in the context of a parent
request, and they aren't submitted to random nodes, but only to child
nodes. As long as we still poll for the completion of the parent
request (which we do), draining each root node separately is good
enough.

Remove the additional polling code from bdrv_drain_all_begin() and
replace it with an assertion that all nodes are already idle after we
drained them separately.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
* block: Remove 'recursive' parameter from bdrv_drain_invoke() (Kevin Wolf, 2018-06-18; 1 file, -10/+3)

All callers pass false for the 'recursive' parameter now. Remove it.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
* block: Use bdrv_do_drain_begin/end in bdrv_drain_all() (Kevin Wolf, 2018-06-18; 1 file, -8/+2)

bdrv_do_drain_begin/end() already implements everything that
bdrv_drain_all_begin/end() needs and currently still does manually:
disabling external events, calling parent drain callbacks, and calling
block driver callbacks.

It also does two more things: The first is incrementing
bs->quiesce_counter. bdrv_drain_all() already stood out in the test
case by behaving differently from the other drain variants; adding this
is not only safe, but in fact a bug fix.

The second is calling bdrv_drain_recurse(). We already do that later in
the same function in a loop, so doing an early first iteration doesn't
hurt.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
* test-bdrv-drain: bdrv_drain() works with cross-AioContext events (Kevin Wolf, 2018-06-18; 1 file, -4/+0)

As long as nobody keeps the other I/O thread from working, there is no
reason why bdrv_drain() wouldn't work with cross-AioContext events. The
key is that the root request we're waiting for is in the AioContext
we're polling (which it always is for bdrv_drain()) so that aio_poll()
is woken up in the end.

Add a test case that shows that it works. Remove the comment in
bdrv_drain() that claims otherwise.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Introduce API for copy offloading (Fam Zheng, 2018-06-01; 1 file, -0/+97)

Introduce the bdrv_co_copy_range() API for copy offloading. Block
drivers implementing this API support efficient copy operations that
avoid reading each block from the source device and writing it to the
destination device.

Examples of copy offload primitives are SCSI EXTENDED COPY and Linux
copy_file_range(2).

Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180601092648.24614-2-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
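bdrv_co_copy_range() is a driver-level API; as a standalone illustration of one offload primitive it can map to, here is a minimal use of Linux copy_file_range(2) (requires glibc 2.27 or newer):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s SRC DST\n", argv[0]);
            return 1;
        }
        int in = open(argv[1], O_RDONLY);
        int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in < 0 || out < 0) {
            perror("open");
            return 1;
        }
        off_t size = lseek(in, 0, SEEK_END);
        lseek(in, 0, SEEK_SET);
        while (size > 0) {
            /* The kernel/filesystem moves the data internally; no
             * read-modify-write bounce through a userspace buffer. */
            ssize_t n = copy_file_range(in, NULL, out, NULL, size, 0);
            if (n <= 0) {
                perror("copy_file_range");
                return 1;
            }
            size -= n;
        }
        return 0;
    }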
* block: Set BDRV_REQ_WRITE_UNCHANGED for COR writes (Max Reitz, 2018-05-15; 1 file, -2/+4)

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 20180421132929.21610-5-mreitz@redhat.com
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
* block: Add BDRV_REQ_WRITE_UNCHANGED flag (Max Reitz, 2018-05-15; 1 file, -1/+5)

This flag signifies that a write request will not change the visible
disk content. With this flag set, it is sufficient to have the
BLK_PERM_WRITE_UNCHANGED permission instead of BLK_PERM_WRITE.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-id: 20180421132929.21610-4-mreitz@redhat.com
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
* block: Merge .bdrv_co_writev{,_flags} in drivers (Eric Blake, 2018-05-15; 1 file, -9/+4)

We have too many driver callback interfaces; simplify the mess somewhat
by merging the flags parameter of .bdrv_co_writev_flags() into
.bdrv_co_writev().

Note that as long as a driver doesn't set .supported_write_flags, the
flags argument will be 0 and behavior is identical. Also note that the
public function bdrv_co_writev() still lacks a flags argument, so the
driver signature is intentionally slightly different. But that's not
the end of the world, nor the first time that the driver interface
differs slightly from the public interface.

Ideally, we should be rewriting all of these drivers to use modern
byte-based interfaces. But that's a more invasive patch to write and
audit, compared to the simplification done here.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Drop last of the sector-based aio callbacks (Eric Blake, 2018-05-15; 1 file, -48/+36)

We are gradually moving away from sector-based interfaces, towards
byte-based. Now that all drivers with aio callbacks are using the
byte-based interfaces, we can remove the sector-based versions.

Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Support byte-based aio callbacks (Eric Blake, 2018-05-15; 1 file, -9/+29)

We are gradually moving away from sector-based interfaces, towards
byte-based. Add new byte-based aio callbacks for read and write, to
match the fact that bdrv_aio_pdiscard is already byte-based.

Ideally, drivers should be converted to use coroutine callbacks rather
than aio; but that is not quite as trivial (and if we were to do that
conversion, the null-aio driver would disappear), so for the short
term, converting the signature but keeping things with aio is easier.

However, we CAN declare that a driver that uses the byte-based aio
interfaces now defaults to byte-based operations, and must explicitly
provide a refresh_limits override to stick with larger alignments
(making the alignment issues more obvious directly in the drivers
touched in the next few patches).

Once all drivers are converted, the sector-based aio callbacks will be
removed; in the meantime, a FIXME comment is added due to a slight
inefficiency that will be touched up as part of that later cleanup.

Simplify some instances of 'bs->drv' into 'drv' while touching this,
since the local variable already exists to reduce typing.

Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* coroutine: avoid co_queue_wakeup recursion (Stefan Hajnoczi, 2018-03-27; 1 file, -2/+1)

qemu_aio_coroutine_enter() is (indirectly) called recursively when
processing co_queue_wakeup. This can lead to stack exhaustion.

This patch rewrites co_queue_wakeup in an iterative fashion (instead of
recursive) with bounded memory usage to prevent stack exhaustion.

qemu_co_queue_run_restart() is inlined into qemu_aio_coroutine_enter()
and the qemu_coroutine_enter() call is turned into a loop to avoid
recursion.

There is one change that is worth mentioning: Previously, when
coroutine A queued coroutine B, qemu_co_queue_run_restart() entered
coroutine B from coroutine A. If A was terminating then it would still
stay alive until B yielded. After this patch B is entered by A's parent
so that A can be deleted immediately if it is terminating.

It is safe to make this change since B could never interact with A if
it was terminating anyway.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20180322152834.12656-3-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
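The recursive-to-iterative transformation in miniature, with hypothetical task types rather than QEMU's Coroutine API:

    #include <stddef.h>

    typedef struct Task {
        struct Task *next;
        void (*run)(struct Task *t);   /* may enqueue further tasks */
    } Task;

    static Task *head, *tail;

    static void enqueue(Task *t)
    {
        t->next = NULL;
        *(tail ? &tail->next : &head) = t;
        tail = t;
    }

    /* Iterative wakeup: every queued task is entered from this single
     * stack frame.  A recursive scheme (each task directly entering the
     * next) grows the stack by one frame per task and can exhaust it. */
    static void run_all(void)
    {
        while (head) {
            Task *t = head;

            head = t->next;
            if (!head) {
                tail = NULL;
            }
            t->run(t);   /* anything it enqueues is handled by this loop */
        }
    }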
* block: extract AIO_WAIT_WHILE() from BlockDriverState (Stefan Hajnoczi, 2018-03-02; 1 file, -8/+2)

BlockDriverState has the BDRV_POLL_WHILE() macro to wait on event loop
activity while a condition evaluates to true. This is used to implement
synchronous operations where it acts as a condvar between the IOThread
running the operation and the main loop waiting for the operation. It
can also be called from the thread that owns the AioContext and in that
case it's just a nested event loop.

BlockBackend needs this behavior but doesn't always have a
BlockDriverState it can use. This patch extracts BDRV_POLL_WHILE() into
the AioWait abstraction, which can be used with AioContext and isn't
tied to BlockDriverState anymore.

This feature could be built directly into AioContext but then all users
would kick the event loop even if they signal different conditions.
Imagine an AioContext with many BlockDriverStates: each time a request
completed, every waiter would wake up and re-check its condition. It's
nicer to keep a separate AioWait object for each condition instead.

Please see "block/aio-wait.h" for details on the API. The name
AIO_WAIT_WHILE() avoids the confusion between AIO_POLL_WHILE() and
AioContext polling.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
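The conceptual shape of such a macro, heavily simplified (this is not QEMU's exact definition; the real AIO_WAIT_WHILE() also distinguishes the IOThread and main-loop cases and kicks the waiting event loop):

    typedef struct Ctx Ctx;                    /* stand-in for AioContext */
    void event_loop_poll_blocking(Ctx *ctx);   /* dispatch one round of events */

    /* Nested event loop: keep dispatching events until the condition
     * the caller is waiting on becomes false. */
    #define WAIT_WHILE(ctx, cond)              \
        do {                                   \
            while ((cond)) {                   \
                event_loop_poll_blocking(ctx); \
            }                                  \
        } while (0)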
* block: fix write with zero flag set and iovector provided (Anton Nefedov, 2018-03-02; 1 file, -1/+1)

The normal bdrv_co_pwritev() use is either
- BDRV_REQ_ZERO_WRITE clear and iovector provided
- BDRV_REQ_ZERO_WRITE set and iovector == NULL

while
- the flag clear and iovector == NULL is an assertion failure in
  bdrv_co_do_zero_pwritev()
- the flag set and iovector provided is in fact allowed (the flag
  prevails and zeroes are written)

However the alignment logic does not support the latter case so the
padding areas get overwritten with zeroes.

Currently, general functions like bdrv_rw_co() do provide iovector
regardless of flags. So, keep it supported and use
bdrv_co_do_zero_pwritev() alignment for it which also makes the code a
bit more obvious anyway.

Signed-off-by: Anton Nefedov <anton.nefedov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
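The dispatch this implies, sketched with placeholder names (QEMUIOVector and the internal helpers are stubbed out):

    #include <assert.h>
    #include <stddef.h>

    typedef struct QIOV QIOV;          /* stand-in for QEMUIOVector */
    enum { REQ_ZERO_WRITE = 1 };

    static void zero_write_dispatch(QIOV *qiov, int flags)
    {
        if (flags & REQ_ZERO_WRITE) {
            /* Flag prevails whether or not a qiov was passed; routing
             * through the zero path also pads alignment with zeroes
             * correctly. */
            /* do_zero_pwritev(...); */
        } else {
            /* No zero flag: data is mandatory. */
            assert(qiov != NULL);
            /* do_pwritev(qiov, ...); */
        }
    }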
* block: Drop unused .bdrv_co_get_block_status() (Eric Blake, 2018-03-02; 1 file, -40/+10)

We are gradually moving away from sector-based interfaces, towards
byte-based. Now that all drivers have been updated to provide the
byte-based .bdrv_co_block_status(), we can delete the sector-based
interface.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Switch passthrough drivers to .bdrv_co_block_status() (Eric Blake, 2018-03-02; 1 file, -16/+20)

We are gradually moving away from sector-based interfaces, towards
byte-based. Update the generic helpers, and all passthrough clients
(blkdebug, commit, mirror, throttle) accordingly.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Add .bdrv_co_block_status() callback (Eric Blake, 2018-03-02; 1 file, -9/+19)

We are gradually moving away from sector-based interfaces, towards
byte-based. Now that the block layer exposes byte-based allocation,
it's time to tackle the drivers. Add a new callback that can operate at
byte granularity. Subsequent patches will then update individual
drivers, then finally remove .bdrv_co_get_block_status().

The new code also passes through the 'want_zero' hint, which will allow
subsequent patches to further optimize callers that only care about how
much of the image is allocated (want_zero is false), rather than full
details about runs of zeroes and which offsets the allocation actually
maps to (want_zero is true).

As part of this effort, fix another part of the documentation: the
claim in commit 4c41cb4 that BDRV_BLOCK_ALLOCATED is short for 'DATA ||
ZERO' is a lie at the block layer (see commit e88ae2264), even though
it is how the bit is computed from the driver layer. After all, there
are intentionally cases where we return ZERO but not ALLOCATED at the
block layer, when we know that a read sees zero because the backing
file is too short. Note that the driver interface is thus slightly
different than the public interface with regards to which bits will be
set, and what guarantees are provided on input.

We also add an assertion that any driver using the new callback will
make progress (the only time pnum will be 0 is if the block layer
already handled an out-of-bounds request, or if there is an error); the
old driver interface did not provide this guarantee, which could lead
to some infinite loops in drastic corner-case failures.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Introduce buf register API (Fam Zheng, 2018-02-08; 1 file, -0/+24)

Allow block drivers to map and unmap a buffer for later I/O, as a
performance hint.

Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20180116060901.17413-5-famz@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
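A hedged usage sketch; the signatures below follow the description above but may differ from the exact prototypes in any given QEMU version:

    #include <stdlib.h>

    typedef struct BlockDriverState BlockDriverState;

    /* Assumed per this commit: hint the driver to map/pin a buffer for
     * later I/O (useful e.g. for RDMA or a userspace NVMe driver). */
    void bdrv_register_buf(BlockDriverState *bs, void *host, size_t size);
    void bdrv_unregister_buf(BlockDriverState *bs, void *host);

    enum { BUF_SIZE = 1 << 20 };

    static void *setup_io_buffer(BlockDriverState *bs)
    {
        void *buf = aligned_alloc(4096, BUF_SIZE);

        bdrv_register_buf(bs, buf, BUF_SIZE);  /* a hint; may be a no-op */
        return buf;
    }

    static void teardown_io_buffer(BlockDriverState *bs, void *buf)
    {
        bdrv_unregister_buf(bs, buf);
        free(buf);
    }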
* block: Allow graph changes in subtree drained section (Kevin Wolf, 2017-12-22; 1 file, -4/+24)

We need to remember how many of the drain sections a node is in were
recursive (i.e. subtree drains rather than plain node drains), so that
they can be correctly applied when children are added or removed during
the drained section.

With this change, it is safe to modify the graph even inside a
bdrv_subtree_drained_begin/end() section.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Add bdrv_subtree_drained_begin/end() (Kevin Wolf, 2017-12-22; 1 file, -11/+43)

bdrv_drained_begin() waits for the completion of requests in the whole
subtree, but it only actually keeps its immediate bs parameter quiesced
until bdrv_drained_end().

Add a version that keeps the whole subtree drained. As of this commit,
graph changes cannot be allowed during a subtree drained section, but
this will be fixed soon.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
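The intended usage pattern (function names as introduced by this commit; the body is illustrative):

    typedef struct BlockDriverState BlockDriverState;
    void bdrv_subtree_drained_begin(BlockDriverState *bs);
    void bdrv_subtree_drained_end(BlockDriverState *bs);

    static void with_subtree_drained(BlockDriverState *bs)
    {
        bdrv_subtree_drained_begin(bs);  /* quiesce bs and all children */
        /* ... no new requests can enter the subtree here ... */
        bdrv_subtree_drained_end(bs);    /* resume the whole subtree */
    }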
* block: Don't notify parents in drain call chain (Kevin Wolf, 2017-12-22; 1 file, -13/+34)

This is in preparation for subtree drains, i.e. drained sections that
affect not only a single node, but recursively all child nodes, too.

Calling the parent callbacks for drain is pointless when we just came
from that parent node recursively and leads to multiple increases of
bs->quiesce_counter in a single drain call. Don't do it.

In order for this to work correctly, the parent callback must be called
for every bdrv_drain_begin/end() call, not only for the outermost one:
If we have a node N with two parents A and B, recursive draining of A
should cause the quiesce_counter of B to increase because its child N
is drained independently of B. If now B is recursively drained, too, A
must increase its quiesce_counter because N is drained independently of
A only now, even if N is going from quiesce_counter 1 to 2.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Nested drain_end must still call callbacks (Kevin Wolf, 2017-12-22; 1 file, -5/+7)

bdrv_do_drained_begin() restricts the call of parent callbacks and
aio_disable_external() to the outermost drain section, but the block
driver callbacks are always called. bdrv_do_drained_end() must match
this behaviour, otherwise nodes stay drained even if begin/end calls
were balanced.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Don't block_job_pause_all() in bdrv_drain_all() (Kevin Wolf, 2017-12-22; 1 file, -4/+0)

Block jobs are already paused using the BdrvChildRole drain callbacks,
so we don't need an additional block_job_pause_all() call.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Make bdrv_drain() driver callbacks non-recursive (Kevin Wolf, 2017-12-22; 1 file, -7/+9)

bdrv_drained_begin() doesn't increase bs->quiesce_counter recursively,
nor does it notify the other parents of child nodes; together this
means that the child nodes are not actually drained, and
bdrv_drained_begin() provides useful functionality only on a single
node.

To keep things consistent, we also shouldn't call the block driver
callbacks recursively.

A proper recursive drain version that provides an actually working
drained section for child nodes will be introduced later.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
* block: Assert drain_all is only called from main AioContext (Kevin Wolf, 2017-12-22; 1 file, -0/+6)

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
* block: Remove unused bdrv_requests_pending (Fam Zheng, 2017-12-22; 1 file, -18/+0)

Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Unify order in drain functions (Kevin Wolf, 2017-12-22; 1 file, -4/+8)

Drain requests are propagated to child nodes, parent nodes and directly
to the AioContext. The order in which this happened was different
between all combinations of drain/drain_all and begin/end.

The correct order is to keep children only drained when their parents
are also drained. This means that at the start of a drained section,
the AioContext needs to be drained first, the parents second and only
then the children. The correct order for the end of a drained section
is the opposite.

This patch changes the three other functions to follow the example of
bdrv_drained_begin(), which is the only one that got it right.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
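The resulting ordering, as a sketch with simplified names (the real callbacks and signatures differ):

    typedef struct Ctx Ctx;                /* stand-in for AioContext */
    typedef struct BDS { Ctx *ctx; } BDS;

    void aio_disable_external(Ctx *ctx);
    void aio_enable_external(Ctx *ctx);
    void parent_drained_begin(BDS *bs);
    void parent_drained_end(BDS *bs);
    void driver_drain_begin(BDS *bs);      /* reaches the children */
    void driver_drain_end(BDS *bs);

    /* Begin: AioContext first, then parents, then children, so a child
     * is never drained while something above it is still active. */
    static void drained_begin_order(BDS *bs)
    {
        aio_disable_external(bs->ctx);
        parent_drained_begin(bs);
        driver_drain_begin(bs);
    }

    /* End: the exact reverse. */
    static void drained_end_order(BDS *bs)
    {
        driver_drain_end(bs);
        parent_drained_end(bs);
        aio_enable_external(bs->ctx);
    }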
* block: Don't wait for requests in bdrv_drain*_end() (Kevin Wolf, 2017-12-22; 1 file, -2/+0)

The device is drained, so there is no point in waiting for requests at
the end of the drained section. Remove the bdrv_drain_recurse() calls
there.

The bdrv_drain_recurse() calls were introduced in commit 481cad48e5e in
order to call the .bdrv_co_drain_end() driver callback. This is now
done by a separate bdrv_drain_invoke() call.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
* block: bdrv_drain_recurse(): Remove unused begin parameter (Kevin Wolf, 2017-12-22; 1 file, -6/+6)

Now that the bdrv_drain_invoke() calls are pulled up to the callers of
bdrv_drain_recurse(), the 'begin' parameter isn't needed any more.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
* block: Call .drain_begin only once in bdrv_drain_all_begin() (Kevin Wolf, 2017-12-22; 1 file, -2/+1)

bdrv_drain_all_begin() used to call the .bdrv_co_drain_begin() driver
callback inside its polling loop. This means that how many times it got
called for each node depended on how long it had to poll the event
loop.

This is obviously not right and results in nodes that stay drained even
after bdrv_drain_all_end(), which calls .bdrv_co_drain_end() only once
per node.

Fix bdrv_drain_all_begin() to call the callback only once, too.

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
* block: Make bdrv_drain_invoke() recursive (Kevin Wolf, 2017-12-22; 1 file, -3/+11)

This change separates bdrv_drain_invoke(), which calls the BlockDriver
drain callbacks, from bdrv_drain_recurse(). Instead, the function
performs its own recursion now.

One reason for this is that bdrv_drain_recurse() can be called multiple
times by bdrv_drain_all_begin(), but the callbacks may only be called
once. The separation is necessary to fix this bug.

The other reason is that we intend to go to a model where we call all
driver callbacks first, and only then start polling. This is not fully
achieved yet with this patch, as bdrv_drain_invoke() contains a
BDRV_POLL_WHILE() loop for the block driver callbacks, which can still
call callbacks for any unrelated event. It's a step in this direction
anyway.

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
* block: Expect graph changes in bdrv_parent_drained_begin/end (Kevin Wolf, 2017-11-29; 1 file, -4/+4)

The .drained_begin/end callbacks can (directly or indirectly via
aio_poll()) cause block nodes to be removed or the current BdrvChild to
point to a different child node.

Use QLIST_FOREACH_SAFE() to make sure we don't access invalid
BlockDriverStates or accidentally continue iterating the parents of the
new child node instead of the node we actually came from.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Tested-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
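The _SAFE idiom itself, reduced to plain C (QLIST_FOREACH_SAFE() expands to essentially this):

    #include <stdlib.h>

    typedef struct Child {
        struct Child *next;
    } Child;

    /* Capture 'next' before running the body, so the body may unlink or
     * free the *current* element without derailing the iteration.  It
     * does not protect against the body removing other elements. */
    static void visit_all(Child *head)
    {
        Child *next;

        for (Child *c = head; c; c = next) {
            next = c->next;   /* saved before the body runs */
            free(c);          /* safe: c is no longer touched */
        }
    }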
* block: Guard against NULL bs->drv (Max Reitz, 2017-11-17; 1 file, -0/+36)

We currently do not guard everywhere against a NULL bs->drv where we
should be doing so. Most of the places fixed here just do not care
about that case at all.

Some care implicitly, e.g. through a prior function call to
bdrv_getlength() which would always fail for an ejected BDS. Add an
assert there to make it more obvious.

Other places seem to care, but do so insufficiently: Freeing clusters
in a qcow2 image is an error-free operation, but it may leave the image
in an unusable state anyway. Giving qcow2_free_clusters() an error code
is not really viable; it is much easier to note that bs->drv may be
NULL even after a successful driver call. This concerns
bdrv_co_flush(), and the way the check is added to bdrv_co_pdiscard()
(in every iteration instead of only once).

Finally, some places employ at least an assert(bs->drv); somewhere,
that may be reasonable (such as in the reopen code), but in
bdrv_has_zero_init(), it is definitely not. Returning 0 there in case
of an ejected BDS saves us much headache instead.

Reported-by: R. Nageswara Sastry <nasastry@in.ibm.com>
Buglink: https://bugs.launchpad.net/qemu/+bug/1728660
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20171110203111.7666-4-mreitz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
* block: Reduce bdrv_aligned_preadv() rounding (Eric Blake, 2017-10-26; 1 file, -6/+2)

Now that bdrv_is_allocated accepts non-aligned inputs, we can remove
the TODO added in commit d6a644bb.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Align block status requests (Eric Blake, 2017-10-26; 1 file, -26/+45)

Any device that has request_alignment greater than 512 should be unable
to report status at a finer granularity; it may also be simpler for
such devices to be guaranteed that the block layer has rounded things
out to the granularity boundary (the way the block layer already rounds
all other I/O out). Besides, getting the code correct for super-sector
alignment also benefits us because our public interface now has byte
granularity, even though none of our drivers have byte-level callbacks.

Add an assertion in blkdebug that proves that the block layer never
requests status of unaligned sections, similar to what it does on other
requests (while still keeping the generic helper in place for when
future patches add a throttle driver). Note that iotest 177 already
covers this (it would fail if you use just the blkdebug.c hunk without
the io.c changes). Meanwhile, we can drop assertions in callers that no
longer have to pass in sector-aligned addresses.

There is a mid-function scope added for 'count' and 'longret', for a
couple of reasons: first, an upcoming patch will add an 'if' statement
that checks whether a driver has an old- or new-style callback, and can
conveniently use the same scope for less indentation churn at that
time. Second, since we are trying to get rid of sector-based
computations, wrapping things in a scope makes it easier to group and
see what will be deleted in a final cleanup patch once all drivers have
been converted to the new-style callback.

Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
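The rounding itself is ordinary power-of-two alignment arithmetic; a small self-contained version (QEMU has QEMU_ALIGN_DOWN()/QEMU_ALIGN_UP() macros for this, shown inline here for clarity):

    #include <assert.h>
    #include <stdint.h>

    /* Round a byte range outward to the device's request_alignment,
     * mirroring how the block layer rounds other I/O. */
    static void round_to_alignment(int64_t offset, int64_t bytes,
                                   uint32_t align,
                                   int64_t *aligned_offset,
                                   int64_t *aligned_bytes)
    {
        assert(align && !(align & (align - 1)));   /* power of two */
        *aligned_offset = offset & ~(int64_t)(align - 1);           /* down */
        *aligned_bytes = ((offset + bytes + align - 1)
                          & ~(int64_t)(align - 1)) - *aligned_offset; /* up */
    }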
* block: Convert bdrv_get_block_status_above() to bytes (Eric Blake, 2017-10-26; 1 file, -47/+8)

We are gradually moving away from sector-based interfaces, towards
byte-based. In the common case, allocation is unlikely to ever use
values that are not naturally sector-aligned, but it is possible that
byte-based values will let us be more precise about allocation at the
end of an unaligned file that can do byte-based access.

Changing the name of the function from bdrv_get_block_status_above() to
bdrv_block_status_above() ensures that the compiler enforces that all
callers are updated. Likewise, since a byte-based interface allows an
offset mapping that might not be sector aligned, split the mapping out
of the return value and into a pass-by-reference parameter. For now,
the io.c layer still assert()s that all uses are sector-aligned, but
that can be relaxed when a later patch implements byte-based block
status in the drivers.

For the most part this patch is just the addition of scaling at the
callers followed by inverse scaling at bdrv_block_status(), plus
updates for the new split return interface. But some code, particularly
bdrv_block_status(), gets a lot simpler because it no longer has to
mess with sectors. Likewise, mirror code no longer computes
s->granularity >> BDRV_SECTOR_BITS, and can therefore drop an assertion
about alignment because the loop no longer depends on alignment (never
mind that we don't really have a driver that reports sub-sector
alignments, so it's not really possible to test the effect of
sub-sector mirroring). Fix a neighboring assertion to use is_power_of_2
while there.

For ease of review, bdrv_get_block_status() was tackled separately.

Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Switch bdrv_co_get_block_status_above() to byte-based (Eric Blake, 2017-10-26; 1 file, -44/+24)

We are gradually converting to byte-based interfaces, as they are
easier to reason about than sector-based. Convert another internal
function (no semantic change), and rename it to match the corresponding
public function rename.

Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Switch bdrv_common_block_status_above() to byte-based (Eric Blake, 2017-10-26; 1 file, -31/+30)

We are gradually converting to byte-based interfaces, as they are
easier to reason about than sector-based. Convert another internal
function (no semantic change).

Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Switch BdrvCoGetBlockStatusData to byte-based (Eric Blake, 2017-10-26; 1 file, -18/+38)

We are gradually converting to byte-based interfaces, as they are
easier to reason about than sector-based. Convert another internal type
(no semantic change), and rename it to match the corresponding public
function rename.

Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Switch bdrv_co_get_block_status() to byte-based (Eric Blake, 2017-10-26; 1 file, -43/+81)

We are gradually converting to byte-based interfaces, as they are
easier to reason about than sector-based. Convert another internal
function (no semantic change); and as with its public counterpart,
rename to bdrv_co_block_status() and split the offset return, to make
the compiler enforce that we catch all uses.

For now, we assert that callers and the return value still use aligned
data, but ultimately, this will be the function where we hand off to a
byte-based driver callback, and will eventually need to add logic to
ensure we round calls according to the driver's request_alignment then
touch up the result handed back to the caller, to start permitting a
caller to pass unaligned offsets.

Note that we are now prepared to accept 'bytes' larger than INT_MAX;
this is okay as long as we clamp things internally before violating any
32-bit limits, and makes no difference to how a client will use the
information (clients looping over the entire file must already be
prepared for consecutive calls to return the same status, as drivers
are already free to return shorter-than-maximal status due to any other
convenient split points, such as when the L2 table crosses cluster
boundaries in qcow2).

Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Convert bdrv_get_block_status() to bytes (Eric Blake, 2017-10-26; 1 file, -13/+34)

We are gradually moving away from sector-based interfaces, towards
byte-based. In the common case, allocation is unlikely to ever use
values that are not naturally sector-aligned, but it is possible that
byte-based values will let us be more precise about allocation at the
end of an unaligned file that can do byte-based access.

Changing the name of the function from bdrv_get_block_status() to
bdrv_block_status() ensures that the compiler enforces that all callers
are updated. For now, the io.c layer still assert()s that all callers
are sector-aligned, but that can be relaxed when a later patch
implements byte-based block status in the drivers.

There was an inherent limitation in returning the offset via the return
value: we only have room for BDRV_BLOCK_OFFSET_MASK bits, which means
an offset can only be mapped for sector-aligned queries (or, if we
declare that non-aligned input is at the same relative position modulo
512 of the answer), so the new interface also changes things to return
the offset via output through a parameter by reference rather than
mashed into the return value. We'll have some glue code that munges
between the two styles until we finish converting all uses.

For the most part this patch is just the addition of scaling at the
callers followed by inverse scaling at bdrv_block_status(), coupled
with the tweak in calling convention. But some code, particularly
bdrv_is_allocated(), gets a lot simpler because it no longer has to
mess with sectors.

For ease of review, bdrv_get_block_status_above() will be tackled
separately.

Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
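The calling-convention change in miniature; the constants and values below are illustrative, not QEMU's actual definitions:

    #include <stdint.h>

    #define OFFSET_VALID  ((int64_t)1)            /* flag bit, illustrative */
    #define OFFSET_MASK   (~((int64_t)512 - 1))   /* 512-byte granularity */

    /* Old convention: flags and the mapped offset share one return
     * value, so the offset can only be reported at sector alignment. */
    static int64_t old_status(void)
    {
        int64_t host_offset = 512 * 1000;         /* must be 512-aligned */
        return (host_offset & OFFSET_MASK) | OFFSET_VALID;
    }

    /* New convention: flags in the return value, the mapping through a
     * by-reference parameter; byte-accurate, with no packing limits. */
    static int new_status(int64_t *map)
    {
        *map = 512 * 1000 + 37;                   /* any byte offset */
        return (int)OFFSET_VALID;
    }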
* block: Switch bdrv_make_zero() to byte-based (Eric Blake, 2017-10-26; 1 file, -16/+16)

We are gradually converting to byte-based interfaces, as they are
easier to reason about than sector-based. Change the internal loop
iteration of zeroing a device to track by bytes instead of sectors
(although we are still guaranteed that we iterate by steps that are
sector-aligned).

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
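The byte-based loop shape, sketched with a stub device type:

    #include <stdint.h>

    typedef struct Dev Dev;
    int write_zeroes(Dev *d, int64_t offset, int64_t bytes);   /* stub */

    /* Track progress in bytes while still issuing sector-aligned
     * chunks, as the commit describes. */
    static int make_zero(Dev *d, int64_t total_bytes, int64_t max_chunk)
    {
        int64_t offset = 0;

        while (offset < total_bytes) {
            int64_t bytes = total_bytes - offset;
            int ret;

            if (bytes > max_chunk) {
                bytes = max_chunk;
            }
            ret = write_zeroes(d, offset, bytes);
            if (ret < 0) {
                return ret;
            }
            offset += bytes;
        }
        return 0;
    }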