path: root/block/commit.c
* commit: Set commit_top_bs->total_sectors (Kevin Wolf, 2017-04-07; 1 file, +1/-0)
  Like in the mirror filter driver, we also need to set the image size for the commit filter driver. This is less likely to be a problem in practice than for the mirror because we're not at the active layer here, but attaching new parents to a node in the middle of the chain is possible, so the size needs to be correct anyway.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Reviewed-by: Fam Zheng <famz@redhat.com>
* commit: Set commit_top_bs->aio_context (Kevin Wolf, 2017-04-07; 1 file, +2/-0)
  The filter driver that is inserted by the commit job needs to use the same AioContext as its parent and child nodes.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Reviewed-by: Fam Zheng <famz@redhat.com>
* commit: Implement .bdrv_refresh_filename (Kevin Wolf, 2017-03-13; 1 file, +8/-0)
  We want query-block to return the right filename, even if a commit job put a bdrv_commit_top on top of the actual image format driver. Let bdrv_commit_top.bdrv_refresh_filename get the filename from its backing file.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
* commit: Implement bdrv_commit_top.bdrv_co_get_block_status (Kevin Wolf, 2017-03-13; 1 file, +16/-4)
  In some cases, bdrv_co_get_block_status() is called recursively for the whole backing chain. The automatically inserted bdrv_commit_top filter driver must not stop the recursion, so implement a callback that simply forwards the request to bs->backing.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
* commit: Don't use error_abort in commit_start (Fam Zheng, 2017-03-07; 1 file, +14/-2)
  A bdrv_set_backing_hd() failure needn't be an abort. Since we already have an error parameter, use it.
  Signed-off-by: Fam Zheng <famz@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* commit: Fix error handling (Kevin Wolf, 2017-03-07; 1 file, +1/-1)
  Apparently some kind of mismerge happened in commit 8dfba279, which broke the error handling without any real reason by removing the assignment of the return value to ret in a blk_insert_bs() call.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Fam Zheng <famz@redhat.com>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
* block: Add Error parameter to bdrv_set_backing_hd() (Kevin Wolf, 2017-02-28; 1 file, +7/-7)
  Not all callers of bdrv_set_backing_hd() know for sure that attaching the backing file will be allowed by the permission system. Return the error from the function rather than aborting.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Acked-by: Fam Zheng <famz@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
* commit: Add filter-node-name to block-commit (Kevin Wolf, 2017-02-28; 1 file, +3/-2)
  Management tools need to be able to know about every node in the graph and need a way to address them. Changing the graph structure was okay because libvirt doesn't really manage the node level yet, but future libvirt versions need to deal with both new and old versions of qemu.
  This new option to blockdev-commit allows the client to set a node-name for the automatically inserted filter driver, and at the same time serves as a witness for a future libvirt that this version of qemu does automatically insert a filter driver.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Acked-by: Fam Zheng <famz@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
* commit: Use real permissions for HMP 'commit' (Kevin Wolf, 2017-02-28; 1 file, +27/-6)
  This is a little simpler than the commit block job because it's synchronous and only commits into the immediate backing file, but otherwise doing more or less the same.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Acked-by: Fam Zheng <famz@redhat.com>
* commit: Use real permissions in commit block job (Kevin Wolf, 2017-02-28; 1 file, +95/-18)
  This is probably one of the most interesting conversions to the new op blocker system because a commit block job intentionally leaves some intermediate block nodes in the backing chain that aren't valid on their own any more; only the whole chain together results in a valid view.
  In order to provide the 'consistent read' permission to the parents of the 'top' node of the commit job, a new filter block driver is inserted above 'top' which doesn't require 'consistent read' on its backing chain. Subsequently, the commit job can block 'consistent read' on all intermediate nodes without causing a conflict.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Acked-by: Fam Zheng <famz@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
* blockjob: Add permissions to block_job_add_bdrv() (Kevin Wolf, 2017-02-28; 1 file, +6/-2)
  Block jobs don't actually do I/O through the reference they create with block_job_add_bdrv(), but they might want to use the permission system to express what the block job does to intermediate nodes. This adds permissions to block_job_add_bdrv() to provide the means to request permissions.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Acked-by: Fam Zheng <famz@redhat.com>
* blockjob: Add permissions to block_job_create() (Kevin Wolf, 2017-02-28; 1 file, +3/-2)
  This function creates a BlockBackend internally, so the block jobs need to tell it what they want to do with the BB.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Acked-by: Fam Zheng <famz@redhat.com>
* block: Add error parameter to blk_insert_bs() (Kevin Wolf, 2017-02-28; 1 file, +30/-8)
  Now that blk_insert_bs() requests the BlockBackend permissions for the node it attaches to, it can fail. Instead of aborting, pass the errors to the callers.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Acked-by: Fam Zheng <famz@redhat.com>
* block: Add permissions to blk_new() (Kevin Wolf, 2017-02-28; 1 file, +8/-4)
  We want every user to be specific about the permissions it needs, so we'll pass the initial permissions as parameters to blk_new(). A user only needs to call blk_set_perm() if it wants to change the permissions after the fact. The permissions are stored in the BlockBackend and applied whenever a BlockDriverState should be attached in blk_insert_bs().
  This does not include actually choosing the right set of permissions everywhere yet. Instead, the usual FIXME comment is added to each place and will be addressed in individual patches.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Acked-by: Fam Zheng <famz@redhat.com>
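  As a hedged illustration of the calling convention this introduces: the two-argument blk_new(perm, shared_perm) shape follows the description above, but the permission flag names, header paths and the surrounding function are assumptions for this sketch, not a verbatim QEMU call site.

      #include "qemu/osdep.h"
      #include "block/block.h"
      #include "sysemu/block-backend.h"

      /* Sketch: create a BlockBackend that wants consistent reads and writes
       * for itself, while declaring that it does not mind which permissions
       * other users of the attached node take. */
      static BlockBackend *example_new_blk(void)
      {
          return blk_new(BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE,
                         BLK_PERM_ALL);
      }

  A caller that later needs a different set of permissions would then use blk_set_perm(), as described above, rather than recreating the BlockBackend.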
* blockjob: add block_job_start (John Snow, 2016-11-15; 1 file, +2/-3)
  Instead of automatically starting jobs at creation time via backup_start et al, we'd like to return a job object pointer that can be started manually at a later point in time. For now, add the block_job_start mechanism and start the jobs automatically as we have been doing, with conversions coming job-by-job in later patches.
  Of note: cancellation of unstarted jobs will perform all the normal cleanup as if the job had started, particularly abort and clean. The only difference is that we will not emit any events, because the job never actually started.
  Signed-off-by: John Snow <jsnow@redhat.com>
  Message-id: 1478587839-9834-5-git-send-email-jsnow@redhat.com
  Signed-off-by: Jeff Cody <jcody@redhat.com>
* blockjob: add .start field (John Snow, 2016-11-15; 1 file, +2/-1)
  Add an explicit start field to specify the entrypoint. We already have ownership of the coroutine itself AND manage the lifetime of the coroutine, so let's take control of creating the coroutine, too.
  This will allow us to delay creation of the actual coroutine until we know we'll actually start a BlockJob in block_job_start. This avoids the sticky question of how to "un-create" a Coroutine that hasn't been started yet.
  Signed-off-by: John Snow <jsnow@redhat.com>
  Reviewed-by: Kevin Wolf <kwolf@redhat.com>
  Message-id: 1478587839-9834-4-git-send-email-jsnow@redhat.com
  Signed-off-by: Jeff Cody <jcody@redhat.com>
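  For illustration, a commit job driver using the explicit entry point might be declared roughly as follows. This is only a sketch based on the description above: the exact field set, and the commit_run/CommitBlockJob names, are assumptions about block/commit.c of this era rather than a quote of the patch.

      /* Sketch: a BlockJobDriver that names its coroutine entry point
       * explicitly instead of having the caller create the coroutine. */
      static const BlockJobDriver commit_job_driver = {
          .instance_size = sizeof(CommitBlockJob),
          .job_type      = BLOCK_JOB_TYPE_COMMIT,
          .start         = commit_run,   /* entry point run by block_job_start() */
      };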
* blockjobs: split interface into public/private, Part 1 (John Snow, 2016-11-01; 1 file, +1/-1)
  To make it a little more obvious which functions are intended to be public interface and which are intended to be for use only by jobs themselves, split the interface into "public" and "private" files.
  Convert blockjobs (e.g. block/backup) to using the private interface. Leave blockdev and others on the public interface.
  There are remaining uses of private state by qemu-img, and several cases in blockdev.c and block/io.c where we grab job->blk for the purposes of acquiring an AIOContext. These will be corrected in future patches.
  Signed-off-by: John Snow <jsnow@redhat.com>
  Reviewed-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Jeff Cody <jcody@redhat.com>
  Message-id: 1477584421-1399-7-git-send-email-jsnow@redhat.com
  Signed-off-by: Jeff Cody <jcody@redhat.com>
* blockjob: centralize QMP event emissions (John Snow, 2016-11-01; 1 file, +4/-4)
  There's no reason to leave this to blockdev; we can do it in blockjobs directly and get rid of an extra callback for most users. All non-internal events, even those created outside of QMP, will consistently emit events.
  Signed-off-by: John Snow <jsnow@redhat.com>
  Reviewed-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Jeff Cody <jcody@redhat.com>
  Message-id: 1477584421-1399-5-git-send-email-jsnow@redhat.com
  Signed-off-by: Jeff Cody <jcody@redhat.com>
* blockjobs: Allow creating internal jobs (John Snow, 2016-11-01; 1 file, +1/-1)
  Add the ability to create jobs without an ID.
  Signed-off-by: John Snow <jsnow@redhat.com>
  Reviewed-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Jeff Cody <jcody@redhat.com>
  Message-id: 1477584421-1399-3-git-send-email-jsnow@redhat.com
  Signed-off-by: Jeff Cody <jcody@redhat.com>
* block: Block all nodes involved in the block-commit operation (Alberto Garcia, 2016-10-31; 1 file, +14/-0)
  After a successful block-commit operation all nodes between top and base are removed from the backing chain, and top's overlay needs to be updated to point to base. Because of that we should prevent other block jobs from messing with them.
  This patch blocks all operations in these nodes in commit_start().
  Signed-off-by: Alberto Garcia <berto@igalia.com>
  Reviewed-by: Kevin Wolf <kwolf@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: prepare bdrv_reopen_multiple to release AioContext (Paolo Bonzini, 2016-10-28; 1 file, +1/-1)
  After the next patch bdrv_drain_all will have to be called without holding any AioContext. Prepare to do this by adding an AioContext argument to bdrv_reopen_multiple.
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
  Reviewed-by: Fam Zheng <famz@redhat.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  Message-Id: <1477565348-5458-15-git-send-email-pbonzini@redhat.com>
  Signed-off-by: Fam Zheng <famz@redhat.com>
* commit: Add 'base' to the reopen queue before 'overlay_bs' (Alberto Garcia, 2016-09-23; 1 file, +4/-4)
  Now that we're checking for duplicates in the reopen queue, there's no need to force a specific order in which the queue is constructed, so we can revert 3db2bd5508c86a1605258bc77c9672d93b5c350e.
  Since both ways of constructing the queue are now valid, this patch doesn't have any effect on the behavior of QEMU and is not strictly necessary. However it can help us check that the fix for the reopen queue is robust: if it stops working properly at some point, iotest 040 will break.
  Signed-off-by: Alberto Garcia <berto@igalia.com>
  Reviewed-by: Kevin Wolf <kwolf@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* commit: get the overlay node before manipulating the backing chain (Alberto Garcia, 2016-09-20; 1 file, +1/-2)
  The 'block-commit' command has a 'top' parameter to specify the topmost node from which the data is going to be copied.

      [E] <- [D] <- [C] <- [B] <- [A]

  In this case if [C] is the top node then this is the result:

      [E] <- [B] <- [A]

  [B] must be modified so its backing image string points to [E] instead of [C]. commit_start() takes care of reopening [B] in read-write mode, and commit_complete() puts it back in read-only mode once the operation has finished.
  In order to find [B] (the overlay node) we look for the node that has [C] (the top node) as its backing image. However in commit_complete() we're doing it after [C] has been removed from the chain, so [B] is never found and remains in read-write mode.
  This patch gets the overlay node before the backing chain is manipulated.
  Signed-off-by: Alberto Garcia <berto@igalia.com>
  Message-id: 1471836963-28548-1-git-send-email-berto@igalia.com
  Signed-off-by: Max Reitz <mreitz@redhat.com>
* Improve block job rate limiting for small bandwidth values (Sascha Silbe, 2016-07-13; 1 file, +5/-8)
  ratelimit_calculate_delay() previously reset the accounting every time slice, no matter how much data had been processed before. This had (at least) two consequences:

  1. The minimum speed is rather large, e.g. 5 MiB/s for commit and stream. Not sure if there are real-world use cases where this would be a problem. Mirroring and backup over a slow link (e.g. DSL) would come to mind, though.

  2. Tests for block job operations (e.g. cancel) were rather racy

  All block jobs currently use a time slice of 100ms. That's a reasonable value to get smooth output during regular operation. However this also meant that the state of block jobs changed every 100ms, no matter how low the configured limit was. On busy hosts, qemu often transferred additional chunks until the test case had a chance to cancel the job.
  Fix the block job rate limit code to delay for more than one time slice to address the above issues. To make it easier to handle oversized chunks we switch the semantics from returning a delay _before_ the current request to a delay _after_ the current request. If necessary, this delay consists of multiple time slice units.
  Since the mirror job sends multiple chunks in one go even if the rate limit was exceeded in between, we need to keep track of the start of the current time slice so we can correctly re-compute the delay for the updated amount of data.
  The minimum bandwidth now is 1 data unit per time slice. The block jobs are currently passing the amount of data transferred in sectors and using 100ms time slices, so this translates to 5120 bytes/second. With chunk sizes usually being O(512KiB), tests have plenty of time (O(100s)) to operate on block jobs. The chance of a race condition now is fairly remote, except possibly on insanely loaded systems.
  Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
  Message-id: 1467127721-9564-2-git-send-email-silbe@linux.vnet.ibm.com
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Signed-off-by: Max Reitz <mreitz@redhat.com>
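  To make the "delay after the current request" accounting more concrete, here is a minimal, self-contained sketch of the idea; all struct, field and function names are invented for this example and do not mirror QEMU's actual util/ratelimit.h interface.

      #include <inttypes.h>
      #include <stdio.h>

      typedef struct {
          uint64_t slice_ns;       /* length of one accounting slice, e.g. 100 ms */
          uint64_t slice_quota;    /* data units allowed per slice */
          uint64_t slice_start_ns; /* start of the current slice */
          uint64_t dispatched;     /* data units accounted in the current slice */
      } Limit;

      /* Account for 'n' units that were just transferred and return how long
       * the caller should sleep afterwards.  The delay may span several time
       * slices if the request exceeded one slice's quota. */
      static uint64_t limit_delay_after(Limit *l, uint64_t now_ns, uint64_t n)
      {
          if (now_ns - l->slice_start_ns >= l->slice_ns) {
              /* A new slice has begun: restart the accounting from 'now'. */
              l->slice_start_ns = now_ns;
              l->dispatched = 0;
          }
          l->dispatched += n;
          if (l->dispatched <= l->slice_quota) {
              return 0; /* still within this slice's budget, no delay */
          }
          /* Sleep until enough whole slices have passed to cover the excess. */
          uint64_t slices = (l->dispatched + l->slice_quota - 1) / l->slice_quota;
          uint64_t end_ns = l->slice_start_ns + slices * l->slice_ns;
          return end_ns > now_ns ? end_ns - now_ns : 0;
      }

      int main(void)
      {
          /* 1 sector (512 bytes) per 100 ms slice: the 5120 bytes/s minimum
           * rate mentioned above. */
          Limit l = { .slice_ns = 100u * 1000 * 1000, .slice_quota = 1 };
          printf("delay after a 3-sector chunk: %" PRIu64 " ns\n",
                 limit_delay_after(&l, 0, 3));
          return 0;
      }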
* commit: Fix use of error handling policy (Kevin Wolf, 2016-07-13; 1 file, +3/-3)
  Commit implemented the 'enospc' policy as 'ignore' if the error was not ENOSPC. The QAPI documentation promises that it's treated as 'stop'. Using the common block job error handling function fixes this and also adds the missing QMP event.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
* coroutine: move entry argument to qemu_coroutine_create (Paolo Bonzini, 2016-07-13; 1 file, +2/-2)
  In practice the entry argument is always known at creation time, and it is confusing that sometimes qemu_coroutine_enter is used with a non-NULL argument to re-enter a coroutine (this happens in block/sheepdog.c and tests/test-coroutine.c). So pass the opaque value at creation time, for consistency with e.g. aio_bh_new.
  Mostly done with the following semantic patch:

      @ entry1 @
      expression entry, arg, co;
      @@
      - co = qemu_coroutine_create(entry);
      + co = qemu_coroutine_create(entry, arg);
        ...
      - qemu_coroutine_enter(co, arg);
      + qemu_coroutine_enter(co);

      @ entry2 @
      expression entry, arg;
      identifier co;
      @@
      - Coroutine *co = qemu_coroutine_create(entry);
      + Coroutine *co = qemu_coroutine_create(entry, arg);
        ...
      - qemu_coroutine_enter(co, arg);
      + qemu_coroutine_enter(co);

      @ entry3 @
      expression entry, arg;
      @@
      - qemu_coroutine_enter(qemu_coroutine_create(entry), arg);
      + qemu_coroutine_enter(qemu_coroutine_create(entry, arg));

      @ reentry @
      expression co;
      @@
      - qemu_coroutine_enter(co, NULL);
      + qemu_coroutine_enter(co);

  except for the aforementioned few places where the semantic patch stumbled (as expected) and for test_co_queue, which would otherwise produce an uninitialized variable warning.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  Reviewed-by: Fam Zheng <famz@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* commit: Add 'job-id' parameter to 'block-commit' (Alberto Garcia, 2016-07-13; 1 file, +4/-3)
  This patch adds a new optional 'job-id' parameter to 'block-commit', allowing the user to specify the ID of the block job to be created.
  Signed-off-by: Alberto Garcia <berto@igalia.com>
  Reviewed-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* blockjob: Add 'job_id' parameter to block_job_create() (Alberto Garcia, 2016-07-13; 1 file, +1/-1)
  When a new job is created, the job ID is taken from the device name of the BDS. This patch adds a new 'job_id' parameter to let the caller provide one instead.
  This patch also verifies that the ID is always unique and well-formed. This causes problems in a couple of places where no ID is being set, because the BDS does not have a device name.
  In the case of test_block_job_start() (from test-blockjob-txn.c) we can simply use this new 'job_id' parameter to set the missing ID. In the case of img_commit() (from qemu-img.c) we still don't have the API to make commit_active_start() set the job ID, so we solve it by setting a default value. We'll get rid of this as soon as we extend the API.
  Signed-off-by: Alberto Garcia <berto@igalia.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Reviewed-by: Kevin Wolf <kwolf@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Use BlockBackend for I/O in bdrv_commit() (Kevin Wolf, 2016-07-05; 1 file, +22/-12)
  Just like block jobs, the HMP commit command should use its own BlockBackend for doing I/O on BlockDriverStates.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
* block: Move bdrv_commit() to block/commit.c (Kevin Wolf, 2016-07-05; 1 file, +111/-0)
  No code changes, just moved from one file to another.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
* block: Create the commit block job before reopening any image (Alberto Garcia, 2016-06-16; 1 file, +6/-5)
  If the base or overlay images need to be reopened in read-write mode but the block_job_create() call fails then no one will put those images back in read-only mode.
  We can solve this problem easily by calling block_job_create() first.
  Signed-off-by: Alberto Garcia <berto@igalia.com>
  Message-id: aa495045770a6f1a7cc5d408397a17c75097fdd8.1464346103.git.berto@igalia.com
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Signed-off-by: Max Reitz <mreitz@redhat.com>
* commit: Use BlockBackend for I/O (Kevin Wolf, 2016-05-25; 1 file, +33/-20)
  This changes the commit block job to use the job's BlockBackend for performing its I/O. job->bs isn't used by the commit code any more afterwards.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
* blockjob: Don't touch BDS iostatus (Kevin Wolf, 2016-05-19; 1 file, +0/-7)
  Block jobs don't actually make use of the iostatus for their BDSes, but they manage a separate block job iostatus. Still, they require that it is enabled for the source BDS and they enable it automatically for the target and set the error handling mode - which ends up never being used by the job.
  This patch removes all of the BDS iostatus handling from the block job, which removes another few bs->blk accesses.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
* include/qemu/osdep.h: Don't include qapi/error.h (Markus Armbruster, 2016-03-22; 1 file, +1/-0)
  Commit 57cb38b included qapi/error.h into qemu/osdep.h to get the Error typedef. Since then, we've moved to include qemu/osdep.h everywhere. Its file comment explains: "To avoid getting into possible circular include dependencies, this file should not include any other QEMU headers, with the exceptions of config-host.h, compiler.h, os-posix.h and os-win32.h, all of which are doing a similar job to this file and are under similar constraints."
  qapi/error.h doesn't do a similar job, and it doesn't adhere to similar constraints: it includes qapi-types.h. That's in excess of 100KiB of crap most .c files don't actually need.
  Add the typedef to qemu/typedefs.h, and include that instead of qapi/error.h. Include qapi/error.h in .c files that need it and don't get it now. Include qapi-types.h in qom/object.h for uint16List.
  Update scripts/clean-includes accordingly. Update it further to match reality: replace config.h by config-target.h, add sysemu/os-posix.h, sysemu/os-win32.h. Update the list of includes in the qemu/osdep.h comment quoted above similarly.
  This reduces the number of objects depending on qapi/error.h from "all of them" to less than a third. Unfortunately, the number depending on qapi-types.h shrinks only a little. More work is needed for that one.
  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  [Fix compilation without the spice devel packages. - Paolo]
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* block: Clean up includes (Peter Maydell, 2016-01-20; 1 file, +1/-0)
  Clean up includes so that osdep.h is included first and headers which it implies are not included manually.
  This commit was created with scripts/clean-includes.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* commit: reopen overlay_bs before base (Alberto Garcia, 2015-11-11; 1 file, +4/-4)
  'block-commit' needs write access to two different nodes of the chain:

  - 'base', because that's where the data is written to.
  - the overlay of 'top', because it needs to update the backing file string to point to 'base' after the operation.

  Both images have to be opened in read-write mode, and commit_start() takes care of reopening them if necessary.
  With the current implementation, however, when overlay_bs is reopened in read-write mode it has the side effect of making 'base' read-only again, eventually making 'block-commit' fail.
  This needs to be fixed in bdrv_reopen(), but until we get to that it can be worked around simply by swapping the order of base and overlay_bs in the reopen queue.
  In order to reproduce this bug, overlay_bs needs to be initially in read-only mode. That is: the 'top' parameter of 'block-commit' cannot be the active layer nor its immediate backing chain.
  Cc: qemu-stable@nongnu.org
  Signed-off-by: Alberto Garcia <berto@igalia.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Move I/O status and error actions into BB (Max Reitz, 2015-10-23; 1 file, +2/-1)
  These options are only relevant for the user of a whole BDS tree (like a guest device or a block job) and should thus be moved into the BlockBackend.
  Signed-off-by: Max Reitz <mreitz@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Allow specifying driver-specific options to reopen (Kevin Wolf, 2015-09-14; 1 file, +2/-2)
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
* Include qapi/qmp/qerror.h exactly where needed (Markus Armbruster, 2015-06-22; 1 file, +1/-0)
  In particular, don't include it into headers.
  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Reviewed-by: Luiz Capitulino <lcapitulino@redhat.com>
* qerror: Clean up QERR_ macros to expand into a single string (Markus Armbruster, 2015-06-22; 1 file, +1/-1)
  These macros expand into error class enumeration constant, comma, string. Unclean. Has been that way since commit 13f59ae.
  The error class is always ERROR_CLASS_GENERIC_ERROR since the previous commit. Clean up as follows:

  * Prepend every use of a QERR_ macro by ERROR_CLASS_GENERIC_ERROR, and delete it from the QERR_ macro. No change after preprocessing.
  * Rewrite error_set(ERROR_CLASS_GENERIC_ERROR, ...) into error_setg(...). Again, no change after preprocessing.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Reviewed-by: Luiz Capitulino <lcapitulino@redhat.com>
* block: let commit blockjob run in BDS AioContext (Stefan Hajnoczi, 2014-11-03; 1 file, +44/-26)
  The commit block job must run in the BlockDriverState AioContext so that it works with dataplane.
  Acquire the AioContext in blockdev.c so starting the block job is safe. One detail here is that the bdrv_drain_all() must be moved inside the aio_context_acquire() region so requests cannot sneak in between the drain and acquire.
  The completion code in block/commit.c must perform backing chain manipulation and bdrv_reopen() from the main loop. Use block_job_defer_to_main_loop() to achieve that.
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Message-id: 1413889440-32577-11-git-send-email-stefanha@redhat.com
* block: Rename BlockDriverCompletionFunc to BlockCompletionFunc (Markus Armbruster, 2014-10-20; 1 file, +1/-1)
  I'll use it with block backends shortly, and the name is going to fit badly there. It's a block layer thing anyway, not just a block driver thing.
  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: extend block-commit to accept a string for the backing file (Jeff Cody, 2014-07-01; 1 file, +6/-3)
  On some image chains, QEMU may not always be able to resolve the filenames properly, when updating the backing file of an image after a block commit. For instance, certain relative pathnames may fail, or drives may have been specified originally by file descriptor (e.g. /dev/fd/???), or a relative protocol pathname may have been used.
  In these instances, QEMU may lack the information to be able to make the correct choice, but the user or management layer most likely does have that knowledge.
  With this extension to the block-commit api, the user is able to change the backing file of the overlay image as part of the block-commit operation. This allows the change to be 'safe', in the sense that if the attempt to write the overlay image metadata fails, then the block-commit operation returns failure, without disrupting the guest.
  If the commit top is the active layer, then specifying the backing file string will be treated as an error (there is no overlay image to modify in that case).
  If a backing file string is not specified in the command, the backing file string to use is determined in the same manner as it was previously.
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Signed-off-by: Jeff Cody <jcody@redhat.com>
  Reviewed-by: Kevin Wolf <kwolf@redhat.com>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* qerror.h: Remove QERR defines that are only used once (Cole Robinson, 2014-04-25; 1 file, +1/-1)
  Just hardcode them in the callers.
  Cc: Luiz Capitulino <lcapitulino@redhat.com>
  Cc: Markus Armbruster <armbru@redhat.com>
  Signed-off-by: Cole Robinson <crobinso@redhat.com>
  Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
  Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
* commit: Remove unused check (Fam Zheng, 2013-12-20; 1 file, +1/-7)
  We support top == active for commit now, remove the check and add an assertion here.
  Signed-off-by: Fam Zheng <famz@redhat.com>
  Reviewed-by: Kevin Wolf <kwolf@redhat.com>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* qapi: make use of new BlockJobType (Fam Zheng, 2013-10-11; 1 file, +1/-1)
  Switch the string to enum type BlockJobType in BlockJobDriver.
  Signed-off-by: Fam Zheng <famz@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* blockjob: rename BlockJobType to BlockJobDriver (Fam Zheng, 2013-10-11; 1 file, +2/-2)
  We will use BlockJobType as the enum type name of block jobs in QAPI, so rename the current BlockJobType to BlockJobDriver, which will eventually become a set of operations, similar to block drivers.
  Signed-off-by: Fam Zheng <famz@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: remove bdrv_is_allocated_above/bdrv_co_is_allocated_above distinction (Paolo Bonzini, 2013-09-06; 1 file, +3/-3)
  Now that bdrv_is_allocated detects coroutine context, the two can use the same code.
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* aio / timers: convert block_job_sleep_ns and co_sleep_ns to new API (Alex Bligh, 2013-08-22; 1 file, +1/-1)
  Convert block_job_sleep_ns and co_sleep_ns to use the new timer API.
  Signed-off-by: Alex Bligh <alex@alex.org.uk>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* block: Make BlockJobTypes const (Kevin Wolf, 2013-06-28; 1 file, +1/-1)
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>