path: root/tests/test-bdrv-drain.c
* test-bdrv-drain: Test graph changes in drain_all section  (Kevin Wolf, 2018-06-18; 1 file changed, -2/+73)
  This tests both adding and removing a node between bdrv_drain_all_begin() and bdrv_drain_all_end(), and enables the existing detach test for drain_all.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* test-bdrv-drain: Test that bdrv_drain_invoke() doesn't poll  (Kevin Wolf, 2018-06-18; 1 file changed, -14/+88)
  This adds a test case that goes wrong if bdrv_drain_invoke() calls aio_poll().
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* test-bdrv-drain: Graph change through parent callback  (Kevin Wolf, 2018-06-18; 1 file changed, -0/+130)
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* test-bdrv-drain: Test node deletion in subtree recursion  (Kevin Wolf, 2018-06-18; 1 file changed, -9/+29)
  If bdrv_do_drained_begin() polls during its subtree recursion, the graph can change and mess up the bs->children iteration. Test that this doesn't happen.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
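  The failure mode described here is the classic "callee mutates the list I am iterating" problem: if the recursion polls the event loop, a completion callback may unlink the very child the loop is pointing at. The following is a standalone C sketch of the hazard with an invented child list (not bs->children or any real QEMU code); saving the next pointer avoids the immediate use-after-free, but a callback that deletes other nodes can still break the walk, which is why the drain code avoids polling inside the recursion altogether.

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct Child Child;
    struct Child {
        int id;
        Child *next;
    };

    static Child *children;

    /* Stand-in for aio_poll(): a completion handler might unlink and
     * free the child we are currently visiting. */
    static void fake_poll_that_may_delete(Child *victim)
    {
        Child **p;
        for (p = &children; *p; p = &(*p)->next) {
            if (*p == victim) {
                *p = victim->next;
                free(victim);
                return;
            }
        }
    }

    int main(void)
    {
        /* Build a small child list: 2 -> 1 -> 0. */
        for (int i = 0; i < 3; i++) {
            Child *c = malloc(sizeof(*c));
            c->id = i;
            c->next = children;
            children = c;
        }

        /* Save the next pointer *before* calling anything that can poll;
         * reading c->next after the call would be a use-after-free. */
        for (Child *c = children, *next; c; c = next) {
            next = c->next;
            fake_poll_that_may_delete(c);
        }
        printf("walk finished without touching freed memory\n");
        return 0;
    }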
* test-bdrv-drain: Add test for node deletion  (Max Reitz, 2018-06-18; 1 file changed, -0/+169)
  This patch adds two bdrv-drain tests for what happens if some BDS goes away during the drainage.
  The basic idea is that you have a parent BDS with some child nodes. Then, you drain one of the children. Because of that, the party who actually owns the parent decides to (A) delete it, or (B) detach all its children from it -- both while the child is still being drained.
  A real-world case where this can happen is the mirror block job, which may exit if you drain one of its children.
  Signed-off-by: Max Reitz <mreitz@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Really pause block jobs on drain  (Kevin Wolf, 2018-06-18; 1 file changed, -8/+10)
  We already requested that block jobs be paused in .bdrv_drained_begin, but no guarantee was made that the job was actually inactive at the point where bdrv_drained_begin() returned.
  This introduces a new callback BdrvChildRole.bdrv_drained_poll() and uses it to make bdrv_drain_poll() consider block jobs using the node to be drained.
  For the test case to work as expected, we have to switch from block_job_sleep_ns() to qemu_co_sleep_ns() so that the test job is even considered active and must be waited for when draining the node.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
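  The underlying pattern is a poll loop that keeps driving the event loop until a per-node predicate reports that no user of the node has work in flight. Below is a self-contained C sketch of that shape only; all names are illustrative stand-ins, not the actual QEMU signatures.

    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        int in_flight_requests;  /* I/O the node itself still owes */
        bool job_busy;           /* a job using the node is mid-step */
    } FakeNode;

    /* Analogue of the new "drained poll" callback: returns true as long
     * as somebody attached to the node still needs the event loop. */
    static bool drained_poll(const FakeNode *n)
    {
        return n->in_flight_requests > 0 || n->job_busy;
    }

    /* Stand-in for one event-loop iteration: makes some progress. */
    static void fake_event_loop_iteration(FakeNode *n)
    {
        if (n->in_flight_requests > 0) {
            n->in_flight_requests--;
        } else if (n->job_busy) {
            n->job_busy = false;  /* the job reaches its pause point */
        }
    }

    static void fake_drained_begin(FakeNode *n)
    {
        /* Don't return until the poll callback says everything is quiet. */
        while (drained_poll(n)) {
            fake_event_loop_iteration(n);
        }
    }

    int main(void)
    {
        FakeNode n = { .in_flight_requests = 2, .job_busy = true };
        fake_drained_begin(&n);
        assert(!drained_poll(&n));
        printf("node is quiescent, jobs included\n");
        return 0;
    }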
* tests/test-bdrv-drain: bdrv_drain_all() works in coroutines now  (Kevin Wolf, 2018-06-18; 1 file changed, -2/+14)
  Since we use bdrv_do_drained_begin/end() for bdrv_drain_all_begin/end(), coroutine context is automatically left with a BH, preventing the deadlocks that made bdrv_drain_all*() unsafe in coroutine context.
  Now that we have even removed the old polling code as dead code, it's obvious that it is compatible. Enable the coroutine test cases for bdrv_drain_all().
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
* block: Use bdrv_do_drain_begin/end in bdrv_drain_all()  (Kevin Wolf, 2018-06-18; 1 file changed, -10/+4)
  bdrv_do_drain_begin/end() already implement everything that bdrv_drain_all_begin/end() need and currently still do manually: disable external events, call parent drain callbacks, call block driver callbacks.
  They also do two more things. The first is incrementing bs->quiesce_counter; bdrv_drain_all() already stood out in the test case by behaving differently from the other drain variants, so adding this is not only safe, but in fact a bug fix. The second is calling bdrv_drain_recurse(); we already do that later in the same function in a loop, so doing an early first iteration doesn't hurt.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
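  The steps listed above (stop external events, notify parents, notify the driver, track the quiesce counter) are easy to see laid out in code. The sketch below is a standalone illustration of that sequence with invented names; it is not the real bdrv_do_drained_begin(), and the exact ordering in QEMU's implementation may differ.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct FakeBDS {
        int quiesce_counter;
        bool external_events_enabled;
    } FakeBDS;

    /* Placeholder hooks for the two callback classes mentioned above. */
    static void parent_drained_begin(FakeBDS *bs) { (void)bs; /* notify users of the node */ }
    static void driver_drained_begin(FakeBDS *bs) { (void)bs; /* let the block driver quiesce */ }

    static void fake_drained_begin(FakeBDS *bs)
    {
        bs->external_events_enabled = false;  /* stop accepting new external requests */
        parent_drained_begin(bs);             /* parent drain callbacks */
        driver_drained_begin(bs);             /* block driver callbacks */
        bs->quiesce_counter++;                /* nested drains stack on this counter */
    }

    int main(void)
    {
        FakeBDS bs = { 0, true };
        fake_drained_begin(&bs);
        printf("quiesce_counter=%d external_events=%d\n",
               bs.quiesce_counter, bs.external_events_enabled);
        return 0;
    }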
* test-bdrv-drain: bdrv_drain() works with cross-AioContext events  (Kevin Wolf, 2018-06-18; 1 file changed, -1/+186)
  As long as nobody keeps the other I/O thread from working, there is no reason why bdrv_drain() wouldn't work with cross-AioContext events. The key is that the root request we're waiting for is in the AioContext we're polling (which it always is for bdrv_drain()) so that aio_poll() is woken up in the end.
  Add a test case that shows that it works. Remove the comment in bdrv_drain() that claims otherwise.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* job: Add error message for failing jobs  (Kevin Wolf, 2018-05-30; 1 file changed, -1/+1)
  So far we relied on job->ret and strerror() to produce an error message for failed jobs. Not surprisingly, this tends to result in completely useless messages.
  This adds a Job.error field that can contain an error string for a failing job, and a parameter to job_completed() that sets the field. As a default, if NULL is passed, we continue to use strerror(job->ret).
  All existing callers are changed to pass NULL. They can be improved in separate patches.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Reviewed-by: Jeff Cody <jcody@redhat.com>
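  The fallback rule described here (use the explicit error string if one was supplied, otherwise strerror() of the negative return code) is small enough to show as a standalone sketch; the struct and helper below are illustrative, not the actual QEMU definitions.

    #include <stdio.h>
    #include <string.h>

    typedef struct {
        int ret;            /* negative errno on failure */
        const char *error;  /* optional human-readable message */
    } FakeJob;

    /* Prefer the explicit message; fall back to strerror(-ret). */
    static const char *job_error_string(const FakeJob *job)
    {
        return job->error ? job->error : strerror(-job->ret);
    }

    int main(void)
    {
        FakeJob with_msg = { -5, "target image is corrupted" };
        FakeJob without  = { -5, NULL };
        printf("%s\n", job_error_string(&with_msg));
        printf("%s\n", job_error_string(&without));  /* e.g. "Input/output error" */
        return 0;
    }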
* job: Add job_transition_to_ready()  (Kevin Wolf, 2018-05-23; 1 file changed, -1/+1)
  The transition to the READY state was still performed in the BlockJob layer, in the same function that sent the BLOCK_JOB_READY QMP event.
  This patch brings the state transition to the Job layer and implements the QMP event using a notifier called from the Job layer, like we already do for other events related to state transitions.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
* job: Move completion and cancellation to Job  (Kevin Wolf, 2018-05-23; 1 file changed, -3/+2)
  This moves the top-level job completion and cancellation functions from BlockJob to Job.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* job: Move .complete callback to Job  (Kevin Wolf, 2018-05-23; 1 file changed, -3/+3)
  This moves the .complete callback that tells a READY job to complete from BlockJobDriver to JobDriver. The wrapper function job_complete() doesn't require anything block job specific any more and can be moved to Job.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
* job: Add job_drain()  (Kevin Wolf, 2018-05-23; 1 file changed, -0/+1)
  block_job_drain() contains a blk_drain() call which cannot be moved to Job, so add a new JobDriver callback JobDriver.drain which has a common implementation for all BlockJobs. In addition to this we keep the existing BlockJobDriver.drain callback that is called by the common drain implementation for all block jobs.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
* job: Move pause/resume functions to Job  (Kevin Wolf, 2018-05-23; 1 file changed, -0/+1)
  While we already moved the state related to job pausing to Job, the functions to do so were still BlockJob only. This commit moves them over to Job.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Reviewed-by: John Snow <jsnow@redhat.com>
* job: Add job_sleep_ns()  (Kevin Wolf, 2018-05-23; 1 file changed, -4/+4)
  There is nothing block layer specific about block_job_sleep_ns(), so move the function to Job.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: John Snow <jsnow@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
* job: Move coroutine and related code to Job  (Kevin Wolf, 2018-05-23; 1 file changed, -19/+19)
  This commit moves some core functions for dealing with the job coroutine from BlockJob to Job. This includes primarily entering the coroutine (both for the first time and when reentering) and yielding explicitly and at pause points.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: John Snow <jsnow@redhat.com>
* job: Move defer_to_main_loop to Job  (Kevin Wolf, 2018-05-23; 1 file changed, -3/+4)
  Move the defer_to_main_loop functionality from BlockJob to Job. The code can be simplified because we can use job->aio_context in job_defer_to_main_loop_bh() now, instead of having to access the BlockDriverState.
  Probably taking the data->aio_context lock in addition was already unnecessary in the old code because we didn't actually make use of anything protected by the old AioContext except getting the new AioContext, in case it changed between scheduling the BH and running it. But it's certainly unnecessary now that the BDS isn't accessed at all any more.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Reviewed-by: John Snow <jsnow@redhat.com>
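  Deferring a completion handler to the main loop via a one-shot bottom half is a general event-loop pattern. The sketch below models it with a plain callback queue standing in for the BH machinery; every name here is an illustrative stand-in, not QEMU's implementation.

    #include <stdio.h>
    #include <stdlib.h>

    typedef void DeferredFn(void *opaque);

    typedef struct Deferred {
        DeferredFn *fn;
        void *opaque;
        struct Deferred *next;
    } Deferred;

    static Deferred *main_loop_queue;

    /* Analogue of scheduling a one-shot BH in the main loop's context. */
    static void defer_to_main_loop(DeferredFn *fn, void *opaque)
    {
        Deferred *d = malloc(sizeof(*d));
        d->fn = fn;
        d->opaque = opaque;
        d->next = main_loop_queue;
        main_loop_queue = d;
    }

    /* Analogue of the main loop running its pending bottom halves. */
    static void main_loop_run_pending(void)
    {
        while (main_loop_queue) {
            Deferred *d = main_loop_queue;
            main_loop_queue = d->next;
            d->fn(d->opaque);
            free(d);
        }
    }

    static void job_completed_in_main_loop(void *opaque)
    {
        printf("job '%s' finalized in the main loop\n", (const char *)opaque);
    }

    int main(void)
    {
        /* A job would call this from its own context... */
        defer_to_main_loop(job_completed_in_main_loop, "demo-job");
        /* ...and the main loop picks it up later. */
        main_loop_run_pending();
        return 0;
    }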
* job: Add reference counting  (Kevin Wolf, 2018-05-23; 1 file changed, -0/+1)
  This moves reference counting from BlockJob to Job. In order to keep calling the BlockJob cleanup code when the job is deleted via job_unref(), introduce a new JobDriver.free callback. Every block job must use block_job_free() for this callback; this is asserted in block_job_create().
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Reviewed-by: John Snow <jsnow@redhat.com>
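  Reference counting with a driver-supplied free callback is easy to show in isolation. The sketch below is a generic, self-contained model of that idea, not the actual Job/JobDriver definitions.

    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct FakeJob FakeJob;

    typedef struct {
        /* Called exactly once, when the last reference is dropped. */
        void (*free)(FakeJob *job);
    } FakeJobDriver;

    struct FakeJob {
        const FakeJobDriver *driver;
        int refcnt;
    };

    static void job_ref(FakeJob *job) { job->refcnt++; }

    static void job_unref(FakeJob *job)
    {
        assert(job->refcnt > 0);
        if (--job->refcnt == 0) {
            /* Give the concrete job type a chance to clean up first. */
            if (job->driver->free) {
                job->driver->free(job);
            }
            free(job);
        }
    }

    static void demo_job_free(FakeJob *job)
    {
        printf("driver-specific cleanup for job %p\n", (void *)job);
    }

    static const FakeJobDriver demo_driver = { .free = demo_job_free };

    int main(void)
    {
        FakeJob *job = calloc(1, sizeof(*job));
        job->driver = &demo_driver;
        job->refcnt = 1;   /* creation holds the first reference */
        job_ref(job);      /* e.g. a pending completion holds another */
        job_unref(job);    /* completion done */
        job_unref(job);    /* last reference: the free callback runs */
        return 0;
    }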
* job: Create Job, JobDriver and job_create()  (Kevin Wolf, 2018-05-23; 1 file changed, -1/+3)
  This is the first step towards creating an infrastructure for generic background jobs that aren't tied to a block device. For now, Job only stores its ID and JobDriver; the rest stays in BlockJob. The following patches will move over more parts of BlockJob to Job if they are meaningful outside the context of a block job.
  BlockJob.driver is now redundant, but this patch leaves it around to avoid unnecessary churn. The next patches will get rid of almost all of its uses anyway so that it can be removed later with much less churn.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Reviewed-by: John Snow <jsnow@redhat.com>
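  A minimal model of the split described here: a generic Job struct embedded at the start of a concrete, block-specific job struct, created through a generic constructor that only knows about the ID and the driver. All definitions below are invented for illustration, not QEMU's.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct Job Job;

    typedef struct {
        const char *type_name;   /* e.g. "mirror", "backup" */
        size_t instance_size;    /* size of the concrete job struct */
    } FakeJobDriver;

    struct Job {
        char *id;
        const FakeJobDriver *driver;
    };

    /* Generic constructor: allocates the concrete struct (which embeds
     * Job as its first member) and fills in the generic fields only. */
    static void *job_create(const char *id, const FakeJobDriver *driver)
    {
        Job *job = calloc(1, driver->instance_size);
        job->id = strdup(id);
        job->driver = driver;
        return job;
    }

    /* A concrete job type keeps its own state next to the base. */
    typedef struct {
        Job common;        /* must stay the first member */
        int block_state;   /* block-layer-only state stays here for now */
    } FakeBlockJob;

    static const FakeJobDriver fake_block_job_driver = {
        .type_name = "fake-block-job",
        .instance_size = sizeof(FakeBlockJob),
    };

    int main(void)
    {
        FakeBlockJob *bj = job_create("job0", &fake_block_job_driver);
        printf("created %s job '%s'\n",
               bj->common.driver->type_name, bj->common.id);
        free(bj->common.id);
        free(bj);
        return 0;
    }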
* blockjobs: add block_job_verb permission table  (John Snow, 2018-03-19; 1 file changed, -0/+1)
  Which commands ("verbs") are appropriate for jobs in which state is also somewhat burdensome to keep track of. As of this commit, it looks rather useless, but begins to look more interesting the more states we add to the STM table.
  A recurring theme is that no verb will apply to an 'undefined' job. Further, it's not presently possible to restrict the "pause" or "resume" verbs any more than they are in this commit because of the asynchronous nature of how jobs enter the PAUSED state; justifications for some seemingly erroneous applications are given below.

  ===== Verbs =====

  Cancel:    Any state except undefined.
  Pause:     Any state except undefined;
             'created': Requests that the job pauses as it starts.
             'running': Normal usage. (PAUSED)
             'paused':  The job may be paused for internal reasons, but the user may wish to force an indefinite user-pause, so this is allowed.
             'ready':   Normal usage. (STANDBY)
             'standby': Same logic as above.
  Resume:    Any state except undefined;
             'created': Will lift a user's pause-on-start request.
             'running': Will lift a pause request before it takes effect.
             'paused':  Normal usage.
             'ready':   Will lift a pause request before it takes effect.
             'standby': Normal usage.
  Set-speed: Any state except undefined, though ready may not be meaningful.
  Complete:  Only a 'ready' job may accept a complete request.

  ======= Changes =======

  (1) To facilitate "nice" error checking, all five major block-job verb interfaces in blockjob.c now support an errp parameter:
      - block_job_user_cancel is added as a new interface.
      - block_job_user_pause gains an errp parameter.
      - block_job_user_resume gains an errp parameter.
      - block_job_set_speed already had an errp parameter.
      - block_job_complete already had an errp parameter.
  (2) block-job-pause and block-job-resume will no longer no-op when trying to pause an already paused job, or trying to resume a job that isn't paused. These functions will now report that they did not perform the action requested because it was not possible. iotests have been adjusted to address this new behavior.
  (3) block-job-complete doesn't worry about checking !block_job_started, because the permission table guards against this.
  (4) test-bdrv-drain's job implementation needs to announce that it is 'ready' now, in order to be completed.

  Signed-off-by: John Snow <jsnow@redhat.com>
  Reviewed-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
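  The verb-by-state rules quoted above boil down to a small two-dimensional lookup table. The following standalone sketch models that with invented enum and function names; the real table lives in blockjob.c and uses QEMU's own state and verb enums.

    #include <stdbool.h>
    #include <stdio.h>

    typedef enum {
        S_UNDEFINED, S_CREATED, S_RUNNING, S_PAUSED, S_READY, S_STANDBY,
        S__MAX
    } FakeJobStatus;

    typedef enum {
        V_CANCEL, V_PAUSE, V_RESUME, V_SET_SPEED, V_COMPLETE,
        V__MAX
    } FakeJobVerb;

    /* permission[verb][status]: may this verb be applied in this state?
     * Mirrors the rules in the commit message above. */
    static const bool permission[V__MAX][S__MAX] = {
        /*                undef  created running paused ready standby */
        [V_CANCEL]    = { false, true,   true,   true,  true, true },
        [V_PAUSE]     = { false, true,   true,   true,  true, true },
        [V_RESUME]    = { false, true,   true,   true,  true, true },
        [V_SET_SPEED] = { false, true,   true,   true,  true, true },
        [V_COMPLETE]  = { false, false,  false,  false, true, false },
    };

    static bool verb_allowed(FakeJobVerb verb, FakeJobStatus status)
    {
        return permission[verb][status];
    }

    int main(void)
    {
        printf("complete a running job: %s\n",
               verb_allowed(V_COMPLETE, S_RUNNING) ? "allowed" : "rejected");
        printf("pause a created job:    %s\n",
               verb_allowed(V_PAUSE, S_CREATED) ? "allowed" : "rejected");
        return 0;
    }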
* blockjobs: model single jobs as transactions  (John Snow, 2018-03-19; 1 file changed, -2/+2)
  Model all independent jobs as single-job transactions. It's one less case we have to worry about when we add more states to the transition machine. This way, we can just treat all job lifetimes exactly the same. This helps tighten assertions of the STM graph and removes some conditionals that would have been needed in the coming commits adding a more explicit job lifetime management API.
  Signed-off-by: John Snow <jsnow@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Reviewed-by: Kevin Wolf <kwolf@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* test-bdrv-drain: Test graph changes in drained section  (Kevin Wolf, 2017-12-22; 1 file changed, -0/+80)
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* test-bdrv-drain: Recursive draining with multiple parents  (Kevin Wolf, 2017-12-22; 1 file changed, -0/+74)
  Test that drain sections are correctly propagated through the graph.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* test-bdrv-drain: Test behaviour in coroutine context  (Kevin Wolf, 2017-12-22; 1 file changed, -0/+59)
  If bdrv_do_drained_begin/end() are called in coroutine context, they first use a BH to get out of the coroutine context. Call some existing tests again from a coroutine to cover this code path.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* test-bdrv-drain: Tests for bdrv_subtree_drain  (Kevin Wolf, 2017-12-22; 1 file changed, -1/+26)
  Add a subtree drain version to the existing test cases.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* test-bdrv-drain: Test nested drain sections  (Kevin Wolf, 2017-12-22; 1 file changed, -0/+57)
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Don't block_job_pause_all() in bdrv_drain_all()  (Kevin Wolf, 2017-12-22; 1 file changed, -6/+4)
  Block jobs are already paused using the BdrvChildRole drain callbacks, so we don't need an additional block_job_pause_all() call.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* test-bdrv-drain: Test drain vs. block jobs  (Kevin Wolf, 2017-12-22; 1 file changed, -0/+121)
  Block jobs must be paused if any of the involved nodes are drained.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* test-bdrv-drain: Test bs->quiesce_counter  (Kevin Wolf, 2017-12-22; 1 file changed, -0/+45)
  This is currently only working correctly for bdrv_drain(), not for bdrv_drain_all(). Leave a comment for the drain_all case; we'll address it later.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
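  The counter being tested behaves like a plain nesting counter: every drained_begin increments it, every drained_end decrements it, and the node stays quiescent while it is non-zero. A standalone sketch of that behaviour (names are illustrative, not the real bdrv_* API):

    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        int quiesce_counter;
    } FakeNode;

    static void fake_drained_begin(FakeNode *n) { n->quiesce_counter++; }

    static void fake_drained_end(FakeNode *n)
    {
        assert(n->quiesce_counter > 0);
        n->quiesce_counter--;
    }

    static bool fake_is_quiescent(const FakeNode *n)
    {
        return n->quiesce_counter > 0;
    }

    int main(void)
    {
        FakeNode n = { 0 };
        fake_drained_begin(&n);        /* outer drained section */
        fake_drained_begin(&n);        /* nested drained section */
        assert(fake_is_quiescent(&n) && n.quiesce_counter == 2);
        fake_drained_end(&n);          /* leaving the inner section... */
        assert(fake_is_quiescent(&n)); /* ...must keep the node quiesced */
        fake_drained_end(&n);
        assert(!fake_is_quiescent(&n));
        printf("nested drain sections balance correctly\n");
        return 0;
    }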
* test-bdrv-drain: Test callback for bdrv_drain  (Kevin Wolf, 2017-12-22; 1 file changed, -7/+62)
  The existing test is for bdrv_drain_all_begin/end() only. Generalise the test case so that it can be run for the other variants as well. At the moment this is only bdrv_drain_begin/end(), but in a while, we'll add another one.
  Also, add a backing file to the test node to test whether the operations work recursively.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* test-bdrv-drain: Test BlockDriver callbacks for drain  (Kevin Wolf, 2017-12-22; 1 file changed, -0/+137)
  This adds a test case checking that the BlockDriver callbacks for drain are called in bdrv_drain_all_begin/end(), and that both of them are called exactly once.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
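  The "called exactly once" check is a common pattern for testing callback plumbing: install counters in the driver callbacks, run the operation under test, then assert on the counters. A standalone sketch of that pattern (all names invented, not the actual test code):

    #include <assert.h>
    #include <stdio.h>

    /* Counters bumped by the fake driver callbacks. */
    static int drain_begin_calls;
    static int drain_end_calls;

    static void fake_drv_drain_begin(void) { drain_begin_calls++; }
    static void fake_drv_drain_end(void)   { drain_end_calls++; }

    /* Stand-ins for the begin/end operations under test: each must
     * invoke the corresponding driver callback exactly once. */
    static void fake_drain_all_begin(void) { fake_drv_drain_begin(); }
    static void fake_drain_all_end(void)   { fake_drv_drain_end(); }

    int main(void)
    {
        assert(drain_begin_calls == 0 && drain_end_calls == 0);
        fake_drain_all_begin();
        assert(drain_begin_calls == 1 && drain_end_calls == 0);
        fake_drain_all_end();
        assert(drain_begin_calls == 1 && drain_end_calls == 1);
        printf("both driver callbacks ran exactly once\n");
        return 0;
    }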