path: root/block
...
* | block/backup: centralize copy_bitmap initialization (John Snow, 2019-08-16; 1 file, -14/+15)

    A few housekeeping changes that make the following commit easier to
    read: perform the initial copy_bitmap initialization in one place.

    Signed-off-by: John Snow <jsnow@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20190716000117.25219-8-jsnow@redhat.com
    Signed-off-by: John Snow <jsnow@redhat.com>
* | block/backup: improve sync=bitmap work estimates (John Snow, 2019-08-16; 1 file, -5/+3)

    When making backups based on bitmaps, the work estimate can be more
    accurate. Update iotests to reflect the new strategy.

    TOP work estimates are broken, but do not get worse with this commit.
    That issue is addressed in the following commits instead.

    Signed-off-by: John Snow <jsnow@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20190716000117.25219-7-jsnow@redhat.com
    Signed-off-by: John Snow <jsnow@redhat.com>
* | block/backup: hoist bitmap check into QMP interface (John Snow, 2019-08-16; 1 file, -9/+4)

    This is nicer to do in the unified QMP interface that we have now,
    because it lets us use the right terminology back at the user.

    Signed-off-by: John Snow <jsnow@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20190716000117.25219-5-jsnow@redhat.com
    Signed-off-by: John Snow <jsnow@redhat.com>
* | qapi: implement block-dirty-bitmap-remove transaction action (John Snow, 2019-08-16; 1 file, -8/+7)

    It is used to do transactional movement of the bitmap (which is
    possible in conjunction with the merge command). Transactional bitmap
    movement is needed in scenarios with external snapshots, when we
    don't want to leave a copy of the bitmap in the base image.

    Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    Signed-off-by: John Snow <jsnow@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20190708220502.12977-3-jsnow@redhat.com
    [Edited "since" version to 4.2 --js]
    Signed-off-by: John Snow <jsnow@redhat.com>
* | block/backup: loosen restriction on readonly bitmaps (John Snow, 2019-08-16; 1 file, -0/+6)

    With the "never" sync policy, we can now actually make use of
    readonly bitmaps. Loosen the check at the QMP level, and tighten it
    based on the provided arguments down at the job creation level
    instead.

    Signed-off-by: John Snow <jsnow@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20190709232550.10724-19-jsnow@redhat.com
    Signed-off-by: John Snow <jsnow@redhat.com>
* | block/backup: add 'always' bitmap sync policy (John Snow, 2019-08-16; 1 file, -8/+19)

    This adds an "always" policy for bitmap synchronization. Regardless
    of whether the job succeeds or fails, the bitmap is *always*
    synchronized. This means that for backups that fail part-way through,
    the bitmap retains a record of which sectors need to be copied out to
    accomplish a new backup using the old, partial result.

    In effect, this allows us to "resume" a failed backup; however, the
    new backup will be from the new point in time, so it isn't a "resume"
    as much as it is an "incremental retry." This can be useful in the
    case of extremely large backups that fail a considerable way through
    the operation, where we'd like to not waste the work that was already
    performed.

    Signed-off-by: John Snow <jsnow@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20190709232550.10724-13-jsnow@redhat.com
    Signed-off-by: John Snow <jsnow@redhat.com>
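The three sync policies ('on-success', 'never', 'always') differ only in when the bitmap gets updated. A minimal compilable sketch of that decision; the enum values mirror the QAPI names, but the surrounding code is illustrative, not the QEMU implementation:

    #include <stdbool.h>
    #include <stdio.h>

    typedef enum {
        BITMAP_SYNC_MODE_ON_SUCCESS,
        BITMAP_SYNC_MODE_NEVER,
        BITMAP_SYNC_MODE_ALWAYS,
    } BitmapSyncMode;

    static bool should_update_bitmap(BitmapSyncMode mode, bool job_succeeded)
    {
        switch (mode) {
        case BITMAP_SYNC_MODE_NEVER:
            return false;            /* differential backups: never touch it */
        case BITMAP_SYNC_MODE_ON_SUCCESS:
            return job_succeeded;    /* classic incremental behaviour */
        case BITMAP_SYNC_MODE_ALWAYS:
        default:
            return true;             /* even on failure: enables the retry */
        }
    }

    int main(void)
    {
        printf("always + failed job -> update? %d\n",
               should_update_bitmap(BITMAP_SYNC_MODE_ALWAYS, false));
        return 0;
    }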
* | block/backup: upgrade copy_bitmap to BdrvDirtyBitmap (John Snow, 2019-08-16; 1 file, -39/+43)

    This simplifies some interface matters, namely the initialization and
    (later) merging of the manifest back into the sync_bitmap, if one was
    provided.

    Signed-off-by: John Snow <jsnow@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20190709232550.10724-12-jsnow@redhat.com
    Signed-off-by: John Snow <jsnow@redhat.com>
* | block/dirty-bitmap: add bdrv_dirty_bitmap_get (John Snow, 2019-08-16; 2 files, -8/+13)

    Add a public interface for get. While we're at it, rename
    "bdrv_get_dirty_bitmap_locked" to "bdrv_dirty_bitmap_get_locked".

    (There are more functions to rename to the bdrv_dirty_bitmap_VERB
    form, but they will wait until the conclusion of this series.)

    Signed-off-by: John Snow <jsnow@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20190709232550.10724-11-jsnow@redhat.com
    Signed-off-by: John Snow <jsnow@redhat.com>
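As a usage sketch (hedged: the signature is inferred from the rename described above), a caller can now test a single offset without reaching into the locked internals:

    /* assumed signature of the new public getter */
    bool bdrv_dirty_bitmap_get(BdrvDirtyBitmap *bitmap, int64_t offset);

    /* e.g. skip clusters the bitmap does not report as dirty */
    if (!bdrv_dirty_bitmap_get(copy_bitmap, offset)) {
        offset += cluster_size;    /* nothing to copy here */
    }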
* | block/dirty-bitmap: add bdrv_dirty_bitmap_merge_internal (John Snow, 2019-08-16; 1 file, -5/+49)

    I'm surprised it didn't come up sooner, but sometimes we have a +busy
    bitmap as a source. This is dangerous from the QMP API, but if we are
    the owner that marked the bitmap busy, it's safe to merge it, using
    it as a read-only source.

    It is not safe in the general case to allow users to read from in-use
    bitmaps, so create an internal variant that foregoes the safety
    checking.

    Signed-off-by: John Snow <jsnow@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20190709232550.10724-10-jsnow@redhat.com
    Signed-off-by: John Snow <jsnow@redhat.com>
* | block/backup: add 'never' policy to bitmap sync mode (John Snow, 2019-08-16; 1 file, -2/+5)

    This adds a "never" policy for bitmap synchronization. Regardless of
    whether the job succeeds or fails, we never update the bitmap. This
    can be used to perform differential backups, or simply to avoid the
    job modifying a bitmap.

    Signed-off-by: John Snow <jsnow@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20190709232550.10724-7-jsnow@redhat.com
    Signed-off-by: John Snow <jsnow@redhat.com>
* | block/backup: Add mirror sync mode 'bitmap' (John Snow, 2019-08-16; 3 files, -11/+17)

    We don't need or want a new sync mode for simple differences in
    semantics. Create a new mode simply named "BITMAP" that is designed
    to make use of the new Bitmap Sync Mode field.

    Because the only bitmap sync mode is 'on-success', this adds no new
    functionality to the backup job (yet). The old incremental backup
    mode is maintained as syntactic sugar for sync=bitmap,
    mode=on-success.

    Add all of the plumbing necessary to support this new instruction.

    Signed-off-by: John Snow <jsnow@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20190709232550.10724-6-jsnow@redhat.com
    Signed-off-by: John Snow <jsnow@redhat.com>
* Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging (Peter Maydell, 2019-08-16; 5 files, -31/+84)

    Block layer patches:

    - file-posix: Fix O_DIRECT alignment detection
    - Fixes for concurrent block jobs
    - block-backend: Queue requests while drained (fix IDE vs. job crashes)
    - qemu-img convert: Deprecate using -n and -o together
    - iotests: Migration tests with filter nodes
    - iotests: More media change tests

    # gpg: Signature made Fri 16 Aug 2019 10:29:18 BST
    # gpg: using RSA key 7F09B272C88F2FD6
    # gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>" [full]
    # Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6

    * remotes/kevin/tags/for-upstream:
      file-posix: Handle undetectable alignment
      qemu-img convert: Deprecate using -n and -o together
      block-backend: Queue requests while drained
      mirror: Keep mirror_top_bs drained after dropping permissions
      block: Remove blk_pread_unthrottled()
      iotests: Add test for concurrent stream/commit
      tests: Test mid-drain bdrv_replace_child_noperm()
      tests: Test polling in bdrv_drop_intermediate()
      block: Reduce (un)drains when replacing a child
      block: Keep subtree drained in drop_intermediate
      block: Simplify bdrv_filter_default_perms()
      iotests: Test migration with all kinds of filter nodes
      iotests: Move migration helpers to iotests.py
      iotests/118: Add -blockdev based tests
      iotests/118: Create test classes dynamically
      iotests/118: Test media change for scsi-cd

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
| * file-posix: Handle undetectable alignment (Nir Soffer, 2019-08-16; 1 file, -11/+25)

    In some cases buf_align or request_alignment cannot be detected:

    1. With Gluster, buf_align cannot be detected since the actual I/O is
       done on the Gluster server, and qemu's buffer alignment does not
       matter. Since we have no alignment requirement, buf_align=1 is the
       best value.

    2. With a local XFS filesystem, buf_align cannot be detected if
       reading from an unallocated area. In this case we must align the
       buffer, but we don't know the correct size. Using the wrong
       alignment results in an I/O error.

    3. With Gluster backed by XFS, request_alignment cannot be detected
       if reading from an unallocated area. In this case we need to use
       the correct alignment, and failing to do so results in I/O errors.

    4. With NFS, the server does not use direct I/O, so neither buf_align
       nor request_alignment can be detected. In this case we don't need
       any alignment, so we can use buf_align=1 and request_alignment=1.

    These cases seem to work when the storage sector size is 512 bytes,
    because the current code starts checking at align=512. If the check
    succeeds because alignment cannot be detected, we use 512. But this
    does not work for storage with a 4k sector size.

    To determine if we can detect the alignment, we probe first with
    align=1. If probing succeeds, maybe there is no alignment requirement
    (cases 1, 4) or we are probing an unallocated area (cases 2, 3).
    Since we don't have any way to tell, we treat this as undetectable
    alignment. If probing with align=1 fails with EINVAL, but probing
    with one of the expected alignments succeeds, we know that we found a
    working alignment.

    Practically, the alignment requirements are the same for buffer
    alignment, buffer length, and offset in file. So if we cannot detect
    buf_align, we can use the request alignment; if we cannot detect the
    request alignment, we can fall back to a safe value. To use this
    logic, we probe the request alignment first, instead of buf_align.

    Here is a table showing the behaviour with the current code (the
    value in parentheses is the optimal value):

    Case  Sector  buf_align (opt)  request_alignment (opt)  result
    ======================================================================
    1     512     512  (1)         512  (512)               OK
    1     4096    512  (1)         4096 (4096)              FAIL
    ----------------------------------------------------------------------
    2     512     512  (512)       512  (512)               OK
    2     4096    512  (4096)      4096 (4096)              FAIL
    ----------------------------------------------------------------------
    3     512     512  (1)         512  (512)               OK
    3     4096    512  (1)         512  (4096)              FAIL
    ----------------------------------------------------------------------
    4     512     512  (1)         512  (1)                 OK
    4     4096    512  (1)         512  (1)                 OK

    Same cases with this change:

    Case  Sector  buf_align (opt)  request_alignment (opt)  result
    ======================================================================
    1     512     512  (1)         512  (512)               OK
    1     4096    4096 (1)         4096 (4096)              OK
    ----------------------------------------------------------------------
    2     512     512  (512)       512  (512)               OK
    2     4096    4096 (4096)      4096 (4096)              OK
    ----------------------------------------------------------------------
    3     512     4096 (1)         4096 (512)               OK
    3     4096    4096 (1)         4096 (4096)              OK
    ----------------------------------------------------------------------
    4     512     4096 (1)         4096 (1)                 OK
    4     4096    4096 (1)         4096 (1)                 OK

    I tested that provisioning VMs and copying disks on local XFS and
    Gluster with a 4k sector size work now, resolving bugs [1], [2]. I
    also tested on XFS, NFS, and Gluster with a 512 byte sector size.

    [1] https://bugzilla.redhat.com/1737256
    [2] https://bugzilla.redhat.com/1738657

    Signed-off-by: Nir Soffer <nsoffer@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
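The probing order this describes can be condensed into a small standalone program; this is an illustrative sketch under the assumptions above (O_DIRECT reads, EINVAL on misalignment), not the file-posix code:

    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Probe request_alignment first. A 1-byte O_DIRECT read that succeeds
     * means the alignment is undetectable (no requirement, or we hit an
     * unallocated area) and the caller must fall back to a safe value;
     * EINVAL means the smallest power of two that works is the detected
     * alignment. Returns 0 for "undetectable". */
    static size_t probe_request_alignment(int fd)
    {
        char *buf;
        size_t align;

        if (posix_memalign((void **)&buf, 4096, 4096) != 0) {
            return 0;
        }
        if (pread(fd, buf, 1, 0) >= 0 || errno != EINVAL) {
            free(buf);
            return 0;                    /* undetectable */
        }
        for (align = 512; align <= 4096; align <<= 1) {
            if (pread(fd, buf, align, 0) >= 0) {
                free(buf);
                return align;            /* found a working alignment */
            }
        }
        free(buf);
        return 0;
    }

    int main(int argc, char **argv)
    {
        int fd;

        if (argc != 2 || (fd = open(argv[1], O_RDONLY | O_DIRECT)) < 0) {
            return 1;
        }
        printf("request_alignment: %zu (0 = undetectable)\n",
               probe_request_alignment(fd));
        close(fd);
        return 0;
    }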
| * block-backend: Queue requests while drained (Kevin Wolf, 2019-08-16; 4 files, -3/+54)

    This fixes devices like IDE that can still start new requests from
    I/O handlers in the CPU thread while the block backend is drained.

    The basic assumption is that in a drain section, no new requests
    should be allowed through a BlockBackend (blk_drained_begin/end don't
    exist, we get drain sections only on the node level). However, there
    are two special cases where requests should not be queued:

    1. Block jobs: We already make sure that block jobs are paused in a
       drain section, so they won't start new requests. However, if the
       drain_begin is called on the job's BlockBackend first, it can
       happen that we deadlock because the job stays busy until it
       reaches a pause point - which it can't if its requests aren't
       processed any more. The proper solution here would be to make all
       requests through the job's filter node instead of using a
       BlockBackend. For now, just disabling request queuing on the job
       BlockBackend is simpler.

    2. In test cases where making requests through bdrv_* would be
       cumbersome because we'd need a BdrvChild. Since we already have
       the functionality to disable request queuing from 1., use it in
       tests, too, for convenience.

    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
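A hedged sketch of the queuing idea in QEMU style; the field and helper names follow the description above but may not match the patch exactly:

    /* called at the start of every request through the BlockBackend */
    static void coroutine_fn blk_wait_while_drained(BlockBackend *blk)
    {
        if (blk->quiesce_counter && !blk->disable_request_queuing) {
            /* park the request until the drain section ends */
            qemu_co_queue_wait(&blk->queued_requests, NULL);
        }
    }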
| * mirror: Keep mirror_top_bs drained after dropping permissions (Kevin Wolf, 2019-08-16; 1 file, -1/+5)

    mirror_top_bs is currently implicitly drained through its connection
    to the source or the target node. However, the drain section for
    target_bs ends early, after moving mirror_top_bs from src to
    target_bs, so that requests can already be restarted while
    mirror_top_bs is still present in the chain, but has dropped all
    permissions and therefore runs into an assertion failure like this:

        qemu-system-x86_64: block/io.c:1634: bdrv_co_write_req_prepare:
        Assertion `child->perm & BLK_PERM_WRITE' failed.

    Keep mirror_top_bs drained until all graph changes have completed.

    Cc: qemu-stable@nongnu.org
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
| * block: Remove blk_pread_unthrottled() (Kevin Wolf, 2019-08-16; 1 file, -16/+0)

    The functionality offered by blk_pread_unthrottled() goes back to
    commit 498e386c584. Back then, we couldn't perform I/O throttling
    with synchronous requests because timers wouldn't be executed in
    polling loops, so the commit automatically disabled I/O throttling
    as soon as a synchronous request was issued.

    However, for geometry detection during disk initialisation, we always
    used (and still use) synchronous requests even if guest requests use
    AIO later. We didn't want geometry detection to disable I/O
    throttling for good, so bdrv_pread_unthrottled() was introduced,
    which disabled throttling only temporarily.

    All of this isn't necessary any more because we do run timers in
    polling loops, and even synchronous requests now use the coroutine
    infrastructure internally. For this reason, commit 90c78624f already
    removed the automatic disabling of I/O throttling. It's time to get
    rid of the workaround for the removed code, and its abuse of
    blk_root_drained_begin()/end(), as well.

    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
* | Merge remote-tracking branch 'remotes/ericb/tags/pull-nbd-2019-08-15' into staging (Peter Maydell, 2019-08-16; 3 files, -103/+134)

    nbd patches for 2019-08-15

    - Addition of InetSocketAddress keep-alive
    - Addition of BDRV_REQ_PREFETCH for more efficient copy-on-read
    - Initial refactoring in preparation for NBD reconnect

    # gpg: Signature made Thu 15 Aug 2019 19:28:41 BST
    # gpg: using RSA key A7A16B4A2527436A
    # gpg: Good signature from "Eric Blake <eblake@redhat.com>" [full]
    # gpg: aka "Eric Blake (Free Software Programmer) <ebb9@byu.net>" [full]
    # gpg: aka "[jpeg image of size 6874]" [full]
    # Primary key fingerprint: 71C2 CC22 B1C4 6029 27D2 F3AA A7A1 6B4A 2527 436A

    * remotes/ericb/tags/pull-nbd-2019-08-15:
      block/nbd: refactor nbd connection parameters
      block/nbd: add cmdline and qapi parameter reconnect-delay
      block/nbd: move from quit to state
      block/nbd: use non-blocking io channel for nbd negotiation
      block/nbd: split connection_co start out of nbd_client_connect
      nbd: improve CMD_CACHE: use BDRV_REQ_PREFETCH
      block/stream: use BDRV_REQ_PREFETCH
      block: implement BDRV_REQ_PREFETCH
      qapi: Add InetSocketAddress member keep-alive

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
| * | block/nbd: refactor nbd connection parameters (Vladimir Sementsov-Ogievskiy, 2019-08-15; 1 file, -61/+60)

    We'll need some connection parameters to be available all the time to
    implement nbd reconnect. So, let's refactor them: define the
    additional parameters in BDRVNBDState, drop them from function
    parameters, drop nbd_client_init, and instead split option parsing
    out of nbd_open.

    Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    Message-Id: <20190618114328.55249-6-vsementsov@virtuozzo.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    [eblake: Drop useless 'if' before object_unref]
    Signed-off-by: Eric Blake <eblake@redhat.com>
| * | block/nbd: add cmdline and qapi parameter reconnect-delay (Vladimir Sementsov-Ogievskiy, 2019-08-15; 1 file, -1/+15)

    Reconnect will be implemented in the following commit, so for now, in
    the semantics below, a disconnect itself is a "serious error".

    Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Message-Id: <20190618114328.55249-5-vsementsov@virtuozzo.com>
    [eblake: slipped from 4.1 to 4.2]
    Signed-off-by: Eric Blake <eblake@redhat.com>
| * | block/nbd: move from quit to state (Vladimir Sementsov-Ogievskiy, 2019-08-15; 1 file, -21/+37)

    To implement reconnect we need several states for the client:
    CONNECTED, QUIT and two different CONNECTING states. The CONNECTING
    states will be added in the following patches. This patch implements
    CONNECTED and QUIT.

    QUIT means that we should close the connection and fail all current
    and further requests (like the old quit = true). CONNECTED means that
    the connection is OK and we can send requests (like the old
    quit = false).

    For the receiving loop we compare the current state against QUIT,
    because reconnect will live in the same loop, so it should keep
    looping until the end. Conversely, for requests we compare the
    current state against CONNECTED, as we don't want to send requests
    in the future CONNECTING states.

    Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Message-Id: <20190618114328.55249-4-vsementsov@virtuozzo.com>
    Signed-off-by: Eric Blake <eblake@redhat.com>
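A hedged sketch of the two comparisons described above (the enum naming follows the commit message; treat the fragment as illustrative, not the patch itself):

    typedef enum NBDClientState {
        NBD_CLIENT_CONNECTED,
        /* two CONNECTING states are added by later patches in the series */
        NBD_CLIENT_QUIT,
    } NBDClientState;

    /* receiving loop: keep looping until the client is quitting, so
     * that reconnect can later live in the same loop */
    while (s->state != NBD_CLIENT_QUIT) {
        /* receive one reply */
    }

    /* sending a request: only allowed while fully connected */
    if (s->state != NBD_CLIENT_CONNECTED) {
        return -EIO;
    }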
| * | block/nbd: use non-blocking io channel for nbd negotiation (Vladimir Sementsov-Ogievskiy, 2019-08-15; 1 file, -9/+7)

    There is no reason to use a blocking channel for negotiation, and
    we'll benefit in the upcoming reconnect feature, as qio_channel reads
    and writes will do qemu_coroutine_yield while waiting for I/O
    completion.

    Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Message-Id: <20190618114328.55249-3-vsementsov@virtuozzo.com>
    Signed-off-by: Eric Blake <eblake@redhat.com>
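The switch itself is a single call on the channel before negotiation starts; a sketch, assuming the usual QIOChannel API and an illustrative variable name:

    /* non-blocking: qio_channel reads and writes now yield the current
     * coroutine instead of blocking the thread while waiting for I/O */
    qio_channel_set_blocking(QIO_CHANNEL(sioc), false, NULL);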
| * | block/nbd: split connection_co start out of nbd_client_connect (Vladimir Sementsov-Ogievskiy, 2019-08-15; 1 file, -9/+13)

    nbd_client_connect is going to be used from connection_co, so let's
    refactor nbd_client_connect in advance, leaving the io channel
    configuration all in nbd_client_connect.

    Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Message-Id: <20190618114328.55249-2-vsementsov@virtuozzo.com>
    Signed-off-by: Eric Blake <eblake@redhat.com>
| * | block/stream: use BDRV_REQ_PREFETCH (Vladimir Sementsov-Ogievskiy, 2019-08-15; 1 file, -15/+9)

    This helps to avoid extra I/O, allocations, and memory copying.

    Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    Message-Id: <20190725100550.33801-3-vsementsov@virtuozzo.com>
    Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
    [eblake: fix comment grammar]
    Signed-off-by: Eric Blake <eblake@redhat.com>
| * | block: implement BDRV_REQ_PREFETCH (Vladimir Sementsov-Ogievskiy, 2019-08-15; 1 file, -6/+12)

    Perform an effective copy-on-read request when we don't actually need
    the data. It will be used for block-stream and NBD_CMD_CACHE.

    Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    Message-Id: <20190725100550.33801-2-vsementsov@virtuozzo.com>
    Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
    [eblake: comment grammar fix]
    Signed-off-by: Eric Blake <eblake@redhat.com>
* | sysemu: Split sysemu/runstate.h off sysemu/sysemu.h (Markus Armbruster, 2019-08-16; 1 file, -1/+1)

    sysemu/sysemu.h is a rather unfocused dumping ground for stuff
    related to the system-emulator. Evidence:

    * It's included widely: in my "build everything" tree, changing
      sysemu/sysemu.h still triggers a recompile of some 1100 out of 6600
      objects (not counting tests and objects that don't depend on
      qemu/osdep.h, down from 5400 due to the previous two commits).

    * It pulls in more than a dozen additional headers.

    Split stuff related to run state management into its own header
    sysemu/runstate.h. Touching sysemu/sysemu.h now recompiles some 850
    objects. qemu/uuid.h also drops from 1100 to 850, and
    qapi/qapi-types-run-state.h from 4400 to 4200. Touching the new
    sysemu/runstate.h recompiles some 500 objects.

    Since I'm touching MAINTAINERS to add sysemu/runstate.h anyway, also
    add qemu/main-loop.h.

    Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Markus Armbruster <armbru@redhat.com>
    Message-Id: <20190812052359.30071-30-armbru@redhat.com>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    [Unbreak OS-X build]
* | Clean up inclusion of sysemu/sysemu.h (Markus Armbruster, 2019-08-16; 1 file, -1/+0)

    In my "build everything" tree, changing sysemu/sysemu.h triggers a
    recompile of some 5400 out of 6600 objects (not counting tests and
    objects that don't depend on qemu/osdep.h).

    Almost a third of its inclusions are actually superfluous. Delete
    them. Downgrade two more to qapi/qapi-types-run-state.h, and move one
    from char/serial.h to char/serial.c.

    hw/semihosting/config.c, monitor/monitor.c, qdev-monitor.c, and
    stubs/semihost.c define variables declared in sysemu/sysemu.h without
    including it. The compiler is cool with that, but include it anyway.

    This doesn't reduce actual use much, as it's still included into
    widely included headers. The next commit will tackle that.

    Signed-off-by: Markus Armbruster <armbru@redhat.com>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Message-Id: <20190812052359.30071-27-armbru@redhat.com>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
* | Include qemu/main-loop.h less (Markus Armbruster, 2019-08-16; 10 files, -0/+10)

    In my "build everything" tree, changing qemu/main-loop.h triggers a
    recompile of some 5600 out of 6600 objects (not counting tests and
    objects that don't depend on qemu/osdep.h). It includes block/aio.h,
    which in turn includes qemu/event_notifier.h, qemu/notify.h,
    qemu/processor.h, qemu/qsp.h, qemu/queue.h, qemu/thread-posix.h,
    qemu/thread.h, qemu/timer.h, and a few more.

    Include qemu/main-loop.h only where it's needed. Touching it now
    recompiles only some 1700 objects. For block/aio.h and
    qemu/event_notifier.h, these numbers drop from 5600 to 2800. For the
    others, they shrink only slightly.

    Signed-off-by: Markus Armbruster <armbru@redhat.com>
    Message-Id: <20190812052359.30071-21-armbru@redhat.com>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
* | trace: Do not include qom/cpu.h into generated trace.h (Markus Armbruster, 2019-08-16; 1 file, -0/+1)

    docs/devel/tracing.txt explains "since many source files include
    trace.h, [the generated trace.h uses] a minimum of types and other
    header files included to keep the namespace clean and compile times
    and dependencies down."

    Commit 4815185902 "trace: Add per-vCPU tracing states for events with
    the 'vcpu' property" made them all include qom/cpu.h via
    control-internal.h. qom/cpu.h in turn includes about thirty headers.
    Ouch.

    Per-vCPU tracing is currently not supported in sub-directories'
    trace-events. In other words, qom/cpu.h can only be used in
    trace-root.h, not in any trace.h.

    Split trace/control-vcpu.h off trace/control.h and
    trace/control-internal.h. Have the generated trace.h include
    trace/control.h (which no longer includes qom/cpu.h), and
    trace-root.h include trace/control-vcpu.h (which includes it).

    The resulting improvement is a bit disappointing: in my "build
    everything" tree, some 1100 out of 6600 objects (not counting tests
    and objects that don't depend on qemu/osdep.h) depend on a trace.h,
    and about 600 of them no longer depend on qom/cpu.h. But more than
    1300 others depend on trace-root.h. More work is clearly needed.
    Left for another day.

    Cc: Stefan Hajnoczi <stefanha@redhat.com>
    Signed-off-by: Markus Armbruster <armbru@redhat.com>
    Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Message-Id: <20190812052359.30071-8-armbru@redhat.com>
* block/backup: disable copy_range for compressed backup (Vladimir Sementsov-Ogievskiy, 2019-08-06; 1 file, -1/+1)

    copy_range, which is enabled by default, ignores the compress option.
    That is definitely unexpected for the user. It has been broken since
    the introduction of copy_range usage in backup in 9ded4a011496.

    Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    Message-id: 20190730163251.755248-3-vsementsov@virtuozzo.com
    Reviewed-by: John Snow <jsnow@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Cc: qemu-stable@nongnu.org
    Signed-off-by: Max Reitz <mreitz@redhat.com>
* mirror: Only mirror granularity-aligned chunks (Max Reitz, 2019-08-06; 1 file, -0/+29)

    In write-blocking mode, all writes to the top node directly go to the
    target. We must only mirror chunks of data that are aligned to the
    job's granularity, because that is how the dirty bitmap works.
    Therefore, the request alignment for writes must be the job's
    granularity (in write-blocking mode).

    Unfortunately, this forces all reads and writes to have the same
    granularity (we only need this alignment for writes to the target,
    not the source), but that is something to be fixed another time.

    Cc: qemu-stable@nongnu.org
    Signed-off-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20190805153308.2657-1-mreitz@redhat.com
    Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    Fixes: d06107ade0ce74dc39739bac80de84b51ec18546
    Signed-off-by: Max Reitz <mreitz@redhat.com>
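The alignment rule reduces to rounding the request range outward to granularity boundaries; a self-contained illustration of that arithmetic (not the mirror code itself):

    #include <inttypes.h>
    #include <stdio.h>

    /* Expand [offset, offset + bytes) to the enclosing granularity-
     * aligned range, so the dirty bitmap can represent it exactly. */
    static void align_to_granularity(int64_t offset, int64_t bytes,
                                     int64_t granularity,
                                     int64_t *al_offset, int64_t *al_bytes)
    {
        int64_t start = offset / granularity * granularity;  /* round down */
        int64_t end = (offset + bytes + granularity - 1)
                      / granularity * granularity;           /* round up */

        *al_offset = start;
        *al_bytes = end - start;
    }

    int main(void)
    {
        int64_t o, b;

        align_to_granularity(70000, 4096, 65536, &o, &b);
        /* prints offset=65536 bytes=65536 */
        printf("offset=%" PRId64 " bytes=%" PRId64 "\n", o, b);
        return 0;
    }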
* backup: Copy only dirty areas (Max Reitz, 2019-08-06; 1 file, -2/+11)

    The backup job must only copy areas that the copy_bitmap reports as
    dirty. This is always the case when using traditional non-offloading
    backup, because it copies each cluster separately. When offloading
    the copy operation, we sometimes copy more than one cluster at a
    time, but we only check whether the first one is dirty.

    Therefore, whenever copy offloading is possible, the backup job
    currently produces wrong output when the guest writes to an area of
    which an inner part has already been backed up, because that inner
    part will be re-copied.

    Fixes: 9ded4a0114968e98b41494fc035ba14f84cdf700
    Signed-off-by: Max Reitz <mreitz@redhat.com>
    Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    Message-id: 20190801173900.23851-2-mreitz@redhat.com
    Cc: qemu-stable@nongnu.org
    Signed-off-by: Max Reitz <mreitz@redhat.com>
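The idea of the fix in a toy model: a copy offload may only extend across consecutive dirty clusters, instead of checking just the first one (illustrative, not the backup job code):

    #include <stdbool.h>
    #include <stdio.h>

    /* Number of consecutive dirty clusters starting at 'cluster'; this
     * bounds how far a single offloaded copy may extend. */
    static long dirty_run_length(const bool *dirty, long cluster,
                                 long nb_clusters)
    {
        long n = 0;

        while (cluster + n < nb_clusters && dirty[cluster + n]) {
            n++;
        }
        return n;
    }

    int main(void)
    {
        /* cluster 2 was already backed up: a copy starting at cluster 0
         * must stop after two clusters, not blindly cover all four */
        bool dirty[] = { true, true, false, true };

        printf("copy %ld clusters\n", dirty_run_length(dirty, 0, 4));
        return 0;
    }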
* nvme: Limit blkshift to 12 (for 4 kB blocks) (Max Reitz, 2019-07-30; 1 file, -11/+11)

    Linux does not support blocks greater than 4 kB anyway, so we might
    as well limit blkshift to 12 and thus save us from some potential
    trouble.

    Reported-by: Peter Maydell <peter.maydell@linaro.org>
    Suggested-by: Maxim Levitsky <mlevitsk@redhat.com>
    Signed-off-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20190730114812.10493-1-mreitz@redhat.com
    Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
    Coverity: CID 1403771
    Signed-off-by: Max Reitz <mreitz@redhat.com>
* block/copy-on-read: Fix permissions for inactive node (Kevin Wolf, 2019-07-30; 1 file, -9/+7)

    The copy-on-read driver must not request the WRITE_UNCHANGED
    permission for its child if the node is inactive, otherwise starting
    a migration destination with -incoming will fail because the child
    cannot provide write access yet:

        qemu-system-x86_64: -blockdev copy-on-read,file=img,node-name=cor:
        Block node is read-only

    Earlier QEMU versions additionally ran into an abort() on the
    migration source side: bdrv_inactivate_recurse() failed to update
    permissions. This is silently ignored today because it was only
    supposed to loosen restrictions. This is the symptom that was
    originally reported here:

    https://bugzilla.redhat.com/show_bug.cgi?id=1733022

    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
* block: Dec. drained_end_counter before bdrv_wakeup (Max Reitz, 2019-07-22; 1 file, -3/+2)

    Decrementing drained_end_counter after bdrv_dec_in_flight() (which in
    turn invokes bdrv_wakeup() and thus aio_wait_kick()) is not very
    clever. We should decrement it beforehand, so that any waiting
    aio_poll() that is woken by bdrv_dec_in_flight() sees the decremented
    drained_end_counter.

    Because the time window between decrementing drained_end_counter and
    aio_wait_kick() is very small, I cannot supply a reliable regression
    test. However, running e.g. the
    /bdrv-drain/blockjob/iothread/drain_all test in test-bdrv-drain has a
    small chance of hanging without this patch (about 1/200 or so; it
    gets to nearly 100 % if you add e.g. an fputc(' ', stderr); after the
    bdrv_dec_in_flight()).

    Fixes: e037c09c78520cbdb6da7cfc6ad0256d5870b814
    Signed-off-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20190722133054.21781-2-mreitz@redhat.com
    Signed-off-by: Max Reitz <mreitz@redhat.com>
* block/nvme: don't touch the completion entries (Maxim Levitsky, 2019-07-22; 1 file, -4/+1)

    Completion entries are meant to be only read by the host and written
    by the device. The driver is supposed to scan the completions from
    the last point where it left off, until it sees a completion with a
    non-flipped phase bit.

    Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20190716163020.13383-4-mlevitsk@redhat.com
    Signed-off-by: Max Reitz <mreitz@redhat.com>
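The phase-bit convention is easy to model standalone: an entry is valid only while its phase bit matches what the host expects, and the expected phase flips on every ring wrap (toy model with a simplified field layout, not the QEMU driver):

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint16_t status;   /* bit 0 is the phase tag, device-written */
    } Cqe;

    typedef struct {
        Cqe *ring;
        size_t size;
        size_t head;
        int phase;         /* host's expected phase, starts at 1 */
    } CompletionQueue;

    /* Return the next valid completion, or NULL if the device has not
     * written this slot yet; the host never writes the entries. */
    static Cqe *cq_next(CompletionQueue *q)
    {
        Cqe *cqe = &q->ring[q->head];

        if ((cqe->status & 1) != q->phase) {
            return NULL;
        }
        if (++q->head == q->size) {   /* wrap: flip the expected phase */
            q->head = 0;
            q->phase = !q->phase;
        }
        return cqe;
    }

    int main(void)
    {
        Cqe ring[2] = { { .status = 1 }, { .status = 0 } };
        CompletionQueue q = { ring, 2, 0, 1 };

        printf("slot 0 valid: %d\n", cq_next(&q) != NULL);  /* 1 */
        printf("slot 1 valid: %d\n", cq_next(&q) != NULL);  /* 0 */
        return 0;
    }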
* block/nvme: support larger than 512-byte sector devices (Maxim Levitsky, 2019-07-22; 1 file, -5/+40)

    Currently the driver hardcodes the sector size to 512 and doesn't
    check the underlying device. Fix that.

    Also fail if the underlying nvme device is formatted with metadata,
    as this needs special support.

    Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
    Message-id: 20190716163020.13383-3-mlevitsk@redhat.com
    Signed-off-by: Max Reitz <mreitz@redhat.com>
* block/nvme: fix doorbell stride (Maxim Levitsky, 2019-07-22; 1 file, -1/+1)

    Fix the math involving the non-standard doorbell stride.

    Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20190716163020.13383-2-mlevitsk@redhat.com
    Signed-off-by: Max Reitz <mreitz@redhat.com>
* Merge remote-tracking branch 'remotes/ericb/tags/pull-nbd-2019-07-19' into staging (Peter Maydell, 2019-07-22; 1 file, -3/+2)

    nbd patches for 2019-07-19

    - silence harmless compiler/valgrind warning

    # gpg: Signature made Fri 19 Jul 2019 21:17:12 BST
    # gpg: using RSA key A7A16B4A2527436A
    # gpg: Good signature from "Eric Blake <eblake@redhat.com>" [full]
    # gpg: aka "Eric Blake (Free Software Programmer) <ebb9@byu.net>" [full]
    # gpg: aka "[jpeg image of size 6874]" [full]
    # Primary key fingerprint: 71C2 CC22 B1C4 6029 27D2 F3AA A7A1 6B4A 2527 436A

    * remotes/ericb/tags/pull-nbd-2019-07-19:
      nbd: Initialize reply on failure

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
| * nbd: Initialize reply on failure (Eric Blake, 2019-07-19; 1 file, -3/+2)

    We've had two separate reports of different callers running into use
    of uninitialized data if s->quit is set (one detected by gcc -O3,
    another by valgrind), due to checking 'nbd_reply_is_simple(reply) ||
    s->quit' in the wrong order. Rather than chasing down which callers
    need to pre-initialize reply, and whether there are any other
    uninitialized uses, it's easier to guarantee that reply will always
    be set by nbd_co_receive_one_chunk(), even on failure.

    The uninitialized use happens to be harmless (the only time the
    variable is uninitialized is if s->quit is set, so the conditional
    results in the same action regardless of what was read from reply),
    and was introduced in commit 65e01d47.

    In fixing the problem, it can also be seen that all (one) callers
    pass in a non-NULL reply, so there is a dead conditional to also be
    cleaned up.

    Reported-by: Thomas Huth <thuth@redhat.com>
    Reported-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
    Signed-off-by: Eric Blake <eblake@redhat.com>
    Message-Id: <20190719172001.19770-1-eblake@redhat.com>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
* | block: Loop unsafely in bdrv*drained_end() (Max Reitz, 2019-07-19; 1 file, -4/+4)

    The graph must not change in these loops (or a QLIST_FOREACH_SAFE
    would not even be enough). We now ensure this by only polling once in
    the root bdrv_drained_end() call, so we can drop the _SAFE suffix.
    Doing so makes it clear that the graph must not change.

    Signed-off-by: Max Reitz <mreitz@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* | block: Do not poll in bdrv_do_drained_end() (Max Reitz, 2019-07-19; 2 files, -26/+60)

    We should never poll anywhere in bdrv_do_drained_end() (including its
    recursive callees like bdrv_drain_invoke()), because it does not cope
    well with graph changes. In fact, it has been written based on the
    postulation that no graph changes will happen in it.

    Instead, the callers that want to poll must poll, i.e. all currently
    globally available wrappers: bdrv_drained_end(),
    bdrv_subtree_drained_end(), bdrv_unapply_subtree_drain(), and
    bdrv_drain_all_end(). Graph changes there do not matter. They can
    poll simply by passing a pointer to a drained_end_counter and wait
    until it reaches 0.

    This patch also adds a non-polling global wrapper for
    bdrv_do_drained_end() that takes a drained_end_counter pointer. We
    need such a variant because now no function called anywhere from
    bdrv_do_drained_end() must poll. This includes
    BdrvChildRole.drained_end(), which already must not poll according to
    its interface documentation, but bdrv_child_cb_drained_end() just
    violates that by invoking bdrv_drained_end() (which does poll).
    Therefore, BdrvChildRole.drained_end() must take a
    *drained_end_counter parameter, which bdrv_child_cb_drained_end() can
    pass on to the new bdrv_drained_end_no_poll() function.

    Note that we now have a pattern of all drained_end-related functions
    either polling or receiving a *drained_end_counter to let the caller
    poll based on that.

    A problem with a single poll loop is that when the drained section in
    bdrv_set_aio_context_ignore() ends, some nodes in the subgraph may be
    in the old contexts, while others are in the new context already. To
    let the collective poll in bdrv_drained_end() work correctly, we must
    not hold a lock to the old context, so that the old context can make
    progress in case it is different from the current context. (In the
    process, remove the comment saying that the current context is always
    the old context, because it is wrong.)

    In all other places, all nodes in a subtree must be in the same
    context, so we can just poll that. The exception of course is
    bdrv_drain_all_end(), but that always runs in the main context, so we
    can just poll NULL (like bdrv_drain_all_begin() does).

    Signed-off-by: Max Reitz <mreitz@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* | block: Make bdrv_parent_drained_[^_]*() static (Max Reitz, 2019-07-19; 1 file, -4/+4)

    These functions are not used outside of block/io.c; there is no
    reason why they should be globally available.

    Signed-off-by: Max Reitz <mreitz@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* | block: Add @drained_end_counter (Max Reitz, 2019-07-19; 1 file, -18/+40)

    Callers can now pass a pointer to an integer that bdrv_drain_invoke()
    (and its recursive callees) will increment for every
    bdrv_drain_invoke_entry() operation they schedule.
    bdrv_drain_invoke_entry() in turn will decrement it once it has
    invoked BlockDriver.bdrv_co_drain_end().

    We use atomic operations to access the pointee, because the
    bdrv_do_drained_end() caller may wish to end drained sections for
    multiple nodes in different AioContexts (bdrv_drain_all_end() does,
    for example).

    This is the first step to moving the polling for BdrvCoDrainData.done
    to become true out of bdrv_drain_invoke() and into the root
    drained_end function.

    Signed-off-by: Max Reitz <mreitz@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
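The resulting pattern at a root caller looks roughly like this; a schematic fragment, with bdrv_do_drained_end()'s argument list abbreviated to the parts named above:

    int drained_end_counter = 0;

    /* schedules every BlockDriver.bdrv_co_drain_end() callback; each one
     * increments the counter when scheduled, decrements it when done */
    bdrv_do_drained_end(bs, recursive, parent, &drained_end_counter);

    /* a single poll loop at the root, tolerant of graph changes below */
    BDRV_POLL_WHILE(bs, atomic_read(&drained_end_counter) > 0);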
* | block: Introduce BdrvChild.parent_quiesce_counter (Max Reitz, 2019-07-19; 1 file, -3/+11)

    Commit 5cb2737e925042e6c7cd3fb0b01313950b03cddf laid out why
    bdrv_do_drained_end() must decrement the quiesce_counter after
    bdrv_drain_invoke(). It did not give a very good reason why it has to
    happen after bdrv_parent_drained_end(), instead only claiming
    symmetry to bdrv_do_drained_begin().

    It turns out that delaying it for so long is wrong.

    Situation: We have an active commit job (i.e. a mirror job) from top
    to base for the following graph:

        filter
          |
        [file]
          |
          v
        top --[backing]--> base

    Now the VM is closed, which results in the job being cancelled and a
    bdrv_drain_all() happening pretty much simultaneously.

    Beginning the drain means the job is paused once whenever one of its
    nodes is quiesced. This is reversed when the drain ends.

    With how the code currently is, after base's drain ends (which means
    that it will have unpaused the job once), its quiesce_counter remains
    at 1 while it goes to undrain its parents
    (bdrv_parent_drained_end()). For some reason or another, undraining
    filter causes the job to be kicked and enter mirror_exit_common(),
    where it proceeds to invoke block_job_remove_all_bdrv().

    Now base will be detached from the job. Because its quiesce_counter
    is still 1, it will unpause the job once more. So in total, undraining
    base will unpause the job twice. Eventually, this will lead to the
    job's pause_count going negative -- well, it would, were there not an
    assertion against this, which crashes qemu.

    The general problem is that if in bdrv_parent_drained_end() we
    undrain parent A, and then undrain parent B, which then leads to A
    detaching the child, bdrv_replace_child_noperm() will undrain A as if
    we had not done so yet; that is, one time too many.

    It follows that we cannot decrement the quiesce_counter after
    invoking bdrv_parent_drained_end().

    Unfortunately, decrementing it before bdrv_parent_drained_end() would
    be wrong, too. Imagine the above situation in reverse: Undraining A
    leads to B detaching the child. If we had already decremented the
    quiesce_counter by that point, bdrv_replace_child_noperm() would
    undrain B one time too little, because it expects
    bdrv_parent_drained_end() to issue this undrain. But
    bdrv_parent_drained_end() won't do that, because B is no longer a
    parent.

    Therefore, we have to do something else. This patch opts for
    introducing a second quiesce_counter that counts how many times a
    child's parent has been quiesced (through c->role->drained_*). With
    that, bdrv_replace_child_noperm() just has to undrain the parent
    exactly that many times when removing a child, and it will always be
    right.

    Signed-off-by: Max Reitz <mreitz@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
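The resulting detach logic is then simple to state; a hedged sketch, with an illustrative helper name:

    /* end exactly as many drains as this parent actually began on the
     * child -- no more, no fewer (helper name illustrative) */
    while (child->parent_quiesce_counter) {
        bdrv_parent_drained_end_single(child);  /* decrements the counter */
    }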
* Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging (Peter Maydell, 2019-07-16; 1 file, -15/+14)

    * VFIO bugfix for AMD SEV (Alex)
    * Kconfig improvements (Julio, Philippe)
    * MemoryRegion reference counting bugfix (King Wang)
    * Build system cleanups (Marc-André, myself)
    * rdmacm-mux off-by-one (Marc-André)
    * ZBC passthrough fixes (Shinichiro, myself)
    * WHPX build fix (Stefan)
    * char-pty fix (Wei Yang)

    # gpg: Signature made Tue 16 Jul 2019 08:31:27 BST
    # gpg: using RSA key BFFBD25F78C7AE83
    # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
    # gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
    # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
    # Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83

    * remotes/bonzini/tags/for-upstream:
      vl: make sure char-pty message displayed by moving setbuf to the beginning
      create_config: remove $(CONFIG_SOFTMMU) hack
      Makefile: do not repeat $(CONFIG_SOFTMMU) in hw/Makefile.objs
      hw/usb/Kconfig: USB_XHCI_NEC requires USB_XHCI
      hw/usb/Kconfig: Add CONFIG_USB_EHCI_PCI
      target/i386: sev: Do not unpin ram device memory region
      checkpatch: detect doubly-encoded UTF-8
      hw/lm32/Kconfig: Milkymist One provides a USB 1.1 Controller
      util: merge main-loop.c and iohandler.c
      Fix broken build with WHPX enabled
      memory: unref the memory region in simplify flatview
      hw/i386: turn off vmport if CONFIG_VMPORT is disabled
      rdmacm-mux: fix strcpy string warning
      build-sys: remove slirp cflags from main-loop.o
      iscsi: base all handling of check condition on scsi_sense_to_errno
      iscsi: fix busy/timeout/task set full
      scsi: add guest-recoverable ZBC errors
      scsi: explicitly list guest-recoverable sense codes
      scsi-disk: pass sense correctly for guest-recoverable errors

    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
| * iscsi: base all handling of check condition on scsi_sense_to_errno (Paolo Bonzini, 2019-07-15; 1 file, -15/+14)

    Now that scsi-disk is not using scsi_sense_to_errno to separate
    guest-recoverable sense codes, we can modify it to simplify iscsi's
    own sense handling.

    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
| * iscsi: fix busy/timeout/task set full (Paolo Bonzini, 2019-07-15; 1 file, -1/+1)

    In this case, do_retry was set without calling aio_co_wake, thus
    never waking up the coroutine.

    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* | gluster: fix .bdrv_reopen_prepare when backing file is a JSON object (Stefano Garzarella, 2019-07-15; 1 file, -1/+11)

    When the backing_file is specified as a JSON object,
    qemu_gluster_reopen_prepare() fails with this message:

        invalid URI json:{"server.0.host": ...}

    In this case, we should call qemu_gluster_init() using the QDict
    'state->options' that already contains the parsed JSON parameters.

    Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1542445
    Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
    Message-id: 20190715132844.506584-1-sgarzare@redhat.com
    Signed-off-by: Max Reitz <mreitz@redhat.com>
* | block/stream: Swap backing file change order (Max Reitz, 2019-07-15; 1 file, -1/+1)

    bdrv_change_backing_file() can result in yields. Therefore, @base may
    no longer be the backing_bs() of s->bottom afterwards. Just swap the
    order of the two calls to fix this.

    Signed-off-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20190703172813.6868-4-mreitz@redhat.com
    Signed-off-by: Max Reitz <mreitz@redhat.com>
* | block/stream: Fix error path (Max Reitz, 2019-07-15; 1 file, -1/+1)

    As of commit c624b015bf14fe01f1e6452a36e63b3ea1ae4998, the stream job
    only freezes the chain up to the overlay of the base node. The error
    path must consider this.

    Fixes: c624b015bf14fe01f1e6452a36e63b3ea1ae4998
    Signed-off-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20190703172813.6868-3-mreitz@redhat.com
    Signed-off-by: Max Reitz <mreitz@redhat.com>