path: root/block
Commit log, newest first. Each entry: commit message (Author, Age; Files changed, Lines -deleted/+added)
* Merge remote-tracking branch 'kwolf/tags/for-upstream' into staging  (Stefan Hajnoczi, 2017-05-30; 6 files, -30/+16)

    Block layer patches

    # gpg: Signature made Mon 29 May 2017 03:34:59 PM BST
    # gpg: using RSA key 0x7F09B272C88F2FD6
    # gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
    # Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6

    * kwolf/tags/for-upstream:
      block/file-*: *_parse_filename() and colons
      block: Fix backing paths for filenames with colons
      block: Tweak error message related to qemu-img amend
      qemu-img: Fix leakage of options on error
      qemu-img: copy *key-secret opts when opening newly created files
      qemu-img: introduce --target-image-opts for 'convert' command
      qemu-img: fix --image-opts usage with dd command
      qemu-img: add support for --object with 'dd' command
      qemu-img: Fix documentation of convert
      qcow2: remove extra local_error variable
      mirror: Drop permissions on s->target on completion
      nvme: Add support for Controller Memory Buffers
      iotests: 147: Don't test inet6 if not available
      qemu-iotests: Test streaming with missing job ID
      stream: fix crash in stream_start() when block_job_create() fails

    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
| * block/file-*: *_parse_filename() and colons  (Max Reitz, 2017-05-29; 2 files, -24/+5)

    The file drivers' *_parse_filename() implementations just strip the optional protocol prefix off the filename. However, for e.g. "file:foo:bar", this would lead to "foo:bar" being stored as the BDS's filename which looks like it should be managed using the "foo" protocol. This is especially troublesome if you then try to resolve a backing filename based on "foo:bar".

    This issue can only occur if the stripped part is a relative filename ("file:/foo:bar" will be shortened to "/foo:bar" and having a slash before the first colon means that "/foo" is not recognized as a protocol part). Therefore, we can easily fix it by prepending "./" to such filenames.

    Before this patch:

        $ ./qemu-img create -f qcow2 backing.qcow2 64M
        Formatting 'backing.qcow2', fmt=qcow2 size=67108864 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
        $ ./qemu-img create -f qcow2 -b backing.qcow2 file:top:image.qcow2
        Formatting 'file:top:image.qcow2', fmt=qcow2 size=67108864 backing_file=backing.qcow2 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
        $ ./qemu-io file:top:image.qcow2
        can't open device file:top:image.qcow2: Could not open backing file: Unknown protocol 'top'

    After this patch:

        $ ./qemu-io file:top:image.qcow2
        [no error]

    Signed-off-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20170522195217.12991-3-mreitz@redhat.com
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Signed-off-by: Max Reitz <mreitz@redhat.com>
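    A minimal sketch of the prefixing rule described above (hypothetical helper name, GLib string helpers assumed; the real change lives in the file drivers' *_parse_filename() implementations):

        #include <string.h>
        #include <glib.h>

        /* If the stripped, relative filename has a ':' before any '/', a later
         * parse would mistake everything before the colon for a protocol name,
         * so force it to look like a plain path by prepending "./". */
        static char *fixup_stripped_filename(const char *name)
        {
            const char *colon = strchr(name, ':');
            const char *slash = strchr(name, '/');

            if (name[0] != '/' && colon && (slash == NULL || colon < slash)) {
                return g_strdup_printf("./%s", name);  /* "top:image.qcow2" -> "./top:image.qcow2" */
            }
            return g_strdup(name);
        }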
| * block: Tweak error message related to qemu-img amend  (Eric Blake, 2017-05-29; 1 file, -1/+2)

    When converting a 1.1 image down to 0.10, qemu-iotests 060 forces a contrived failure where allocating a cluster used to replace a zero cluster reads unaligned data. Since it is a zero cluster rather than a data cluster being converted, changing the error message to match our earlier change in 'qcow2: Make distinction between zero cluster types obvious' is worthwhile.

    Suggested-by: Max Reitz <mreitz@redhat.com>
    Signed-off-by: Eric Blake <eblake@redhat.com>
    Message-id: 20170508171302.17805-1-eblake@redhat.com
    [mreitz: Commit message fixes]
    Signed-off-by: Max Reitz <mreitz@redhat.com>
| * qcow2: remove extra local_error variable  (Alberto Garcia, 2017-05-29; 1 file, -3/+2)

    Commit d7086422b1c1e75e320519cfe26176db6ec97a37 added a local_err variable global to the qcow2_amend_options() function, so there's no need to have this other one.

    Signed-off-by: Alberto Garcia <berto@igalia.com>
    Message-id: 20170511150337.21470-1-berto@igalia.com
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Signed-off-by: Max Reitz <mreitz@redhat.com>
| * mirror: Drop permissions on s->target on completion  (Kevin Wolf, 2017-05-29; 1 file, -1/+6)

    This fixes an assertion failure that was triggered by qemu-iotests 129 on some CI host, while the same test case didn't seem to fail on other hosts.

    Essentially the problem is that the blk_unref(s->target) in mirror_exit() doesn't necessarily mean that the BlockBackend goes away immediately. It is possible that the job completion was triggered nested in mirror_drain(), which looks like this:

        BlockBackend *target = s->target;
        blk_ref(target);
        blk_drain(target);
        blk_unref(target);

    In this case, the write permissions for s->target are retained until after blk_drain(), which makes removing mirror_top_bs fail for the active commit case (can't have a writable backing file in the chain without the filter driver).

    Explicitly dropping the permissions first means that the additional reference doesn't hurt and the job can complete successfully even if called from the nested blk_drain().

    Cc: qemu-stable@nongnu.org
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    Acked-by: Paolo Bonzini <pbonzini@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
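    A sketch of the "drop permissions before the final unref" idea, using QEMU's BlockBackend permission API (the exact placement inside mirror_exit() is simplified here and is an assumption):

        /* Give up all permissions on the target first, so that a nested
         * blk_ref()/blk_drain()/blk_unref() in mirror_drain() cannot keep
         * write permissions alive on the soon-to-be backing file. */
        blk_set_perm(s->target, 0, BLK_PERM_ALL, &error_abort);
        blk_unref(s->target);
        s->target = NULL;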
| * stream: fix crash in stream_start() when block_job_create() fails  (Alberto Garcia, 2017-05-26; 1 file, -1/+1)

    The code that tries to reopen a BlockDriverState in stream_start() when the creation of a new block job fails crashes because it attempts to dereference a pointer that is known to be NULL. This is a regression introduced in a170a91fd3eab6155da39e740381867e, likely because the code was copied from stream_complete().

    Cc: qemu-stable@nongnu.org
    Reported-by: Kashyap Chamarthy <kchamart@redhat.com>
    Signed-off-by: Alberto Garcia <berto@igalia.com>
    Tested-by: Kashyap Chamarthy <kchamart@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* | block/gluster: glfs_lseek() workaround  (Jeff Cody, 2017-05-24; 1 file, -2/+16)

    On current released versions of glusterfs, glfs_lseek() will sometimes return invalid values for SEEK_DATA or SEEK_HOLE. For SEEK_DATA and SEEK_HOLE, the returned value should be >= the passed offset, or < 0 in the case of error:

    LSEEK(2):

        off_t lseek(int fd, off_t offset, int whence);

        [...]

        SEEK_HOLE
            Adjust the file offset to the next hole in the file greater than or equal to offset. If offset points into the middle of a hole, then the file offset is set to offset. If there is no hole past offset, then the file offset is adjusted to the end of the file (i.e., there is an implicit hole at the end of any file).

        [...]

        RETURN VALUE
            Upon successful completion, lseek() returns the resulting offset location as measured in bytes from the beginning of the file. On error, the value (off_t) -1 is returned and errno is set to indicate the error

    However, occasionally glfs_lseek() for SEEK_HOLE/DATA will return a value less than the passed offset, yet greater than zero. For instance, here are example values observed from this call:

        offs = glfs_lseek(s->fd, start, SEEK_HOLE);
        if (offs < 0) {
            return -errno;          /* D1 and (H3 or H4) */
        }

        start == 7608336384
        offs == 7607877632

    This causes QEMU to abort on the assert test. When this value is returned, errno is also 0.

    This is a reported and known bug to glusterfs: https://bugzilla.redhat.com/show_bug.cgi?id=1425293

    Although this is being fixed in gluster, we still should work around it in QEMU, given that multiple released versions of gluster behave this way. This patch treats the return case of (offs < start) the same as if an error value other than ENXIO is returned; we will assume we learned nothing, and there are no holes in the file.

    Signed-off-by: Jeff Cody <jcody@redhat.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Message-id: 87c0140e9407c08f6e74b04131b610f2e27c014c.1495560397.git.jcody@redhat.com
    Signed-off-by: Jeff Cody <jcody@redhat.com>
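    A sketch of the workaround as described above (simplified from the allocation-probing path in block/gluster.c; the exact error code used for the fallback is an assumption):

        offs = glfs_lseek(s->fd, start, SEEK_HOLE);
        if (offs < 0) {
            return -errno;              /* genuine lseek() failure */
        }
        if (offs < start) {
            /* Invalid result from a buggy glfs_lseek() (gluster BZ 1425293,
             * errno left at 0): assume we learned nothing, i.e. treat it like
             * any error other than ENXIO and report "no holes in the file". */
            return -EIO;
        }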
* | blockjob: introduce block_job_pause/resume_all  (Paolo Bonzini, 2017-05-24; 1 file, -16/+3)

    Remove use of block_job_pause/resume from outside blockjob.c, thus making them static. The new functions are used by the block layer, so place them in blockjob_int.h.

    Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
    Reviewed-by: John Snow <jsnow@redhat.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Reviewed-by: Jeff Cody <jcody@redhat.com>
    Message-id: 20170508141310.8674-5-pbonzini@redhat.com
    Signed-off-by: Jeff Cody <jcody@redhat.com>
* | blockjob: introduce block_job_early_fail  (Paolo Bonzini, 2017-05-24; 3 files, -3/+3)

    Outside blockjob.c, block_job_unref is only used when a block job fails to start, and block_job_ref is not used at all. The reference counting thus is pretty well hidden. Introduce a separate function to be used by block jobs; because block_job_ref and block_job_unref now become static, move them earlier in blockjob.c.

    Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
    Reviewed-by: John Snow <jsnow@redhat.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Reviewed-by: Jeff Cody <jcody@redhat.com>
    Message-id: 20170508141310.8674-4-pbonzini@redhat.com
    Signed-off-by: Jeff Cody <jcody@redhat.com>
* migration: migration.h was not needed  (Juan Quintela, 2017-05-18; 1 file, -1/+0)

    These files don't use any function from migration.h, so drop it.

    Signed-off-by: Juan Quintela <quintela@redhat.com>
    Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
    Reviewed-by: Peter Xu <peterx@redhat.com>
* Merge remote-tracking branch 'quintela/tags/migration/20170517' into staging  (Stefan Hajnoczi, 2017-05-18; 6 files, -6/+6)

    migration/next for 20170517

    # gpg: Signature made Wed 17 May 2017 11:46:36 AM BST
    # gpg: using RSA key 0xF487EF185872D723
    # gpg: Good signature from "Juan Quintela <quintela@redhat.com>"
    # gpg: aka "Juan Quintela <quintela@trasno.org>"
    # Primary key fingerprint: 1899 FF8E DEBF 58CC EE03 4B82 F487 EF18 5872 D723

    * quintela/tags/migration/20170517:
      migration: Move check_migratable() into qdev.c
      migration: Move postcopy stuff to postcopy-ram.c
      migration: Move page_cache.c to migration/
      migration: Create migration/blocker.h
      ram: Rename RAM_SAVE_FLAG_COMPRESS to RAM_SAVE_FLAG_ZERO
      migration: Pass Error ** argument to {save,load}_vmstate
      migration: Fix regression with compression threads

    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
| * migration: Create migration/blocker.h  (Juan Quintela, 2017-05-17; 6 files, -6/+6)

    This allows us to remove lots of includes of migration/migration.h.

    Signed-off-by: Juan Quintela <quintela@redhat.com>
    Reviewed-by: Peter Xu <peterx@redhat.com>
    Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
* | Merge remote-tracking branch 'jtc/tags/block-pull-request' into staging  (Stefan Hajnoczi, 2017-05-17; 1 file, -97/+144)

    # gpg: Signature made Tue 16 May 2017 04:47:09 PM BST
    # gpg: using RSA key 0xBDBE7B27C0DE3057
    # gpg: Good signature from "Jeffrey Cody <jcody@redhat.com>"
    # gpg: aka "Jeffrey Cody <jeff@codyprime.org>"
    # gpg: aka "Jeffrey Cody <codyprime@gmail.com>"
    # Primary key fingerprint: 9957 4B4D 3474 90E7 9D98 D624 BDBE 7B27 C0DE 3057

    * jtc/tags/block-pull-request:
      curl: do not do aio_poll when waiting for a free CURLState
      curl: convert readv to coroutines
      curl: convert CURLAIOCB to byte values
      curl: split curl_find_state/curl_init_state
      curl: avoid recursive locking of BDRVCURLState mutex
      curl: never invoke callbacks with s->mutex held
      curl: strengthen assertion in curl_clean_state
      block: curl: Allow passing cookies via QCryptoSecret

    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
| * curl: do not do aio_poll when waiting for a free CURLState  (Paolo Bonzini, 2017-05-16; 1 file, -1/+15)

    Instead, put the CURLAIOCB on a wait list and yield; curl_clean_state will wake the corresponding coroutine.

    Because of CURL's callback-based structure, we cannot easily convert everything to CoMutex/CoQueue; keeping the QemuMutex is simpler. However, CoQueue is a simple wrapper around a linked list, so we can easily use QSIMPLEQ and open-code a CoQueue, protected by the BDRVCURLState QemuMutex instead of a CoMutex.

    Reviewed-by: Jeff Cody <jcody@redhat.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20170515100059.15795-8-pbonzini@redhat.com
    Signed-off-by: Jeff Cody <jcody@redhat.com>
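    A sketch of the open-coded CoQueue described above (field and helper names are illustrative, not the exact block/curl.c code): a caller that finds no free CURLState parks itself on a QSIMPLEQ and yields, and curl_clean_state() wakes the next waiter.

        while ((state = curl_find_state(s)) == NULL) {
            /* No free CURLState: queue ourselves and sleep.  The mutex must be
             * dropped across the yield so the waker can make progress. */
            QSIMPLEQ_INSERT_TAIL(&s->free_state_waitq, acb, next);
            qemu_mutex_unlock(&s->mutex);
            qemu_coroutine_yield();             /* woken from curl_clean_state() */
            qemu_mutex_lock(&s->mutex);
        }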
| * curl: convert readv to coroutines  (Paolo Bonzini, 2017-05-16; 1 file, -56/+38)

    This is pretty simple. The bottom half goes away because, unlike bdrv_aio_readv, coroutine-based read can return immediately without yielding. However, for simplicity I kept the former bottom half handler in a separate function.

    Reviewed-by: Jeff Cody <jcody@redhat.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20170515100059.15795-7-pbonzini@redhat.com
    Signed-off-by: Jeff Cody <jcody@redhat.com>
| * curl: convert CURLAIOCB to byte values  (Paolo Bonzini, 2017-05-16; 1 file, -22/+22)

    This is in preparation for the conversion from bdrv_aio_readv to bdrv_co_preadv, and it also requires changing some of the size_t values to uint64_t. This was broken before for disks > 2TB, but now it would break at 4GB.

    Reviewed-by: Jeff Cody <jcody@redhat.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20170515100059.15795-6-pbonzini@redhat.com
    Signed-off-by: Jeff Cody <jcody@redhat.com>
| * curl: split curl_find_state/curl_init_state  (Paolo Bonzini, 2017-05-16; 1 file, -22/+30)

    If curl_easy_init fails, a CURLState is left with s->in_use = 1. Split curl_init_state in two, so that we can distinguish the two failures and call curl_clean_state if needed.

    While at it, simplify curl_find_state, removing a dummy loop. The aio_poll loop is moved to the sole caller that needs it.

    Reviewed-by: Jeff Cody <jcody@redhat.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20170515100059.15795-5-pbonzini@redhat.com
    Signed-off-by: Jeff Cody <jcody@redhat.com>
| * curl: avoid recursive locking of BDRVCURLState mutex  (Paolo Bonzini, 2017-05-16; 1 file, -1/+13)

    The curl driver has an ugly hack where, if it cannot find an empty CURLState, it just uses aio_poll to wait for one to be empty. This is probably buggy when used together with dataplane, and the simplest way to fix it is to use coroutines instead.

    A more immediate effect of the bug however is that it can cause a recursive call to curl_readv_bh_cb and recursively taking the BDRVCURLState mutex. This causes a deadlock.

    The fix is to unlock the mutex around aio_poll, but for cleanliness we should also take the mutex around all calls to curl_init_state, even if reaching the unlock/lock pair is impossible. The same is true for curl_clean_state.

    Reported-by: Kun Wei <kuwei@redhat.com>
    Tested-by: Richard W.M. Jones <rjones@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Reviewed-by: Jeff Cody <jcody@redhat.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Message-id: 20170515100059.15795-4-pbonzini@redhat.com
    Cc: qemu-stable@nongnu.org
    Cc: Jeff Cody <jcody@redhat.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Jeff Cody <jcody@redhat.com>
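    The "unlock around aio_poll" shape described above looks roughly like this (a sketch, not the literal block/curl.c code):

        /* Waiting for a CURLState to become free must not re-enter the driver
         * with s->mutex already held, so drop the lock around aio_poll(). */
        qemu_mutex_unlock(&s->mutex);
        aio_poll(bdrv_get_aio_context(bs), true);
        qemu_mutex_lock(&s->mutex);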
| * curl: never invoke callbacks with s->mutex held  (Paolo Bonzini, 2017-05-16; 1 file, -4/+8)

    All curl callbacks go through curl_multi_do, and hence are called with s->mutex held. Note that with comments, and make curl_read_cb drop the lock before invoking the callback. Likewise for curl_find_buf, where the callback can be invoked by the caller.

    Cc: qemu-stable@nongnu.org
    Reviewed-by: Jeff Cody <jcody@redhat.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20170515100059.15795-3-pbonzini@redhat.com
    Signed-off-by: Jeff Cody <jcody@redhat.com>
| * curl: strengthen assertion in curl_clean_state  (Paolo Bonzini, 2017-05-16; 1 file, -0/+5)

    curl_clean_state should only be called after all AIOCBs have been completed. This is not so obvious for the call from curl_detach_aio_context, so assert that.

    Cc: qemu-stable@nongnu.org
    Reviewed-by: Jeff Cody <jcody@redhat.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20170515100059.15795-2-pbonzini@redhat.com
    Signed-off-by: Jeff Cody <jcody@redhat.com>
| * block: curl: Allow passing cookies via QCryptoSecret  (Peter Krempa, 2017-05-16; 1 file, -1/+23)

    Since cookies can contain sensitive data (session ID, etc ...) it is desired to hide them from the prying eyes of users. Add a possibility to pass them via the secret infrastructure.

    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1447413

    Signed-off-by: Peter Krempa <pkrempa@redhat.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Reviewed-by: Jeff Cody <jcody@redhat.com>
    Message-id: f4a22cdebdd0bca6a13a43a2a6deead7f2ec4bb3.1493906281.git.pkrempa@redhat.com
    Signed-off-by: Jeff Cody <jcody@redhat.com>
* | block/win32: fix 'ret not initialized' warning  (Gerd Hoffmann, 2017-05-16; 1 file, -0/+1)

    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
    Reviewed-by: Stefan Weil <sw@weilnetz.de>
    Reviewed-by: Fam Zheng <famz@redhat.com>
    Message-id: 20170516074256.24731-1-kraxel@redhat.com
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* Merge tag 'block-pull-request' into staging  (Stefan Hajnoczi, 2017-05-12; 1 file, -2/+2)

    # gpg: Signature made Fri 12 May 2017 10:37:12 AM EDT
    # gpg: using RSA key 0x9CA4ABB381AB73C8
    # gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>"
    # gpg: aka "Stefan Hajnoczi <stefanha@gmail.com>"
    # Primary key fingerprint: 8695 A8BF D3F9 7CDA AC35 775A 9CA4 ABB3 81AB 73C8

    * tag 'block-pull-request':
      aio: add missing aio_notify() to aio_enable_external()
      block: Simplify BDRV_BLOCK_RAW recursion
      coroutine: remove GThread implementation

    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
| * block: Simplify BDRV_BLOCK_RAW recursion  (Eric Blake, 2017-05-12; 1 file, -2/+2)

    Since we are already in coroutine context during the body of bdrv_co_get_block_status(), we can shave off a few layers of wrappers when recursing to query the protocol when a format driver returned BDRV_BLOCK_RAW.

    Note that we are already using the correct recursion later on in the same function, when probing whether the protocol layer is sparse in order to find out if we can add BDRV_BLOCK_ZERO to an existing BDRV_BLOCK_DATA|BDRV_BLOCK_OFFSET_VALID.

    Signed-off-by: Eric Blake <eblake@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Reviewed-by: Fam Zheng <famz@redhat.com>
    Message-id: 20170504173745.27414-1-eblake@redhat.com
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* | Merge remote-tracking branch 'kwolf/tags/for-upstream' into staging  (Stefan Hajnoczi, 2017-05-12; 9 files, -282/+796)

    Block layer patches

    # gpg: Signature made Thu 11 May 2017 10:31:37 AM EDT
    # gpg: using RSA key 0x7F09B272C88F2FD6
    # gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
    # Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6

    * kwolf/tags/for-upstream: (58 commits)
      MAINTAINERS: Add qemu-progress to the block layer
      qcow2: Discard/zero clusters by byte count
      qcow2: Assert that cluster operations are aligned
      qcow2: Optimize write zero of unaligned tail cluster
      iotests: Add test 179 to cover write zeroes with unmap
      iotests: Improve _filter_qemu_img_map
      qcow2: Optimize zero_single_l2() to minimize L2 churn
      qcow2: Make distinction between zero cluster types obvious
      qcow2: Name typedef for cluster type
      qcow2: Correctly report status of preallocated zero clusters
      block: Update comments on BDRV_BLOCK_* meanings
      qcow2: Use consistent switch indentation
      qcow2: Nicer variable names in qcow2_update_snapshot_refcount()
      tests: Add coverage for recent block geometry fixes
      blkdebug: Add ability to override unmap geometries
      blkdebug: Simplify override logic
      blkdebug: Add pass-through write_zero and discard support
      blkdebug: Refactor error injection
      blkdebug: Sanity check block layer guarantees
      qemu-io: Switch 'map' output to byte-based reporting
      ...

    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
| * qcow2: Discard/zero clusters by byte count  (Eric Blake, 2017-05-11; 4 files, -41/+39)

    Passing a byte offset, but sector count, when we ultimately want to operate on cluster granularity, is madness. Clean up the external interfaces to take both offset and count as bytes, while still keeping the assertion added previously that the caller must align the values to a cluster. Then rename things to make sure backports don't get confused by changed units: instead of qcow2_discard_clusters() and qcow2_zero_clusters(), we now have qcow2_cluster_discard() and qcow2_cluster_zeroize().

    The internal functions still operate on clusters at a time, and return an int for number of cleared clusters; but on an image with 2M clusters, a single L2 table holds 256k entries that each represent a 2M cluster, totalling well over INT_MAX bytes if we ever had a request for that many bytes at once. All our callers currently limit themselves to 32-bit bytes (and therefore fewer clusters), but by making this function 64-bit clean, we have one less place to clean up if we later improve the block layer to support 64-bit bytes through all operations (with the block layer auto-fragmenting on behalf of more-limited drivers), rather than the current state where some interfaces are artificially limited to INT_MAX at a time.

    Signed-off-by: Eric Blake <eblake@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20170507000552.20847-13-eblake@redhat.com
    Signed-off-by: Max Reitz <mreitz@redhat.com>
| * qcow2: Assert that cluster operations are aligned  (Eric Blake, 2017-05-11; 1 file, -4/+11)

    We already audited (in commit 0c1bd469) that qcow2_discard_clusters() is only passed cluster-aligned start values; but we can further tighten the assertion that the only unaligned end value is at EOF.

    Recent commits have taken advantage of an unaligned tail cluster, for both discard and write zeroes.

    Signed-off-by: Eric Blake <eblake@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20170507000552.20847-12-eblake@redhat.com
    Signed-off-by: Max Reitz <mreitz@redhat.com>
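    The tightened contract amounts to assertions of roughly this shape (a sketch with assumed variable names, not the literal qcow2 code):

        /* The start of the range must sit on a cluster boundary; the end may
         * only be unaligned when it coincides with the end of the image. */
        assert(QEMU_IS_ALIGNED(offset, s->cluster_size));
        assert(QEMU_IS_ALIGNED(end_offset, s->cluster_size) ||
               end_offset == bs->total_sectors * BDRV_SECTOR_SIZE);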
| * qcow2: Optimize write zero of unaligned tail cluster  (Eric Blake, 2017-05-11; 1 file, -0/+7)

    We've already improved discards to operate efficiently on the tail of an unaligned qcow2 image; it's time to make a similar improvement to write zeroes. The special case is only valid at the tail cluster of a file, where we must recognize that any sectors beyond the image end would implicitly read as zero, and therefore should not penalize our logic for widening a partial cluster into writing the whole cluster as zero.

    However, note that for now, the special case of end-of-file is only recognized if there is no backing file, or if the backing file has the same length; that's because when the backing file is shorter than the active layer, we don't have code in place to recognize that reads of a sector unallocated at the top and beyond the backing end-of-file are implicitly zero. It's not much of a real loss, because most people don't use images that aren't cluster-aligned, or where the active layer is a different size than the backing layer (especially where the difference falls within a single cluster).

    Update test 154 to cover the new scenarios, using two images of intentionally differing length. While at it, fix the test to gracefully skip when run as

        ./check -qcow2 -o compat=0.10 154

    since the older format lacks zero clusters already required earlier in the test.

    Signed-off-by: Eric Blake <eblake@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20170507000552.20847-11-eblake@redhat.com
    Signed-off-by: Max Reitz <mreitz@redhat.com>
| * qcow2: Optimize zero_single_l2() to minimize L2 churn  (Eric Blake, 2017-05-11; 1 file, -2/+13)

    Similar to discard_single_l2(), we should try to avoid dirtying the L2 cache when the cluster we are changing already has the right characteristics.

    Note that by the time we get to zero_single_l2(), BDRV_REQ_MAY_UNMAP is a requirement to unallocate a cluster (this is because the block layer clears that flag if discard.* flags during open requested that we never punch holes - see the conversation around commit 170f4b2e, https://lists.gnu.org/archive/html/qemu-devel/2016-09/msg07306.html). Therefore, this patch can only reuse a zero cluster as-is if either unmapping is not requested, or if the zero cluster was not associated with an allocation.

    Technically, there are some cases where an unallocated cluster already reads as all zeroes (namely, when there is no backing file [easy: check bs->backing], or when the backing file also reads as zeroes [harder: we can't check bdrv_get_block_status since we are already holding the lock]), where the guest would not immediately see a difference if we left that cluster unallocated. But if the user did not request unmapping, leaving an unallocated cluster is wrong; and even if the user DID request unmapping, keeping a cluster unallocated risks a subtle semantic change of guest-visible contents if a backing file is later added, and it is not worth auditing whether all internal uses such as mirror properly avoid an unmap request. Thus, this patch is intentionally limited to just clusters that are already marked as zero.

    Signed-off-by: Eric Blake <eblake@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20170507000552.20847-8-eblake@redhat.com
    Signed-off-by: Max Reitz <mreitz@redhat.com>
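    The "reuse a zero cluster as-is" rule described above boils down to a check of roughly this shape (a sketch using the QCOW2_CLUSTER_ZERO_* names introduced by the 'Make distinction between zero cluster types obvious' patch below; not the literal zero_single_l2() code):

        if (type == QCOW2_CLUSTER_ZERO_PLAIN ||
            (type == QCOW2_CLUSTER_ZERO_ALLOC &&
             !(flags & BDRV_REQ_MAY_UNMAP))) {
            continue;   /* already reads as zero in an acceptable form:
                         * skip the L2 update and avoid dirtying the cache */
        }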
| * qcow2: Make distinction between zero cluster types obvious  (Eric Blake, 2017-05-11; 4 files, -80/+60)

    Treat plain zero clusters differently from allocated ones, so that we can simplify the logic of checking whether an offset is present. Do this by splitting QCOW2_CLUSTER_ZERO into two new enums, QCOW2_CLUSTER_ZERO_PLAIN and QCOW2_CLUSTER_ZERO_ALLOC.

    I tried to arrange the enum so that we could use 'ret <= QCOW2_CLUSTER_ZERO_PLAIN' for all unallocated types, and 'ret >= QCOW2_CLUSTER_ZERO_ALLOC' for allocated types, although I didn't actually end up taking advantage of the layout.

    In many cases, this leads to simpler code, by properly combining cases (sometimes, both zero types pair together, other times, plain zero is more like unallocated while allocated zero is more like normal).

    Signed-off-by: Eric Blake <eblake@redhat.com>
    Message-id: 20170507000552.20847-7-eblake@redhat.com
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Signed-off-by: Max Reitz <mreitz@redhat.com>
| * qcow2: Name typedef for cluster type  (Eric Blake, 2017-05-11; 3 files, -14/+15)

    Although it doesn't add all that much type safety (this is C, after all), it does add a bit of legibility to use the name QCow2ClusterType instead of a plain int.

    In particular, qcow2_get_cluster_offset() has an overloaded return type; a QCow2ClusterType on success, and -errno on failure; keeping the cluster type in a separate variable makes it slightly easier for the next patch to make further computations based on the type.

    Suggested-by: Max Reitz <mreitz@redhat.com>
    Signed-off-by: Eric Blake <eblake@redhat.com>
    Message-id: 20170507000552.20847-6-eblake@redhat.com
    [mreitz: Use the new type in two more places (one of them pulled from the next patch)]
    Signed-off-by: Max Reitz <mreitz@redhat.com>
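    Taken together, the two patches above leave the qcow2 cluster-type enum looking roughly like this (member names come from the commit messages and existing qcow2 code; the exact ordering is an assumption based on the '<= ZERO_PLAIN / >= ZERO_ALLOC' remark):

        typedef enum QCow2ClusterType {
            QCOW2_CLUSTER_UNALLOCATED,      /* nothing allocated for this guest cluster */
            QCOW2_CLUSTER_ZERO_PLAIN,       /* reads as zero, no host cluster allocated */
            QCOW2_CLUSTER_ZERO_ALLOC,       /* reads as zero, host cluster preallocated */
            QCOW2_CLUSTER_NORMAL,           /* ordinary data cluster */
            QCOW2_CLUSTER_COMPRESSED,       /* compressed data cluster */
        } QCow2ClusterType;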
| * qcow2: Correctly report status of preallocated zero clusters  (Eric Blake, 2017-05-11; 1 file, -10/+35)

    We were throwing away the preallocation information associated with zero clusters. But we should be matching the well-defined semantics in bdrv_get_block_status(), where (BDRV_BLOCK_ZERO | BDRV_BLOCK_OFFSET_VALID) informs the user which offset is reserved, while still reminding the user that reading from that offset is likely to read garbage.

    count_contiguous_clusters_by_type() is now used only for unallocated cluster runs, hence it gets renamed and tightened.

    Making this change lets us see which portions of an image are zero but preallocated, when using qemu-img map --output=json. The --output=human side intentionally ignores all zero clusters, whether or not they are preallocated.

    The fact that there is no change to qemu-iotests './check -qcow2' merely means that we aren't yet testing this aspect of qemu-img; a later patch will add a test.

    Signed-off-by: Eric Blake <eblake@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20170507000552.20847-5-eblake@redhat.com
    Signed-off-by: Max Reitz <mreitz@redhat.com>
| * qcow2: Use consistent switch indentation  (Eric Blake, 2017-05-11; 2 files, -58/+58)

    Fix a couple of inconsistent indentations, before an upcoming patch further tweaks the switch statements. (best viewed with 'git diff -b').

    Signed-off-by: Eric Blake <eblake@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20170507000552.20847-3-eblake@redhat.com
    Signed-off-by: Max Reitz <mreitz@redhat.com>
| * qcow2: Nicer variable names in qcow2_update_snapshot_refcount()  (Eric Blake, 2017-05-11; 1 file, -20/+22)

    In order to keep checkpatch happy when the next patch changes indentation, we first have to shorten some long lines. The easiest approach is to use a new variable in place of 'offset & L2E_OFFSET_MASK', except that 'offset' is the best name for that variable. Change '[old_]offset' to '[old_]entry' to make room.

    While touching things, also fix checkpatch warnings about unusual 'for' statements.

    Suggested by Max Reitz <mreitz@redhat.com>
    Signed-off-by: Eric Blake <eblake@redhat.com>
    Message-id: 20170507000552.20847-2-eblake@redhat.com
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Signed-off-by: Max Reitz <mreitz@redhat.com>
| * blkdebug: Add ability to override unmap geometries  (Eric Blake, 2017-05-11; 1 file, -1/+95)

    Make it easier to simulate various unusual hardware setups (for example, recent commits 3482b9b and b8d0a98 affect the Dell Equallogic iSCSI with its 15M preferred and maximum unmap and write zero sizing, or b2f95fe deals with the Linux loopback block device having a max_transfer of 64k), by allowing blkdebug to wrap any other device with further restrictions on various alignments.

    Signed-off-by: Eric Blake <eblake@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20170429191419.30051-9-eblake@redhat.com
    Signed-off-by: Max Reitz <mreitz@redhat.com>
| * blkdebug: Simplify override logic  (Eric Blake, 2017-05-11; 1 file, -10/+6)

    Rather than store into a local variable, then copy to the struct if the value is valid, then reporting errors otherwise, it is simpler to just store into the struct and report errors if the value is invalid. This however requires that the struct store a 64-bit number, rather than a narrower type.

    Likewise, setting a sane errno value in ret prior to the sequence of parsing and jumping to out: on error makes it easier for the next patch to add a chain of similar checks.

    Signed-off-by: Eric Blake <eblake@redhat.com>
    Message-id: 20170429191419.30051-8-eblake@redhat.com
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Signed-off-by: Max Reitz <mreitz@redhat.com>
| * blkdebug: Add pass-through write_zero and discard support  (Eric Blake, 2017-05-11; 1 file, -0/+74)

    In order to test the effects of artificial geometry constraints on operations like write zero or discard, we first need blkdebug to manage these actions. It also allows us to inject errors on those operations, just like we can for read/write/flush.

    We can also test the contract promised by the block layer; namely, if a device has specified limits on alignment or maximum size, then those limits must be obeyed (for now, the blkdebug driver merely inherits limits from whatever it is wrapping, but the next patch will further enhance it to allow specific limit overrides).

    This patch intentionally refuses to service requests smaller than the requested alignments; this is because an upcoming patch adds a qemu-iotest to prove that the block layer is correctly handling fragmentation, but the test only works if there is a way to tell the difference at artificial alignment boundaries when blkdebug is using a larger-than-default alignment. If we let the blkdebug layer always defer to the underlying layer, which potentially has a smaller granularity, the iotest will be thwarted.

    Tested by setting up an NBD server with export 'foo', then invoking:

        $ ./qemu-io
        qemu-io> open -o driver=blkdebug blkdebug::nbd://localhost:10809/foo
        qemu-io> d 0 15M
        qemu-io> w -z 0 15M

    Pre-patch, the server never sees the discard (it was silently eaten by the block layer); post-patch it is passed across the wire. Likewise, pre-patch the write is always passed with NBD_WRITE (with 15M of zeroes on the wire), while post-patch it can utilize NBD_WRITE_ZEROES (for less traffic).

    Signed-off-by: Eric Blake <eblake@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20170429191419.30051-7-eblake@redhat.com
    Signed-off-by: Max Reitz <mreitz@redhat.com>
| * blkdebug: Refactor error injection  (Eric Blake, 2017-05-11; 1 file, -41/+33)

    Rather than repeat the logic at each caller of checking if a Rule exists that warrants an error injection, fold that logic into inject_error(); and rename it to rule_check() for legibility. This will help the next patch, which adds two more callers that need to check rules for the potential of injecting errors.

    Signed-off-by: Eric Blake <eblake@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20170429191419.30051-6-eblake@redhat.com
    Signed-off-by: Max Reitz <mreitz@redhat.com>
| * blkdebug: Sanity check block layer guarantees  (Eric Blake, 2017-05-11; 1 file, -0/+14)

    Commits 04ed95f4 and 1a62d0ac updated the block layer to auto-fragment any I/O to fit within device boundaries. Additionally, when using a minimum alignment of 4k, we want to ensure the block layer does proper read-modify-write rather than requesting I/O on a slice of a sector. Let's enforce that the contract is obeyed when using blkdebug. For now, blkdebug only allows alignment overrides, and just inherits other limits from whatever device it is wrapping, but a future patch will further enhance things.

    Signed-off-by: Eric Blake <eblake@redhat.com>
    Reviewed-by: Kevin Wolf <kwolf@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20170429191419.30051-5-eblake@redhat.com
    Signed-off-by: Max Reitz <mreitz@redhat.com>
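    Checks of roughly this form in the blkdebug read/write paths are enough to catch a block layer that violates its own limits (a sketch; the actual patch may differ in detail):

        assert(QEMU_IS_ALIGNED(offset, bs->bl.request_alignment));
        assert(QEMU_IS_ALIGNED(bytes, bs->bl.request_alignment));
        if (bs->bl.max_transfer) {
            assert(bytes <= bs->bl.max_transfer);   /* auto-fragmentation upheld */
        }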
| * file-posix: Remove .bdrv_inactivate/invalidate_cache  (Kevin Wolf, 2017-05-11; 1 file, -33/+0)

    Now that the block layer takes care to request a lot less permissions for inactive nodes, the special-casing in file-posix isn't necessary any more.

    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
| * block: Drop permissions when migration completes  (Kevin Wolf, 2017-05-11; 1 file, -0/+25)

    With image locking, permissions affect other qemu processes as well. We want to be sure that the destination can run, so let's drop permissions on the source when migration completes.

    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
| * block: New BdrvChildRole.activate() for blk_resume_after_migration()  (Kevin Wolf, 2017-05-11; 1 file, -28/+28)

    Instead of manually calling blk_resume_after_migration() in migration code after doing bdrv_invalidate_cache_all(), integrate the BlockBackend activation with cache invalidation into a single function. This is achieved with a new callback in BdrvChildRole that is called by bdrv_invalidate_cache_all().

    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
| * qcow2: Discard preallocated zero clusters  (Max Reitz, 2017-05-11; 1 file, -1/+2)

    In discard_single_l2(), we completely discard normal clusters instead of simply turning them into preallocated zero clusters. That means we should probably do the same with such preallocated zero clusters: Discard them instead of keeping them allocated.

    Reported-by: Eric Blake <eblake@redhat.com>
    Signed-off-by: Max Reitz <mreitz@redhat.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
| * qcow2: Reuse preallocated zero clusters  (Max Reitz, 2017-05-11; 2 files, -24/+59)

    Instead of just freeing preallocated zero clusters and completely allocating them from scratch, reuse them.

    We cannot do this in handle_copied(), however, since this is a COW operation. Therefore, we have to add the new logic to handle_alloc() and simply return the existing offset if it exists. The only catch is that we have to convince qcow2_alloc_cluster_link_l2() not to free the old clusters (because we have reused them).

    Reported-by: Eric Blake <eblake@redhat.com>
    Signed-off-by: Max Reitz <mreitz@redhat.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
| * qcow2: Fix preallocation size formula  (Max Reitz, 2017-05-11; 1 file, -4/+5)

    When calculating the number of reftable entries, we should actually use the number of refblocks and not (wrongly[1]) re-calculate it.

    [1] "Wrongly" means: Dividing the number of clusters by the number of entries per refblock and rounding down instead of up.

    Reported-by: Eric Blake <eblake@redhat.com>
    Signed-off-by: Max Reitz <mreitz@redhat.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
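    In other words, the sizing should come out of a round-up division per refblock (a sketch with assumed variable names, ignoring the clusters consumed by the refcount structures themselves):

        uint64_t refblock_entries = cluster_size * 8 / refcount_bits;     /* entries per refblock */
        uint64_t nb_refblocks     = DIV_ROUND_UP(nb_clusters, refblock_entries);
        uint64_t reftable_entries = nb_refblocks;   /* one reftable entry per refblock,
                                                     * not nb_clusters / refblock_entries
                                                     * rounded down */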
| * file-posix: Add image locking to perm operations  (Fam Zheng, 2017-05-11; 1 file, -1/+275)

    This extends the permission bits of the op blocker API to other processes using Linux OFD locks.

    Each permission in @perm and @shared_perm is represented by a locked byte in the image file. Requesting a permission in @perm is translated to a shared lock of the corresponding byte; rejecting to share the same permission is translated to a shared lock of a separate byte. With that, we use 2x number of bytes of distinct permission types.

    virtlockd in libvirt locks the first byte, so we do locking from a higher offset.

    Suggested-by: Kevin Wolf <kwolf@redhat.com>
    Signed-off-by: Fam Zheng <famz@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
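    A minimal sketch of the underlying mechanism (Linux-only OFD locks; the helper name and byte offsets are illustrative): holding a permission, or refusing to share one, is a shared lock on a designated byte, and another process can detect the claim by querying that byte for a conflicting write lock with F_OFD_GETLK.

        #define _GNU_SOURCE         /* for F_OFD_SETLK */
        #include <errno.h>
        #include <fcntl.h>

        /* Take a shared OFD lock on one byte of the image file.  Since libvirt's
         * virtlockd locks byte 0, the permission bytes start at a higher offset. */
        static int lock_perm_byte(int fd, off_t byte)
        {
            struct flock fl = {
                .l_whence = SEEK_SET,
                .l_start  = byte,
                .l_len    = 1,
                .l_type   = F_RDLCK,
            };
            return fcntl(fd, F_OFD_SETLK, &fl) == -1 ? -errno : 0;
        }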
| * file-win32: Error out if locking=on  (Fam Zheng, 2017-05-11; 1 file, -0/+5)

    We share the same set of QAPI options with file-posix, but locking is not supported here. So error out if it is specified as 'on' for now.

    Signed-off-by: Fam Zheng <famz@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
| * file-posix: Add 'locking' option  (Fam Zheng, 2017-05-11; 1 file, -0/+5)

    Making this option available even before implementing it will make converting tests easier: in coming patches they can specify the option already when necessary, before we actually write code to lock the images.

    Signed-off-by: Fam Zheng <famz@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* | Merge remote-tracking branch 'mjt/tags/trivial-patches-fetch' into staging  (Stefan Hajnoczi, 2017-05-10; 1 file, -22/+22)

    trivial patches for 2017-05-10

    # gpg: Signature made Wed 10 May 2017 03:19:30 AM EDT
    # gpg: using RSA key 0x701B4F6B1A693E59
    # gpg: Good signature from "Michael Tokarev <mjt@tls.msk.ru>"
    # gpg: aka "Michael Tokarev <mjt@corpit.ru>"
    # gpg: aka "Michael Tokarev <mjt@debian.org>"
    # Primary key fingerprint: 6EE1 95D1 886E 8FFB 810D 4324 457C E0A0 8044 65C5
    # Subkey fingerprint: 7B73 BAD6 8BE7 A2C2 8931 4B22 701B 4F6B 1A69 3E59

    * mjt/tags/trivial-patches-fetch: (23 commits)
      tests: Remove redundant assignment
      MAINTAINERS: Update paths for AioContext implementation
      MAINTAINERS: Update paths for main loop
      jazz_led: fix bad snprintf
      tests: Ignore another built executable (test-hmp)
      scripts: Switch to more portable Perl shebang
      scripts/qemu-binfmt-conf.sh: Fix shell portability issue
      virtfs: allow a device id to be specified in the -virtfs option
      hw/core/generic-loader: Fix crash when running without CPU
      virtio-blk: Remove useless condition around g_free()
      qemu-doc: Fix broken URLs of amnhltm.zip and dosidle210.zip
      use _Static_assert in QEMU_BUILD_BUG_ON
      channel-file: fix wrong parameter comments
      block: Make 'replication_state' an enum
      util: Use g_malloc/g_free in envlist.c
      qga: fix compiler warnings (clang 5)
      device_tree: fix compiler warnings (clang 5)
      usb-ccid: make ccid_write_data_block() cope with null buffers
      tests: Ignore more test executables
      Add 'none' as type for drive's if option
      ...

    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
| * | block: Make 'replication_state' an enum  (Fam Zheng, 2017-05-07; 1 file, -22/+22)

    BDRVReplicationState.replication_state is a name with a bit of duplication, plus it could be an enum like BDRVReplicationState.mode, which is more readable and also more straightforward in a debugger. Rename it, and improve the type while at it.

    Signed-off-by: Fam Zheng <famz@redhat.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>