path: root/block
Commit message  (Author, Age; Files, Lines changed)

* Merge remote-tracking branch 'remotes/armbru/tags/pull-qapi-2021-01-28' into staging  (Peter Maydell, 2021-01-28; 6 files, -67/+22)

      QAPI patches for 2021-01-28

      # gpg: Signature made Thu 28 Jan 2021 07:10:21 GMT
      # gpg: using RSA key 354BC8B3D7EB2A6B68674E5F3870B400EB918653
      # gpg: issuer "armbru@redhat.com"
      # gpg: Good signature from "Markus Armbruster <armbru@redhat.com>" [full]
      # gpg: aka "Markus Armbruster <armbru@pond.sub.org>" [full]
      # Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653

      * remotes/armbru/tags/pull-qapi-2021-01-28:
        qapi: More complex uses of QAPI_LIST_APPEND
        qapi: Use QAPI_LIST_APPEND in trivial cases
        qapi: Introduce QAPI_LIST_APPEND
        qapi: A couple more QAPI_LIST_PREPEND() stragglers
        net: Clarify early exit condition

      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

| * qapi: More complex uses of QAPI_LIST_APPEND  (Eric Blake, 2021-01-28; 2 files, -22/+5)

      These cases require a bit more thought to review; in each case, the code
      was appending to a list, but not with a FOOList **tail variable.

      Signed-off-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Message-Id: <20210113221013.390592-6-eblake@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      [Flawed change to qmp_guest_network_get_interfaces() dropped]
      Signed-off-by: Markus Armbruster <armbru@redhat.com>

| * qapi: Use QAPI_LIST_APPEND in trivial cases  (Eric Blake, 2021-01-28; 5 files, -45/+17)

      The easiest spots to use QAPI_LIST_APPEND are where we already have an
      obvious pointer to the tail of a list. While at it, consistently use the
      variable name 'tail' for that purpose.

      Signed-off-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Message-Id: <20210113221013.390592-5-eblake@redhat.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>

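  [Illustrative sketch (editor's addition), not taken from either patch: the pattern
  these two commits converge on. QAPI_LIST_APPEND() keeps a FOOList **tail that always
  points at the next pointer slot to fill, so appends stay O(1) and in order. The
  strList contents and the helper name below are invented.]

      /* Sketch of appending with a 'tail' pointer via QAPI_LIST_APPEND(). */
      #include "qemu/osdep.h"
      #include "qapi/util.h"                  /* QAPI_LIST_APPEND */
      #include "qapi/qapi-builtin-types.h"    /* strList */

      static strList *build_list(void)
      {
          strList *list = NULL;
          strList **tail = &list;             /* points at the slot to fill next */

          /* Each append fills *tail and advances tail to the new element's next pointer. */
          QAPI_LIST_APPEND(tail, g_strdup("first"));
          QAPI_LIST_APPEND(tail, g_strdup("second"));

          return list;                        /* elements come out in insertion order */
      }
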
* | block: Separate blk_is_writable() and blk_supports_write_perm()  (Kevin Wolf, 2021-01-27; 1 file, -3/+16)

      Currently, blk_is_read_only() tells whether a given BlockBackend can only
      be used in read-only mode because its root node is read-only. Some
      callers actually try to answer a slightly different question: Is the
      BlockBackend configured to be writable, by taking write permissions on
      the root node?

      This can differ, for example, for CD-ROM devices which don't take write
      permissions, but may be backed by a writable image file. scsi-cd allows
      write requests to the drive if blk_is_read_only() returns false.
      However, the write request will immediately run into an assertion
      failure because the write permission is missing.

      This patch introduces separate functions for both questions.
      blk_supports_write_perm() answers the question whether the block
      node/image file can support writable devices, whereas blk_is_writable()
      tells whether the BlockBackend is currently configured to be writable.

      All calls of blk_is_read_only() are converted to one of the two new
      functions.

      Fixes: https://bugs.launchpad.net/bugs/1906693
      Cc: qemu-stable@nongnu.org
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Message-Id: <20210118123448.307825-2-kwolf@redhat.com>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>

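  [Illustrative sketch (editor's addition): how a caller might use the split API;
  the wrapper functions around the two real calls are invented for illustration.]

      #include "qemu/osdep.h"
      #include "sysemu/block-backend.h"

      /* "Could this medium ever be written?": true for a writable image file
       * even if the device never takes the write permission (CD-ROM case). */
      static bool medium_can_be_written(BlockBackend *blk)
      {
          return blk_supports_write_perm(blk);
      }

      /* "May the guest write right now?": only if the BlockBackend actually
       * holds the write permission; writing without it trips an assertion. */
      static int check_guest_write(BlockBackend *blk)
      {
          return blk_is_writable(blk) ? 0 : -EACCES;
      }
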
* block: report errno when flock fcntl fails  (David Edmondson, 2021-01-26; 1 file, -10/+28)

      When a call to fcntl(2) for the purpose of adding file locks fails with
      an error other than EAGAIN or EACCES, report the error returned by fcntl.

      EAGAIN or EACCES are elided as they are considered to be common failures,
      indicating that a conflicting lock is held by another process.

      No errors are elided when removing file locks.

      Signed-off-by: David Edmondson <david.edmondson@oracle.com>
      Message-Id: <20210113164447.2545785-1-david.edmondson@oracle.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

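  [Illustrative sketch (editor's addition) of the reporting policy described above;
  it uses plain fcntl()/strerror() rather than the real file-posix locking helpers.]

      #include <errno.h>
      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>

      static int apply_lock(int fd, struct flock *fl)
      {
          if (fcntl(fd, F_SETLK, fl) == -1) {
              if (errno == EAGAIN || errno == EACCES) {
                  /* Common case: a conflicting lock is held by another process. */
                  fprintf(stderr, "Failed to get lock on file\n");
              } else {
                  /* Unexpected failure: include the errno text for diagnosis. */
                  fprintf(stderr, "Failed to get lock on file: %s\n", strerror(errno));
              }
              return -errno;
          }
          return 0;
      }
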
* block/block-copy: drop unused argument of block_copy()  (Vladimir Sementsov-Ogievskiy, 2021-01-26; 2 files, -9/+3)

      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Message-Id: <20210116214705.822267-21-vsementsov@virtuozzo.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

* block/block-copy: drop unused block_copy_set_progress_callback()  (Vladimir Sementsov-Ogievskiy, 2021-01-26; 1 file, -15/+0)

      Drop unused code.

      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Message-Id: <20210116214705.822267-20-vsementsov@virtuozzo.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

* backup: move to block-copy  (Vladimir Sementsov-Ogievskiy, 2021-01-26; 1 file, -67/+120)

      This brings async request handling and block-status driven chunk sizes
      to backup out of the box, which improves backup performance.

      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Message-Id: <20210116214705.822267-18-vsementsov@virtuozzo.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

* block/backup: drop extra gotos from backup_run()  (Vladimir Sementsov-Ogievskiy, 2021-01-26; 1 file, -7/+5)

      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Message-Id: <20210116214705.822267-17-vsementsov@virtuozzo.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

* block/block-copy: make progress_bytes_callback optional  (Vladimir Sementsov-Ogievskiy, 2021-01-26; 1 file, -1/+3)

      We are going to stop using this callback in the following commit, but the
      callback handling code will only be dropped in a separate commit. So, for
      now, make it optional.

      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Message-Id: <20210116214705.822267-16-vsementsov@virtuozzo.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

* qapi: backup: add max-chunk and max-workers to x-perf struct  (Vladimir Sementsov-Ogievskiy, 2021-01-26; 2 files, -6/+24)

      Add new parameters to configure future backup features. This patch
      introduces neither aio backup requests (so we still effectively have
      only one worker) nor requests larger than one cluster. Still, these
      maximums are formally satisfied already, so add the parameters now to
      facilitate the further patch which will really change backup job
      behavior.

      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Message-Id: <20210116214705.822267-11-vsementsov@virtuozzo.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

* block/block-copy: add block_copy_cancel  (Vladimir Sementsov-Ogievskiy, 2021-01-26; 1 file, -5/+19)

      Add a function to cancel a running async block-copy call. It will be
      used in backup.

      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Message-Id: <20210116214705.822267-8-vsementsov@virtuozzo.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

* block/block-copy: add ratelimit to block-copy  (Vladimir Sementsov-Ogievskiy, 2021-01-26; 3 files, -3/+47)

      We are going to directly use one async block-copy operation for the
      backup job, so we need a rate limiter.

      We want to maintain current backup behavior: only background copying is
      limited, and copy-before-write operations only participate in the limit
      calculation. Therefore we need one rate limiter for the block-copy state
      and a boolean flag in the block-copy call state for the actual
      limitation.

      Note that we can't just account each chunk in the limiter after
      successful copying: that would not save us from starting a lot of async
      sub-requests which exceed the limit by far too much. Instead, let's use
      the following scheme on sub-request creation:

      1. If the limit is not exceeded at the moment, create the sub-request
         and account it immediately.

      2. If the limit is already exceeded, don't create the sub-request and
         handle the limit instead (by sleeping).

      With this approach we'll never exceed the limit by more than one
      sub-request, which pretty much matches current backup behavior.

      Note also that if there is an in-flight block-copy async call,
      block_copy_kick() should be used after set-speed to apply the new
      setting faster. For that purpose, block_copy_kick() is made public in
      this patch.

      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Message-Id: <20210116214705.822267-7-vsementsov@virtuozzo.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

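  [Illustrative sketch (editor's addition) of the accounting scheme in points 1 and 2
  above; the RateLimit structure and function names are invented.]

      #include <stdbool.h>
      #include <stdint.h>

      typedef struct RateLimit {
          uint64_t slice_quota;    /* bytes allowed per time slice */
          uint64_t dispatched;     /* bytes already accounted in this slice */
      } RateLimit;

      /* Returns true if a sub-request of 'bytes' may be created right now. */
      static bool ratelimit_try_account(RateLimit *rl, uint64_t bytes)
      {
          if (rl->dispatched >= rl->slice_quota) {
              /* Limit already exceeded: don't create the sub-request;
               * the caller sleeps until the next slice instead. */
              return false;
          }
          /* Not exceeded yet: account the whole chunk immediately, so the
           * overshoot is bounded by a single sub-request. */
          rl->dispatched += bytes;
          return true;
      }
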
* block/block-copy: add list of all call-states  (Vladimir Sementsov-Ogievskiy, 2021-01-26; 1 file, -1/+10)

      It simplifies debugging.

      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Message-Id: <20210116214705.822267-6-vsementsov@virtuozzo.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

* block/block-copy: add max_chunk and max_workers parameters  (Vladimir Sementsov-Ogievskiy, 2021-01-26; 1 file, -2/+9)

      They will be used for backup.

      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Message-Id: <20210116214705.822267-5-vsementsov@virtuozzo.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

* block/block-copy: implement block_copy_async  (Vladimir Sementsov-Ogievskiy, 2021-01-26; 1 file, -4/+77)

      We'll need an async block-copy invocation to use directly in backup.

      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Message-Id: <20210116214705.822267-4-vsementsov@virtuozzo.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

* block/block-copy: More explicit call_state  (Vladimir Sementsov-Ogievskiy, 2021-01-26; 1 file, -13/+38)

      Refactor the common path to use a BlockCopyCallState pointer as
      parameter, to prepare it for use in asynchronous block-copy (at least,
      we'll need to run block-copy in a coroutine, passing all parameters as
      one pointer).

      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Message-Id: <20210116214705.822267-3-vsementsov@virtuozzo.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

* qapi: backup: add perf.use-copy-range parameter  (Vladimir Sementsov-Ogievskiy, 2021-01-26; 5 files, -4/+13)

      Experiments show that copy_range does not always make things faster. So,
      to make experimentation simpler, let's add a parameter. Some more perf
      parameters will be added soon, so here is a new struct.

      For now, add the new backup QMP parameter with an x- prefix, for the
      following reasons:

      - We are going to add more performance parameters, some related to the
        whole block-copy process, some only to background copying in backup
        (ignored for copy-before-write operations).

      - On the other hand, we are going to use the block-copy interface in
        other block jobs, which will need performance options as well. And it
        should be the same structure, or at least somehow related.

      So, there are too many unclear things about the interface right now, and
      we need the new options mostly for testing. Let's keep them experimental
      for a while.

      In do_backup_common() the new x-perf parameter is handled in a way that
      makes further option additions simpler.

      We add use-copy-range with default=true, and we'll change the default in
      a further patch, after moving backup to use block-copy.

      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Message-Id: <20210116214705.822267-2-vsementsov@virtuozzo.com>
      [mreitz: s/5\.2/6.0/]
      Signed-off-by: Max Reitz <mreitz@redhat.com>

* block: apply COR-filter to block-stream jobs  (Andrey Shinkevich, 2021-01-26; 1 file, -43/+62)

      This patch completes the series with the COR-filter applied to
      block-stream operations.

      Adding the filter makes it possible, in the future, to implement
      discarding copied regions in backing files during the block-stream job,
      to reduce disk overuse (we need control over permissions for that).

      Also, the filter is now smart enough to do copy-on-read with a specified
      base, so guest reads benefit even when only part of the backing chain is
      being streamed.

      Several iotests are slightly modified due to the filter insertion.

      Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Message-Id: <20201216061703.70908-14-vsementsov@virtuozzo.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

* block/stream: add s->target_bs  (Vladimir Sementsov-Ogievskiy, 2021-01-26; 1 file, -13/+10)

      Add a direct link to the target bs for convenience and to simplify the
      following commit, which will insert a COR filter above the target bs.

      This is a part of the original commit written by Andrey.

      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Message-Id: <20201216061703.70908-13-vsementsov@virtuozzo.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

* qapi: block-stream: add "bottom" argument  (Vladimir Sementsov-Ogievskiy, 2021-01-26; 2 files, -17/+36)

      The code already doesn't freeze the base node, and we try to be prepared
      for the situation when the base node is changed during the operation. In
      other words, block-stream doesn't own the base node.

      Let's introduce a new interface which should replace the current one and
      which matches the code better. Specifying the bottom node instead of the
      base, and requiring it to be a non-filter, gives us the following
      benefits:

      - drop the difference between above_base and base_overlay, which will be
        renamed to just bottom when the old interface is dropped

      - a clean way to work with parallel streams/commits on the same backing
        chain, which otherwise become a problem when we introduce a filter for
        the stream job

      - a cleaner interface: nobody will be surprised that the base node may
        disappear during block-stream when there is no word about "base" in
        the interface

      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Message-Id: <20201216061703.70908-11-vsementsov@virtuozzo.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

* stream: rework backing-file changing  (Andrey Shinkevich, 2021-01-26; 1 file, -4/+5)

      Stream, in stream_prepare(), calls bdrv_change_backing_file() to change
      the backing file in the metadata of bs. It may either use the
      backing-file parameter given by the user or just take the filename of
      the base at job start. The backing file format is determined from the
      base at job finish.

      There are some problems with this design; we solve only two of them in
      this patch:

      1. Consider the scenario with backing-file unset. The current stream
         concept supports changing the base during the job (we don't freeze
         the link to the base). So we should not save the base filename at
         job start; let's determine the name of the base at job finish
         instead.

      2. Using the direct base to determine filename and format is not very
         good: the base node may be a filter, so its filename may be JSON,
         and its format_name is not suitable for storing into qcow2 metadata
         as the backing file format. Let's use unfiltered_base instead.

      Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      [vsementsov: change commit subject, change logic in stream_prepare]
      Message-Id: <20201216061703.70908-10-vsementsov@virtuozzo.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

* copy-on-read: skip non-guest reads if no copy needed  (Andrey Shinkevich, 2021-01-26; 1 file, -4/+10)

      If the flag BDRV_REQ_PREFETCH is set, skip idle read/write operations in
      the COR driver. This can be taken into account when optimizing the COR
      algorithms. At the moment, that check is made during the block-stream
      job.

      Add the BDRV_REQ_PREFETCH flag to the supported_read_flags of the
      COR-filter.

      block: Modify the comment for the flag BDRV_REQ_PREFETCH, as we are
      going to use it on its own and pass it to the COR-filter driver for
      further processing.

      Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Message-Id: <20201216061703.70908-9-vsementsov@virtuozzo.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

* block: include supported_read_flags into BDS structure  (Andrey Shinkevich, 2021-01-26; 1 file, -2/+8)

      Add the new member supported_read_flags to the BlockDriverState
      structure. It will control the flags set for copy-on-read operations.
      Make the block generic layer evaluate supported read flags before they
      go to a block driver.

      Suggested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      [vsementsov: use assert instead of abort]
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Message-Id: <20201216061703.70908-8-vsementsov@virtuozzo.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

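  [Illustrative sketch (editor's addition) of the masking idea, mirroring the
  existing supported_write_flags; the struct and helper names are invented.]

      #include <stdint.h>

      typedef struct BdrvSketch {
          uint32_t supported_read_flags;   /* read flags the driver handles itself */
      } BdrvSketch;

      static uint32_t clamp_read_flags(const BdrvSketch *bs, uint32_t request_flags)
      {
          /* The generic layer forwards only the flags the driver advertises;
           * anything else must be emulated or dropped before the driver sees it. */
          return request_flags & bs->supported_read_flags;
      }
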
* qapi: copy-on-read filter: add 'bottom' option  (Andrey Shinkevich, 2021-01-26; 1 file, -2/+96)

      Add an option to limit copy-on-read operations to a specified sub-chain
      of the backing chain, to make the copy-on-read filter useful for the
      block-stream job.

      Suggested-by: Max Reitz <mreitz@redhat.com>
      Suggested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      [vsementsov: change subject, modified to freeze the chain, do some fixes]
      Message-Id: <20201216061703.70908-6-vsementsov@virtuozzo.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

* qapi: add filter-node-name to block-stream  (Andrey Shinkevich, 2021-01-26; 2 files, -3/+5)

      Provide the possibility to pass the 'filter-node-name' parameter to the
      block-stream job as it is done for the commit block job.

      Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      [vsementsov: comment indentation, s/Since: 5.2/Since: 6.0/]
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Message-Id: <20201216061703.70908-5-vsementsov@virtuozzo.com>
      [mreitz: s/commit/stream/]
      Signed-off-by: Max Reitz <mreitz@redhat.com>

* copy-on-read: add filter drop function  (Andrey Shinkevich, 2021-01-26; 2 files, -0/+88)

      Provide an API for COR-filter removal. Also, drop the filter child
      permissions for an inactive state when the filter node is being removed.

      To insert the filter, the block generic layer function
      bdrv_insert_node() can be used. The new function bdrv_cor_filter_drop()
      may be considered an intermediate solution until the QEMU permission
      update system has been overhauled. Then we will be able to implement the
      API function bdrv_remove_node() in the block generic layer.

      Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Message-Id: <20201216061703.70908-4-vsementsov@virtuozzo.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

* copy-on-read: support preadv/pwritev_part functions  (Andrey Shinkevich, 2021-01-26; 1 file, -12/+16)

      Add support for the recently introduced functions bdrv_co_preadv_part()
      and bdrv_co_pwritev_part() to the COR-filter driver.

      Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Message-Id: <20201216061703.70908-2-vsementsov@virtuozzo.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

* Merge remote-tracking branch 'remotes/armbru/tags/pull-yank-2021-01-13' into staging  (Peter Maydell, 2021-01-13; 1 file, -61/+92)

      Yank patches for 2021-01-13

      # gpg: Signature made Wed 13 Jan 2021 09:25:46 GMT
      # gpg: using RSA key 354BC8B3D7EB2A6B68674E5F3870B400EB918653
      # gpg: issuer "armbru@redhat.com"
      # gpg: Good signature from "Markus Armbruster <armbru@redhat.com>" [full]
      # gpg: aka "Markus Armbruster <armbru@pond.sub.org>" [full]
      # Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653

      * remotes/armbru/tags/pull-yank-2021-01-13:
        tests/test-char.c: Wait for the chardev to connect in char_socket_client_dupid_test
        io: Document qmp oob suitability of qio_channel_shutdown and io_shutdown
        io/channel-tls.c: make qio_channel_tls_shutdown thread-safe
        migration: Add yank feature
        chardev/char-socket.c: Add yank feature
        block/nbd.c: Add yank feature
        Introduce yank feature

      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

| * block/nbd.c: Add yank feature  (Lukas Straub, 2021-01-13; 1 file, -61/+92)

      Register a yank function which shuts down the socket and sets
      s->state = NBD_CLIENT_QUIT. This is the same behaviour as when an error
      occurs.

      Signed-off-by: Lukas Straub <lukasstraub2@web.de>
      Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Message-Id: <b73eb07db6d1fcd00667beb13ae6117260f002c3.1609167865.git.lukasstraub2@web.de>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>

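  [Illustrative sketch (editor's addition) of what the registered yank callback does,
  based only on the commit message; the struct, callback name and state value are
  invented, only qio_channel_shutdown() is a real API.]

      #include "qemu/osdep.h"
      #include "io/channel.h"

      typedef struct NBDClientSketch {
          QIOChannel *ioc;
          int state;                  /* stands in for the NBD client state enum */
      } NBDClientSketch;

      static void nbd_yank_sketch(void *opaque)
      {
          NBDClientSketch *s = opaque;

          /* Force any blocked send/recv on the connection to fail... */
          qio_channel_shutdown(s->ioc, QIO_CHANNEL_SHUTDOWN_BOTH, NULL);
          /* ...and let the client wind down exactly as on an I/O error. */
          s->state = -1;              /* stands in for NBD_CLIENT_QUIT */
      }
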
* | meson: Propagate gnutls dependency  (Roman Bolshakov, 2021-01-12; 1 file, -1/+1)

      crypto/tlscreds.h includes GnuTLS headers if CONFIG_GNUTLS is set, but
      GNUTLS_CFLAGS, which describe the include path, are not propagated
      transitively to all users of crypto, and the build fails if the GnuTLS
      headers reside in a non-standard directory (which is the case for
      Homebrew on Apple Silicon).

      Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
      Message-Id: <20210102125213.41279-1-r.bolshakov@yadro.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* Remove superfluous timer_del() calls  (Peter Maydell, 2021-01-08; 3 files, -4/+0)

      This commit is the result of running the timer-del-timer-free.cocci
      script on the whole source tree.

      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Acked-by: Corey Minyard <cminyard@mvista.com>
      Acked-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 20201215154107.3255-4-peter.maydell@linaro.org

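  [Illustrative sketch (editor's addition) of the pattern the coccinelle script
  rewrites; it relies on timer_free() already implying timer_del(), which is what
  makes the explicit call superfluous.]

      #include "qemu/osdep.h"
      #include "qemu/timer.h"

      static void cleanup_sketch(QEMUTimer *t)
      {
          /* Before:  timer_del(t); timer_free(t);
           * After:                 timer_free(t);  */
          timer_free(t);
      }
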
* libiscsi: convert to meson  (Paolo Bonzini, 2021-01-02; 1 file, -1/+1)

      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* curl: remove compatibility code, require 7.29.0  (Paolo Bonzini, 2021-01-02; 1 file, -28/+0)

      cURL 7.16.0 was released in October 2006. Just remove code that is in
      all likelihood not being used anywhere, and require the oldest version
      found in currently supported distros, which is 7.29.0 from CentOS 7.

      pkg-config is enough for QEMU, since it does not need extra information
      such as the path for certificate authorities. All supported platforms
      today have pkg-config for curl, so we can drop curl-config.

      Suggested-by: Daniel Berrangé <berrange@redhat.com>
      Reviewed-by: Daniel Berrangé <berrange@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* meson: use dependency to gate block modules  (Paolo Bonzini, 2021-01-02; 1 file, -10/+10)

      This allows converting the dependencies to meson options one by one.

      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* Merge remote-tracking branch 'remotes/armbru/tags/pull-qapi-2020-12-19' into staging  (Peter Maydell, 2021-01-01; 3 files, -8/+5)

      QAPI patches for 2020-12-19

      # gpg: Signature made Sat 19 Dec 2020 09:40:05 GMT
      # gpg: using RSA key 354BC8B3D7EB2A6B68674E5F3870B400EB918653
      # gpg: issuer "armbru@redhat.com"
      # gpg: Good signature from "Markus Armbruster <armbru@redhat.com>" [full]
      # gpg: aka "Markus Armbruster <armbru@pond.sub.org>" [full]
      # Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653

      * remotes/armbru/tags/pull-qapi-2020-12-19: (33 commits)
        qobject: Make QString immutable
        block: Use GString instead of QString to build filenames
        keyval: Use GString to accumulate value strings
        json: Use GString instead of QString to accumulate strings
        migration: Replace migration's JSON writer by the general one
        qobject: Factor JSON writer out of qobject_to_json()
        qobject: Factor quoted_str() out of to_json()
        qobject: Drop qstring_get_try_str()
        qobject: Drop qobject_get_try_str()
        Revert "qobject: let object_property_get_str() use new API"
        block: Avoid qobject_get_try_str()
        qmp: Fix tracing of non-string command IDs
        qobject: Move internals to qobject-internal.h
        hw/rdma: Replace QList by GQueue
        Revert "qstring: add qstring_free()"
        qobject: Change qobject_to_json()'s value to GString
        qobject: Use GString instead of QString to accumulate JSON
        qobject: Make qobject_to_json_pretty() take a pretty argument
        monitor: Use GString instead of QString for output buffer
        hmp: Simplify how qmp_human_monitor_command() gets output
        ...

      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

| * qobject: Change qobject_to_json()'s value to GString  (Markus Armbruster, 2020-12-19; 1 file, -1/+1)

      qobject_to_json() and qobject_to_json_pretty() build a GString, then
      convert it to QString. Just one of the callers actually needs a QString:
      qemu_rbd_parse_filename(). A few others need a string they can modify:
      qmp_send_response(), qga's send_response(), to_json_str(), and
      qmp_fd_vsend_fds(). The remainder just need a string.

      Change qobject_to_json() and qobject_to_json_pretty() to return the
      GString.

      qemu_rbd_parse_filename() now has to convert to QString. All others save
      a QString temporary. to_json_str() actually becomes a bit simpler,
      because GString provides more convenient modification functions.

      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Message-Id: <20201211171152.146877-6-armbru@redhat.com>

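  [Illustrative sketch (editor's addition) of the call pattern after this change; the
  helper name is invented. Callers that only need a C string can take ownership of
  the GString's buffer with g_string_free(gs, false).]

      #include "qemu/osdep.h"
      #include "qapi/qmp/qjson.h"
      #include "qapi/qmp/qdict.h"

      static char *dump_as_json(QDict *dict)
      {
          GString *json = qobject_to_json(QOBJECT(dict));   /* now returns GString * */

          return g_string_free(json, false);   /* keep the buffer, drop the wrapper */
      }
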
| * qapi: Use QAPI_LIST_PREPEND() where possible  (Eric Blake, 2020-12-19; 2 files, -7/+4)

      Anywhere we create a list of just one item or by prepending items
      (typically because order doesn't matter), we can use
      QAPI_LIST_PREPEND(). But places where we must keep the list in order by
      appending remain open-coded until later patches.

      Note that as a side effect, this also performs a cleanup of two minor
      issues in qga/commands-posix.c: the old code was performing

        new = g_malloc0(sizeof(*ret));

      which 1) is confusing because you have to verify whether 'new' and 'ret'
      are variables with the same type, and 2) would conflict with C++
      compilation (not an actual problem for this file, but makes
      copy-and-paste harder).

      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20201113011340.463563-5-eblake@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
      [Straightforward conflicts due to commit a8aa94b5f8 "qga: update schema
      for guest-get-disks 'dependents' field" and commit a10b453a52
      "target/mips: Move mips_cpu_add_definition() from helper.c to cpu.c"
      resolved. Commit message tweaked.]
      Signed-off-by: Markus Armbruster <armbru@redhat.com>

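  [Illustrative sketch (editor's addition): the prepend counterpart of the earlier
  QAPI_LIST_APPEND example; list contents are invented. Prepending needs no tail
  pointer, which is why it is preferred wherever order does not matter.]

      #include "qemu/osdep.h"
      #include "qapi/util.h"                  /* QAPI_LIST_PREPEND */
      #include "qapi/qapi-builtin-types.h"    /* strList */

      static strList *collect_names(void)
      {
          strList *list = NULL;

          QAPI_LIST_PREPEND(list, g_strdup("eth0"));
          QAPI_LIST_PREPEND(list, g_strdup("lo"));

          return list;                        /* order is reversed, which is fine here */
      }
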
* | Merge remote-tracking branch 'remotes/maxreitz/tags/pull-block-2020-12-18' into staging  (Peter Maydell, 2021-01-01; 6 files, -57/+747)

      Block patches:
      - New block filter: preallocate (which, on writes beyond an image
        file's end, allocates big chunks of data so that such post-EOF writes
        will occur less frequently)
      - write-zeroes and block-status support for Quorum
      - Implementation of truncate for the nvme block driver similarly to the
        existing implementations for host block devices and iscsi devices
      - Block layer refactoring: Drop the tighten_restrictions concept in the
        block permission functions
      - iotest fixes

      # gpg: Signature made Fri 18 Dec 2020 14:45:30 GMT
      # gpg: using RSA key 91BEB60A30DB3E8857D11829F407DB0061D5CF40
      # gpg: issuer "mreitz@redhat.com"
      # gpg: Good signature from "Max Reitz <mreitz@redhat.com>" [full]
      # Primary key fingerprint: 91BE B60A 30DB 3E88 57D1 1829 F407 DB00 61D5 CF40

      * remotes/maxreitz/tags/pull-block-2020-12-18: (30 commits)
        iotests: Fix _send_qemu_cmd with bash 5.1
        iotests/102: Pass $QEMU_HANDLE to _send_qemu_cmd
        block/nvme: Implement fake truncate() coroutine
        quorum: Implement bdrv_co_pwrite_zeroes()
        quorum: Implement bdrv_co_block_status()
        scripts/simplebench: add bench_prealloc.py
        simplebench/results_to_text: make executable
        simplebench/results_to_text: add difference line to the table
        simplebench/results_to_text: improve view of the table
        simplebench: move results_to_text() into separate file
        simplebench: rename ascii() to results_to_text()
        scripts/simplebench: use standard deviation for +- error
        scripts/simplebench: support iops
        scripts/simplebench: fix grammar: s/successed/succeeded/
        iotests: add 298 to test new preallocate filter driver
        iotests.py: execute_setup_common(): add required_fmts argument
        iotests: qemu_io_silent: support --image-opts
        qemu-io: add preallocate mode parameter for truncate command
        block: introduce preallocate filter
        block: bdrv_check_perm(): process children anyway
        ...

      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

| * | block/nvme: Implement fake truncate() coroutine  (Philippe Mathieu-Daudé, 2020-12-18; 1 file, -0/+24)

      An NVMe drive cannot be shrunk. Since commit c80d8b06cfa we can use the
      @exact parameter (set to false) to return success if the block device is
      larger than the requested offset (even if we cannot shrink it). Use this
      parameter to implement the NVMe truncate() coroutine, similarly to how
      it is done for the iscsi and file-posix drivers (see commit 82325ae5f2f
      "Evaluate @exact in protocol drivers").

      Reported-by: Xueqiang Wei <xuwei@redhat.com>
      Suggested-by: Max Reitz <mreitz@redhat.com>
      Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Message-Id: <20201210125202.858656-1-philmd@redhat.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

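  [Illustrative sketch (editor's addition) of a "fake truncate" honouring @exact for
  a fixed-size device; it follows the commit message, not the exact block/nvme.c
  code, and the callback name is invented.]

      #include "qemu/osdep.h"
      #include "qapi/error.h"
      #include "block/block_int.h"

      static int coroutine_fn sketch_co_truncate(BlockDriverState *bs, int64_t offset,
                                                 bool exact, PreallocMode prealloc,
                                                 BdrvRequestFlags flags, Error **errp)
      {
          int64_t cur_length = bdrv_getlength(bs);

          if (offset != cur_length && exact) {
              error_setg(errp, "Cannot resize fixed-size device");
              return -ENOTSUP;
          }
          if (offset > cur_length) {
              error_setg(errp, "Cannot grow fixed-size device");
              return -EINVAL;
          }
          /* offset <= current size and exact not requested: report success. */
          return 0;
      }
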
| * | quorum: Implement bdrv_co_pwrite_zeroes()  (Alberto Garcia, 2020-12-18; 1 file, -2/+34)

      This simply calls bdrv_co_pwrite_zeroes() in all children.
      bs->supported_zero_flags is also set to the flags that are supported by
      all children.

      Signed-off-by: Alberto Garcia <berto@igalia.com>
      Message-Id: <2f09c842781fe336b4c2e40036bba577b7430190.1605286097.git.berto@igalia.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

| * | quorum: Implement bdrv_co_block_status()  (Alberto Garcia, 2020-12-18; 1 file, -0/+52)

      The quorum driver does not implement bdrv_co_block_status(), and because
      of that it always reports that it contains data, even if all its
      children are known to be empty.

      One consequence of this is that if we, for example, create a quorum with
      a size of 10 GB and mirror it to a new image, the operation will write
      10 GB of actual zeroes to the destination image, wasting a lot of time
      and disk space.

      Since a quorum has an arbitrary number of children of potentially
      different formats, there is no way to report all possible allocation
      status flags in a way that makes sense, so this implementation only
      reports when a given region is known to contain zeroes
      (BDRV_BLOCK_ZERO) or not (BDRV_BLOCK_DATA).

      If all children agree that a region contains zeroes then we can return
      BDRV_BLOCK_ZERO using the smallest size reported by the children
      (because all agree that a region of at least that size contains zeroes).

      If at least one child disagrees we have to return BDRV_BLOCK_DATA. In
      this case we use the largest of the sizes reported by the children that
      didn't return BDRV_BLOCK_ZERO (because we know that there won't be an
      agreement for at least that size).

      Signed-off-by: Alberto Garcia <berto@igalia.com>
      Tested-by: Tao Xu <tao3.xu@intel.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Message-Id: <db83149afcf0f793effc8878089d29af4c46ffe1.1605286097.git.berto@igalia.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

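  [Illustrative sketch (editor's addition) of the consensus rule described above; it
  is self-contained pseudo-QEMU code with invented names and simplified return flags.]

      #include <stdbool.h>
      #include <stdint.h>

      #define SKETCH_BLOCK_DATA 1
      #define SKETCH_BLOCK_ZERO 2

      struct ChildStatus {
          bool zero;        /* child reports the region as known zeroes */
          int64_t pnum;     /* length for which the child's answer is valid */
      };

      static int combine_status(const struct ChildStatus *c, int n, int64_t *pnum)
      {
          bool all_zero = true;
          int64_t zero_len = INT64_MAX, data_len = 0;

          for (int i = 0; i < n; i++) {
              if (c[i].zero) {
                  /* All agree on zero for at least the smallest reported size. */
                  zero_len = c[i].pnum < zero_len ? c[i].pnum : zero_len;
              } else {
                  /* Disagreement for at least the largest non-zero size seen. */
                  all_zero = false;
                  data_len = c[i].pnum > data_len ? c[i].pnum : data_len;
              }
          }
          *pnum = all_zero ? zero_len : data_len;
          return all_zero ? SKETCH_BLOCK_ZERO : SKETCH_BLOCK_DATA;
      }
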
| * | block: introduce preallocate filter  (Vladimir Sementsov-Ogievskiy, 2020-12-18; 2 files, -0/+560)

      It is intended to be inserted between format and protocol nodes to
      preallocate additional space (expanding the protocol file) on writes
      crossing EOF. It improves performance for file systems with slow
      allocation.

      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Message-Id: <20201021145859.11201-9-vsementsov@virtuozzo.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      [mreitz: Two comment fixes, and bumped the version from 5.2 to 6.0]
      Signed-off-by: Max Reitz <mreitz@redhat.com>

| * | block: introduce BDRV_REQ_NO_WAIT flag  (Vladimir Sementsov-Ogievskiy, 2020-12-18; 1 file, -1/+10)

      Add a flag to make a serialising request not wait: if there are
      conflicting requests, just return an error immediately. It will be used
      in the upcoming preallocate filter.

      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Message-Id: <20201021145859.11201-7-vsementsov@virtuozzo.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

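  [Illustrative sketch (editor's addition) of the flag's semantics in the serialising
  path; names and the flag value are invented, only the behaviour follows the text.]

      #include <errno.h>
      #include <stdbool.h>

      enum { SKETCH_REQ_NO_WAIT = 1 << 0 };

      static int serialise_request_sketch(bool has_conflict, unsigned flags)
      {
          if (has_conflict) {
              if (flags & SKETCH_REQ_NO_WAIT) {
                  /* Caller asked not to block: fail fast instead of waiting. */
                  return -EAGAIN;
              }
              /* otherwise: wait for the conflicting requests to complete */
          }
          return 0;
      }
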
| * | block: bdrv_mark_request_serialising: split non-waiting function  (Vladimir Sementsov-Ogievskiy, 2020-12-18; 2 files, -13/+24)

      We'll need a separate function which only "marks" a request serialising
      with a specified alignment, but does not wait for conflicting requests.
      It will be like the old bdrv_mark_request_serialising(), before
      bdrv_wait_serialising_requests_locked() was merged into it.

      To reduce the possible mess, let's do the following: the public function
      that does both marking and waiting will be called
      bdrv_make_request_serialising(), and the private function which only
      "marks" will be called tracked_request_set_serialising().

      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Message-Id: <20201021145859.11201-6-vsementsov@virtuozzo.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

| * | block/io: bdrv_wait_serialising_requests_locked: drop extra bs arg  (Vladimir Sementsov-Ogievskiy, 2020-12-18; 1 file, -5/+5)

      bs is linked in req, so there is no need to pass it separately. Most of
      the tracked-requests API doesn't have a bs argument; after this patch
      only tracked_request_begin() has it, and that is on purpose.

      While here, also add a comment about what "_locked" means.

      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
      Message-Id: <20201021145859.11201-5-vsementsov@virtuozzo.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

| * | block/io: split out bdrv_find_conflicting_request  (Vladimir Sementsov-Ogievskiy, 2020-12-18; 1 file, -30/+41)

      To be reused separately.

      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
      Message-Id: <20201021145859.11201-4-vsementsov@virtuozzo.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

| * | block/io.c: drop assertion on double waiting for request serialisation  (Vladimir Sementsov-Ogievskiy, 2020-12-18; 1 file, -10/+1)

      The comment states that on a misaligned request we should already have
      been waiting. But for bdrv_padding_rmw_read, we called
      bdrv_mark_request_serialising with align = request_alignment, and now we
      serialise with align = cluster_size. So we may have to wait again, with
      a larger alignment.

      Note that the only user of BDRV_REQ_SERIALISING is backup, which issues
      cluster-aligned requests, so it seems the assertion should not fire for
      now. But it's wrong anyway.

      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      Message-Id: <20201021145859.11201-3-vsementsov@virtuozzo.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>

* | block/vpc: Use sizeof() instead of HEADER_SIZE for footer size  (Markus Armbruster, 2020-12-18; 1 file, -15/+14)

      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Message-Id: <20201217162003.1102738-10-armbru@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* | block/vpc: Pass footer buffers as VHDFooter * instead of uint8_t *  (Markus Armbruster, 2020-12-18; 1 file, -7/+7)

      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Message-Id: <20201217162003.1102738-9-armbru@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
