path: root/block
* block/file-win32: add reopen handlers (Viktor Prutyanov, 2021-09-01; 1 file changed, +100/-1)

  Make 'qemu-img commit' work on Windows. The 'commit' command requires
  reopening the backing file in read-write mode, so add reopen
  prepare/commit/abort handlers and change dwShareMode in the CreateFile
  call to allow a later read/write reopen.

  Resolves: https://gitlab.com/qemu-project/qemu/-/issues/418
  Suggested-by: Hanna Reitz <hreitz@redhat.com>
  Signed-off-by: Viktor Prutyanov <viktor.prutyanov@phystech.edu>
  Tested-by: Helge Konetzka <hk@zapateado.de>
  Message-Id: <20210825173625.19415-1-viktor.prutyanov@phystech.edu>
  Signed-off-by: Hanna Reitz <hreitz@redhat.com>

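For illustration, this is the kind of workflow the change enables on a Windows host; it is a minimal sketch, and the image names are made up:

    qemu-img create -f qcow2 base.qcow2 1G
    qemu-img create -f qcow2 -b base.qcow2 -F qcow2 overlay.qcow2
    qemu-img commit overlay.qcow2

The commit step merges the overlay back into base.qcow2 and therefore has to reopen the backing file read-write, which is what the new reopen handlers and the relaxed dwShareMode make possible.
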
* block/export/fuse.c: fix fuse-lseek on uclibc or musl (Fabrice Fontaine, 2021-09-01; 1 file changed, +3/-0)

  Include linux/fs.h to avoid the following build failure on uclibc or
  musl, raised since version 6.0.0:

    ../block/export/fuse.c: In function 'fuse_lseek':
    ../block/export/fuse.c:641:19: error: 'SEEK_HOLE' undeclared (first use in this function)
      641 |     if (whence != SEEK_HOLE && whence != SEEK_DATA) {
          |                   ^~~~~~~~~
    ../block/export/fuse.c:641:19: note: each undeclared identifier is reported only once for each function it appears in
    ../block/export/fuse.c:641:42: error: 'SEEK_DATA' undeclared (first use in this function); did you mean 'SEEK_SET'?
      641 |     if (whence != SEEK_HOLE && whence != SEEK_DATA) {
          |                                          ^~~~~~~~~
          |                                          SEEK_SET

  Fixes:
   - http://autobuild.buildroot.org/results/33c90ebf04997f4d3557cfa66abc9cf9a3076137
  Signed-off-by: Fabrice Fontaine <fontaine.fabrice@gmail.com>
  Message-Id: <20210827220301.272887-1-fontaine.fabrice@gmail.com>
  Signed-off-by: Hanna Reitz <hreitz@redhat.com>

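A self-contained sketch of the same portability pattern outside QEMU (assuming a Linux build where linux/fs.h supplies the constants when the libc headers do not; the file name is arbitrary):

    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <linux/fs.h>   /* provides SEEK_DATA/SEEK_HOLE if unistd.h did not */

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s FILE\n", argv[0]);
            return 1;
        }
        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        /* locate the first data region and the first hole in the file */
        off_t data = lseek(fd, 0, SEEK_DATA);
        off_t hole = lseek(fd, 0, SEEK_HOLE);
        printf("first data at %lld, first hole at %lld\n",
               (long long)data, (long long)hole);
        close(fd);
        return 0;
    }
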
* block/block-copy: block_copy_state_new(): drop extra arguments (Vladimir Sementsov-Ogievskiy, 2021-09-01; 2 files changed, +3/-4)

  The only caller passes false for both copy_range and compress, so just
  drop these arguments.

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Message-Id: <20210824083856.17408-35-vsementsov@virtuozzo.com>
  Reviewed-by: Hanna Reitz <hreitz@redhat.com>
  Signed-off-by: Hanna Reitz <hreitz@redhat.com>

* block/copy-before-write: make public block driver (Vladimir Sementsov-Ogievskiy, 2021-09-01; 1 file changed, +28/-32)

  Finally, copy-before-write gets its own .bdrv_open and .bdrv_close
  handlers and a block_init() call, and becomes available through
  bdrv_open(). To achieve this:

  - cbw_init() gets an unused flags argument and becomes cbw_open()
  - the block_copy_state_free() call moves to the new cbw_close()
  - in bdrv_cbw_append(), the options are completed with driver and
    node-name, so we can simply use bdrv_insert_node() to do both the
    open and the drained replacement
  - in bdrv_cbw_drop(), cbw_close() is now responsible for freeing
    s->bcs, so don't do it there

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Message-Id: <20210824083856.17408-22-vsementsov@virtuozzo.com>
  Signed-off-by: Hanna Reitz <hreitz@redhat.com>

* block/block-copy: make setting progress optional (Vladimir Sementsov-Ogievskiy, 2021-09-01; 1 file changed, +11/-7)

  Currently block-copy crashes if the user doesn't set a progress meter
  with block_copy_set_progress_meter(). The copy-before-write filter
  will be used separately from the backup job and doesn't want any
  progress meter (for now), so allow not setting it.

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Message-Id: <20210824083856.17408-21-vsementsov@virtuozzo.com>
  Signed-off-by: Hanna Reitz <hreitz@redhat.com>

* block/copy-before-write: initialize block-copy bitmap (Vladimir Sementsov-Ogievskiy, 2021-09-01; 2 files changed, +11/-9)

  We are going to publish the copy-before-write filter so it can be used
  separately from backup. A future step will support a bitmap for the
  filter, but let's start with a fully set bitmap.

  We have to modify backup, as the bitmap is now first initialized by the
  copy-before-write filter and then modified by backup.

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Message-Id: <20210824083856.17408-20-vsementsov@virtuozzo.com>
  Signed-off-by: Hanna Reitz <hreitz@redhat.com>

* block/copy-before-write: cbw_init(): use options (Vladimir Sementsov-Ogievskiy, 2021-09-01; 1 file changed, +15/-14)

  One more step closer to .bdrv_open(): use options instead of plain
  arguments, and move to bdrv_open_child() calls, which are the native
  way for driver open handlers.

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Reviewed-by: Hanna Reitz <hreitz@redhat.com>
  Message-Id: <20210824083856.17408-19-vsementsov@virtuozzo.com>
  Signed-off-by: Hanna Reitz <hreitz@redhat.com>

* block/copy-before-write: bdrv_cbw_append(): drop unused compress arg (Vladimir Sementsov-Ogievskiy, 2021-09-01; 3 files changed, +4/-6)

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Reviewed-by: Hanna Reitz <hreitz@redhat.com>
  Message-Id: <20210824083856.17408-18-vsementsov@virtuozzo.com>
  Signed-off-by: Hanna Reitz <hreitz@redhat.com>

* block/copy-before-write: cbw_init(): use file child after attaching (Vladimir Sementsov-Ogievskiy, 2021-09-01; 1 file changed, +7/-7)

  In the next commit we'll get rid of the source argument of cbw_init().
  Prepare for that now, to make the next commit simpler: move the code
  block that uses source below the attaching of the child, and use
  bs->file->bs instead of the source variable.

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Message-Id: <20210824083856.17408-17-vsementsov@virtuozzo.com>
  Signed-off-by: Hanna Reitz <hreitz@redhat.com>

* block/copy-before-write: cbw_init(): rename variables (Vladimir Sementsov-Ogievskiy, 2021-09-01; 1 file changed, +14/-15)

  One more step closer to a real .bdrv_open() handler: use more usual
  names for the bs being initialized and its state.

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Message-Id: <20210824083856.17408-16-vsementsov@virtuozzo.com>
  Signed-off-by: Hanna Reitz <hreitz@redhat.com>

* block/copy-before-write: introduce cbw_init() (Vladimir Sementsov-Ogievskiy, 2021-09-01; 1 file changed, +41/-28)

  Move part of bdrv_cbw_append() to the new function cbw_init(). It's an
  intermediate step towards adding a normal .bdrv_open() handler to the
  filter. No logic changes with this commit, but we now have a function
  that will be turned into the .bdrv_open() handler in a future commit.

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Message-Id: <20210824083856.17408-15-vsementsov@virtuozzo.com>
  Signed-off-by: Hanna Reitz <hreitz@redhat.com>

* block/copy-before-write: bdrv_cbw_append(): replace child at last (Vladimir Sementsov-Ogievskiy, 2021-09-01; 1 file changed, +11/-22)

  Refactor the function to replace the child last. That way we don't
  need to revert it, and the code is simplified. The block-copy state
  initialization, now done before replacing the child, doesn't need any
  drained section.

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Message-Id: <20210824083856.17408-14-vsementsov@virtuozzo.com>
  Signed-off-by: Hanna Reitz <hreitz@redhat.com>

* block/copy-before-write: use file child instead of backing (Vladimir Sementsov-Ogievskiy, 2021-09-01; 1 file changed, +22/-17)

  We are going to publish the copy-before-write filter, and there is no
  public backing-child-based filter in QEMU. No reason to create a
  precedent, so let's refactor the copy-before-write filter instead.

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Message-Id: <20210824083856.17408-13-vsementsov@virtuozzo.com>
  Signed-off-by: Hanna Reitz <hreitz@redhat.com>

* block/copy-before-write: drop extra bdrv_unref on failure path (Vladimir Sementsov-Ogievskiy, 2021-09-01; 1 file changed, +0/-1)

  bdrv_attach_child() does bdrv_unref() on failure, so we shouldn't do it
  by hand here.

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Message-Id: <20210824083856.17408-12-vsementsov@virtuozzo.com>
  Signed-off-by: Hanna Reitz <hreitz@redhat.com>

* block/copy-before-write: relax permission requirements when no parents (Vladimir Sementsov-Ogievskiy, 2021-09-01; 1 file changed, +5/-3)

  We are going to publish the copy-before-write filter, so the user
  should be able to create it with blockdev-add first, specifying both
  the filtered and the target children, and then do blockdev-reopen to
  actually insert the filter where needed.

  Currently, the filter unshares the write permission on the source node
  unconditionally. That's good, but it would not allow blockdev-add, so
  let's relax the restrictions when the filter doesn't have any parent.

  The test output changes, as the permission conflict now happens only
  when the job creates a blk parent for the filter node.

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Message-Id: <20210824083856.17408-11-vsementsov@virtuozzo.com>
  Signed-off-by: Hanna Reitz <hreitz@redhat.com>

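As a sketch of the intended end state (not part of this patch), the filter could then be created detached from any parent roughly like this; the node names are invented and the exact option names are an assumption based on the description above:

    {"execute": "blockdev-add",
     "arguments": {"driver": "copy-before-write",
                   "node-name": "cbw0",
                   "file": "source0",
                   "target": "target0"}}

Only once a parent (for example a backup job) attaches to cbw0 does the stricter unsharing of the write permission on the source take effect.
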
* block/backup: move cluster size calculation to block-copy (Vladimir Sementsov-Ogievskiy, 2021-09-01; 4 files changed, +64/-61)

  The main consumer of cluster-size is block-copy, so let's calculate it
  there instead of passing it through backup-top.

  We are going to publish the copy-before-write filter soon, so it will
  be created through options. But for now we don't want an explicit
  cluster-size option; let's continue to calculate it automatically. So
  now is the time to get rid of the cluster_size argument of
  bdrv_cbw_append().

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Message-Id: <20210824083856.17408-10-vsementsov@virtuozzo.com>
  [hreitz: Add qemu/error-report.h include to block/block-copy.c]
  Signed-off-by: Hanna Reitz <hreitz@redhat.com>

* block/backup: set copy_range and compress after filter insertion (Vladimir Sementsov-Ogievskiy, 2021-09-01; 3 files changed, +3/-5)

  We are going to publish the copy-before-write filter, so it will be
  initialized through options. Still, we don't want to publish the
  compress and copy-range options, because:

  1. The modern way to enable compression is the compress filter.
  2. For copy-range it's unclear how to make a proper interface:
     - it has an experimental prefix for the backup job anyway
     - the whole BackupPerf structure doesn't make sense for the filter

  So let's just add a copy-range possibility to the filter later if
  needed. Still, we are going to keep supporting compression and
  experimental copy-range in the backup job, so set these options after
  filter creation.

  Note that we could drop the "compress" argument of bdrv_cbw_append()
  now, as well as "perf". The only reason for not doing so is that the
  big series around this patch has already been reviewed, and I want to
  avoid extra rebase conflicts to simplify review of the following
  version.

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Reviewed-by: Hanna Reitz <hreitz@redhat.com>
  Message-Id: <20210824083856.17408-9-vsementsov@virtuozzo.com>
  Signed-off-by: Hanna Reitz <hreitz@redhat.com>

* block/block-copy: introduce block_copy_set_copy_opts() (Vladimir Sementsov-Ogievskiy, 2021-09-01; 1 file changed, +29/-20)

  We'll need the ability to set the compress and use_copy_range options
  after initialization of the state, so split the corresponding part of
  block_copy_state_new() into a separate, public function.

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Message-Id: <20210824083856.17408-8-vsementsov@virtuozzo.com>
  Reviewed-by: Hanna Reitz <hreitz@redhat.com>
  Signed-off-by: Hanna Reitz <hreitz@redhat.com>

* block-copy: move detecting fleecing scheme to block-copy (Vladimir Sementsov-Ogievskiy, 2021-09-01; 4 files changed, +25/-26)

  We want to simplify the initialization interface of the
  copy-before-write filter, as we are going to make it public. So let's
  detect the fleecing scheme in the block-copy code itself, rather than
  passing this information through extra levels.

  Why not just set BDRV_REQ_SERIALISING unconditionally: because we are
  going to implement a new, more efficient fleecing scheme which will
  not rely on the backing feature.

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Reviewed-by: Hanna Reitz <hreitz@redhat.com>
  Message-Id: <20210824083856.17408-7-vsementsov@virtuozzo.com>
  Signed-off-by: Hanna Reitz <hreitz@redhat.com>

* block: rename backup-top to copy-before-write (Vladimir Sementsov-Ogievskiy, 2021-09-01; 4 files changed, +76/-76)

  We are going to convert backup_top to a full-featured public filter
  that can be used separately from the backup job. Start by renaming it
  from "how it is used" to "what it does".

  While updating the comments in iotest 283, also drop and rephrase the
  notes about ".active", as this field is now gone and the filter
  doesn't have an "inactive" mode.

  Note that this change may be considered an incompatible interface
  change, as the backup-top filter format name was visible through
  query-block and query-named-block-nodes. Still, consider the following
  reasoning:

  1. backup-top was never documented, so if someone depends on the
     format name (of a driver that can't be used other than being
     automatically inserted on backup job start), it's a kind of
     "undocumented feature use". So I think we are free to change it.
  2. There is hope that no such users exist: it's much more natural to
     give the backup-top filter a good node-name if you need to operate
     on it somehow, and not to touch the format name.
  3. Another "incompatible" change in a further commit will be moving
     the copy-before-write filter from using a backing child to a file
     child. And that is even more reasonable than the renaming: for now,
     all public filters are file-child based.

  So it's a risky change, but the risk seems small and a good interface
  is worth it.

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Message-Id: <20210824083856.17408-6-vsementsov@virtuozzo.com>
  Signed-off-by: Hanna Reitz <hreitz@redhat.com>

* block: introduce blk_replace_bs (Vladimir Sementsov-Ogievskiy, 2021-09-01; 1 file changed, +8/-0)

  Add a function to change the bs inside a blk.

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Message-Id: <20210824083856.17408-3-vsementsov@virtuozzo.com>
  Signed-off-by: Hanna Reitz <hreitz@redhat.com>

* raw-format: drop WRITE and RESIZE child perms when possible (Stefan Hajnoczi, 2021-09-01; 1 file changed, +20/-1)

  The following command line fails due to a permissions conflict:

    $ qemu-storage-daemon \
        --blockdev driver=nvme,node-name=nvme0,device=0000:08:00.0,namespace=1 \
        --blockdev driver=raw,node-name=l1-1,file=nvme0,offset=0,size=1073741824 \
        --blockdev driver=raw,node-name=l1-2,file=nvme0,offset=1073741824,size=1073741824 \
        --nbd-server addr.type=unix,addr.path=/tmp/nbd.sock,max-connections=2 \
        --export type=nbd,id=nbd-l1-1,node-name=l1-1,name=l1-1,writable=on \
        --export type=nbd,id=nbd-l1-2,node-name=l1-2,name=l1-2,writable=on

    qemu-storage-daemon: --export type=nbd,id=nbd-l1-1,node-name=l1-1,name=l1-1,writable=on:
    Permission conflict on node 'nvme0': permissions 'resize' are both required by node 'l1-1'
    (uses node 'nvme0' as 'file' child) and unshared by node 'l1-2' (uses node 'nvme0' as
    'file' child).

  The problem is that block/raw-format.c relies on bdrv_default_perms()
  to set permissions on the nvme node. The default permissions add
  RESIZE in anticipation of a format driver like qcow2 that needs to
  grow the image file. This fails because RESIZE is unshared, so we
  cannot get the RESIZE permission.

  Max Reitz pointed out that block/crypto.c already handles this case by
  implementing a custom ->bdrv_child_perm() function that adjusts the
  result of bdrv_default_perms(). This patch takes the same approach in
  block/raw-format.c so that RESIZE is only required if it's actually
  necessary (e.g. the parent is qcow2).

  Cc: Max Reitz <mreitz@redhat.com>
  Cc: Kevin Wolf <kwolf@redhat.com>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Message-Id: <20210726122839.822900-1-stefanha@redhat.com>
  Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Signed-off-by: Hanna Reitz <hreitz@redhat.com>

* block/monitor: Consolidate hmp_handle_error calls to reduce redundant code (Mao Zhongyi, 2021-09-01; 1 file changed, +6/-6)

  Signed-off-by: Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Message-Id: <20210802062507.347555-1-maozhongyi@cmss.chinamobile.com>
  Signed-off-by: Hanna Reitz <hreitz@redhat.com>

* block/export/fuse.c: fix musl build (Fabrice Fontaine, 2021-08-09; 1 file changed, +6/-2)

  Fix the following build failure on musl, raised since version 6.0.0
  and https://gitlab.com/qemu-project/qemu/-/commit/4ca37a96a75aafe7a37ba51ab1912b09b7190a6b,
  because musl does not define FALLOC_FL_ZERO_RANGE:

    ../block/export/fuse.c: In function 'fuse_fallocate':
    ../block/export/fuse.c:563:23: error: 'FALLOC_FL_ZERO_RANGE' undeclared (first use in this function)
      563 |     } else if (mode & FALLOC_FL_ZERO_RANGE) {
          |                       ^~~~~~~~~~~~~~~~~~~~

  Fixes:
   - http://autobuild.buildroot.org/results/b96e3d364fd1f8bbfb18904a742e73327d308f64
  Signed-off-by: Fabrice Fontaine <fontaine.fabrice@gmail.com>
  Message-Id: <20210809095101.1101336-1-fontaine.fabrice@gmail.com>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
  Reviewed-by: Denis V. Lunev <den@openvz.org>
  Signed-off-by: Hanna Reitz <hreitz@redhat.com>

* block: Fix in_flight leak in request padding error path (Kevin Wolf, 2021-08-03; 1 file changed, +4/-3)

  When bdrv_pad_request() fails in bdrv_co_preadv_part(), bs->in_flight
  has been increased, but is never decreased again. This leads to a hang
  when trying to drain the block node.

  This bug was observed with Windows guests which issue a request that
  fully uses IOV_MAX during installation, so that when padding is
  necessary (O_DIRECT with a 4k sector size block device on the host),
  adding another entry causes failure.

  Call bdrv_dec_in_flight() to fix this. There is a larger problem to
  solve here because this request shouldn't even fail, but Windows
  doesn't seem to care and with this minimal fix the installation
  succeeds. So given that we're already in freeze, let's take this
  minimal fix for 6.1.

  Fixes: 98ca45494fcd6bf0336ecd559e440b6de6ea4cd3
  Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1972079
  Reported-by: Qing Wang <qinwang@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Message-Id: <20210727154923.91067-1-kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* block/io_uring: resubmit when result is -EAGAIN (Fabian Ebner, 2021-07-29; 1 file changed, +15/-1)

  Linux SCSI can throw spurious -EAGAIN in some corner cases in its
  completion path, which will end up being the result in the completed
  io_uring request. Resubmitting such requests should allow block jobs
  to complete, even if such spurious errors are encountered.

  Co-authored-by: Stefan Hajnoczi <stefanha@gmail.com>
  Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
  Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
  Message-id: 20210729091029.65369-1-f.ebner@proxmox.com
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

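The same resubmit-on-EAGAIN idea, sketched as a standalone liburing program rather than QEMU's code (the retry limit and file name are made up; build with -luring):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/uio.h>
    #include <liburing.h>

    int main(void)
    {
        struct io_uring ring;
        char buf[4096];
        struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
        int fd = open("/etc/hostname", O_RDONLY);
        int retries = 3;    /* arbitrary cap so a persistent error still fails */

        if (fd < 0 || io_uring_queue_init(8, &ring, 0) < 0) {
            perror("setup");
            return 1;
        }

        for (;;) {
            struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
            struct io_uring_cqe *cqe;

            io_uring_prep_readv(sqe, fd, &iov, 1, 0);
            io_uring_submit(&ring);
            io_uring_wait_cqe(&ring, &cqe);
            int res = cqe->res;
            io_uring_cqe_seen(&ring, cqe);

            if (res == -EAGAIN && retries-- > 0) {
                continue;   /* spurious error from the completion path: resubmit */
            }
            if (res < 0) {
                fprintf(stderr, "read failed: %s\n", strerror(-res));
            } else {
                printf("read %d bytes\n", res);
            }
            break;
        }

        io_uring_queue_exit(&ring);
        return 0;
    }
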
* block/nvme: Fix VFIO_MAP_DMA failed: No space left on device (Philippe Mathieu-Daudé, 2021-07-26; 1 file changed, +22/-0)

  When the NVMe block driver was introduced (see commit bdd6a90a9e5,
  January 2018), the Linux VFIO_IOMMU_MAP_DMA ioctl would only return
  -ENOMEM in case of error. The driver was correctly handling the error
  path to recycle its volatile IOVA mappings.

  To fix CVE-2019-3882, Linux commit 492855939bdb ("vfio/type1: Limit
  DMA mappings per container", April 2019) added the -ENOSPC error to
  signal that the user exhausted the DMA mappings available for a
  container.

  The block driver started to misbehave:

    qemu-system-x86_64: VFIO_MAP_DMA failed: No space left on device
    (qemu)
    (qemu) info status
    VM status: paused (io-error)
    (qemu) c
    VFIO_MAP_DMA failed: No space left on device
    (qemu) c
    VFIO_MAP_DMA failed: No space left on device

  (The VM is not resumable from here, hence stuck.)

  Fix by handling the new -ENOSPC error (when DMA mappings are
  exhausted) without any distinction from the current -ENOMEM error, so
  we don't change the behavior on old kernels where the CVE-2019-3882
  fix is not present.

  An easy way to reproduce this bug is to restrict the DMA mapping limit
  (65535 by default) when loading the VFIO IOMMU module:

    # modprobe vfio_iommu_type1 dma_entry_limit=666

  Cc: qemu-stable@nongnu.org
  Cc: Fam Zheng <fam@euphon.net>
  Cc: Maxim Levitsky <mlevitsk@redhat.com>
  Cc: Alex Williamson <alex.williamson@redhat.com>
  Reported-by: Michal Prívozník <mprivozn@redhat.com>
  Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
  Message-id: 20210723195843.1032825-1-philmd@redhat.com
  Fixes: bdd6a90a9e5 ("block: Add VFIO based NVMe driver")
  Buglink: https://bugs.launchpad.net/qemu/+bug/1863333
  Resolves: https://gitlab.com/qemu-project/qemu/-/issues/65
  Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

* iotests: Improve and rename test 291 to qemu-img-bitmap (Eric Blake, 2021-07-21; 1 file changed, +1/-1)

  Enhance the test to demonstrate the existing less-than-stellar
  behavior of qemu-img with a qcow2 image containing an inconsistent
  bitmap: we don't diagnose the problem until after copying the entire
  image (a potentially long time), and when we do diagnose the failure,
  we still end up leaving an empty bitmap in the destination. This mess
  will be cleaned up in the next patch.

  While at it, rename the test now that we support useful iotest names,
  and fix a missing newline in the error message thus exposed.

  Signed-off-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20210709153951.2801666-2-eblake@redhat.com>
  Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Reviewed-by: Nir Soffer <nsoffer@redhat.com>

* linux-aio: limit the batch size using `aio-max-batch` parameter (Stefano Garzarella, 2021-07-21; 1 file changed, +8/-1)

  When there are multiple queues attached to the same AIO context, some
  requests may experience high latency, since in the worst case the AIO
  engine queue is only flushed when it is full (MAX_EVENTS) or there are
  no more queues plugged.

  Commit 2558cb8dd4 ("linux-aio: increasing MAX_EVENTS to a larger
  hardcoded value") changed MAX_EVENTS from 128 to 1024, to increase the
  number of in-flight requests. But this change also increased the
  potential maximum batch to 1024 elements.

  When there is a single queue attached to the AIO context, the issue is
  mitigated by laio_io_unplug(), which flushes the queue every time it
  is invoked, since there can be no other queues plugged.

  Let's use the new `aio-max-batch` IOThread parameter to mitigate this
  issue, limiting the number of requests in a batch.

  We also define a default value (32): this value was obtained by
  running some benchmarks and represents a good trade-off between the
  latency increase while a request is queued and the cost of the
  io_submit(2) system call.

  Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
  Message-id: 20210721094211.69853-4-sgarzare@redhat.com
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

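For reference, a sketch of how the parameter can be set on an IOThread object from the command line (the device and image names are illustrative):

    qemu-system-x86_64 \
        -object iothread,id=iothread0,aio-max-batch=32 \
        -blockdev driver=file,filename=disk.img,node-name=disk0,aio=native,cache.direct=on \
        -device virtio-blk-pci,drive=disk0,iothread=iothread0

Leaving the property unset lets the engine use its default batch size (the 32 described above for Linux AIO).
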
* block/export: Conditionally ignore set-context error (Max Reitz, 2021-07-20; 1 file changed, +4/-1)

  When invoking block-export-add with some iothread and
  fixed-iothread=false, and changing the node's iothread fails, the
  error is supposed to be ignored. However, it is still stored in *errp,
  which is wrong. If a second error occurs, the "*errp must be NULL"
  assertion in error_setv() fails:

    qemu-system-x86_64: ../util/error.c:59: error_setv: Assertion `*errp == NULL' failed.

  So if fixed-iothread=false, we should ignore the error by passing NULL
  to bdrv_try_set_aio_context().

  Fixes: f51d23c80af73c95e0ce703ad06a300f1b3d63ef
         ("block/export: add iothread and fixed-iothread options")
  Signed-off-by: Max Reitz <mreitz@redhat.com>
  Message-Id: <20210624083825.29224-2-mreitz@redhat.com>
  Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

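For context, a sketch of the QMP call in question; the identifiers are invented and the option set is simplified:

    {"execute": "block-export-add",
     "arguments": {"type": "nbd",
                   "id": "export0",
                   "node-name": "disk0",
                   "iothread": "iothread0",
                   "fixed-iothread": false,
                   "writable": true}}

With fixed-iothread=false, a failure to move disk0 into iothread0 is meant to be tolerated, which is why the error must not be left in *errp.
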
* block/vvfat: fix: drop backing (Vladimir Sementsov-Ogievskiy, 2021-07-20; 1 file changed, +4/-39)

  Most probably this fake backing child doesn't work anyway (see the
  notes about it in a8a4d15c1c34d). Still, since 25f78d9e2de528473d52,
  drivers are required to set .supports_backing if they want to call
  bdrv_set_backing_hd(), so now vvfat just doesn't work because of this
  check. Let's finally drop this fake backing file.

  Fixes: 25f78d9e2de528473d52acfcf7acdfb64e3453d4
  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Message-Id: <20210715124853.13335-1-vsementsov@virtuozzo.com>
  Tested-by: John Arbuckle <programmingkidx@gmail.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* replication: Remove workaround (Lukas Straub, 2021-07-20; 1 file changed, +1/-11)

  Remove the workaround introduced in commit
  6ecbc6c52672db5c13805735ca02784879ce8285 ("replication: Avoid
  blk_make_empty() on read-only child").

  It is not needed anymore since s->hidden_disk is guaranteed to be
  writable when secondary_do_checkpoint() runs, because
  replication_start(), _do_checkpoint() and _stop() are only called by
  the COLO migration code, and COLO migration activates all disks via
  bdrv_invalidate_cache_all() before it calls these functions.

  Signed-off-by: Lukas Straub <lukasstraub2@web.de>
  Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Message-Id: <d3acfad43879e9f376bffa7dd797ae74d0a7c81a.1626619393.git.lukasstraub2@web.de>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* replication: Properly attach children (Lukas Straub, 2021-07-20; 1 file changed, +27/-3)

  The replication driver needs access to the children block-nodes of its
  child so it can issue bdrv_make_empty() and bdrv_co_pwritev() to
  manage the replication. However, it does this by directly copying the
  BdrvChilds, which is wrong.

  Fix this by properly attaching the block-nodes with
  bdrv_attach_child() and requesting the required permissions.

  This ultimately fixes a potential crash in replication_co_writev(),
  because it may write to s->secondary_disk if it is in state
  BLOCK_REPLICATION_FAILOVER_FAILED, without requesting write
  permissions first. And now the workaround in
  secondary_do_checkpoint() can be removed.

  Signed-off-by: Lukas Straub <lukasstraub2@web.de>
  Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Message-Id: <5d0539d729afb8072d0d7cde977c5066285591b4.1626619393.git.lukasstraub2@web.de>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* replication: Reduce usage of s->hidden_disk and s->secondary_disk (Lukas Straub, 2021-07-20; 1 file changed, +28/-17)

  In preparation for the next patch, initialize s->hidden_disk and
  s->secondary_disk later and replace access to them with local
  variables in the places where they aren't initialized yet.

  Signed-off-by: Lukas Straub <lukasstraub2@web.de>
  Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Message-Id: <1eb9dc179267207d9c7eccaeb30761758e32e9ab.1626619393.git.lukasstraub2@web.de>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* replication: Remove s->active_disk (Lukas Straub, 2021-07-20; 1 file changed, +17/-17)

  s->active_disk is bs->file. Remove it and use local variables instead.

  Signed-off-by: Lukas Straub <lukasstraub2@web.de>
  Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Message-Id: <2534f867ea9be5b666dfce19744b7d4e2b96c976.1626619393.git.lukasstraub2@web.de>
  Reviewed-by: Zhang Chen <chen.zhang@intel.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* block/mirror: fix active mirror dead-lock in mirror_wait_on_conflicts (Vladimir Sementsov-Ogievskiy, 2021-07-20; 1 file changed, +12/-0)

  It's possible that requests start to wait for each other in
  mirror_wait_on_conflicts(). To avoid that, let's use the same
  technique as block/io.c does in
  bdrv_wait_serialising_requests_locked() /
  bdrv_find_conflicting_request(): don't wait on an intersecting request
  if it is already waiting for some other request.

  For details of the deadlock, look at the testIntersectingActiveIO()
  test case, which we are actually fixing now.

  Fixes: d06107ade0ce74dc39739bac80de84b51ec18546
  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Message-Id: <20210702211636.228981-4-vsementsov@virtuozzo.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* block/mirror: set .co for active-write MirrorOp objects (Vladimir Sementsov-Ogievskiy, 2021-07-20; 1 file changed, +1/-0)

  This field is unused, but it is very helpful for debugging.

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Message-Id: <20210702211636.228981-2-vsementsov@virtuozzo.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* blkdebug: protect rules and suspended_reqs with a lock (Emanuele Giuseppe Esposito, 2021-07-19; 1 file changed, +39/-10)

  First, categorize the structure fields to identify what needs to be
  protected and what doesn't: we essentially need to protect only .state
  and the three lists in BDRVBlkdebugState. Then add the lock and mark
  the functions accordingly.

  Co-developed-by: Paolo Bonzini <pbonzini@redhat.com>
  Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Message-Id: <20210614082931.24925-7-eesposit@redhat.com>
  Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Signed-off-by: Max Reitz <mreitz@redhat.com>

* block/blkdebug: remove new_state field and instead use a local variable (Emanuele Giuseppe Esposito, 2021-07-19; 1 file changed, +6/-7)

  There seems to be no benefit in using a field. Replace it with a local
  variable, and move the state update before the yields.

  The state update has to be done before the yields because, with a
  local variable, the updated state would otherwise not be visible to
  the other yields.

  Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Message-Id: <20210614082931.24925-6-eesposit@redhat.com>
  Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Signed-off-by: Max Reitz <mreitz@redhat.com>

* blkdebug: do not suspend in the middle of QLIST_FOREACH_SAFE (Emanuele Giuseppe Esposito, 2021-07-19; 1 file changed, +6/-1)

  That would be unsafe in case a rule other than the current one is
  removed while the coroutine has yielded. Keep FOREACH_SAFE because
  suspend_request() deletes the current rule.

  After this patch, *all* matching rules are deleted before suspending
  the coroutine, rather than just one. This doesn't affect the existing
  test cases.

  Use actions_count to see how many yields to issue.

  Co-developed-by: Paolo Bonzini <pbonzini@redhat.com>
  Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Message-Id: <20210614082931.24925-5-eesposit@redhat.com>
  Signed-off-by: Max Reitz <mreitz@redhat.com>

* blkdebug: track all actions (Emanuele Giuseppe Esposito, 2021-07-19; 1 file changed, +8/-9)

  Add a counter for each action that a rule can trigger. This is mainly
  used to keep track of how many coroutine_yield()s we need to perform
  after processing all rules in the list.

  Co-developed-by: Paolo Bonzini <pbonzini@redhat.com>
  Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Message-Id: <20210614082931.24925-4-eesposit@redhat.com>
  Signed-off-by: Max Reitz <mreitz@redhat.com>

* blkdebug: move post-resume handling to resume_req_by_tag (Emanuele Giuseppe Esposito, 2021-07-19; 1 file changed, +18/-13)

  We want to move qemu_coroutine_yield() after the loop on rules,
  because QLIST_FOREACH_SAFE is wrong if the rule list is modified while
  the coroutine has yielded. Therefore, move the suspended request to
  the heap and clean it up from the remove side.

  All that is left is for blkdebug_debug_event() to handle the yielding.

  Co-developed-by: Paolo Bonzini <pbonzini@redhat.com>
  Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Message-Id: <20210614082931.24925-3-eesposit@redhat.com>
  Signed-off-by: Max Reitz <mreitz@redhat.com>

* blkdebug: refactor removal of a suspended request (Emanuele Giuseppe Esposito, 2021-07-19; 1 file changed, +22/-11)

  Extract this to a separate function. Do not rely on FOREACH_SAFE,
  which is only "safe" if the *current* node is removed, not if another
  node is removed. Instead, just walk the entire list from the beginning
  when asked to resume all suspended requests with a given tag.

  Co-developed-by: Paolo Bonzini <pbonzini@redhat.com>
  Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20210614082931.24925-2-eesposit@redhat.com>
  Signed-off-by: Max Reitz <mreitz@redhat.com>

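The removal pattern described above, restarting from the head after every match, can be sketched outside QEMU with a plain singly linked list instead of the QLIST macros (all names are illustrative):

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct suspended_req {
        const char *tag;
        struct suspended_req *next;
    };

    static struct suspended_req *head;

    static void suspend(const char *tag)
    {
        struct suspended_req *r = malloc(sizeof(*r));
        r->tag = tag;
        r->next = head;
        head = r;
    }

    /* Resume (here: just remove) every request with a matching tag,
     * restarting the walk from the head after each removal so the
     * iteration stays valid even if the list changed underneath us. */
    static void resume_all_by_tag(const char *tag)
    {
        bool removed = true;

        while (removed) {
            removed = false;
            for (struct suspended_req **p = &head; *p; p = &(*p)->next) {
                if (strcmp((*p)->tag, tag) == 0) {
                    struct suspended_req *victim = *p;
                    *p = victim->next;                 /* unlink */
                    printf("resumed request tagged %s\n", victim->tag);
                    free(victim);
                    removed = true;
                    break;                             /* walk again from the head */
                }
            }
        }
    }

    int main(void)
    {
        suspend("A");
        suspend("B");
        suspend("A");
        resume_all_by_tag("A");   /* removes both "A" entries */
        resume_all_by_tag("B");
        return 0;
    }
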
* nbd: register yank function earlier (Lukas Straub, 2021-07-12; 1 file changed, +5/-3)

  Although unlikely, qemu might hang in nbd_send_request(). Allow
  recovery in this case by registering the yank function before calling
  it.

  Signed-off-by: Lukas Straub <lukasstraub2@web.de>
  Message-Id: <20210704000730.1befb596@gecko.fritz.box>
  Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Signed-off-by: Eric Blake <eblake@redhat.com>

* Merge remote-tracking branch 'remotes/bonzini-gitlab/tags/for-upstream' into staging (Peter Maydell, 2021-07-11; 2 files changed, +2/-1)

  * More SVM fixes (Lara)
  * Module annotation database (Gerd)
  * Memory leak fixes (myself)
  * Build fixes (myself)
  * --with-devices-* support (Alex)

  # gpg: Signature made Fri 09 Jul 2021 17:23:52 BST
  # gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
  # gpg: issuer "pbonzini@redhat.com"
  # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
  # gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
  # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
  # Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83

  * remotes/bonzini-gitlab/tags/for-upstream: (48 commits)
    meson: Use input/output for entitlements target
    configure: allow the selection of alternate config in the build
    configs: rename default-configs to configs and reorganise
    hw/arm: move CONFIG_V7M out of default-devices
    hw/arm: add dependency on OR_IRQ for XLNX_VERSAL
    meson: Introduce target-specific Kconfig
    meson: switch function tests from compilation to linking
    vl: fix leak of qdict_crumple return value
    target/i386: fix exceptions for MOV to DR
    target/i386: Added DR6 and DR7 consistency checks
    target/i386: Added MSRPM and IOPM size check
    monitor/tcg: move tcg hmp commands to accel/tcg, register them dynamically
    usb: build usb-host as module
    monitor/usb: register 'info usbhost' dynamically
    usb: drop usb_host_dev_is_scsi_storage hook
    monitor: allow register hmp commands
    accel: build tcg modular
    accel: add tcg module annotations
    accel: build qtest modular
    accel: add qtest module annotations
    ...

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

| * modules: add block module annotations (Gerd Hoffmann, 2021-07-09; 1 file changed, +1/-0)

  Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
  Reviewed-by: Jose R. Ziviani <jziviani@suse.de>
  Message-Id: <20210624103836.2382472-14-kraxel@redhat.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

| * meson: fix missing preprocessor symbols (Paolo Bonzini, 2021-07-09; 1 file changed, +1/-1)

  While most libraries do not need a CONFIG_* symbol because the "when:"
  clauses are enough, some do. Add them back, or stop using them if
  possible.

  In the case of libpmem, the statement to add the CONFIG_* symbol was
  still in configure, but could not be triggered because it checked for
  "no" instead of "disabled" (and it would be wrong anyway since the
  test for the library has not been done yet).

  Reported-by: Li Zhijian <lizhijian@cn.fujitsu.com>
  Fixes: 587d59d6cc ("configure, meson: convert virgl detection to meson", 2021-07-06)
  Fixes: 83ef16821a ("configure, meson: convert libdaxctl detection to meson", 2021-07-06)
  Fixes: e36e8c70f6 ("configure, meson: convert libpmem detection to meson", 2021-07-06)
  Fixes: 53c22b68e3 ("configure, meson: convert liburing detection to meson", 2021-07-06)
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* | Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging (Peter Maydell, 2021-07-10; 6 files changed, +611/-316)

  Block layer patches

  - Make blockdev-reopen stable
  - Remove deprecated qemu-img backing file without format
  - rbd: Convert to coroutines and add write zeroes support
  - rbd: Updated MAINTAINERS
  - export/fuse: Allow other users access to the export
  - vhost-user: Fix backends without multiqueue support
  - Fix drive-backup transaction endless drained section

  # gpg: Signature made Fri 09 Jul 2021 13:49:22 BST
  # gpg: using RSA key DC3DEB159A9AF95D3D7456FE7F09B272C88F2FD6
  # gpg: issuer "kwolf@redhat.com"
  # gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>" [full]
  # Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6

  * remotes/kevin/tags/for-upstream: (28 commits)
    block: Make blockdev-reopen stable API
    iotests: Test reopening multiple devices at the same time
    block: Support multiple reopening with x-blockdev-reopen
    block: Acquire AioContexts during bdrv_reopen_multiple()
    block: Add bdrv_reopen_queue_free()
    qcow2: Fix dangling pointer after reopen for 'file'
    qemu-img: Improve error for rebase without backing format
    qemu-img: Require -F with -b backing image
    qcow2: Prohibit backing file changes in 'qemu-img amend'
    blockdev: fix drive-backup transaction endless drained section
    vhost-user: Fix backends without multiqueue support
    MAINTAINERS: add block/rbd.c reviewer
    block/rbd: fix type of task->complete
    iotests/fuse-allow-other: Test allow-other
    iotests/308: Test +w on read-only FUSE exports
    export/fuse: Let permissions be adjustable
    export/fuse: Give SET_ATTR_SIZE its own branch
    export/fuse: Add allow-other option
    export/fuse: Pass default_permissions for mount
    util/uri: do not check argument of uri_free()
    ...

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

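For reference, a sketch of the now-stable blockdev-reopen call mentioned in this pull request; the node names are invented, and the list-of-options form reflects how the command accepts several nodes at once:

    {"execute": "blockdev-reopen",
     "arguments": {"options": [
         {"driver": "qcow2",
          "node-name": "disk0",
          "file": "file0",
          "read-only": false}]}}
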
| * block: Acquire AioContexts during bdrv_reopen_multiple() (Kevin Wolf, 2021-07-09; 1 file changed, +7/-0)

  As the BlockReopenQueue can contain nodes in multiple AioContexts,
  only one of which may be locked when AIO_WAIT_WHILE() can be called,
  we can't let the caller lock the right contexts. Instead, individually
  lock the AioContext of a single node when iterating the queue.

  Reintroduce bdrv_reopen() as a wrapper for reopening a single node
  that drains the node and temporarily drops the AioContext lock for
  bdrv_reopen_multiple().

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Message-Id: <20210708114709.206487-4-kwolf@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

| * qcow2: Fix dangling pointer after reopen for 'file' (Kevin Wolf, 2021-07-09; 1 file changed, +29/-0)

  Without an external data file, s->data_file is a second pointer with
  the same value as bs->file. When changing bs->file to a different
  BdrvChild and freeing the old BdrvChild, s->data_file must also be
  updated, otherwise it points to freed memory and causes crashes.

  This problem was caught by iotests case 245.

  Fixes: df2b7086f169239ebad5d150efa29c9bb6d4f820
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Message-Id: <20210708114709.206487-2-kwolf@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>