author     Kevin Wolf  2020-07-07 16:46:29 +0200
committer  Kevin Wolf  2020-07-14 15:18:59 +0200
commit     d0ceea88dea053e0c1c038d42ca98782c2e3872d
tree       64388483bf4ed6fb56a4eee157092efb2007e56c  /qemu-img.c
parent     iotests: Simplify _filter_img_create() a bit
qemu-img map: Don't limit block status request size
Limiting each loop iteration of qemu-img map to 1 GB was arbitrary from
the beginning, though at the time it only cut the maximum in half
because the interface used a signed 32-bit byte count. These days,
bdrv_block_status() supports a 64-bit byte count, so the arbitrary limit
is even worse.
On file-posix, bdrv_block_status() eventually maps to SEEK_HOLE and
SEEK_DATA, which don't support a limit, but always do all of the work
necessary to find the start of the next hole/data. Much of this work is
repeated if the caller doesn't use this information fully, but instead
queries again with an only slightly larger offset in the next loop
iteration. Therefore, if bdrv_block_status() is called in a loop, it
should always pass the full number of bytes that the whole loop is
interested in.
This removes the arbitrary limit and speeds up 'qemu-img map'
significantly on heavily fragmented images.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20200707144629.51235-1-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Diffstat (limited to 'qemu-img.c')
-rw-r--r--  qemu-img.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/qemu-img.c b/qemu-img.c
index 498fbf42fe..4548dbff82 100644
--- a/qemu-img.c
+++ b/qemu-img.c
@@ -3210,12 +3210,9 @@ static int img_map(int argc, char **argv)
     curr.start = start_offset;
     while (curr.start + curr.length < length) {
         int64_t offset = curr.start + curr.length;
-        int64_t n;
+        int64_t n = length - offset;
 
-        /* Probe up to 1 GiB at a time. */
-        n = MIN(1 * GiB, length - offset);
         ret = get_block_status(bs, offset, n, &next);
-
         if (ret < 0) {
             error_report("Could not read file metadata: %s", strerror(-ret));
             goto out;