author     Vladimir Sementsov-Ogievskiy   2020-12-11 19:39:24 +0100
committer  Eric Blake                     2021-02-03 15:14:00 +0100
commit     801625e69d947b0f9291d77bb1deb7545525c66d (patch)
tree       ba4a16aaf67f7e295e367ebbfbeae375451e304c /include
parent     block/io: bdrv_pad_request(): support qemu_iovec_init_extended failure (diff)
block/throttle-groups: throttle_group_co_io_limits_intercept(): 64bit bytes
The function is called from 64bit io handlers, and bytes is just passed
to throttle_account(), which is 64bit too (unsigned, though). So, let's
convert the intermediate argument to 64bit too.

This patch is the first in the 64-bit-blocklayer series: we are generally
moving to int64_t for both offset and bytes parameters on all io paths.
The main motivation is the realization of a 64-bit write_zeroes operation
for fast zeroing of large disk chunks, up to the whole disk.

We chose a signed type, to be consistent with off_t (which is signed) and
with the possibility of a signed return type (where a negative value
means error).

Patch-correctness audit by Eric Blake:

Caller has 32-bit, this patch now causes widening which is safe:
  block/block-backend.c: blk_do_preadv() passes 'unsigned int'
  block/block-backend.c: blk_do_pwritev_part() passes 'unsigned int'
  block/throttle.c: throttle_co_pwrite_zeroes() passes 'int'
  block/throttle.c: throttle_co_pdiscard() passes 'int'

Caller has 64-bit, this patch fixes a potential bug where pre-patch could
narrow, except it's easy enough to trace that callers are still capped at
2G actions:
  block/throttle.c: throttle_co_preadv() passes 'uint64_t'
  block/throttle.c: throttle_co_pwritev() passes 'uint64_t'

Implementation in question:
  block/throttle-groups.c throttle_group_co_io_limits_intercept() takes
  'unsigned int bytes' and uses it as:
    argument to util/throttle.c throttle_account(uint64_t)

All safe: it patches a latent bug, and does not introduce any 64-bit
gotchas once throttle_co_p{read,write}v are relaxed, assuming
throttle_account() is not buggy.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-Id: <20201211183934.169161-7-vsementsov@virtuozzo.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
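As a rough illustration of the narrowing this change guards against (a
standalone sketch, not QEMU code; intercept_old() and intercept_new() are
hypothetical stand-ins for the pre- and post-patch prototypes):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Pre-patch shape: the byte count is narrowed to 32 bits on the way in. */
static void intercept_old(unsigned int bytes)
{
    printf("old prototype sees %u bytes\n", bytes);
}

/* Post-patch shape: the byte count is carried as int64_t, no truncation. */
static void intercept_new(int64_t bytes)
{
    printf("new prototype sees %" PRId64 " bytes\n", bytes);
}

int main(void)
{
    /* A 5 GiB request, as a 64-bit caller such as throttle_co_preadv()
     * (which passes 'uint64_t') could issue once the block layer is
     * fully converted. */
    uint64_t bytes = 5ULL * 1024 * 1024 * 1024;

    intercept_old(bytes);   /* silently reduced modulo 2^32: 1 GiB accounted */
    intercept_new(bytes);   /* widened losslessly: full 5 GiB accounted */
    return 0;
}

Today's callers stay under 2 GiB, so the old prototype is only a latent
bug; the wider parameter removes it before throttle_co_p{read,write}v are
relaxed.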
Diffstat (limited to 'include')
 include/block/throttle-groups.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/block/throttle-groups.h b/include/block/throttle-groups.h
index 8bf7d233fa..9541b32432 100644
--- a/include/block/throttle-groups.h
+++ b/include/block/throttle-groups.h
@@ -77,7 +77,7 @@ void throttle_group_unregister_tgm(ThrottleGroupMember *tgm);
 void throttle_group_restart_tgm(ThrottleGroupMember *tgm);
 void coroutine_fn throttle_group_co_io_limits_intercept(ThrottleGroupMember *tgm,
-                                                         unsigned int bytes,
+                                                         int64_t bytes,
                                                          bool is_write);
 void throttle_group_attach_aio_context(ThrottleGroupMember *tgm,
                                        AioContext *new_context);