| field | value | date |
|---|---|---|
| author | Vladimir Sementsov-Ogievskiy | 2019-08-06 17:26:11 +0200 |
| committer | John Snow | 2019-10-17 23:02:32 +0200 |
| commit | 48557b138383aaf69c2617ca9a88bfb394fc50ec | |
| tree | 7344a7f905f1fd6d10ae34efe8e43335032f519b /include | |
| parent | Merge remote-tracking branch 'remotes/rth/tags/pull-tcg-20191013' into staging | |
util/hbitmap: strict hbitmap_reset
hbitmap_reset has an unobvious property: it rounds the requested region up.
This can provoke bugs, as in the recently fixed write-blocking mode of
mirror: the user calls reset on an unaligned region without keeping in mind
that the rounded-up region may cover unrelated dirty bytes, so the
information about this unrelated "dirtiness" is lost.
Make hbitmap_reset strict: assert that the arguments are aligned, allowing
only one exception, when @start + @count == hb->orig_size. This exception
is needed to accommodate users of hbitmap_next_dirty_area, which cares
about hb->orig_size.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20190806152611.280389-1-vsementsov@virtuozzo.com>
[Maintainer edit: Max's suggestions from on-list. --js]
[Maintainer edit: Eric's suggestion for aligned macro. --js]
Signed-off-by: John Snow <jsnow@redhat.com>
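To make the new contract concrete, here is a minimal C sketch of the argument check it describes. This is not the patch's actual assertion code; the helper name `reset_args_valid` and its parameters `gran` (bitmap granularity in bits) and `orig_size` (standing in for hb->orig_size) are illustrative only.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: the argument check implied by the commit message. */
static bool reset_args_valid(uint64_t start, uint64_t count,
                             uint64_t gran, uint64_t orig_size)
{
    /* @start must always be aligned to the granularity. */
    if (start % gran != 0) {
        return false;
    }
    /* @count must be aligned too, except when the region runs exactly
     * to the end of the bitmap (@start + @count == orig_size). */
    return count % gran == 0 || start + count == orig_size;
}
```

In words: @start must always be aligned, and @count must be aligned unless the region runs exactly to the end of the bitmap.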
Diffstat (limited to 'include')
| mode | path | lines changed |
|---|---|---|
| -rw-r--r-- | include/qemu/hbitmap.h | 5 |
1 file changed, 5 insertions, 0 deletions
```diff
diff --git a/include/qemu/hbitmap.h b/include/qemu/hbitmap.h
index 4afbe6292e..1bf944ca3d 100644
--- a/include/qemu/hbitmap.h
+++ b/include/qemu/hbitmap.h
@@ -132,6 +132,11 @@ void hbitmap_set(HBitmap *hb, uint64_t start, uint64_t count);
  * @count: Number of bits to reset.
  *
  * Reset a consecutive range of bits in an HBitmap.
+ * @start and @count must be aligned to bitmap granularity. The only exception
+ * is resetting the tail of the bitmap: @count may be equal to hb->orig_size -
+ * @start, in this case @count may be not aligned. The sum of @start + @count is
+ * allowed to be greater than hb->orig_size, but only if @start < hb->orig_size
+ * and @start + @count = ALIGN_UP(hb->orig_size, granularity).
  */
 void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t count);
 
```
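For callers, a usage sketch under the documented contract. It assumes hbitmap_alloc(), hbitmap_set(), hbitmap_granularity(), hbitmap_free() and the QEMU_ALIGN_DOWN() macro behave as in the QEMU tree of this period; the sizes and the function name `reset_usage_sketch` are made up for illustration.

```c
#include "qemu/osdep.h"
#include "qemu/hbitmap.h"

static void reset_usage_sketch(void)
{
    uint64_t orig_size = 4096 + 100;            /* deliberately unaligned size */
    HBitmap *hb = hbitmap_alloc(orig_size, 6);  /* granularity: 2^6 = 64 bits */
    uint64_t gran = 1ULL << hbitmap_granularity(hb);
    uint64_t tail = QEMU_ALIGN_DOWN(orig_size, gran);

    hbitmap_set(hb, 0, orig_size);

    /* OK: @start and @count are both multiples of the granularity. */
    hbitmap_reset(hb, 0, 2 * gran);

    /* OK: the tail exception -- @start is aligned and
     * @count == hb->orig_size - @start, even though @count is unaligned. */
    hbitmap_reset(hb, tail, orig_size - tail);

    /* Would now trip the new assertion: unaligned @start. */
    /* hbitmap_reset(hb, 1, gran); */

    hbitmap_free(hb);
}
```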
