path: root/block/qcow2-snapshot.c
Commit message (author, date; files changed, lines removed/added)
* migration: introduce icount field for snapshots (Pavel Dovgalyuk, 2020-10-06; 1 file, -0/+2)

Saving icount as a parameter of the snapshot allows navigating between snapshots in the execution replay scenario. This information can be used to find a specific snapshot for proceeding the recorded execution to a specific moment in time. E.g., the 'reverse step' action (introduced in one of the following patches) needs to load the nearest snapshot which is prior to the current moment of time.

This patch also updates the snapshot test which verifies qemu monitor output.

Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
Acked-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Kevin Wolf <kwolf@redhat.com>

--
v4 changes:
 - squashed format update with test output update
v7 changes:
 - introduced the spaces between the fields in snapshot info output
 - updated the test to match new field widths

Message-Id: <160174518865.12451.14327573383978752463.stgit@pasha-ThinkPad-X280>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* qcow2: introduce icount field for snapshots (Pavel Dovgalyuk, 2020-10-06; 1 file, -0/+7)

This patch introduces the icount field for saving within the snapshot. It is required for navigation between the snapshots in record/replay mode.

Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
Acked-by: Kevin Wolf <kwolf@redhat.com>

--
v7 changes:
 - also fix the test which checks qcow2 snapshot extra data

Message-Id: <160174518284.12451.2301137308458777398.stgit@pasha-ThinkPad-X280>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* qcow2: Use macros for the L1, refcount and bitmap table entry sizes (Alberto Garcia, 2020-09-15; 1 file, -10/+10)

This patch replaces instances of sizeof(uint64_t) in the qcow2 driver with macros that indicate what those sizes are actually referring to.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-Id: <20200828110828.13833-1-berto@igalia.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>

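A minimal sketch of what the replacement looks like; the macro names here are illustrative assumptions, not necessarily the ones the patch introduces:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical names: one named constant per table type instead of a bare sizeof. */
    #define L1E_SIZE            (sizeof(uint64_t))   /* L1 table entry       */
    #define REFTABLE_ENTRY_SIZE (sizeof(uint64_t))   /* refcount table entry */

    /* A size computation then says what it is measuring: */
    static size_t l1_table_bytes(size_t n_entries)
    {
        return n_entries * L1E_SIZE;   /* instead of n_entries * sizeof(uint64_t) */
    }
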
* qcow2: Allow resize of images with internal snapshots (Eric Blake, 2020-05-05; 1 file, -4/+16)

We originally refused to allow resize of images with internal snapshots because the v2 image format did not require the tracking of snapshot size, making it impossible to safely revert to a snapshot with a different size than the current view of the image. But the snapshot size tracking was rectified in v3, and our recent fixes to qemu-img amend (see 0a85af35) guarantee that we always have a valid snapshot size. Thus, we no longer need to artificially limit image resizes, but it does become one more thing that would prevent a downgrade back to v2. And now that we support different-sized snapshots, it's also easy to fix reverting to a snapshot to apply the new size.

Upgrade iotest 61 to cover this (we previously had NO coverage of refusal to resize while snapshots exist).

Note that the amend process can fail but still have effects: in particular, since we break things into upgrade, resize, downgrade, a failure during resize does not roll back changes made during upgrade, nor does failure in downgrade roll back a resize. But this situation is pre-existing even without this patch; and without journaling, the best we could do is minimize the chance of partial failure by collecting all changes prior to doing any writes - which adds a lot of complexity but could still fail with EIO. On the other hand, we are careful that even if we have partial modification but then fail, the image is left viable (that is, we are careful to sequence things so that after each successful cluster write, there may be transient leaked clusters but no corrupt metadata). And complicating the code to make it more transaction-like is not worth the effort: a user can always request multiple 'qemu-img amend' changing one thing each, if they need finer-grained control over detecting the first failure than what they get by letting qemu decide how to sequence multiple changes.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200428192648.749066-3-eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>

* qcow2: Don't round the L1 table allocation up to the sector size (Alberto Garcia, 2020-02-06; 1 file, -2/+1)

The L1 table is read from disk using the byte-based bdrv_pread() and is never accessed beyond its last element, so there's no need to allocate more memory than that.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: b2e27214ec7b03a585931bcf383ee1ac3a641a10.1579374329.git.berto@igalia.com
Signed-off-by: Max Reitz <mreitz@redhat.com>

* qcow2: Fix v3 snapshot table entry compliancy (Max Reitz, 2019-10-28; 1 file, -0/+18)

qcow2 v3 images require every snapshot table entry to have at least 16 bytes of extra data. If they do not, let qemu-img check -r all fix it.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-15-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>

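For context, the 16 bytes correspond to the two extra-data fields the v3 spec requires per entry (64-bit VM state size and 64-bit virtual disk size). A sketch of the compliancy test under that assumption, with hypothetical names:

    #include <stdbool.h>
    #include <stdint.h>

    /* v3 snapshot table entries must carry at least the 8-byte VM state size
     * and the 8-byte virtual disk size as extra data. */
    #define QCOW2_V3_MIN_SNAPSHOT_EXTRA_DATA 16

    static bool snapshot_extra_data_compliant(int qcow_version, uint32_t extra_data_size)
    {
        return qcow_version < 3 ||
               extra_data_size >= QCOW2_V3_MIN_SNAPSHOT_EXTRA_DATA;
    }
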
* qcow2: Repair snapshot table with too many entries (Max Reitz, 2019-10-28; 1 file, -0/+14)

The user cannot choose which snapshots are removed. This is fine because we have chosen the maximum snapshot table size to be so large (65536 entries) that it cannot be reasonably reached. If the snapshot table exceeds this size, the image has probably been corrupted in some way; in this case, it is most important to just make the image usable such that the user can copy off at least the active layer.

(Also note that the snapshots will be removed only with "-r all", so a plain "check" or "check -r leaks" will not delete any data.)

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-14-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>

* qcow2: Fix overly long snapshot tables (Max Reitz, 2019-10-28; 1 file, -10/+78)

We currently refuse to open qcow2 images with overly long snapshot tables. This patch makes qemu-img check -r all drop all offending entries past what we deem acceptable.

The user cannot choose which snapshots are removed. This is fine because we have chosen the maximum snapshot table size to be so large (64 MB) that it cannot be reasonably reached. If the snapshot table exceeds this size, the image has probably been corrupted in some way; in this case, it is most important to just make the image usable such that the user can copy off at least the active layer.

(Also note that the snapshots will be removed only with "-r all", so a plain "check" or "check -r leaks" will not delete any data.)

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-13-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>

* qcow2: Keep track of the snapshot table length (Max Reitz, 2019-10-28; 1 file, -1/+13)

When repairing the snapshot table, we truncate entries that have too much extra data. This frees up space that we do not have to count towards the snapshot table size.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-12-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>

* qcow2: Fix broken snapshot table entries (Max Reitz, 2019-10-28; 1 file, -11/+56)

The only case where we currently reject snapshot table entries is when they have too much extra data. Fix them with qemu-img check -r all by counting it as a corruption, reducing their extra_data_size, and then letting qcow2_check_fix_snapshot_table() do the rest.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-11-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>

* qcow2: Add qcow2_check_fix_snapshot_table() (Max Reitz, 2019-10-28; 1 file, -0/+25)

qcow2_check_read_snapshot_table() can perform consistency checks, but it cannot fix everything. Specifically, it cannot allocate new clusters, because that should wait until the refcount structures are known to be consistent (i.e., after qcow2_check_refcounts()). Thus, it cannot call qcow2_write_snapshots().

Do that in qcow2_check_fix_snapshot_table(), which is called after qcow2_check_refcounts().

Currently, there is nothing that would set result->corruptions, so this is a no-op. A follow-up patch will change that.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-10-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>

* qcow2: Separate qcow2_check_read_snapshot_table() (Max Reitz, 2019-10-28; 1 file, -0/+58)

Reading the snapshot table can fail. That is a problem when we want to repair the image.

Therefore, stop reading the snapshot table in qcow2_do_open() in check mode. Instead, add a new function qcow2_check_read_snapshot_table() that reads the snapshot table at a later point. In the future, we want to handle errors here and fix them.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-9-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>

* qcow2: Make qcow2_write_snapshots() public (Max Reitz, 2019-10-28; 1 file, -1/+1)

Updating the snapshot list will be useful when upgrading a v2 image to v3, so we will need to call this function in qcow2.c.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-6-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>

* qcow2: Keep unknown extra snapshot data (Max Reitz, 2019-10-28; 1 file, -10/+51)

The qcow2 specification says to ignore unknown extra data fields in snapshot table entries. Currently, we discard it whenever we update the image, which is a bit different from "ignore".

This patch makes the qcow2 driver keep all unknown extra data fields when updating an image's snapshot table.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-5-mreitz@redhat.com
[mreitz: Adjusted comments as proposed by Eric]
Signed-off-by: Max Reitz <mreitz@redhat.com>

* qcow2: Add Error ** to qcow2_read_snapshots() (Max Reitz, 2019-10-28; 1 file, -1/+6)

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-4-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>

* qcow2: Use endof() (Max Reitz, 2019-10-28; 1 file, -3/+4)

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-3-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>

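endof() yields the offset just past a struct field, which is handy when reading or writing a struct prefix up to and including a given member. A stand-alone sketch of the idea, not copied from QEMU's headers:

    #include <stddef.h>
    #include <stdint.h>

    /* Offset of the first byte past 'field' within 'container'. */
    #define endof_sketch(container, field) \
        (offsetof(container, field) + sizeof(((container *)0)->field))

    struct header_example {            /* hypothetical layout for illustration */
        uint64_t l1_table_offset;
        uint32_t l1_size;
        uint16_t id_str_size;
    };

    /* Bytes covering everything up to and including id_str_size: */
    enum { PREFIX_BYTES = endof_sketch(struct header_example, id_str_size) };
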
* qcow2.h: add missing include (Vladimir Sementsov-Ogievskiy, 2019-05-28; 1 file, -1/+0)

qcow2.h depends on block_int.h. Compilation isn't broken currently only because block_int.h always happens to be included before qcow2.h. Still, it seems better to directly include block_int.h in qcow2.h.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190506142741.41731-2-vsementsov@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>

* qcow2: Return error for snapshot operation with data file (Kevin Wolf, 2019-03-08; 1 file, -0/+15)

Internal snapshots and an external data file are incompatible because snapshots require refcounting and non-linear mapping. Return an error for all of the snapshot operations if an external data file is in use.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* qcow2: External file I/O (Kevin Wolf, 2019-03-08; 1 file, -3/+4)

This changes the qcow2 implementation to direct all guest data I/O to s->data_file rather than bs->file, while metadata I/O still uses bs->file. At the moment, this is still always the same, but soon we'll add options to set s->data_file to an external data file.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* qcow2-snapshot: remove redundant find_snapshot_by_id_and_name call (Daniel Henrique Barboza, 2019-02-25; 1 file, -5/+0)

In qcow2_snapshot_create there is the following code block:

    /* Generate an ID */
    find_new_snapshot_id(bs, sn_info->id_str, sizeof(sn_info->id_str));

    /* Check that the ID is unique */
    if (find_snapshot_by_id_and_name(bs, sn_info->id_str, NULL) >= 0) {
        return -EEXIST;
    }

find_new_snapshot_id cycles through all snapshots, reading each id_str as an unsigned long int, calculating the maximum value id_max of all the existing id_strs and writing id_max + 1 into the id_str pointer:

    for(i = 0; i < s->nb_snapshots; i++) {
        sn = s->snapshots + i;
        id = strtoul(sn->id_str, NULL, 10);
        if (id > id_max)
            id_max = id;
    }
    snprintf(id_str, id_str_size, "%lu", id_max + 1);

Here, sn_info->id_str will have the unique value id_max + 1. Right after that, find_snapshot_by_id_and_name is called with id = sn_info->id_str and name = NULL. This will cause the function to execute the following:

    } else if (id) {
        for (i = 0; i < s->nb_snapshots; i++) {
            if (!strcmp(s->snapshots[i].id_str, id)) {
                return i;
            }
        }
    }

In short, we're searching the existing snapshots to see if sn_info->id_str matches any existing id, right after the previous line has set sn_info->id_str to a value that is already unique.

The first code block goes way back to commit 585f8587ad, a 2006 commit from Fabrice Bellard that simply says "new qcow2 disk image format". No more info is provided about this logic in any subsequent commits that moved this code block around. I can't speak for the original design, but the current logic is redundant: bdrv_snapshot_create is called under the aio_context lock, forbidding any concurrent call from accidentally creating a new snapshot between the find_new_snapshot_id and find_snapshot_by_id_and_name calls. What we end up doing is cycling through the snapshots twice for no viable reason.

This patch eliminates the redundancy by removing the 'id is unique' check that calls find_snapshot_by_id_and_name.

Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* block: use local path for local headers (Michael S. Tsirkin, 2018-05-31; 1 file, -1/+1)

When pulling in headers that are in the same directory as the C file (as opposed to ones in include/), we should use a relative path, without a directory prefix.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* qcow2: Check snapshot L1 table in qcow2_snapshot_delete() (Alberto Garcia, 2018-03-09; 1 file, -0/+7)

This function deletes a snapshot from disk, removing its entry from the snapshot table, freeing its L1 table and decreasing the refcounts of all clusters.

The L1 table offset and size are however not validated. If we use invalid values in this function we'll probably corrupt the image even more, so we should return an error instead. We now have a function to take care of this, so let's use it.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* qcow2: Check snapshot L1 table in qcow2_snapshot_goto() (Alberto Garcia, 2018-03-09; 1 file, -0/+9)

This function copies a snapshot's L1 table into the active one without validating it first. We now have a function to take care of this, so let's use it.

Cc: Eric Blake <eblake@redhat.com>
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* qcow2: Check L1 table offset in qcow2_snapshot_load_tmp() (Alberto Garcia, 2018-03-09; 1 file, -3/+5)

This function checks that the size of a snapshot's L1 table is not too large, but it doesn't validate the offset. We now have a function to take care of this, so let's use it.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* qcow2: Replace align_offset() with ROUND_UP() (Alberto Garcia, 2018-03-02; 1 file, -5/+5)

The align_offset() function is equivalent to the ROUND_UP() macro, so there's no need to use the former. The ROUND_UP() name is also a bit more explicit.

This patch uses ROUND_UP() instead of the slower QEMU_ALIGN_UP() because align_offset() already requires that the second parameter is a power of two.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180215131008.5153-1-berto@igalia.com
Signed-off-by: Max Reitz <mreitz@redhat.com>

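The power-of-two rounding that both helpers perform can be sketched generically as follows (an illustration, not QEMU's exact macro definition):

    #include <assert.h>
    #include <stdint.h>

    /* Round n up to the next multiple of align, where align is a power of two;
     * the bitmask form avoids the division a generic alignment helper needs. */
    static uint64_t round_up_pow2(uint64_t n, uint64_t align)
    {
        assert(align != 0 && (align & (align - 1)) == 0);
        return (n + align - 1) & ~(align - 1);
    }
    /* e.g. round_up_pow2(5, 8) == 8, round_up_pow2(16, 8) == 16 */
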
* qcow2: Discard/zero clusters by byte count (Eric Blake, 2017-05-11; 1 file, -4/+3)

Passing a byte offset, but sector count, when we ultimately want to operate on cluster granularity, is madness. Clean up the external interfaces to take both offset and count as bytes, while still keeping the assertion added previously that the caller must align the values to a cluster. Then rename things to make sure backports don't get confused by changed units: instead of qcow2_discard_clusters() and qcow2_zero_clusters(), we now have qcow2_cluster_discard() and qcow2_cluster_zeroize().

The internal functions still operate on clusters at a time, and return an int for the number of cleared clusters; but on an image with 2M clusters, a single L2 table holds 256k entries that each represent a 2M cluster, totalling well over INT_MAX bytes if we ever had a request for that many bytes at once. All our callers currently limit themselves to 32-bit bytes (and therefore fewer clusters), but by making this function 64-bit clean, we have one less place to clean up if we later improve the block layer to support 64-bit bytes through all operations (with the block layer auto-fragmenting on behalf of more-limited drivers), rather than the current state where some interfaces are artificially limited to INT_MAX at a time.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20170507000552.20847-13-eblake@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>

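As a generic sketch of the byte-based convention described above (a hypothetical helper, not the actual QEMU interface):

    #include <assert.h>
    #include <stdint.h>

    /* Byte-based entry point: offset and bytes must be cluster-aligned,
     * while the body still walks whole clusters internally. */
    static int cluster_discard_sketch(uint64_t offset, uint64_t bytes, uint64_t cluster_size)
    {
        assert(offset % cluster_size == 0);
        assert(bytes % cluster_size == 0);

        uint64_t nb_clusters = bytes / cluster_size;
        /* ... discard nb_clusters clusters starting at offset ... */
        (void)nb_clusters;
        (void)offset;
        return 0;
    }
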
* block: Convert bdrv_pwrite(v/_sync) to BdrvChild (Kevin Wolf, 2016-07-05; 1 file, -7/+7)

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>

* block: Convert bdrv_pread(v) to BdrvChild (Kevin Wolf, 2016-07-05; 1 file, -6/+6)

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>

* qemu-common: stop including qemu/bswap.h from qemu-common.h (Paolo Bonzini, 2016-05-19; 1 file, -0/+1)

Move it to the actual users. There are still a few includes of qemu/bswap.h in headers; removing them is left for future work.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* util: move declarations out of qemu-common.h (Veronia Bahaa, 2016-03-22; 1 file, -1/+1)

Move declarations out of qemu-common.h for functions declared in utils/ files: e.g. include/qemu/path.h for utils/path.c. Move inline functions out of qemu-common.h and into new files (e.g. include/qemu/bcd.h).

Signed-off-by: Veronia Bahaa <veroniabahaa@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* include/qemu/osdep.h: Don't include qapi/error.h (Markus Armbruster, 2016-03-22; 1 file, -0/+1)

Commit 57cb38b included qapi/error.h into qemu/osdep.h to get the Error typedef. Since then, we've moved to include qemu/osdep.h everywhere. Its file comment explains: "To avoid getting into possible circular include dependencies, this file should not include any other QEMU headers, with the exceptions of config-host.h, compiler.h, os-posix.h and os-win32.h, all of which are doing a similar job to this file and are under similar constraints."

qapi/error.h doesn't do a similar job, and it doesn't adhere to similar constraints: it includes qapi-types.h. That's in excess of 100KiB of crap most .c files don't actually need.

Add the typedef to qemu/typedefs.h, and include that instead of qapi/error.h. Include qapi/error.h in .c files that need it and don't get it now. Include qapi-types.h in qom/object.h for uint16List.

Update scripts/clean-includes accordingly. Update it further to match reality: replace config.h by config-target.h, add sysemu/os-posix.h, sysemu/os-win32.h. Update the list of includes in the qemu/osdep.h comment quoted above similarly.

This reduces the number of objects depending on qapi/error.h from "all of them" to less than a third. Unfortunately, the number depending on qapi-types.h shrinks only a little. More work is needed for that one.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
[Fix compilation without the spice devel packages. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* block: Clean up includes (Peter Maydell, 2016-01-20; 1 file, -0/+1)

Clean up includes so that osdep.h is included first and headers which it implies are not included manually.

This commit was created with scripts/clean-includes.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* block: Convert bs->file to BdrvChild (Kevin Wolf, 2015-10-16; 1 file, -14/+16)

This patch removes the temporary duplication between bs->file and bs->file_child by converting everything to BdrvChild.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>

* qcow2: Rename BDRVQcowState to BDRVQcow2State (Kevin Wolf, 2015-09-14; 1 file, -10/+10)

BDRVQcowState is already used by qcow1, and gdb is always confused about which one to use. Rename the qcow2 one so they can be distinguished.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>

* qerror: Move #include out of qerror.h (Markus Armbruster, 2015-06-22; 1 file, -0/+1)

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Luiz Capitulino <lcapitulino@redhat.com>

* savevm: create snapshot failed when id_str already exists (Yi Wang, 2015-04-28; 1 file, -4/+2)

The command "virsh create" will fail in the following condition: the VM has two disks, vda and vdb. vda has snapshot s1 with id "1"; vdb doesn't have s1 but has snapshot s2 with id "1". When we want to run the command "virsh create s1", del_existing_snapshots() only deletes s1 in vda, and bdrv_snapshot_create() tries to create vdb's snapshot s1 with id "1", but id "1" already exists in vdb under the name "s2"!

The simplest fix is to call find_new_snapshot_id() unconditionally.

Signed-off-by: Yi Wang <up2wing@gmail.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* qcow2: fix the macro QCOW_MAX_L1_SIZE's use (Wen Congyang, 2015-03-12; 1 file, -1/+1)

QCOW_MAX_L1_SIZE's unit is bytes, while l1_size's unit is L1 table entries (8 bytes each).

Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Message-id: 54FFB0F1.5010307@cn.fujitsu.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

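The unit mismatch being fixed can be illustrated roughly like this (a sketch; the limit value and names are illustrative, not necessarily QEMU's):

    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_L1_BYTES_SKETCH (32 * 1024 * 1024)   /* limit expressed in bytes */

    /* l1_size counts 8-byte entries, so convert the byte limit to entries
     * (or the entry count to bytes) before comparing. */
    static bool l1_size_within_limit(uint64_t l1_size_entries)
    {
        return l1_size_entries <= MAX_L1_BYTES_SKETCH / sizeof(uint64_t);
        /* comparing l1_size_entries directly against MAX_L1_BYTES_SKETCH
         * would mix entries with bytes, which is the bug described above */
    }
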
* qcow2: Allow "full" discard (Max Reitz, 2014-11-03; 1 file, -1/+1)

Normally, discarded sectors should read back as zero. However, there are cases in which a sector (or rather, a cluster) should be discarded as if it were never written in the first place, that is, reading it should fall through to the backing file again.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 1414159063-25977-2-git-send-email-mreitz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

* block: Use g_new() & friends where that makes obvious sense (Markus Armbruster, 2014-08-20; 1 file, -4/+4)

g_new(T, n) is neater than g_malloc(sizeof(T) * n). It's also safer, for two reasons. One, it catches multiplication overflowing size_t. Two, it returns T * rather than void *, which lets the compiler catch more type errors.

Patch created with Coccinelle, with two manual changes on top:

* Add const to bdrv_iterate_format() to keep the types straight
* Convert the allocation in bdrv_drop_intermediate(), which Coccinelle inexplicably misses

Coccinelle semantic patch:

    @@
    type T;
    @@
    -g_malloc(sizeof(T))
    +g_new(T, 1)

    @@
    type T;
    @@
    -g_try_malloc(sizeof(T))
    +g_try_new(T, 1)

    @@
    type T;
    @@
    -g_malloc0(sizeof(T))
    +g_new0(T, 1)

    @@
    type T;
    @@
    -g_try_malloc0(sizeof(T))
    +g_try_new0(T, 1)

    @@
    type T;
    expression n;
    @@
    -g_malloc(sizeof(T) * (n))
    +g_new(T, n)

    @@
    type T;
    expression n;
    @@
    -g_try_malloc(sizeof(T) * (n))
    +g_try_new(T, n)

    @@
    type T;
    expression n;
    @@
    -g_malloc0(sizeof(T) * (n))
    +g_new0(T, n)

    @@
    type T;
    expression n;
    @@
    -g_try_malloc0(sizeof(T) * (n))
    +g_try_new0(T, n)

    @@
    type T;
    expression p, n;
    @@
    -g_realloc(p, sizeof(T) * (n))
    +g_renew(T, p, n)

    @@
    type T;
    expression p, n;
    @@
    -g_try_realloc(p, sizeof(T) * (n))
    +g_try_renew(T, p, n)

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

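A minimal usage comparison of the two allocation styles (assumes GLib; only meant to illustrate the type-safety point made above):

    #include <glib.h>
    #include <stdint.h>

    int main(void)
    {
        /* Old style: returns void * and the multiplication can overflow size_t. */
        uint64_t *a = g_malloc(sizeof(uint64_t) * 4);

        /* New style: g_new() checks the multiplication for overflow and returns
         * uint64_t *, so a mismatched destination type is caught by the compiler. */
        uint64_t *b = g_new(uint64_t, 4);

        g_free(a);
        g_free(b);
        return 0;
    }
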
* qcow2: Handle failure for potentially large allocations (Kevin Wolf, 2014-08-15; 1 file, -5/+18)

Some code in the block layer makes potentially huge allocations. Failure is not completely unexpected there, so avoid aborting qemu and handle out-of-memory situations gracefully.

This patch addresses the allocations in the qcow2 block driver.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>

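The graceful pattern described here is roughly the following (a sketch assuming GLib's g_try_new(); the actual patch may use different helpers):

    #include <glib.h>
    #include <errno.h>
    #include <stdint.h>

    /* Allocate a potentially huge table without aborting on OOM: g_try_new()
     * returns NULL on failure instead of aborting like g_new() does. */
    static int alloc_table(uint64_t **table, size_t n_entries)
    {
        *table = g_try_new(uint64_t, n_entries);
        if (!*table && n_entries) {
            return -ENOMEM;   /* let the caller report the failure */
        }
        return 0;
    }
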
* qcow2: Limit snapshot table size (Kevin Wolf, 2014-04-01; 1 file, -1/+14)

Even with a limit of 64k snapshots, each snapshot could have a filename and an ID with up to 64k, which would still lead to pretty large allocations, which could potentially lead to qemu aborting.

Limit the total size of the snapshot table to an average of 1k per entry when the limit of 64k snapshots is fully used. This should be plenty for any reasonable user.

This also fixes potential integer overflows of s->snapshot_size.

Suggested-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

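The bound described above works out to roughly 64 MB in total; a sketch of such a running-size check while parsing the table (names are hypothetical):

    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_SNAPSHOTS_SKETCH          65536
    #define AVG_BYTES_PER_SNAPSHOT_SKETCH 1024
    #define MAX_SNAPSHOT_TABLE_BYTES \
        ((uint64_t)MAX_SNAPSHOTS_SKETCH * AVG_BYTES_PER_SNAPSHOT_SKETCH)

    /* Accumulate entry sizes while parsing and refuse once the budget is
     * exceeded, so a corrupted header cannot trigger an unbounded allocation.
     * Assumes bytes_so_far has already passed this check on earlier entries. */
    static bool snapshot_table_fits(uint64_t bytes_so_far, uint64_t next_entry_bytes)
    {
        return next_entry_bytes <= MAX_SNAPSHOT_TABLE_BYTES - bytes_so_far;
    }
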
* qcow2: Check maximum L1 size in qcow2_snapshot_load_tmp() (CVE-2014-0143) (Kevin Wolf, 2014-04-01; 1 file, -0/+4)

This avoids an unbounded allocation.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

* qcow2: Fix L1 allocation size in qcow2_snapshot_load_tmp() (CVE-2014-0145) (Kevin Wolf, 2014-04-01; 1 file, -1/+1)

For the L1 table to be loaded for an internal snapshot, the code allocated only enough memory to hold the currently active L1 table. If the snapshot's L1 table is actually larger than the current one, this leads to a buffer overflow.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

* qcow2: Validate snapshot table offset/size (CVE-2014-0144) (Kevin Wolf, 2014-04-01; 1 file, -25/+4)

This avoids unbounded memory allocation and fixes a potential buffer overflow on 32 bit hosts.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

* block: Don't throw away errno via error_setg (Jeff Cody, 2014-02-14; 1 file, -3/+5)

There are a handful of places in the block layer where a failure path has a valid -errno value, yet error_setg() is used. Those instances should instead use error_setg_errno(), to preserve as much error information as possible.

This patch replaces those instances with error_setg_errno(), so that errno is passed up the stack in the error message.

Reported-By: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

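A minimal before/after sketch of the pattern (uses QEMU's error API from qapi/error.h; the surrounding function is hypothetical):

    #include "qapi/error.h"
    #include <errno.h>

    static int write_snapshots_sketch(Error **errp)
    {
        int ret = -EIO;   /* pretend a preceding write failed with this code */

        /* Before: the errno value is dropped from the message. */
        /* error_setg(errp, "Failed to write snapshot list"); */

        /* After: error_setg_errno() appends strerror(-ret) to the message. */
        error_setg_errno(errp, -ret, "Failed to write snapshot list");
        return ret;
    }
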
* snapshot: distinguish id and name in load_tmp (Wenchao Xia, 2013-12-04; 1 file, -2/+8)

This function will be used later, so improve it. Its only caller right now is qemu-img, which is not impacted: the newly introduced function bdrv_snapshot_load_tmp_by_id_or_name() calls bdrv_snapshot_load_tmp() twice to keep the old search logic. bdrv_snapshot_load_tmp_by_id_or_name() returns an int to let the caller know the errno, which will be used later.

Also fix a typo in the comments of bdrv_snapshot_delete().

Signed-off-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

* qcow2: Use negated overflow check mask (Max Reitz, 2013-10-11; 1 file, -7/+5)

In qcow2_check_metadata_overlap and qcow2_pre_write_overlap_check, change the parameter signifying the checks to perform from its current positive form to a negative one, i.e., it will no longer explicitly specify every check to perform but rather a mask of checks not to perform.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* qcow2: Use better type for numerical snapshot ID (Max Reitz, 2013-10-11; 1 file, -2/+3)

When trying to find a new snapshot ID, the existing ones are converted to integers using strtoul. This function returns an unsigned long, therefore its result should be saved in an unsigned long as well.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

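A small illustration of the type issue (not the actual QEMU code):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const char *id_str = "4294967296";   /* 2^32, a valid decimal snapshot ID */

        unsigned int  truncated = strtoul(id_str, NULL, 10); /* may silently wrap   */
        unsigned long kept      = strtoul(id_str, NULL, 10); /* full value on LP64  */

        printf("truncated=%u kept=%lu\n", truncated, kept);
        return 0;
    }
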
* qcow2: Fix snapshot restoration in snapshot_create (Max Reitz, 2013-10-11; 1 file, -0/+1)

If the new snapshot table could not be written in qcow2_snapshot_create, the old snapshot table has to be restored in memory and the new one released. This should include restoration of the old snapshot count as well, which is added by this patch.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* qcow2: Assert against snapshot name/ID overflow (Max Reitz, 2013-10-11; 1 file, -0/+1)

qcow2_write_snapshots relies on the length of every snapshot ID and name fitting into an unsigned 16 bit integer. This is currently ensured by QEMU through generally only allowing 128 byte IDs and 256 byte names.

However, if this should change in the future, the length written to the image file should not be silently truncated (though the name itself would be written completely). Since this is currently not an issue but might require attention due to internal QEMU changes in the future, an assert ensuring sanity is enough for now.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>