path: root/block
Commits (subject, author, date, files changed, lines added/removed):

* curl: Fix hang reading from slow connections (Matthew Booth, 2014-04-30, 1 file, +2/-1)

  When receiving a new aio read request, we first look for an existing
  transaction whose range will cover the read request by the time it
  completes. However, we weren't checking that the existing transaction was
  still active. If it had timed out, we were adding the request to a
  transaction which would never complete and had already been cancelled,
  resulting in a hang.

  Signed-off-by: Matthew Booth <mbooth@redhat.com>
  Tested-by: Richard W.M. Jones <rjones@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* curl: Ensure all informationals are checked for completion (Matthew Booth, 2014-04-30, 1 file, +23/-30)

  According to the documentation, the correct way to ensure all
  informationals have been returned by curl_multi_info_read is to loop
  until it returns NULL.

  Signed-off-by: Matthew Booth <mbooth@redhat.com>
  Tested-by: Richard W.M. Jones <rjones@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

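  A minimal sketch of that drain loop, using the standard libcurl API (the
  completion handling is only hinted at; this is not the driver's actual
  code):

      #include <curl/curl.h>

      /* Keep reading informationals until curl_multi_info_read() returns
       * NULL, completing every finished transfer along the way. */
      static void drain_curl_messages(CURLM *multi)
      {
          CURLMsg *msg;
          int msgs_left;

          while ((msg = curl_multi_info_read(multi, &msgs_left)) != NULL) {
              if (msg->msg == CURLMSG_DONE) {
                  /* msg->easy_handle finished with msg->data.result;
                   * look up the per-transfer state and complete it here. */
              }
          }
      }
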
* curl: Eliminate unnecessary use of curl_multi_socket_all (Matthew Booth, 2014-04-30, 1 file, +22/-10)

  curl_multi_socket_all is a deprecated catch-all which checks for
  activities on all open curl sockets. We have enough information from the
  event loop to check only the sockets with activity. This change removes
  use of curl_multi_socket_all in favour of curl_multi_socket_action called
  with the relevant handle.

  At the same time, it also ensures that the driver only checks for
  completion of read operations after reading from a socket, rather than
  both reading and writing.

  Signed-off-by: Matthew Booth <mbooth@redhat.com>
  Tested-by: Richard W.M. Jones <rjones@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

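  A sketch of that pattern, assuming the event loop reports which socket
  became readable or writable (the wrapper and its arguments are
  illustrative, not the driver's code):

      #include <stdbool.h>
      #include <curl/curl.h>

      /* Act only on the socket that had activity, instead of polling all
       * of them with the deprecated curl_multi_socket_all(). */
      static void on_curl_socket_event(CURLM *multi, curl_socket_t fd,
                                       bool readable)
      {
          int running;

          curl_multi_socket_action(multi, fd,
                                   readable ? CURL_CSELECT_IN
                                            : CURL_CSELECT_OUT,
                                   &running);
          if (readable) {
              /* Only a read can complete a pending read request, so check
               * for finished transfers here (see the drain loop above). */
          }
      }
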
* curl: Remove unnecessary explicit calls to internal event handler (Matthew Booth, 2014-04-30, 1 file, +3/-2)

  Remove calls to curl_multi_do where the relevant handles are already
  registered to the event loop.

  Ensure that we kick off socket handling with CURL_SOCKET_TIMEOUT after
  adding a new handle.

  Signed-off-by: Matthew Booth <mbooth@redhat.com>
  Tested-by: Richard W.M. Jones <rjones@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* curl: Remove erroneous sleep waiting for curl completion (Matthew Booth, 2014-04-30, 1 file, +1/-2)

  The driver will not start more than a fixed number of curl sessions. If
  it needs more, it must wait for the completion of an existing one. The
  driver was sleeping, which prevents the main loop from running, and
  therefore the event it is waiting on from ever firing. It was also
  directly calling its internal handler rather than waiting on existing
  registered handlers to be called from the main loop.

  This change causes it simply to wait for a period of time whilst allowing
  the main loop to execute.

  Signed-off-by: Matthew Booth <mbooth@redhat.com>
  Tested-by: Richard W.M. Jones <rjones@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* curl: Fix return from curl_read_cb with invalid state (Matthew Booth, 2014-04-30, 1 file, +1/-2)

  A curl write callback is supposed to return the number of bytes it
  handled. curl_read_cb would have erroneously reported it had handled all
  bytes in the event that the internal curl state was invalid.

  Signed-off-by: Matthew Booth <mbooth@redhat.com>
  Tested-by: Richard W.M. Jones <rjones@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

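  For reference, the libcurl write-callback contract the fix relies on: any
  return value other than size * nmemb makes curl abort the transfer with
  an error. The state structure below is a hypothetical stand-in for the
  driver's own bookkeeping, not its real type:

      #include <string.h>
      #include <curl/curl.h>

      struct xfer_state { char *buf; size_t pos; };   /* illustrative */

      static size_t write_cb_sketch(char *ptr, size_t size, size_t nmemb,
                                    void *opaque)
      {
          struct xfer_state *s = opaque;

          if (!s || !s->buf) {
              return 0;          /* invalid state: report an error to curl */
          }
          memcpy(s->buf + s->pos, ptr, size * nmemb);
          s->pos += size * nmemb;
          return size * nmemb;   /* bytes actually handled */
      }
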
* curl: Remove unnecessary use of goto (Matthew Booth, 2014-04-30, 1 file, +27/-28)

  This isn't any of the usually acceptable uses of goto.

  Signed-off-by: Matthew Booth <mbooth@redhat.com>
  Tested-by: Richard W.M. Jones <rjones@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* curl: Fix long line (Matthew Booth, 2014-04-30, 1 file, +2/-1)

  Signed-off-by: Matthew Booth <mbooth@redhat.com>
  Tested-by: Richard W.M. Jones <rjones@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* block/vdi: Error out immediately in vdi_create() (Max Reitz, 2014-04-30, 1 file, +6/-1)

  Currently, if an error occurs during the part of vdi_create() which
  actually writes the image, the function stores -errno, but continues
  anyway. Instead of trying to write data which (if it can be written at
  all) makes no sense without the preceding operations (e.g., writing the
  image header) having succeeded, just error out immediately.

  Signed-off-by: Max Reitz <mreitz@redhat.com>
  Reviewed-by: Stefan Weil <sw@weilnetz.de>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* block/bochs: Fix error handling for seek_to_sector() (Max Reitz, 2014-04-30, 1 file, +14/-9)

  Currently, seek_to_sector() returns -1 both for errors and unallocated
  sectors, resulting in silent errors. As 0 is an invalid offset of data
  clusters (bitmap_offset is greater than 0 because s->data_offset is
  greater than 0), just return 0 for unallocated sectors and -errno in case
  of error.

  This should then be propagated by bochs_read(), the sole user of
  seek_to_sector(). That function also has a case of "return -1 in case of
  error", which is fixed by this patch as well. bochs_read() is called by
  bochs_co_read(), which passes the return value through, therefore it is
  indeed correct for bochs_read() to return -errno.

  Signed-off-by: Max Reitz <mreitz@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

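  The resulting calling convention, shown as a caller-side sketch (the
  names, the stub lookup, and the 512-byte sector size are illustrative;
  this is not the bochs driver's code):

      #include <errno.h>
      #include <stdint.h>
      #include <string.h>

      /* Hypothetical lookup: > 0 = host offset, 0 = unallocated, < 0 = -errno */
      static int64_t seek_to_sector_sketch(int64_t sector_num)
      {
          (void)sector_num;
          return -EIO;          /* stub so the example is self-contained */
      }

      static int read_one_sector(int64_t sector_num, uint8_t *buf)
      {
          int64_t offset = seek_to_sector_sketch(sector_num);

          if (offset < 0) {
              return offset;            /* propagate -errno unchanged */
          }
          if (offset == 0) {
              memset(buf, 0, 512);      /* unallocated: read as zeroes */
              return 0;
          }
          /* read 512 bytes from the image file at 'offset' here */
          return 0;
      }
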
* qcow2: Check min_size in qcow2_grow_l1_table() (Max Reitz, 2014-04-30, 1 file, +7/-0)

  First, new_l1_size is an int64_t, whereas min_size is a uint64_t.
  Therefore, during the loop which adjusts new_l1_size until it equals or
  exceeds min_size, new_l1_size might overflow and become negative. The
  comparison in the loop condition however will take it as an unsigned
  value (because min_size is unsigned) and therefore recognize it as
  exceeding min_size. Therefore, the loop is left with a negative
  new_l1_size, which is not correct.

  This could be fixed by making new_l1_size a uint64_t. On the other hand,
  by doing this, the while loop may take forever. If min_size is e.g.
  UINT64_MAX, new_l1_size would probably need multiple overflows to reach
  the exact same value (if it reaches it at all). Then, right after the
  loop, new_l1_size will be recognized as being too big anyway.

  Both problems require a ridiculously high min_size value, which is very
  unlikely to occur; but both problems are also simply avoided by checking
  whether min_size is sane before calculating new_l1_size (which should
  still be checked separately, though).

  Signed-off-by: Max Reitz <mreitz@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

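  A sketch of the ordering the message argues for: validate min_size before
  the size-growing loop, so a bogus value can neither overflow new_l1_size
  nor keep the loop spinning. The bound and the growth step below are
  illustrative, not qcow2's exact constants:

      #include <errno.h>
      #include <stdint.h>

      #define MAX_L1_ENTRIES_SKETCH (0x2000000 / sizeof(uint64_t))

      static int64_t grow_l1_size(int64_t cur_size, uint64_t min_size)
      {
          int64_t new_size = cur_size > 0 ? cur_size : 1;

          if (min_size > MAX_L1_ENTRIES_SKETCH) {
              return -EFBIG;        /* reject before computing anything */
          }
          while ((uint64_t)new_size < min_size) {
              new_size = (new_size * 3 + 1) / 2;   /* grow by roughly 1.5x */
          }
          return new_size;
      }
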
* qcow2: Catch bdrv_getlength() error (Max Reitz, 2014-04-30, 1 file, +5/-0)

  The call to bdrv_getlength() from qcow2_check_refcounts() may result in
  an error. Check this and abort if necessary.

  Signed-off-by: Max Reitz <mreitz@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* block: Use correct width in format strings (Max Reitz, 2014-04-30, 6 files, +28/-24)

  Instead of blindly relying on a normal integer having a width of 32 bits
  (which is a pretty good assumption, but we should not rely on it if there
  is no need), use the correct format string macros.

  This does not touch DEBUG output.

  Signed-off-by: Max Reitz <mreitz@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

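  The <inttypes.h> macros in question match the format specifier to the
  exact integer type, which also covers the PRIu32/PRIx32 conversions in
  the cloop and vmdk commits further down (the function below is just an
  illustration):

      #include <inttypes.h>
      #include <stdio.h>

      static void print_fields(uint32_t cid, uint64_t offset, int64_t len)
      {
          printf("cid %" PRIx32 " offset %" PRIu64 " len %" PRId64 "\n",
                 cid, offset, len);
      }
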
* qcow2: Avoid overflow in alloc_clusters_noref() (Max Reitz, 2014-04-30, 1 file, +7/-0)

  alloc_clusters_noref() stores the cluster index in a uint64_t. However,
  offsets are often represented as int64_t (as for example the return value
  of alloc_clusters_noref() itself demonstrates). Therefore, we should make
  sure all offsets in the allocated range of clusters are representable
  using int64_t without overflows.

  Signed-off-by: Max Reitz <mreitz@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

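  One overflow-safe way to express such a check, as a sketch under the
  stated constraint (not the function's actual code):

      #include <errno.h>
      #include <stdint.h>

      /* Every byte offset in [first, first + nb) clusters must fit in a
       * non-negative int64_t. */
      static int check_cluster_range(uint64_t first_cluster,
                                     uint64_t nb_clusters,
                                     unsigned cluster_bits)
      {
          uint64_t max_clusters = (uint64_t)INT64_MAX >> cluster_bits;

          if (first_cluster > max_clusters ||
              nb_clusters > max_clusters - first_cluster) {
              return -EFBIG;
          }
          return 0;
      }
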
* block: Use error_abort in bdrv_image_info_specific_dump() (Max Reitz, 2014-04-30, 1 file, +1/-2)

  Currently, bdrv_image_info_specific_dump() uses an error variable for
  visit_type_ImageInfoSpecific, but ignores the result. As this function is
  used here with an output visitor to transform the ImageInfoSpecific
  object to a generic QDict, an error should actually be impossible. It is
  however better to assert that this is indeed the case. This is done by
  this patch using error_abort instead of an unused local Error variable.

  Signed-off-by: Max Reitz <mreitz@redhat.com>
  Reported-by: Markus Armbruster <armbru@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* block: Unlink temporary files in raw-posix/win32 (Kevin Wolf, 2014-04-30, 2 files, +7/-1)

  Instead of having unlink() calls in the generic block layer, where we
  aren't even guaranteed to have a file name, move them to those block
  drivers that are actually used and that always have a filename. This also
  gets rid of some #ifdefs.

  The patch also converts bs->is_temporary to a new BDRV_O_TEMPORARY open
  flag so that it is inherited in the protocol layer and the raw-posix and
  raw-win32 drivers can unlink the file.

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>

* qcow2: Fix discard (Max Reitz, 2014-04-29, 1 file, +18/-8)

  discard_single_l2() should not implement its own version of
  qcow2_get_cluster_type(), but rather rely on this already existing
  function. By doing so, it will work for compressed clusters as well
  (which it previously did not).

  Also, rename "old_offset" to "old_l2_entry", as the two are quite
  different (and the value is indeed of the latter kind).

  Signed-off-by: Max Reitz <mreitz@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* mirror: Check for bdrv_get_info result (Fam Zheng, 2014-04-29, 1 file, +4/-1)

  bdrv_get_info() could fail. Add a check before using the returned value.

  Signed-off-by: Fam Zheng <famz@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* mirror: Fix resource leak when bdrv_getlength fails (Fam Zheng, 2014-04-29, 1 file, +2/-2)

  The direct return would skip releasing all the resources at
  immediate_exit; don't miss that.

  Signed-off-by: Fam Zheng <famz@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* mirror: Use DIV_ROUND_UP (Fam Zheng, 2014-04-28, 1 file, +1/-1)

  Although bdrv_getlength() was just called above this, and checked for
  error, it is better to just use the value we already have, and use
  DIV_ROUND_UP.

  Signed-off-by: Fam Zheng <famz@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

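  DIV_ROUND_UP is QEMU's usual integer ceiling-division helper; the macro
  body below is the standard formulation, and the wrapper is only an
  example of the kind of use the commit means:

      #include <stdint.h>

      #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

      /* e.g. how many 512-byte sectors are needed to cover 'length' bytes */
      static int64_t sectors_for_length(int64_t length)
      {
          return DIV_ROUND_UP(length, 512);
      }
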
* Merge remote-tracking branch 'remotes/qmp-unstable/queue/qmp' into staging (Peter Maydell, 2014-04-28, 1 file, +1/-1)

  * remotes/qmp-unstable/queue/qmp:
    monitor: fix qmp_getfd() fd leak in error case
    HMP: support specifying dump format for dump-guest-memory
    HMP: fix doc of dump-guest-memory
    qmp: object-add: Validate class before creating object
    monitor: Add device_add and device_del completion.
    monitor: Add command_completion callback to mon_cmd_t.
    monitor: Fix drive_del id argument type completion.
    error: Remove some unused headers
    qerror.h: Replace QERR_NOT_SUPPORTED with QERR_UNSUPPORTED
    qerror.h: Remove QERR defines that are only used once
    qerror.h: Remove unused error classes
    error: Print error_report() to stderr if using qmp
    monitor: Remove unused monitor_print_filename
    error: Privatize error_print_loc
    vnc: Remove default_mon usage
    slirp: Remove default_mon usage

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* qerror.h: Remove QERR defines that are only used once (Cole Robinson, 2014-04-25, 1 file, +1/-1)

  Just hardcode them in the callers.

  Cc: Luiz Capitulino <lcapitulino@redhat.com>
  Cc: Markus Armbruster <armbru@redhat.com>
  Signed-off-by: Cole Robinson <crobinso@redhat.com>
  Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
  Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>

* iscsi: Don't use error_is_set() to suppress additional errors (Markus Armbruster, 2014-04-25, 1 file, +4/-5)

  Using error_is_set(errp) that way can sweep programming errors under the
  carpet when we get called incorrectly with an error set.

  Commit 24d3bd6 added a broken error path to iscsi_do_inquiry(): it first
  calls error_setg(), then jumps to the preexisting error label, where
  error_setg() gets called again, triggering an assertion failure. Commit
  cbee81f fixed this by guarding the second error_setg() with an
  error_is_set().

  Replace this fix by a simpler and safer one: jump right behind the second
  error_setg().

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Kevin Wolf <kwolf@redhat.com>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

* nbd: Use return values instead of error_is_set(errp) (Markus Armbruster, 2014-04-25, 1 file, +1/-1)

  Using error_is_set(errp) to check whether a function call failed is
  fragile: it breaks when errp is null. Check perfectly suitable return
  values instead when possible. errp can't be null there now, but this is
  more robust and more obviously correct.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

* Use error_is_set() only when necessary (again) (Markus Armbruster, 2014-04-25, 3 files, +4/-4)

  error_is_set(&var) is the same as var != NULL, but it takes whole-program
  analysis to figure that out. Unnecessarily hard for optimizers, static
  checkers, and human readers. Commit 84d18f0 dumbed it down to obvious,
  but a few more have crept in since, and documentation was overlooked.
  Dumb these down, too.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

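  The pattern these three error_is_set() commits converge on, sketched with
  QEMU's Error API (the frob_* helpers are hypothetical, and the snippet
  assumes QEMU's "qapi/error.h" header):

      #include "qapi/error.h"

      static void frob_step_one(Error **errp);   /* hypothetical callee */
      static void frob_step_two(Error **errp);   /* hypothetical callee */

      static void frob(Error **errp)
      {
          Error *local_err = NULL;

          frob_step_one(&local_err);
          if (local_err != NULL) {      /* not error_is_set(&local_err) */
              error_propagate(errp, local_err);
              return;
          }
          frob_step_two(errp);
      }
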
* block/cloop: use PRIu32 format specifier for uint32_t (Stefan Hajnoczi, 2014-04-23, 1 file, +6/-6)

  PRIu32 is the format string specifier for uint32_t, let's use it.
  Variables ->block_size, ->n_blocks, and i are all uint32_t.

  Suggested-by: Max Reitz <mreitz@redhat.com>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* vmdk: Fix "%x" to PRIx32 in format strings for cid (Fam Zheng, 2014-04-22, 1 file, +5/-5)

  Signed-off-by: Fam Zheng <famz@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* block: Add errp to bdrv_new() (Kevin Wolf, 2014-04-22, 2 files, +2/-2)

  This patch adds an errp parameter to bdrv_new() and updates all its
  callers. The next patches will make use of this in order to check for
  duplicate IDs. Most of the callers know that their ID is fine, so they
  can simply assert that there is no error.

  Behaviour doesn't change with this patch yet as bdrv_new() doesn't
  actually assign errors to errp.

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>

* convert fprintf() calls to error_setg() in block/qed.c:bdrv_qed_create() (Aakriti Gupta, 2014-04-22, 1 file, +9/-7)

  This patch converts fprintf() calls to error_setg() in
  block/qed.c:bdrv_qed_create(). (error_setg() is part of the error
  reporting API in include/qapi/error.h.)

  Signed-off-by: Aakriti Gupta <aakritty@gmail.com>
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

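  The shape of such a conversion, as a hedged before/after sketch (the
  function and its check are made up for illustration, not taken from
  block/qed.c):

      #include <errno.h>
      #include <inttypes.h>
      #include "qapi/error.h"

      static int create_check(int64_t image_size, Error **errp)
      {
          if (image_size <= 0) {
              /* before: fprintf(stderr, "Invalid image size\n"); */
              error_setg(errp, "Invalid image size %" PRId64, image_size);
              return -EINVAL;
          }
          return 0;
      }
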
* curl: Replaced old error handling with error reporting API (Maria Kustova, 2014-04-22, 1 file, +1/-1)

  Signed-off-by: Maria Kustova <maria.k@catit.be>
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* block: Handle error of bdrv_getlength in bdrv_create_dirty_bitmap (Fam Zheng, 2014-04-22, 1 file, +4/-1)

  bdrv_getlength() could fail; check the return value before using it.
  Return NULL and set errno if it fails. Callers are updated to handle the
  error case.

  Signed-off-by: Fam Zheng <famz@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

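  The described error contract in miniature (illustrative only; the real
  function allocates a dirty-bitmap object, not a plain calloc'd buffer):

      #include <errno.h>
      #include <stdint.h>
      #include <stdlib.h>

      static void *create_bitmap(int64_t device_length)
      {
          if (device_length < 0) {          /* a failed bdrv_getlength() */
              errno = -device_length;       /* report why, return NULL */
              return NULL;
          }
          /* one bit per 512-byte sector, rounded up to whole bytes */
          return calloc((size_t)(device_length / 512 / 8 + 1), 1);
      }
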
* vmdk: Fix %d and %lld to PRI* in format strings (Fam Zheng, 2014-04-22, 1 file, +7/-6)

  Signed-off-by: Fam Zheng <famz@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* iscsi: Remember to set ret for iscsi_open in error case (Fam Zheng, 2014-04-11, 1 file, +1/-0)

  Signed-off-by: Fam Zheng <famz@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* bochs: Fix catalog size check (Kevin Wolf, 2014-04-11, 1 file, +11/-3)

  The old check was off by a factor of 512 and didn't consider cases where
  we don't get an exact division. This could lead to an out-of-bounds array
  access in seek_to_sector().

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>

* bochs: Fix memory leak in bochs_open() error path (Kevin Wolf, 2014-04-11, 1 file, +4/-2)

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Laszlo Ersek <lersek@redhat.com>

* qcow2: Put cache reference in error case (Kevin Wolf, 2014-04-04, 1 file, +1/-0)

  When qcow2_get_cluster_offset() sees a zero cluster in a version 2 image,
  it (rightfully) returns an error. But in doing so it shouldn't leak an L2
  table cache reference.

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>

* qcow2: Flush metadata during read-only reopen (Kevin Wolf, 2014-04-04, 1 file, +21/-4)

  If lazy refcounts are enabled for a backing file, committing to this
  backing file may leave it in a dirty state even if the commit succeeds.
  The reason is that the bdrv_flush() call in bdrv_commit() doesn't flush
  refcount updates with lazy refcounts enabled, and qcow2_reopen_prepare()
  doesn't take care to flush metadata.

  In order to fix this, this patch also fixes qcow2_mark_clean(), which
  contains another ineffective bdrv_flush() call because lazy refcounts are
  disabled only afterwards. All existing callers of qcow2_mark_clean()
  either don't modify refcounts or already flush manually, so that this
  fixes only a latent, but not yet actually triggerable bug.

  Another instance of the same problem is live snapshots. Again, a real
  corruption is prevented by an explicit flush for non-read-only images in
  external_snapshot_prepare(), but images using lazy refcounts stay dirty.

  Cc: qemu-stable@nongnu.org
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>

* iscsi: Don't set error if already set in iscsi_do_inquiry (Fam Zheng, 2014-04-04, 1 file, +4/-2)

  This eliminates the possible assertion failure in error_setg().

  Signed-off-by: Fam Zheng <famz@redhat.com>
  Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* Merge remote-tracking branch 'remotes/bonzini/scsi-next' into staging (Peter Maydell, 2014-04-03, 1 file, +13/-6)

  * remotes/bonzini/scsi-next:
    iscsi: always query max WRITE SAME length
    iscsi: ignore flushes on scsi-generic devices
    iscsi: recognize "invalid field" ASCQ from WRITE SAME command
    scsi-bus: remove bogus assertion

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* iscsi: always query max WRITE SAME length (Paolo Bonzini, 2014-04-03, 1 file, +7/-5)

  Max WRITE SAME length is also used when the UNMAP bit is zero, so it
  should be queried even if LBPWS=0. Same for the optimal transfer length.
  However, the write_zeroes_alignment only matters for UNMAP=1, so we still
  restrict it to LBPWS=1.

  Reviewed-by: Peter Lieven <pl@kamp.de>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* iscsi: ignore flushes on scsi-generic devices (Paolo Bonzini, 2014-04-03, 1 file, +4/-0)

  Non-block SCSI devices do not support flushing, but we may still send
  them requests via bdrv_flush_all. Just ignore them.

  Reviewed-by: Peter Lieven <pl@kamp.de>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* iscsi: recognize "invalid field" ASCQ from WRITE SAME command (Paolo Bonzini, 2014-04-03, 1 file, +2/-1)

  Some targets may return "invalid field" as the ASCQ from WRITE SAME if
  they support the command only without the UNMAP field. Recognize that,
  and return ENOTSUP just like for "invalid operation code".

  Reviewed-by: Peter Lieven <pl@kamp.de>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* qcow2: link all L2 meta updates in preallocate() (Stefan Hajnoczi, 2014-04-01, 1 file, +6/-1)

  preallocate() only links the first QCowL2Meta's data clusters into the L2
  table and ignores any chained QCowL2Metas in the linked list. Chains of
  QCowL2Meta structs are built up when contiguous clusters span L2 tables.
  Each QCowL2Meta describes one L2 table update. This is a rare case in
  preallocate() but can happen.

  This patch fixes preallocate() by iterating over the whole list of
  QCowL2Metas. Compare with the qcow2_co_writev() function's
  implementation, which is similar but also handles request dependencies.
  preallocate() only performs one allocation at a time, so there can be no
  dependencies.

  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

* parallels: Sanity check for s->tracks (CVE-2014-0142) (Kevin Wolf, 2014-04-01, 1 file, +6/-1)

  This avoids a possible division by zero.

  Convert s->tracks to unsigned as well because it feels better than
  surviving just because the results of calculations with s->tracks are
  converted to unsigned anyway.

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

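  The kind of header sanity check this implies, reduced to its core (the
  field and function names are illustrative, not the parallels driver's
  code):

      #include <errno.h>
      #include <stdint.h>

      static int check_parallels_header(uint32_t tracks)
      {
          if (tracks == 0) {
              return -EINVAL;   /* would later be used as a divisor */
          }
          return 0;
      }
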
* parallels: Fix catalog size integer overflow (CVE-2014-0143) (Kevin Wolf, 2014-04-01, 1 file, +6/-1)

  The first test case would cause a huge memory allocation, leading to a
  qemu abort; the second one to a too small malloc() for the catalog
  (smaller than s->catalog_size), which causes a read-only out-of-bounds
  array access and, on big endian hosts, an endianness conversion for an
  undefined memory area.

  The sample image used here is not an original Parallels image. It was
  created using a hex editor on the basis of the struct that qemu uses.
  Good enough for trying to crash the driver, but not for ensuring
  compatibility.

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

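  A sketch of how such an overflow is usually prevented: cap the entry
  count before multiplying it into an allocation size (the cap and the
  4-byte entry size are assumptions for illustration, not the driver's
  actual values):

      #include <errno.h>
      #include <stdint.h>

      #define MAX_CATALOG_ENTRIES_SKETCH (INT32_MAX / 4)

      static int64_t catalog_alloc_size(uint32_t catalog_entries)
      {
          if (catalog_entries > MAX_CATALOG_ENTRIES_SKETCH) {
              return -EFBIG;
          }
          return (int64_t)catalog_entries * 4;
      }
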
* qcow2: Limit snapshot table size (Kevin Wolf, 2014-04-01, 2 files, +18/-1)

  Even with a limit of 64k snapshots, each snapshot could have a filename
  and an ID with up to 64k, which would still lead to pretty large
  allocations, which could potentially lead to qemu aborting. Limit the
  total size of the snapshot table to an average of 1k per entry when the
  limit of 64k snapshots is fully used. This should be plenty for any
  reasonable user.

  This also fixes potential integer overflows of s->snapshot_size.

  Suggested-by: Max Reitz <mreitz@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

* qcow2: Check maximum L1 size in qcow2_snapshot_load_tmp() (CVE-2014-0143) (Kevin Wolf, 2014-04-01, 3 files, +9/-3)

  This avoids an unbounded allocation.

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

* qcow2: Fix L1 allocation size in qcow2_snapshot_load_tmp() (CVE-2014-0145) (Kevin Wolf, 2014-04-01, 1 file, +1/-1)

  For the L1 table to be loaded for an internal snapshot, the code
  allocated only enough memory to hold the currently active L1 table. If
  the snapshot's L1 table is actually larger than the current one, this
  leads to a buffer overflow.

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

* qcow2: Fix NULL dereference in qcow2_open() error path (CVE-2014-0146) (Kevin Wolf, 2014-04-01, 1 file, +4/-3)

  The qcow2 code assumes that s->snapshots is non-NULL if s->nb_snapshots
  != 0. By having the initialisation of both fields separated in
  qcow2_open(), any error occurring in between would cause the error path
  to dereference NULL in qcow2_free_snapshots() if the image had any
  snapshots.

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

* qcow2: Fix copy_sectors() with VM state (Kevin Wolf, 2014-04-01, 1 file, +0/-9)

  bs->total_sectors is not the highest possible sector number that could be
  involved in a copy on write operation: VM state is after the end of the
  virtual disk. This resulted in wrong values for the number of sectors to
  be copied (n).

  The code that checks for the end of the image isn't required any more
  because the code hasn't been calling the block layer's bdrv_read() for a
  long time; instead, it directly calls qcow2_readv(), which doesn't error
  out on VM state sector numbers.

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>