path: root/block/vpc.c
* vpc: Check failure of bdrv_getlength() (Eric Blake, 2017-08-11, 1 file, -1/+8)
  vpc_open() was checking for bdrv_getlength() failure in one, but not the other, location.
  Reported-by: Markus Armbruster <armbru@redhat.com> Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Jeff Cody <jcody@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
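  A minimal sketch of the missing check this message describes (not the literal patch), assuming QEMU's convention that bdrv_getlength() returns a negative errno on failure; the variable name and error message are placeholders:

      int64_t bytes = bdrv_getlength(bs->file->bs);
      if (bytes < 0) {
          /* propagate the failure instead of using it as a size */
          error_setg_errno(errp, -bytes, "Unable to learn image size");
          return bytes;
      }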
* block/vpc: fix uninitialised variable compiler warning (Mark Cave-Ayland, 2017-07-21, 1 file, -1/+1)
  Since commit cfc87e00 "block/vpc.c: Handle write failures in get_image_offset()" older versions of gcc (in this case 4.7) incorrectly warn that "ret" can be used uninitialised in vpc_co_pwritev(). Setting ret to 0 at the start of vpc_co_pwritev() prevents the warning in gcc 4.7 and enables compilation with -Werror to succeed.
  Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Message-id: 1500625265-23844-1-git-send-email-mark.cave-ayland@ilande.co.uk Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* block/vpc.c: Handle write failures in get_image_offset() (Peter Maydell, 2017-07-18, 1 file, -7/+23)
  Coverity (CID 1355236) points out that get_image_offset() doesn't check that it actually succeeded in writing the updated block bitmap to the file. Check the error return from bdrv_pwrite_sync() and propagate an error response back up to the function which calls get_image_offset() for a write so that it can return the error to its caller. get_sector_offset() is only used for reads, but we move it to the same API for consistency.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
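  A rough sketch of the propagation pattern described above; the out-parameter name and the exact get_image_offset() signature are assumptions, not necessarily what block/vpc.c ended up with:

      /* write path: ask for the offset and surface any bitmap-update error */
      int err = 0;
      int64_t image_offset = get_image_offset(bs, offset, true, &err);
      if (err < 0) {
          return err;   /* bdrv_pwrite_sync() of the block bitmap failed */
      }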
* vpc: make it thread-safe (Paolo Bonzini, 2017-07-17, 1 file, -10/+10)
  Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20170629132749.997-5-pbonzini@redhat.com> Signed-off-by: Fam Zheng <famz@redhat.com>
* block: Add PreallocMode to blk_truncate() (Max Reitz, 2017-07-11, 1 file, -1/+1)
  blk_truncate() itself will pass that value to bdrv_truncate(), and all callers of blk_truncate() just set the parameter to PREALLOC_MODE_OFF for now.
  Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Message-id: 20170613202107.10125-4-mreitz@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
* block: Simplify use of BDRV_BLOCK_RAW (Eric Blake, 2017-07-10, 1 file, -1/+1)
  The lone caller that cares about a return of BDRV_BLOCK_RAW (namely, io.c:bdrv_co_get_block_status) completely replaces the return value, so there is no point in passing BDRV_BLOCK_DATA.
  Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* migration: Create migration/blocker.h (Juan Quintela, 2017-05-17, 1 file, -1/+1)
  This allows us to remove lots of includes of migration/migration.h.
  Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
* block: Add errp to b{lk,drv}_truncate() (Max Reitz, 2017-04-28, 1 file, -6/+7)
  For one thing, this allows us to drop the error message generation from qemu-img.c and blockdev.c and instead have it unified in bdrv_truncate().
  Signed-off-by: Max Reitz <mreitz@redhat.com> Message-id: 20170328205129.15138-3-mreitz@redhat.com Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
* block: Add BDRV_O_RESIZE for blk_new_open() (Kevin Wolf, 2017-02-28, 1 file, -1/+2)
  blk_new_open() is a convenience function that processes flags rather than QDict options as a simple way to just open an image file. In order to keep it convenient in the future, it must automatically request the necessary permissions. This can easily be inferred from the flags for read and write, but we need another flag that tells us whether to get the resize permission.
  We can't just always request it because that means that no block jobs can run on the resulting BlockBackend (which is something that e.g. qemu-img commit wants to do), but we also can't request it never because most of the .bdrv_create() implementations call blk_truncate().
  The solution is to introduce another flag that is passed by all users that want to resize the image.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com> Acked-by: Fam Zheng <famz@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
* block: Request child permissions in format drivers (Kevin Wolf, 2017-02-28, 1 file, -0/+1)
  This makes use of the .bdrv_child_perm() implementation for formats that we just added. All format drivers expose the permissions they actually need now, so that they can be set accordingly and updated when parents are attached or detached. The only format not included here is raw, which was already converted with the other filter drivers.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Acked-by: Fam Zheng <famz@redhat.com>
* block: Attach bs->file only during .bdrv_open() (Kevin Wolf, 2017-02-24, 1 file, -0/+6)
  The way that attaching bs->file worked was a bit unusual in that it was the only child that would be attached to a node which is not opened yet. Because of this, the block layer couldn't know yet which permissions the driver would eventually need.
  This patch moves the point where bs->file is attached to the beginning of the individual .bdrv_open() implementations, so drivers already know what they are going to do with the child. This is also more consistent with how driver-specific children work.
  For a moment, bdrv_open() gets its own BdrvChild to perform image probing, but instead of directly assigning this BdrvChild to the BDS, it becomes a temporary one and the node name is passed as an option to the drivers, so that they can simply use bdrv_open_child() to create another reference for their own use.
  This duplicated child for the (not yet opened) bs is not the final state; a follow-up patch will change the image probing code to use a BlockBackend, which is completely independent of bs.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
* migration: disallow migrate_add_blocker during migration (Ashijeet Acharya, 2017-01-24, 1 file, -3/+8)
  If a migration is already in progress and somebody attempts to add a migration blocker, this should rightly fail. Add an errp parameter and a retcode return value to migrate_add_blocker.
  Signed-off-by: John Snow <jsnow@redhat.com> Signed-off-by: Ashijeet Acharya <ashijeetacharya@gmail.com> Message-Id: <1484566314-3987-5-git-send-email-ashijeetacharya@gmail.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Acked-by: Greg Kurz <groug@kaod.org> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  Merged with recent 'Allow invtsc migration' change
* vpc: Use QEMU UUID API (Fam Zheng, 2016-09-23, 1 file, -7/+3)
  Previously we conditionally generated footer->uuid, when libuuid was available. Now that we have a built-in implementation, we can switch to it.
  Signed-off-by: Fam Zheng <famz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com> Message-Id: <1474432046-325-6-git-send-email-famz@redhat.com>
* block: Convert bdrv_co_preadv/pwritev to BdrvChild (Kevin Wolf, 2016-07-05, 1 file, -4/+4)
  This is the final patch for converting the common I/O path to take a BdrvChild parameter instead of BlockDriverState.
  The completion of this conversion means that all users that perform I/O on an image need to actually hold a reference (in the form of a BdrvChild, possibly as part of a BlockBackend) to that image. This also protects against inconsistent use of BlockBackend vs. BlockDriverState functions because direct use of a BlockDriverState isn't possible any more and blk->root is private for block-backends.c.
  In addition, we can now distinguish different users in the I/O path, and the future op blockers work is going to add assertions based on permissions stored in BdrvChild.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com> Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
* block: Convert bdrv_pwrite(v/_sync) to BdrvChild (Kevin Wolf, 2016-07-05, 1 file, -4/+4)
  Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
* block: Convert bdrv_pread(v) to BdrvChild (Kevin Wolf, 2016-07-05, 1 file, -4/+4)
  Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
* qemu-common: stop including qemu/bswap.h from qemu-common.h (Paolo Bonzini, 2016-05-19, 1 file, -0/+1)
  Move it to the actual users. There are still a few includes of qemu/bswap.h in headers; removing them is left for future work.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* block: Allow BDRV_REQ_FUA through blk_pwrite() (Eric Blake, 2016-05-12, 1 file, -5/+5)
  We have several block drivers that understand BDRV_REQ_FUA, and emulate it in the block layer for the rest by a full flush. But without a way to actually request BDRV_REQ_FUA during a pass-through blk_pwrite(), FUA-aware block drivers like NBD are forced to repeat the emulation logic of a full flush regardless of whether the backend they are writing to could do it more efficiently.
  This patch just wires up a flags argument; followup patches will actually make use of it in the NBD driver and in qemu-io.
  Signed-off-by: Eric Blake <eblake@redhat.com> Acked-by: Denis V. Lunev <den@openvz.org> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
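  A short illustration of what the wired-up flags argument enables for a caller holding a BlockBackend; a sketch under the assumption that the new flags parameter is the last argument, not a specific call site from this series:

      /* write through to stable storage in one request, instead of
       * blk_pwrite() followed by a separate flush */
      ret = blk_pwrite(blk, offset, buf, len, BDRV_REQ_FUA);
      if (ret < 0) {
          return ret;
      }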
* vpc: Implement .bdrv_co_pwritev() interface (Kevin Wolf, 2016-05-12, 1 file, -43/+43)
  Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com>
* vpc: Implement .bdrv_co_preadv() interface (Kevin Wolf, 2016-05-12, 1 file, -37/+42)
  Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com>
* block/vpc: update comments to be compliant w/coding guidelines (Jeff Cody, 2016-04-15, 1 file, -34/+34)
  Signed-off-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block/vpc: set errp in vpc_open (Jeff Cody, 2016-04-15, 1 file, -0/+9)
  Add more useful error information to failure paths in vpc_open().
  Signed-off-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block/vpc: make checks on max table size a bit more lax (Jeff Cody, 2016-04-15, 1 file, -4/+0)
  The check on the max_table_size field not being larger than required is valid, and in accordance with the VHD spec. However, there have been VHD images encountered in the wild that have an out-of-spec max table size that is technically too large.
  There is no issue in allowing this larger table size, as we also later verify that the computed size (used for the pagetable) is large enough to fit all sectors. In addition, max_table_entries is bounds checked against SIZE_MAX and INT_MAX.
  Remove the strict check, so that we can accommodate these sorts of images that are benignly out of spec.
  Reported-by: Stefan Hajnoczi <stefanha@redhat.com> Reported-by: Grant Wu <grantwwu@gmail.com> Signed-off-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block/vpc: Use the correct max sector count for VHD images (Jeff Cody, 2016-04-15, 1 file, -5/+5)
  The old VHD_MAX_SECTORS value is incorrect, and is a throwback to the CHS calculations. The VHD specification allows images up to 2040 GiB, which (using 512 byte sectors) corresponds to a maximum number of sectors of 0xff000000, rather than the old value of 0xfe0001ff.
  Update VHD_MAX_SECTORS to reflect the correct value. Also, update comment references to the actual size limit, and correct one compare so that we can have sizes up to the limit.
  Signed-off-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
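  A standalone arithmetic check of the limit quoted above: 2040 GiB of 512-byte sectors comes out to exactly 0xff000000 sectors. This is illustrative host code, not part of block/vpc.c:

      #include <inttypes.h>
      #include <stdint.h>
      #include <stdio.h>

      int main(void)
      {
          uint64_t bytes = 2040ULL * 1024 * 1024 * 1024;   /* 2040 GiB */
          uint64_t sectors = bytes / 512;
          printf("max sectors = %" PRIu64 " (0x%" PRIx64 ")\n", sectors, sectors);
          return 0;
      }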
* block/vpc: use current_size field for XenConverter VHD images (Jeff Cody, 2016-04-15, 1 file, -0/+2)
  XenConverter VHD images are another case where current_size is different from the CHS values in the format header. Use current_size as the default, by looking at the creator_app signature field.
  Signed-off-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* vpc: use current_size field for XenServer VHD images (Stefan Hajnoczi, 2016-04-15, 1 file, -1/+3)
  The vpc driver has two methods of determining virtual disk size. The correct one to use depends on the software that generated the image file. Add the XenServer creator_app signature so that image size is correctly detected for those images.
  Reported-by: Grant Wu <grantwwu@gmail.com> Reported-by: Spencer Baugh <sbaugh@catern.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block/vpc: set errp in vpc_create (Jeff Cody, 2016-04-15, 1 file, -0/+5)
  Add more useful error information to failure paths in vpc_create().
  Signed-off-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* vpc: fix return value check for blk_pwrite (Paolo Bonzini, 2016-04-12, 1 file, -1/+1)
  bdrv_pwrite_sync used to return zero or negative error, while blk_pwrite returns the number of written bytes when successful. This caused VPC image creation to fail spectacularly: it wrote the first 512 bytes, and then exited immediately because of the non-zero answer from blk_pwrite. But the truly spectacular part is that it returns a positive value (the 512 that blk_pwrite returned), causing everyone to believe that it succeeded.
  This fixes qemu-iotests with the vpc format.
  Fixes: b8f45cdf7827e39f9a1e6cc446f5972cc6144237
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
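  A sketch of the corrected check, under the assumption stated above that blk_pwrite() returns the number of bytes written on success; the surrounding code is illustrative, not the literal one-line patch:

      /* ret holds the value just returned by blk_pwrite() */
      if (ret < 0) {      /* previously "if (ret != 0)": a successful 512-byte
                           * write was treated as an error, yet the positive
                           * 512 was still passed upward as if it were success */
          goto fail;
      }
      ret = 0;            /* don't leak the positive byte count to the caller */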
* block: Always set writeback mode in blk_new_open() (Kevin Wolf, 2016-03-30, 1 file, -2/+1)
  All callers of blk_new_open() either don't rely on the WCE bit set after blk_new_open() because they explicitly set it anyway, or they pass BDRV_O_CACHE_WB unconditionally.
  This patch changes blk_new_open() so that it always enables writeback mode and asserts that BDRV_O_CACHE_WB is clear. For those callers that used to pass BDRV_O_CACHE_WB unconditionally, the flag is removed now.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
* include/qemu/osdep.h: Don't include qapi/error.h (Markus Armbruster, 2016-03-22, 1 file, -0/+1)
  Commit 57cb38b included qapi/error.h into qemu/osdep.h to get the Error typedef. Since then, we've moved to include qemu/osdep.h everywhere. Its file comment explains: "To avoid getting into possible circular include dependencies, this file should not include any other QEMU headers, with the exceptions of config-host.h, compiler.h, os-posix.h and os-win32.h, all of which are doing a similar job to this file and are under similar constraints."
  qapi/error.h doesn't do a similar job, and it doesn't adhere to similar constraints: it includes qapi-types.h. That's in excess of 100KiB of crap most .c files don't actually need.
  Add the typedef to qemu/typedefs.h, and include that instead of qapi/error.h. Include qapi/error.h in .c files that need it and don't get it now. Include qapi-types.h in qom/object.h for uint16List.
  Update scripts/clean-includes accordingly. Update it further to match reality: replace config.h by config-target.h, add sysemu/os-posix.h, sysemu/os-win32.h. Update the list of includes in the qemu/osdep.h comment quoted above similarly.
  This reduces the number of objects depending on qapi/error.h from "all of them" to less than a third. Unfortunately, the number depending on qapi-types.h shrinks only a little. More work is needed for that one.
  Signed-off-by: Markus Armbruster <armbru@redhat.com> [Fix compilation without the spice devel packages. - Paolo] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* blockdev: Split monitor reference from BB creation (Max Reitz, 2016-03-17, 1 file, -1/+1)
  Before this patch, blk_new() automatically assigned a name to the new BlockBackend and considered it referenced by the monitor. This patch removes the implicit monitor_add_blk() call from blk_new() (and consequently the monitor_remove_blk() call from blk_delete(), too) and thus blk_new() (and related functions) no longer take a BB name argument.
  In fact, there is only a single point where blk_new()/blk_new_open() is called and the new BB is monitor-owned, and that is in blockdev_init().
  Besides thus relieving us from having to invent names for all of the BBs we use in qemu-img, this fixes a bug where qemu cannot create a new image if there already is a monitor-owned BB named "image".
  If a BB and its BDS tree are created in a single operation, as of this patch the BDS tree will be created before the BB is given a name (whereas it was the other way around before). This results in a minor change to the output of iotest 087, whose reference output is amended accordingly.
  Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* vpc: Use BB functions in .bdrv_create() (Kevin Wolf, 2016-03-14, 1 file, -16/+21)
  All users of the block layer are supposed to go through a BlockBackend. The .bdrv_create() implementation is one such user, so this patch converts it.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Use writeback in .bdrv_create() implementations (Kevin Wolf, 2016-03-14, 1 file, -1/+2)
  There's no reason to use a writethrough cache mode while creating an image.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block/vpc: give option to force the current_size field in .bdrv_create (Jeff Cody, 2016-03-14, 1 file, -7/+35)
  When QEMU creates a VHD image, it goes by the original spec, calculating the current_size based on the nearest CHS geometry (with an exception for disks > 127GB). Apparently, Azure will only allow images that are sized to the nearest MB, and the current_size as calculated from CHS cannot guarantee that.
  Allow QEMU to create images similar to how Hyper-V creates images, by setting current_size to the specified virtual disk size. This introduces an option, force_size, to be passed to the vpc format during image creation, e.g.:
      qemu-img convert -f raw -o force_size -O vpc test.img test.vhd
  When using the "force_size" option, the creator app field used by QEMU will be "qem2" instead of "qemu", to indicate the difference. In light of this, we also add parsing of the "qem2" field during vpc_open.
  Bug reference: https://bugs.launchpad.net/qemu/+bug/1490611
  Signed-off-by: Jeff Cody <jcody@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block/vpc: choose size calculation method based on creator_app field (Jeff Cody, 2016-03-14, 1 file, -5/+82)
  The VHD file format is used by both Virtual PC and Hyper-V. However, how the virtual disk size is calculated varies between the two. Virtual PC uses the CHS drive parameters to determine the drive size. Hyper-V, on the other hand, uses the current_size field in the footer when determining image size.
  This is problematic for a few reasons:
  - VHD images from Hyper-V, using CHS calculations, will likely be truncated.
  - If we just rely always on current_size, then QEMU may have data compatibility issues with Virtual PC (we may write too much data into a VHD file to be used by Virtual PC, for instance).
  - Existing VHD images created by QEMU have used the CHS calculations, except for images exceeding the 127GB limit. We want to remain compatible with our own generated images.
  Luckily, the VHD specification defines a 'Creator App' field that is used to indicate what software created the VHD file. This patch does two things:
  1. Uses the 'Creator App' field to help determine how to calculate size, and
  2. Adds a VPC format option 'force_size_calc', so that the user can override the 'Creator App' auto-detection, in case there exist VHD images with unknown or contradictory 'Creator App' entries.
  N.B.: We currently use the maximum CHS value as an indication to use the current_size field. This patch does not change that, even with the 'force_size_calc' option.
  Signed-off-by: Jeff Cody <jcody@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
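  A hedged sketch of the selection described above; the only creator_app signature taken from this series is "qem2", and every other name here ("use_current_size", the geometry variables, the byte-order handling) is an assumption rather than the literal block/vpc.c code:

      /* prefer the footer's current_size for producers known to rely on it,
       * or when the user forces it; otherwise use the Virtual PC CHS math */
      if (!strncmp(footer->creator_app, "qem2", 4) || use_current_size) {
          bs->total_sectors = be64_to_cpu(footer->current_size) / 512;
      } else {
          bs->total_sectors = (int64_t)cyls * heads * secs_per_cyl;
      }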
* vpc: Assign bs->file->bs to file in vpc_co_get_block_status (Fam Zheng, 2016-02-02, 1 file, -0/+2)
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Fam Zheng <famz@redhat.com> Message-id: 1453780743-16806-11-git-send-email-famz@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com>
* block: Add "file" output parameter to block status query functionsFam Zheng2016-02-021-1/+1
| | | | | | | | | | | | | | | | | | The added parameter can be used to return the BDS pointer which the valid offset is referring to. Its value should be ignored unless BDRV_BLOCK_OFFSET_VALID in ret is set. Until block drivers fill in the right value, let's clear it explicitly right before calling .bdrv_get_block_status. The "bs->file" condition in bdrv_co_get_block_status is kept now to keep iotest case 102 passing, and will be fixed once all drivers return the right file pointer. Signed-off-by: Fam Zheng <famz@redhat.com> Message-id: 1453780743-16806-2-git-send-email-famz@redhat.com Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
* block: Clean up includes (Peter Maydell, 2016-01-20, 1 file, -0/+1)
  Clean up includes so that osdep.h is included first and headers which it implies are not included manually. This commit was created with scripts/clean-includes.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Convert bs->file to BdrvChild (Kevin Wolf, 2015-10-16, 1 file, -16/+18)
  This patch removes the temporary duplication between bs->file and bs->file_child by converting everything to BdrvChild.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Fam Zheng <famz@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
* block: Drop drv parameter from bdrv_open() (Max Reitz, 2015-09-14, 1 file, -1/+1)
  Now that this parameter is effectively unused, we can drop it and just pass NULL on to bdrv_open_inherit().
  Signed-off-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Alberto Garcia <berto@igalia.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: vpc - prevent overflow if max_table_entries >= 0x40000000 (Jeff Cody, 2015-07-27, 1 file, -4/+14)
  When we allocate the pagetable based on max_table_entries, we multiply the max table entry value by 4 to accommodate a table of 32-bit integers. However, max_table_entries is a uint32_t, and the VPC driver accepts ranges for that entry over 0x40000000. So during this allocation:
      s->pagetable = qemu_try_blockalign(bs->file, s->max_table_entries * 4);
  the size arg overflows, allocating significantly less memory than expected. Since qemu_try_blockalign()'s size argument is size_t, cast the multiplication correctly to prevent overflow.
  The value of "max_table_entries * 4" is used elsewhere in the code as well, so store the correct value for use in all those cases. We also check the Max Table Entries value, to make sure that it is < SIZE_MAX / 4, so we know the pagetable size will fit in size_t.
  Cc: qemu-stable@nongnu.org Reported-by: Richard W.M. Jones <rjones@redhat.com> Signed-off-by: Jeff Cody <jcody@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
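  A standalone demonstration of that wrap-around, assuming a 64-bit host where size_t is wider than uint32_t; illustrative host code, not the block/vpc.c patch itself:

      #include <stdint.h>
      #include <stdio.h>

      int main(void)
      {
          uint32_t max_table_entries = 0x40000000;

          size_t wrapped = max_table_entries * 4;          /* 32-bit multiply wraps to 0 */
          size_t widened = (size_t)max_table_entries * 4;  /* widen first: 4 GiB as intended */

          printf("wrapped: %zu\nwidened: %zu\n", wrapped, widened);
          return 0;
      }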
* block: use bdrv_get_device_or_node_name() in error messages (Alberto Garcia, 2015-04-28, 1 file, -3/+3)
  There are several error messages that identify a BlockDriverState by its device name. However those errors can be produced in nodes that don't have a device name associated. In those cases we should use bdrv_get_device_or_node_name() to fall back to the node name and produce a more meaningful message.
  The messages are also updated to use the more generic term 'node' instead of 'device'.
  Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Message-id: 9823a1f0514fdb0692e92868661c38a9e00a12d6.1428485266.git.berto@igalia.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block/vpc: remove disabled code from get_sector_offset (Peter Lieven, 2015-03-16, 1 file, -32/+0)
  The code to check the bitmap for the allocation status of each sector has been "disabled by reason" ever since the vpc driver existed. The reason might be that we might end up reading sector by sector in vpc_read if we really used it. This would be a performance disaster.
  The current code would furthermore not work if the disabled parts get reactivated, since vpc_read and vpc_write only use get_sector_offset to check the allocation status of the first sector of a read/write operation. This might lead to sectors incorrectly treated as zero in vpc_read and to sectors getting allocated twice in vpc_write.
  Signed-off-by: Peter Lieven <pl@kamp.de> Message-id: 1425379316-19639-6-git-send-email-pl@kamp.de Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
* block/vpc: rename footer->size -> footer->current_size (Peter Lieven, 2015-03-16, 1 file, -4/+5)
  The field is named current size in the spec. Name it accordingly.
  Signed-off-by: Peter Lieven <pl@kamp.de> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 1425379316-19639-5-git-send-email-pl@kamp.de Signed-off-by: Max Reitz <mreitz@redhat.com>
* block/vpc: make calculate_geometry spec conform (Peter Lieven, 2015-03-16, 1 file, -19/+22)
  The VHD spec [1] allows for total_sectors of 65535 x 16 x 255 (~127GB) represented by a CHS geometry. If total_sectors is greater than 65535 x 16 x 255, this geometry is set as a maximum. QEMU, Hyper-V and disk2vhd use this special geometry as an indicator to use the image's current size from the footer as the disk size.
  This patch changes vpc_create to effectively calculate a CxHxS geometry for the given image size if possible, while rounding up if necessary. If the image size is too big to be represented in CHS, we set the maximum and write the exact requested image size into the footer.
  This partly reverts commit 258d2edb, but leaves support for >127G disks intact.
  [1] http://download.microsoft.com/download/f/f/e/ffef50a5-07dd-4cf8-aaa3-442c0673a029/Virtual%20Hard%20Disk%20Format%20Spec_10_18_06.doc
  Signed-off-by: Peter Lieven <pl@kamp.de> Message-id: 1425379316-19639-4-git-send-email-pl@kamp.de Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
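  A standalone check of the ~127GB ceiling discussed above (65535 cylinders x 16 heads x 255 sectors per track, with 512-byte sectors); illustrative host code only, not part of block/vpc.c:

      #include <stdint.h>
      #include <stdio.h>

      int main(void)
      {
          uint64_t sectors = 65535ULL * 16 * 255;   /* 267,382,800 sectors */
          uint64_t bytes = sectors * 512;
          printf("%llu bytes (~%.1f GiB)\n",
                 (unsigned long long)bytes, bytes / (1024.0 * 1024 * 1024));
          return 0;
      }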
* vpc: Ignore geometry for large images (Kevin Wolf, 2015-03-16, 1 file, -6/+4)
  The CHS calculation as done per the VHD spec imposes a maximum image size of ~127 GB. Real VHD images exist that are larger than that. Apparently there are two separate non-standard ways to achieve this: You could use more heads than the spec does - this is the option that qemu-img create chooses. However, other images exist where the geometry is set to the maximum (65535/16/255), but the actual image size is larger. Until now, such images are truncated at 127 GB when opening them with qemu.
  This patch changes the vpc driver to ignore geometry in this case and only trust the size field in the header.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com> [PL: Fixed maximum geometry in the commit msg] Signed-off-by: Peter Lieven <pl@kamp.de> Message-id: 1425379316-19639-3-git-send-email-pl@kamp.de Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
* block/vpc: optimize vpc_co_get_block_status (Peter Lieven, 2015-03-16, 1 file, -10/+8)
  *pnum can't be greater than s->block_size / BDRV_SECTOR_SIZE for allocated sectors since there is always a bitmap in between.
  Signed-off-by: Peter Lieven <pl@kamp.de> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-id: 1425379316-19639-2-git-send-email-pl@kamp.de Signed-off-by: Max Reitz <mreitz@redhat.com>
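  A minimal sketch of the bound stated above, with assumed variable names; not the literal optimization in block/vpc.c:

      /* within an allocated VHD block, contiguity cannot extend past the
       * block itself, because a sector bitmap sits between blocks */
      int64_t sectors_per_block = s->block_size / BDRV_SECTOR_SIZE;
      if (*pnum > sectors_per_block) {
          *pnum = sectors_per_block;
      }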
* vpc: Implement bdrv_co_get_block_status() (Kevin Wolf, 2015-03-09, 1 file, -2/+48)
  This implements bdrv_co_get_block_status() for VHD images. This can significantly speed up the qemu-img convert operation because only with this function implemented can sparseness be considered. (Before, converting a 1 TB empty image took several minutes for me, now it's instantaneous.)
  Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
* vpc: Fix size in fixed image creation (Kevin Wolf, 2015-03-09, 1 file, -7/+3)
  If total_sectors is rounded to match the geometry, total_size needs to be changed as well. Otherwise we end up with an image whose geometry describes a disk larger than the image file, which doesn't end well.
  Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
* block: remove BLOCK_OPT_NOCOW from vpc_create_opts (Jeff Cody, 2014-12-10, 1 file, -5/+0)
  In commit fef6070, the need for NOCOW was removed from the vpc driver, as we removed the POSIX calls. However, BLOCK_OPT_NOCOW was not removed from vpc_create_opts. This was a mistake - remove the opt from there as well.
  Signed-off-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Stefan Weil <sw@weilnetz.de> Message-id: 8ba076fa725fed681cde7d8afc4fb239ae06a9c6.1417620301.git.jcody@redhat.com Signed-off-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>