path: root/tests/qemu-iotests/075.out
Commit message (author, date; files changed, lines -removed/+added)
* tests/qemu-iotests: Fix output of qemu-io related tests (Thomas Huth, 2019-04-30; 1 file, -7/+7)
    One of the recent commits changed the way qemu-io prints out its errors
    and warnings: they are now prefixed with the program name. We have to
    adapt the iotests accordingly to keep them from failing.

    Fixes: 99e98d7c9fc1a1639fad ("qemu-io: Use error_[gs]et_progname()")
    Signed-off-by: Thomas Huth <thuth@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
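    A minimal standalone C sketch of the behaviour that broke the expected
    test output: error messages are now prefixed with the program name,
    roughly in the spirit of QEMU's error_set_progname()/error_report()
    helpers (the code below is an illustration, not the QEMU implementation):

        #include <stdarg.h>
        #include <stdio.h>
        #include <string.h>

        static const char *progname;

        /* Remember the program name, similar in spirit to error_set_progname(). */
        static void set_progname(const char *argv0)
        {
            const char *p = strrchr(argv0, '/');
            progname = p ? p + 1 : argv0;
        }

        /* Print "progname: message\n" -- the format the iotests now have to expect. */
        static void report(const char *fmt, ...)
        {
            va_list ap;

            fprintf(stderr, "%s: ", progname);
            va_start(ap, fmt);
            vfprintf(stderr, fmt, ap);
            va_end(ap);
            fputc('\n', stderr);
        }

        int main(int argc, char **argv)
        {
            (void)argc;
            set_progname(argv[0]);
            /* Old expected output:  can't open device foo.img: ...
             * New expected output:  qemu-io: can't open device foo.img: ... */
            report("can't open device %s: Image is not in qcow2 format", "foo.img");
            return 0;
        }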
* qemu-io: Return non-zero exit code on failure (Nir Soffer, 2017-02-12; 1 file, -7/+0)
    The result of openfile was not checked, leading to failure deep in the
    actual command with a confusing error message, and exiting with exit
    code 0.

    Here is a simple example - trying to read with the wrong format:

        $ touch file
        $ qemu-io -f qcow2 -c 'read -P 1 0 1024' file; echo $?
        can't open device file: Image is not in qcow2 format
        no file open, try 'help open'
        0

    With this patch, we fail earlier with exit code 1:

        $ ./qemu-io -f qcow2 -c 'read -P 1 0 1024' file; echo $?
        can't open device file: Image is not in qcow2 format
        1

    Failing earlier, we don't log this error now:

        no file open, try 'help open'

    But some tests expected it; the line was removed from the test output.

    Signed-off-by: Nir Soffer <nirsof@gmail.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Message-id: 20170201003120.23378-2-nirsof@gmail.com
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Signed-off-by: Max Reitz <mreitz@redhat.com>
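    A standalone C sketch of the shape of this fix (openfile() here is a
    hypothetical stand-in, not qemu-io's actual function): check the result
    of opening the image before running any command and exit non-zero on
    failure:

        #include <stdio.h>
        #include <stdlib.h>

        /* Hypothetical stand-in for qemu-io's openfile(): returns 0 on
         * success, -1 and an error message on failure. */
        static int openfile(const char *name)
        {
            FILE *f = fopen(name, "rb");
            if (!f) {
                fprintf(stderr, "can't open device %s\n", name);
                return -1;
            }
            fclose(f);
            return 0;
        }

        int main(int argc, char **argv)
        {
            /* Before the patch the failed open was ignored, the command ran
             * against no open file, and the tool still exited with status 0.
             * Checking the result here lets us exit with status 1 instead. */
            if (argc < 2 || openfile(argv[1]) != 0) {
                return EXIT_FAILURE;
            }
            return EXIT_SUCCESS;
        }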
* qemu-io qemu-nbd: Use error_report() etc. instead of fprintf() (Markus Armbruster, 2016-01-13; 1 file, -7/+7)
    Just three instances left.

    Signed-off-by: Markus Armbruster <armbru@redhat.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Message-Id: <1450452927-8346-16-git-send-email-armbru@redhat.com>
* block/cloop: fix offsets[] size off-by-one (Stefan Hajnoczi, 2014-04-01; 1 file, -0/+4)
    cloop stores the number of compressed blocks in the n_blocks header
    field. The file actually contains n_blocks + 1 offsets, where the extra
    offset is the end-of-file offset.

    The following line in cloop_read_block() results in an out-of-bounds
    offsets[] access:

        uint32_t bytes = s->offsets[block_num + 1] - s->offsets[block_num];

    This patch allocates and loads the extra offset so that
    cloop_read_block() works correctly when the last block is accessed.

    Notice that we must free s->offsets[] unconditionally now since there is
    always an end-of-file offset.

    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
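    A compilable C sketch of the allocation change described above (names
    are illustrative, not the actual block/cloop.c code):

        #include <stdint.h>
        #include <stdlib.h>

        /* The offsets table must hold n_blocks + 1 entries so that
         * offsets[block_num + 1] is valid even for the last block; the
         * final entry is the end-of-file offset. Before the fix only
         * n_blocks entries were allocated, so reading the last block
         * indexed one element past the end of the array. The caller fills
         * the entries from the image file and frees the array
         * unconditionally on close. */
        static uint64_t *alloc_offsets(uint32_t n_blocks)
        {
            return calloc((size_t)n_blocks + 1, sizeof(uint64_t));
        }

        /* Compressed size of a block, valid for block_num in [0, n_blocks). */
        static uint32_t block_bytes(const uint64_t *offsets, uint32_t block_num)
        {
            return (uint32_t)(offsets[block_num + 1] - offsets[block_num]);
        }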
* block/cloop: refuse images with bogus offsets (CVE-2014-0144) (Stefan Hajnoczi, 2014-04-01; 1 file, -0/+8)
    The offsets[] array allows efficient seeking and tells us the maximum
    compressed data size. If the offsets are bogus the maximum compressed
    data size will be unrealistic.

    This could cause g_malloc() to abort and bogus offsets mean the image is
    broken anyway. Therefore we should refuse such images.

    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
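    A C sketch of the kind of offsets[] sanity check this commit describes;
    the exact conditions in the QEMU patch may differ, and the 2 * block_size
    cap below is an arbitrary illustrative bound:

        #include <stdbool.h>
        #include <stdint.h>

        /* Reject images whose offsets would make the "largest compressed
         * block" calculation, and hence the allocation derived from it,
         * unrealistic. */
        static bool offsets_are_sane(const uint64_t *offsets, uint32_t n_blocks,
                                     uint32_t block_size)
        {
            for (uint32_t i = 0; i < n_blocks; i++) {
                if (offsets[i + 1] < offsets[i]) {
                    return false;   /* offsets must not decrease */
                }
                if (offsets[i + 1] - offsets[i] > 2 * (uint64_t)block_size) {
                    return false;   /* implausibly large compressed block */
                }
            }
            return true;
        }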
* block/cloop: refuse images with huge offsets arrays (CVE-2014-0144) (Stefan Hajnoczi, 2014-04-01; 1 file, -0/+4)
    Limit offsets_size to 512 MB so that:

    1. g_malloc() does not abort due to an unreasonable size argument.
    2. offsets_size does not overflow the bdrv_pread() int size argument.

    This limit imposes a maximum image size of 16 TB at 256 KB block size.

    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
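    A C sketch of the limit described above; only the 512 MB figure is taken
    from the commit message, the surrounding names are illustrative:

        #include <stdint.h>

        /* 512 MB / 8 bytes per offset = 64M offsets, and 64M blocks at
         * 256 KB per block = 16 TB, matching the stated maximum image size. */
        #define OFFSETS_SIZE_LIMIT (512 * 1024 * 1024)

        static int check_offsets_size(uint64_t offsets_size)
        {
            if (offsets_size > OFFSETS_SIZE_LIMIT) {
                /* refuse the image instead of letting the allocation abort
                 * or the int size argument of the read wrap */
                return -1;
            }
            return 0;
        }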
* block/cloop: prevent offsets_size integer overflow (CVE-2014-0143) (Stefan Hajnoczi, 2014-04-01; 1 file, -0/+4)
    The following integer overflow in offsets_size can lead to out-of-bounds
    memory stores when n_blocks has a huge value:

        uint32_t n_blocks, offsets_size;
        [...]
        ret = bdrv_pread(bs->file, 128 + 4, &s->n_blocks, 4);
        [...]
        s->n_blocks = be32_to_cpu(s->n_blocks);

        /* read offsets */
        offsets_size = s->n_blocks * sizeof(uint64_t);
        s->offsets = g_malloc(offsets_size);
        [...]
        for (i = 0; i < s->n_blocks; i++) {
            s->offsets[i] = be64_to_cpu(s->offsets[i]);

    offsets_size can be smaller than n_blocks due to integer overflow.
    Therefore s->offsets[] is too small when the for loop byteswaps offsets.

    This patch refuses to open files if offsets_size would overflow.

    Note that changing the type of offsets_size is not a fix since 32-bit
    hosts still only have 32-bit size_t.

    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
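    A C sketch of an overflow guard in the spirit of this fix (illustrative,
    not the exact check from block/cloop.c):

        #include <stdint.h>

        /* Validate n_blocks before computing
         * offsets_size = n_blocks * sizeof(uint64_t) in 32-bit arithmetic,
         * so the multiplication cannot wrap around. */
        static int check_n_blocks(uint32_t n_blocks)
        {
            if (n_blocks > UINT32_MAX / sizeof(uint64_t)) {
                return -1;   /* offsets_size would overflow; refuse the image */
            }
            return 0;
        }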
* block/cloop: validate block_size header field (CVE-2014-0144) (Stefan Hajnoczi, 2014-04-01; 1 file, -0/+12)
    Avoid unbounded s->uncompressed_block memory allocation by checking that
    the block_size header field has a reasonable value. Also enforce the
    assumption that the value is a non-zero multiple of 512.

    These constraints conform to cloop 2.639's code so we accept existing
    image files.

    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
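    A C sketch of the block_size validation described above. The non-zero
    and multiple-of-512 requirements come from the commit message; the
    64 MiB upper bound is an illustrative "reasonable value", not
    necessarily the exact limit used in the QEMU patch:

        #include <stdbool.h>
        #include <stdint.h>

        static bool block_size_is_valid(uint32_t block_size)
        {
            if (block_size == 0 || block_size % 512 != 0) {
                return false;
            }
            if (block_size > 64 * 1024 * 1024) {
                return false;   /* bound the uncompressed_block allocation */
            }
            return true;
        }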
* qemu-iotests: add cloop input validation tests (Stefan Hajnoczi, 2014-04-01; 1 file, -0/+6)
    Add a cloop format-specific test case. Later patches add tests for input
    validation to the script.

    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>