path: root/block/curl.c
...
* block/curl: Do not wait for data beyond EOF (Max Reitz, 2016-11-15, 1 file changed, -9/+23)

libcurl will only give us as much data as there is, not more. The block layer will deny requests beyond the end of file for us; but since this block driver is still using a sector-based interface, we can still get in trouble if the file size is not a multiple of 512.

While we have already made sure not to attempt transfers beyond the end of the file, we are currently still trying to receive data from there if the original request exceeds the file size. This patch fixes this issue and invokes qemu_iovec_memset() on the iovec's tail.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20161025025431.24714-5-mreitz@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
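A rough sketch of the tail-zeroing idea (qemu_iovec_memset() is the real helper; surrounding names such as file_size are illustrative, not the literal patch):

    /* Clamp the transfer to EOF and zero the rest of the guest buffer. */
    if (offset + bytes > file_size) {
        uint64_t valid = file_size > offset ? file_size - offset : 0;

        qemu_iovec_memset(qiov, valid, 0, bytes - valid);  /* zero the tail */
        bytes = valid;                     /* only fetch data that exists */
    }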
* block/curl: Remember all sockets (Max Reitz, 2016-11-15, 1 file changed, -6/+41)

For some connection types (like FTP, generally), more than one socket may be used (in FTP's case: control vs. data stream).

As of commit 838ef602498b8d1985a231a06f5e328e2946a81d ("curl: Eliminate unnecessary use of curl_multi_socket_all"), we have to remember all of the sockets used by libcurl, but in fact we only did that for a single one. Since one libcurl connection may use multiple sockets, however, we have to remember them all.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20161025025431.24714-4-mreitz@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
* block/curl: Fix return value from curl_read_cb (Max Reitz, 2016-11-15, 1 file changed, -4/+7)

While commit 38bbc0a580f9f10570b1d1b5d3e92f0e6feb2970 is correct in that the callback is supposed to return the number of bytes handled, what it does not mention is that libcurl will throw an error if the callback did not "handle" all of the data passed to it.

Therefore, if the callback receives some data that it cannot handle (either because the receive buffer has not been set up yet or because it would not fit into the receive buffer) and we have to ignore it, we still have to report that the data has been handled.

Obviously, this should not happen normally. But it does happen at least for FTP connections, where some data (that we do not expect) may be generated when the connection is established.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20161025025431.24714-3-mreitz@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
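A hedged sketch of what a libcurl write callback has to do here (standard CURLOPT_WRITEFUNCTION semantics; the CURLState fields are illustrative):

    static size_t curl_read_cb(void *ptr, size_t size, size_t nmemb, void *opaque)
    {
        CURLState *s = opaque;
        size_t realsize = size * nmemb;

        if (!s || !s->orig_buf) {
            /* No receive buffer yet (e.g. unexpected FTP data): drop the
             * bytes, but still report them as handled so libcurl does not
             * abort the transfer. */
            return realsize;
        }

        /* ... copy into the receive buffer ... */
        return realsize;
    }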
* block/curl: Use BDRV_SECTOR_SIZE (Max Reitz, 2016-11-15, 1 file changed, -6/+5)

Currently, curl defines its own constant SECTOR_SIZE. There is no advantage over using the global BDRV_SECTOR_SIZE, so drop it.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20161025025431.24714-2-mreitz@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
* block/curl: Drop TFTP "support" (Max Reitz, 2016-11-15, 1 file changed, -19/+1)

Because TFTP does not support byte ranges, it was never usable with our curl block driver. Since apparently nobody has ever complained loudly enough for someone to take care of the issue until now, it seems reasonable to assume that nobody has ever actually used it. Therefore, it should be safe to just drop it from curl's protocol list.

[Jeff Cody: Below is additional summary pulled, with some rewording, from followup emails between Max and Markus, to explain what worked and what didn't]

TFTP would sometimes work, to a limited extent, for images <= the curl "readahead" size, so long as reads started at offset zero. By default, that readahead size is 256KB. Reads starting at a non-zero offset would also have returned data from a zero offset. It can become more complicated still, with mixed reads at zero offset and non-zero offsets, due to data buffering.

In short, TFTP could only have worked before in very specific scenarios with unrealistic expectations and constraints.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Message-id: 20161102175539.4375-4-mreitz@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
* block: use aio_bh_schedule_oneshot (Paolo Bonzini, 2016-10-07, 1 file changed, -6/+1)

This simplifies bottom half handlers by removing calls to qemu_bh_delete and thus removing the need to stash the bottom half pointer in the opaque datum.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
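Roughly, the before/after pattern (a sketch; the handler name is illustrative):

    /* before: create, schedule, and later delete a bottom half */
    acb->bh = aio_bh_new(ctx, curl_readv_bh_cb, acb);
    qemu_bh_schedule(acb->bh);
    /* ... and in the handler: qemu_bh_delete(acb->bh); */

    /* after: one call, nothing to stash in the opaque or to delete */
    aio_bh_schedule_oneshot(ctx, curl_readv_bh_cb, acb);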
* curl: Operate on zero-length file (Tomáš Golembiovský, 2016-09-15, 1 file changed, -4/+21)

Another attempt to fix bug 1596870.

When creating a new disk backed by a remote file accessed via HTTPS and the backing file has zero length, qemu-img terminates with an uninformative error message:

    qemu-img: disk.qcow2: CURL: Error opening file:

While it may not make much sense to operate on an empty file, other block backends (e.g. the raw backend for regular files) seem to allow it. This patch fixes it for the curl backend and improves the reported error.

Signed-off-by: Tomáš Golembiovský <tgolembi@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
* curl: Cast fd to int for DPRINTF (Fam Zheng, 2016-08-17, 1 file changed, -1/+1)

Currently "make docker-test-mingw@fedora" has a warning like:

    /tmp/qemu-test/src/block/curl.c: In function 'curl_sock_cb':
    /tmp/qemu-test/src/block/curl.c:172:6: warning: format '%d' expects argument of type 'int', but argument 4 has type 'curl_socket_t {aka long long unsigned int}'
         DPRINTF("CURL (AIO): Sock action %d on fd %d\n", action, fd);
         ^
    cc1: all warnings being treated as errors

Cast to int to suppress it.

Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <1470027888-24381-1-git-send-email-famz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
* block: always compile-check debug prints (Zhou Jie, 2016-05-12, 1 file changed, -2/+8)

Files with conditional debug statements should ensure that the printf is always compiled. This prevents bitrot of the format string of the debug statement. Also, switch debug output to stderr.

Signed-off-by: Zhou Jie <zhoujie2011@cn.fujitsu.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
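The usual QEMU idiom keeps the printf behind an always-compiled guard, so the compiler still type-checks the format string even when debugging is disabled (a generic sketch of the pattern, not the exact hunk):

    //#define DEBUG_CURL

    #ifdef DEBUG_CURL
    #define DEBUG_CURL_PRINT 1
    #else
    #define DEBUG_CURL_PRINT 0
    #endif

    #define DPRINTF(fmt, ...)                              \
        do {                                               \
            if (DEBUG_CURL_PRINT) {                        \
                fprintf(stderr, fmt, ## __VA_ARGS__);      \
            }                                              \
        } while (0)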
* util: move declarations out of qemu-common.h (Veronia Bahaa, 2016-03-22, 1 file changed, -0/+1)

Move declarations out of qemu-common.h for functions declared in utils/ files: e.g. include/qemu/path.h for utils/path.c. Move inline functions out of qemu-common.h and into new files (e.g. include/qemu/bcd.h).

Signed-off-by: Veronia Bahaa <veroniabahaa@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* include/qemu/osdep.h: Don't include qapi/error.h (Markus Armbruster, 2016-03-22, 1 file changed, -0/+1)

Commit 57cb38b included qapi/error.h into qemu/osdep.h to get the Error typedef. Since then, we've moved to include qemu/osdep.h everywhere. Its file comment explains: "To avoid getting into possible circular include dependencies, this file should not include any other QEMU headers, with the exceptions of config-host.h, compiler.h, os-posix.h and os-win32.h, all of which are doing a similar job to this file and are under similar constraints."

qapi/error.h doesn't do a similar job, and it doesn't adhere to similar constraints: it includes qapi-types.h. That's in excess of 100KiB of crap most .c files don't actually need.

Add the typedef to qemu/typedefs.h, and include that instead of qapi/error.h. Include qapi/error.h in .c files that need it and don't get it now. Include qapi-types.h in qom/object.h for uint16List.

Update scripts/clean-includes accordingly. Update it further to match reality: replace config.h by config-target.h, add sysemu/os-posix.h, sysemu/os-win32.h. Update the list of includes in the qemu/osdep.h comment quoted above similarly.

This reduces the number of objects depending on qapi/error.h from "all of them" to less than a third. Unfortunately, the number depending on qapi-types.h shrinks only a little. More work is needed for that one.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
[Fix compilation without the spice devel packages. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* curl: add support for HTTP authentication parameters (Daniel P. Berrange, 2016-02-29, 1 file changed, -0/+66)

If connecting to a web server which has authentication turned on, QEMU gets a 401 as curl has not been configured with any authentication credentials.

This adds 4 new parameters to the curl block driver options: 'username', 'password-secret', 'proxy-username' and 'proxy-password-secret'. Passwords are provided using the recently added 'secret' object type:

    $QEMU \
      -object secret,id=sec0,filename=/home/berrange/example.pw \
      -object secret,id=sec1,filename=/home/berrange/proxy.pw \
      -drive driver=http,url=http://example.com/some.img,\
             username=dan,password-secret=sec0,\
             proxy-username=dan,proxy-password-secret=sec1

Of course it is possible to use the same secret for both the proxy and server passwords if desired, or to omit the proxy auth details or the server auth details as required.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 1453385961-10718-3-git-send-email-berrange@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
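On the libcurl side, the four options map onto standard curl_easy_setopt() calls along these lines (a sketch; the plain-text password variables are assumed to have been resolved from the named secret objects already):

    curl_easy_setopt(curl, CURLOPT_USERNAME, username);            /* 'username' */
    curl_easy_setopt(curl, CURLOPT_PASSWORD, password);            /* from 'password-secret' */
    curl_easy_setopt(curl, CURLOPT_PROXYUSERNAME, proxyusername);  /* 'proxy-username' */
    curl_easy_setopt(curl, CURLOPT_PROXYPASSWORD, proxypassword);  /* from 'proxy-password-secret' */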
* block: Clean up includes (Peter Maydell, 2016-01-20, 1 file changed, -0/+1)

Clean up includes so that osdep.h is included first and headers which it implies are not included manually.

This commit was created with scripts/clean-includes.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* aio: Add "is_external" flag for event handlersFam Zheng2015-10-231-6/+8
| | | | | | | | | | All callers pass in false, and the real external ones will switch to true in coming patches. Signed-off-by: Fam Zheng <famz@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block/curl: Don't lose original error when a connection fails. (Richard W.M. Jones, 2015-07-15, 1 file changed, -1/+14)

Currently if qemu is connected to a curl source (eg. web server), and the web server fails / times out / dies, you always see a bogus EIO "Input/output error".

For example, choose a large file located on any local webserver which you control:

    $ qemu-img convert -p http://example.com/large.iso /tmp/test

Once it starts copying the file, stop the webserver and you will see qemu-img fail with:

    qemu-img: error while reading sector 61440: Input/output error

This patch does two things: Firstly print the actual error from curl so it doesn't get lost. Secondly, change EIO to EPROTO. EPROTO is a POSIX.1 compatible errno which more accurately reflects that there was a protocol error, rather than some kind of hardware failure.

After this patch is applied, the error changes to:

    $ qemu-img convert -p http://example.com/large.iso /tmp/test
    qemu-img: curl: transfer closed with 469989 bytes remaining to read
    qemu-img: error while reading sector 16384: Protocol error

Signed-off-by: Richard W.M. Jones <rjones@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
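The shape of the fix, sketched (the message comes from libcurl's completion notification; the error handling around it is illustrative):

    CURLMsg *msg;
    int remaining;

    while ((msg = curl_multi_info_read(multi, &remaining))) {
        if (msg->msg == CURLMSG_DONE && msg->data.result != CURLE_OK) {
            /* print libcurl's own description instead of losing it */
            error_report("curl: %s", curl_easy_strerror(msg->data.result));
            ret = -EPROTO;   /* protocol error, not a hardware-style EIO */
        }
    }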
* qerror: Move #include out of qerror.h (Markus Armbruster, 2015-06-22, 1 file changed, -0/+1)

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Luiz Capitulino <lcapitulino@redhat.com>
* block/curl: Improve type safety of s->timeout. (Richard W.M. Jones, 2014-11-03, 1 file changed, -2/+7)

qemu_opt_get_number returns a uint64_t, and curl_easy_setopt expects a long (not an int). There is no warning about the latter type error because curl_easy_setopt uses a varargs argument.

Store the timeout (which is a positive number of seconds) as a uint64_t. Check that the number given by the user is reasonable. Zero is permissible (meaning no timeout is enforced by cURL). Cast it to long before calling curl_easy_setopt to fix the type error.

Example error message after this change has been applied:

    $ ./qemu-img create -f qcow2 /tmp/test.qcow2 \
        -b 'json: { "file.driver":"https", "file.url":"https://foo/bar", "file.timeout":-1 }'
    qemu-img: /tmp/test.qcow2: Could not open 'json: { "file.driver":"https", "file.url":"https://foo/bar", "file.timeout":-1 }': timeout parameter is too large or negative: Invalid argument

Signed-off-by: Richard W.M. Jones <rjones@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Gonglei <arei.gonglei@huawei.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
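A sketch of the check-and-cast, assuming the option has already been read with qemu_opt_get_number() (the upper bound shown is illustrative):

    uint64_t timeout = qemu_opt_get_number(opts, "timeout", 5);  /* seconds */

    if (timeout > INT32_MAX) {
        error_setg(errp, "timeout parameter is too large or negative");
        return -EINVAL;
    }
    s->timeout = timeout;

    /* ... later, when configuring the easy handle: */
    curl_easy_setopt(curl, CURLOPT_TIMEOUT, (long)s->timeout);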
* block: Rename BlockDriverCompletionFunc to BlockCompletionFunc (Markus Armbruster, 2014-10-20, 1 file changed, -1/+1)

I'll use it with block backends shortly, and the name is going to fit badly there. It's a block layer thing anyway, not just a block driver thing.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Rename BlockDriverAIOCB* to BlockAIOCB* (Markus Armbruster, 2014-10-20, 1 file changed, -2/+2)

I'll use BlockDriverAIOCB with block backends shortly, and the name is going to fit badly there. It's a block layer thing anyway, not just a block driver thing.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Rename qemu_aio_release -> qemu_aio_unref (Fam Zheng, 2014-09-22, 1 file changed, -5/+5)

Suggested-by: Benoît Canet <benoit.canet@irqsave.net>
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* curl: Drop curl_aiocb_info.cancel (Fam Zheng, 2014-09-22, 1 file changed, -6/+0)

Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* curl: The macro that you have to uncomment to get debugging is DEBUG_CURL. (Richard W.M. Jones, 2014-09-02, 1 file changed, -1/+1)

Signed-off-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
* curl: Don't deref NULL pointer in call to aio_poll. (Richard W.M. Jones, 2014-08-29, 1 file changed, -4/+4)

In commit 63f0f45f2e89b60ff8245fec81328ddfde42a303 the following mechanical change was made:

         if (!state) {
    -        qemu_aio_wait();
    +        aio_poll(state->s->aio_context, true);
         }

The new code now checks if state is NULL and then dereferences it ('state->s') which is obviously incorrect. This commit replaces state->s->aio_context with bdrv_get_aio_context(bs), fixing this problem. The two other hunks are concerned with getting the BlockDriverState pointer bs to where it is needed.

The original bug causes a segfault when using libguestfs to access a VMware vCenter Server and doing any kind of complex read-heavy operations. With this commit the segfault goes away.

Signed-off-by: Richard W.M. Jones <rjones@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Benoît Canet <benoit.canet@nodalink.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* curl: Allow a cookie or cookies to be sent with http/https requests. (Richard W.M. Jones, 2014-08-29, 1 file changed, -0/+16)

In order to access VMware ESX efficiently, we need to send a session cookie. This patch is very simple and just allows you to send that session cookie. It punts on the question of how you get the session cookie in the first place, but in practice you can just run a `curl' command against the server and extract the cookie that way.

To use it, add file.cookie to the curl URL. For example:

    $ qemu-img info 'json: {
        "file.driver":"https",
        "file.url":"https://vcenter/folder/Windows%202003/Windows%202003-flat.vmdk?dcPath=Datacenter&dsName=datastore1",
        "file.sslverify":"off",
        "file.cookie":"vmware_soap_session=\"52a01262-bf93-ccce-d379-8dabb3e55560\""}'
    image: [...]
    file format: raw
    virtual size: 8.0G (8589934592 bytes)
    disk size: unavailable

Signed-off-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* block.curl: adding 'timeout' option (Daniel Henrique Barboza, 2014-08-29, 1 file changed, -1/+12)

The curl hardcoded timeout (5 seconds) sometimes is not long enough depending on the remote server configuration and network traffic. The user should be able to set how long he is willing to wait for the connection.

Adding a new option to set this timeout gives the user this flexibility. The previous default timeout of 5 seconds will be used if this option is not present.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Daniel Henrique Barboza <danielhb@linux.vnet.ibm.com>
Reviewed-by: Benoit Canet <benoit.canet@nodalink.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* curl: Handle failure for potentially large allocations (Kevin Wolf, 2014-08-15, 1 file changed, -1/+7)

Some code in the block layer makes potentially huge allocations. Failure is not completely unexpected there, so avoid aborting qemu and handle out-of-memory situations gracefully.

This patch addresses the allocations in the curl block driver.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
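The general pattern, as a sketch:

    buf = g_try_malloc(len);            /* g_malloc() would abort on failure */
    if (len && buf == NULL) {
        /* a huge allocation failed: fail this request with -ENOMEM
         * instead of taking down the whole process */
        return -ENOMEM;
    }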
* block: Drop superfluous conditionals around g_free() (Markus Armbruster, 2014-06-16, 1 file changed, -6/+3)

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* curl: implement .bdrv_detach/attach_aio_context() (Stefan Hajnoczi, 2014-06-04, 1 file changed, -77/+115)

The curl block driver uses fd handlers, timers, and BHs. The fd handlers and timers are managed on behalf of libcurl, which controls them using callback functions that the block driver implements.

The simplest way to implement .bdrv_detach/attach_aio_context() is to clean up libcurl in the old event loop and initialize it again in the new event loop. We do not need to keep track of anything since there are no pending requests when the AioContext is changed.

Also make sure to use aio_set_fd_handler() instead of qemu_aio_set_fd_handler() and aio_bh_new() instead of qemu_bh_new() so the current AioContext is passed in.

Cc: Alexander Graf <agraf@suse.de>
Cc: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
* curl: Add sslverify option (Matthew Booth, 2014-05-19, 1 file changed, -0/+12)

This allows qemu to use images over https with a self-signed certificate. It defaults to verifying the certificate.

Signed-off-by: Matthew Booth <mbooth@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
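On the libcurl side this typically ends up as the standard verification knobs (a sketch, not necessarily the exact hunk; s->sslverify is the parsed boolean option):

    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, s->sslverify ? 1L : 0L);
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, s->sslverify ? 2L : 0L);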
* curl: Remove broken parsing of options from url (Matthew Booth, 2014-05-19, 1 file changed, -42/+10)

The block layer now supports a generic json syntax for passing option parameters explicitly, making parsing of options from the url redundant.

Signed-off-by: Matthew Booth <mbooth@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* curl: Fix build when curl_multi_socket_action isn't available (Matthew Booth, 2014-05-19, 1 file changed, -0/+15)

Signed-off-by: Matthew Booth <mbooth@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* curl: Fix hang reading from slow connections (Matthew Booth, 2014-04-30, 1 file changed, -1/+2)

When receiving a new aio read request, we first look for an existing transaction whose range will cover the read request by the time it completes. However, we weren't checking that the existing transaction was still active. If it had timed out, we were adding the request to a transaction which would never complete and had already been cancelled, resulting in a hang.

Signed-off-by: Matthew Booth <mbooth@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* curl: Ensure all informationals are checked for completion (Matthew Booth, 2014-04-30, 1 file changed, -30/+23)

According to the documentation, the correct way to ensure all informationals have been returned by curl_multi_info_read is to loop until it returns NULL.

Signed-off-by: Matthew Booth <mbooth@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* curl: Eliminate unnecessary use of curl_multi_socket_all (Matthew Booth, 2014-04-30, 1 file changed, -10/+22)

curl_multi_socket_all is a deprecated catch-all which checks for activities on all open curl sockets. We have enough information from the event loop to check only the sockets with activity. This change removes use of curl_multi_socket_all in favour of curl_multi_socket_action called with the relevant handle.

At the same time, it also ensures that the driver only checks for completion of read operations after reading from a socket, rather than both reading and writing.

Signed-off-by: Matthew Booth <mbooth@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
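In outline (a sketch of the per-socket call; the completion-check helper name is illustrative):

    int running;

    /* tell libcurl exactly which socket is ready, and for what */
    curl_multi_socket_action(s->multi, fd, CURL_CSELECT_IN, &running);

    /* only look for finished transfers after a read event */
    curl_multi_check_completion(s);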
* curl: Remove unnecessary explicit calls to internal event handler (Matthew Booth, 2014-04-30, 1 file changed, -2/+3)

Remove calls to curl_multi_do where the relevant handles are already registered to the event loop.

Ensure that we kick off socket handling with CURL_SOCKET_TIMEOUT after adding a new handle.

Signed-off-by: Matthew Booth <mbooth@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* curl: Remove erroneous sleep waiting for curl completion (Matthew Booth, 2014-04-30, 1 file changed, -2/+1)

The driver will not start more than a fixed number of curl sessions. If it needs more, it must wait for the completion of an existing one. The driver was sleeping, which will prevent the main loop from running, and therefore the event it's waiting on. It was also directly calling its internal handler rather than waiting on existing registered handlers to be called from the main loop.

This change causes it simply to wait for a period of time whilst allowing the main loop to execute.

Signed-off-by: Matthew Booth <mbooth@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* curl: Fix return from curl_read_cb with invalid state (Matthew Booth, 2014-04-30, 1 file changed, -2/+1)

A curl write callback is supposed to return the number of bytes it handled. curl_read_cb would have erroneously reported it had handled all bytes in the event that the internal curl state was invalid.

Signed-off-by: Matthew Booth <mbooth@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* curl: Remove unnecessary use of goto (Matthew Booth, 2014-04-30, 1 file changed, -28/+27)

This isn't any of the usually acceptable uses of goto.

Signed-off-by: Matthew Booth <mbooth@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* curl: Fix long line (Matthew Booth, 2014-04-30, 1 file changed, -1/+2)

Signed-off-by: Matthew Booth <mbooth@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* curl: Replaced old error handling with error reporting API. (Maria Kustova, 2014-04-22, 1 file changed, -1/+1)

Signed-off-by: Maria Kustova <maria.k@catit.be>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* curl: check data size before memcpy to local buffer. (CVE-2014-0144) (Fam Zheng, 2014-04-01, 1 file changed, -0/+5)

curl_read_cb is the callback function for libcurl when data arrives. The data size passed in here is not guaranteed to be within the range of the request we submitted, so we may overflow the guest IO buffer. Check the real size we have before the memcpy to the buffer to avoid the overflow.

Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
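The essence of the fix, sketched as a fragment of the write callback (field names are illustrative): refuse the copy when libcurl hands over more data than the receive buffer has room for.

    size_t realsize = size * nmemb;

    if (realsize > s->buf_len - s->buf_off) {
        /* more data than the outstanding request allocated:
         * never memcpy past the end of the local buffer */
        return realsize;   /* drop the excess instead of overflowing */
    }
    memcpy(s->orig_buf + s->buf_off, ptr, realsize);
    s->buf_off += realsize;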
* curl: correctly propagate errors (Paolo Bonzini, 2014-02-21, 1 file changed, -8/+5)

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* Use error_is_set() only when necessary (Markus Armbruster, 2014-02-17, 1 file changed, -1/+1)

error_is_set(&var) is the same as var != NULL, but it takes whole-program analysis to figure that out. Unnecessarily hard for optimizers, static checkers, and human readers. Dumb it down to obvious.

Gets rid of several dozen Coverity false positives.

Note that the obvious form is already used in many places.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
* block/curl: Implement the libcurl timer callback interface (Peter Maydell, 2014-01-24, 1 file changed, -11/+70)

libcurl versions 7.16.0 and later have a timer callback interface which must be implemented in order for libcurl to make forward progress (it will sometimes rely on being called back on the timeout if there are no file descriptors registered). Implement the callback, and use a QEMU AIO timer to ensure we prod libcurl again when it asks us to.

Based on Peter's original patch plus my fix to add curl_multi_timeout_do. Should compile just fine even on older versions of libcurl.

I also tried copy-on-read and streaming:

    $ ./qemu-img create -f qcow2 -o \
        backing_file=http://download.fedoraproject.org/pub/fedora/linux/releases/20/Live/x86_64/Fedora-Live-Desktop-x86_64-20-1.iso \
        foo.qcow2 1G
    $ x86_64-softmmu/qemu-system-x86_64 \
        -drive if=none,file=foo.qcow2,copy-on-read=on,id=cd \
        -device ide-cd,drive=cd --enable-kvm -m 1024

Direct http usage is probably too slow, but with copy-on-read ultimately the image does boot!

After some time, streaming gets canceled by an EIO, which needs further investigation.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
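The interface being implemented looks roughly like this (a sketch; the QEMU timer plumbing is abbreviated and the names are illustrative):

    /* libcurl tells us when (in ms) it wants to be kicked again */
    static int curl_multi_timer_cb(CURLM *multi, long timeout_ms, void *opaque)
    {
        BDRVCURLState *s = opaque;

        if (timeout_ms == -1) {
            timer_del(&s->timer);                          /* cancel */
        } else {
            timer_mod(&s->timer,
                      qemu_clock_get_ns(QEMU_CLOCK_REALTIME)
                      + timeout_ms * 1000000LL);           /* ms -> ns */
        }
        return 0;
    }

    /* registered once per multi handle: */
    curl_multi_setopt(multi, CURLMOPT_TIMERFUNCTION, curl_multi_timer_cb);
    curl_multi_setopt(multi, CURLMOPT_TIMERDATA, s);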
* qemu-option: Remove qemu_opts_create_nofail (Peter Crosthwaite, 2014-01-06, 1 file changed, -1/+1)

This is a boiler-plate _nofail variant of qemu_opts_create. Remove it and use error_abort in call sites.

null/0 arguments need to be added for the id and fail_if_exists fields in affected call sites due to argument inconsistency between the normal and no_fail variants.

Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
* bdrv: Use "Error" for opening imagesMax Reitz2013-09-121-1/+2
| | | | | | | Add an Error ** parameter to BlockDriver.bdrv_open and BlockDriver.bdrv_file_open to allow more specific error messages. Signed-off-by: Max Reitz <mreitz@redhat.com>
* curl: qemu_bh_new() can never return NULL (Stefan Hajnoczi, 2013-09-01, 1 file changed, -6/+0)

Drop the error code path which cannot be taken since qemu_bh_new() does not return NULL.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
* aio: drop io_flush argument (Stefan Hajnoczi, 2013-08-19, 1 file changed, -5/+4)

The .io_flush() handler no longer exists and has no users. Drop the io_flush argument to aio_set_fd_handler() and related functions.

The AioFlushEventNotifierHandler and AioFlushHandler typedefs are no longer used and are dropped too.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* block/curl: drop curl_aio_flush() (Stefan Hajnoczi, 2013-08-19, 1 file changed, -19/+3)

.io_flush() is no longer called, so drop curl_aio_flush(). The acb[] array that the function checks is still used in other parts of block/curl.c; therefore we cannot remove acb[], it is still needed.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* curl: refuse to open URL from HTTP server without range support (Fam Zheng, 2013-07-05, 1 file changed, -6/+18)

The CURL driver requests partial data from the server on guest IO requests. For HTTP and HTTPS, it uses "Range: ***" in requests, and this will not work if the server does not accept ranges. This patch adds that check at open time.

* Removed curl_size_cb, which is not used: On one hand it's registered to libcurl as CURLOPT_WRITEFUNCTION, instead of CURLOPT_HEADERFUNCTION, which will get called with *data*, not *header*. On the other hand, s->len is assigned unconditionally later. In this removed function, the sscanf for "Content-Length: %zd" on (void *)ptr, which is not guaranteed to be zero-terminated, is potentially a security bug. So this patch fixes it as a side-effect. The bug is reported as:

    https://bugs.launchpad.net/qemu/+bug/1188943

  (Note the bug is marked "private" so you might not be able to see it.)

* Introduced curl_header_cb, which is used to parse the header and mark the server as accepting ranges if an "Accept-Ranges: bytes" line is seen in the response header. If the protocol is HTTP or HTTPS but the server response does not indicate this support, refuse to open the URL.

Note that the Python builtin module SimpleHTTPServer is an example of a server not supporting ranges; if you need to test this driver, get a better server or use internet URLs.

Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
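The header callback approach, sketched (CURLOPT_HEADERFUNCTION sees response header lines rather than body data; the field names are illustrative):

    static size_t curl_header_cb(void *ptr, size_t size, size_t nmemb, void *opaque)
    {
        BDRVCURLState *s = opaque;
        size_t realsize = size * nmemb;
        const char *accept_line = "Accept-Ranges: bytes";

        if (realsize >= strlen(accept_line)
            && strncmp((char *)ptr, accept_line, strlen(accept_line)) == 0) {
            s->accept_range = true;         /* server supports byte ranges */
        }
        return realsize;
    }

    /* ... and at open time, for http/https only: */
    if (!s->accept_range) {
        /* refuse to open: partial reads would otherwise misbehave */
    }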