path: root/include/block/nbd.h
* nbd: make nbd_read* return -EIO on error  (Roman Kagan, 2021-02-03; 1 file changed, +4/-3)

  NBD reconnect logic considers the error code from the functions that read NBD messages to tell if reconnect should be attempted or not: it is attempted on -EIO, otherwise the client transitions to NBD_CLIENT_QUIT state (see nbd_channel_error). This error code is propagated from the primitives like nbd_read.

  The problem, however, is that nbd_read itself turns every error into -1 rather than -EIO. As a result, if the NBD server happens to die while sending the message, the client in QEMU receives less data than it expects, considers it as a fatal error, and wouldn't attempt reestablishing the connection.

  Fix it by turning every negative return from qio_channel_read_all into -EIO returned from nbd_read. Apparently that was the original behavior, but got broken later. Also adjust nbd_readXX to follow.

  Fixes: e6798f06a6 ("nbd: generalize usage of nbd_read")
  Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
  Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Message-Id: <20210129073859.683063-4-rvkagan@yandex-team.ru>
  Signed-off-by: Eric Blake <eblake@redhat.com>

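  For illustration, after this fix the read helper in include/block/nbd.h boils down to roughly the following (a simplified sketch, not the verbatim header):

      /* Sketch: any failure from qio_channel_read_all() is reported as -EIO. */
      static inline int nbd_read(QIOChannel *ioc, void *buffer, size_t size,
                                 const char *desc, Error **errp)
      {
          ERRP_GUARD();
          int ret = qio_channel_read_all(ioc, buffer, size, errp) < 0 ? -EIO : 0;

          if (ret < 0 && desc) {
              error_prepend(errp, "Failed to read %s: ", desc);
          }
          return ret;
      }
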
* nbd: Add new qemu:allocation-depth metadata context  (Eric Blake, 2020-10-30; 1 file changed, +5/-3)

  'qemu-img map' provides a way to determine which extents of an image come from the top layer vs. inherited from a backing chain. This is useful information worth exposing over NBD. There is a proposal to add a QMP command block-dirty-bitmap-populate which can create a dirty bitmap that reflects allocation information, at which point the qemu:dirty-bitmap:NAME metadata context can expose that information via the creation of a temporary bitmap, but we can shorten the effort by adding a new qemu:allocation-depth metadata context that does the same thing without an intermediate bitmap (this patch does not eliminate the need for that proposal, as it will have other uses as well).

  While documenting things, remember that although the NBD protocol has NBD_OPT_SET_META_CONTEXT, the rest of its documentation refers to 'metadata context', which is a more apt description of what is actually being used by NBD_CMD_BLOCK_STATUS: the user is requesting metadata by passing one or more context names. So I also touched up some existing wording to prefer the term 'metadata context' where it makes sense.

  Note that this patch does not actually enable any way to request a server to enable this context; that will come in the next patch.

  Signed-off-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20201027050556.269064-10-eblake@redhat.com>
  Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

* nbd: Merge nbd_export_new() and nbd_export_create()  (Kevin Wolf, 2020-10-02; 1 file changed, +1/-6)

  There is no real reason any more why nbd_export_new() and nbd_export_create() should be separate functions. The latter only performs a few checks before it calls the former.

  What makes the current state stand out is that it's the only function in BlockExportDriver that is not a static function inside nbd/server.c, but a small wrapper in blockdev-nbd.c that then calls back into nbd/server.c for the real functionality.

  Move all the checks to nbd/server.c and make the resulting function static to improve readability.

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Message-Id: <20200924152717.287415-27-kwolf@redhat.com>
  Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* block/export: Create BlockBackend in blk_exp_add()  (Kevin Wolf, 2020-10-02; 1 file changed, +2/-2)

  Every export type will need a BlockBackend, so creating it centrally in blk_exp_add() instead of the .create driver callback avoids duplication.

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Message-Id: <20200924152717.287415-24-kwolf@redhat.com>
  Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* block/export: Add block-export-del  (Kevin Wolf, 2020-10-02; 1 file changed, +0/-1)

  Implement a new QMP command block-export-del and make nbd-server-remove a wrapper around it.

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Message-Id: <20200924152717.287415-21-kwolf@redhat.com>
  Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* block/export: Add blk_exp_close_all(_type)  (Kevin Wolf, 2020-10-02; 1 file changed, +0/-2)

  This adds a function to shut down all block exports, and another one to shut down the block exports of a single type. The latter is used for now when stopping the NBD server. As soon as we implement support for multiple NBD servers, we'll need a per-server list of exports and it will be replaced by a function using that.

  As a side effect, the BlockExport layer has a list tracking all existing exports now. closed_exports loses its only user and can go away.

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Message-Id: <20200924152717.287415-18-kwolf@redhat.com>
  Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* block/export: Allocate BlockExport in blk_exp_add()  (Kevin Wolf, 2020-10-02; 1 file changed, +6/-5)

  Instead of letting the driver allocate and return the BlockExport object, allocate it already in blk_exp_add() and pass it. This allows us to initialise the generic part before calling into the driver so that the driver can just use these values instead of having to parse the options a second time.

  For symmetry, move freeing the BlockExport to blk_exp_unref().

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Message-Id: <20200924152717.287415-17-kwolf@redhat.com>
  Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* block/export: Move refcount from NBDExport to BlockExport  (Kevin Wolf, 2020-10-02; 1 file changed, +0/-2)

  Having a refcount makes sense for all types of block exports. It is also a prerequisite for keeping a list of all exports at the BlockExport level.

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Message-Id: <20200924152717.287415-14-kwolf@redhat.com>
  Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* qemu-nbd: Use blk_exp_add() to create the export  (Kevin Wolf, 2020-10-02; 1 file changed, +1/-0)

  With this change, NBD exports are now only created through the BlockExport interface. This allows us finally to move things from the NBD layer to the BlockExport layer if they make sense for other export types, too.

  blk_exp_add() returns only a weak reference, so the explicit nbd_export_put() goes away.

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Message-Id: <20200924152717.287415-12-kwolf@redhat.com>
  Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* nbd: Remove NBDExport.close callback  (Kevin Wolf, 2020-10-02; 1 file changed, +1/-2)

  The export close callback is unused by the built-in NBD server. qemu-nbd uses it only during shutdown to wait for the unrefed export to actually go away. It can just use nbd_export_close_all() instead and do without the callback.

  This removes the close callback from nbd_export_new() and makes both callers of it more similar.

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20200924152717.287415-11-kwolf@redhat.com>
  Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* nbd: Add max-connections to nbd-server-start  (Kevin Wolf, 2020-10-02; 1 file changed, +2/-1)

  This is a QMP equivalent of qemu-nbd's --shared option, limiting the maximum number of clients that can attach at the same time.

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20200924152717.287415-9-kwolf@redhat.com>
  Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* block/export: Remove magic from block-export-add  (Kevin Wolf, 2020-10-02; 1 file changed, +2/-1)

  nbd-server-add tries to be convenient and adds two questionable features that we don't want to share in block-export-add, even for NBD exports:

  1. When requesting a writable export of a read-only device, the export is silently downgraded to read-only. This should be an error in the context of block-export-add.

  2. When using a BlockBackend name, unplugging the device from the guest will automatically stop the NBD server, too. This may sometimes be what you want, but it could also be very surprising. Let's keep things explicit with block-export-add. If the user wants to stop the export, they should tell us so.

  Move these things into the nbd-server-add QMP command handler so that they apply only there.

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Message-Id: <20200924152717.287415-8-kwolf@redhat.com>
  Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* qemu-nbd: Use raw block driver for --offset  (Kevin Wolf, 2020-10-02; 1 file changed, +2/-2)

  Instead of implementing qemu-nbd --offset in the NBD code, just put a raw block node with the requested offset on top of the user image and rely on that doing the job.

  This not only simplifies the nbd_export_new() interface and brings it closer to the set of options that the nbd-server-add QMP command offers, but in fact it also eliminates a potential source for bugs in the NBD code, which previously had to add the offset manually in all relevant places.

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20200924152717.287415-7-kwolf@redhat.com>
  Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* block/export: Add BlockExport infrastructure and block-export-add  (Kevin Wolf, 2020-10-02; 1 file changed, +4/-1)

  We want to have a common set of commands for all types of block exports. Currently, this is only NBD, but we're going to add more types.

  This patch adds the basic BlockExport and BlockExportDriver structs and a QMP command block-export-add that creates a new export based on the given BlockExportOptions.

  qmp_nbd_server_add() becomes a wrapper around qmp_block_export_add().

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Message-Id: <20200924152717.287415-5-kwolf@redhat.com>
  Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* qapi: Create block-export module  (Kevin Wolf, 2020-10-02; 1 file changed, +1/-1)

  Move all block export related types and commands from block-core to the new QAPI module block-export.

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20200924152717.287415-3-kwolf@redhat.com>
  Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* nbd: Remove unused nbd_export_get_blockdev()  (Kevin Wolf, 2020-10-02; 1 file changed, +0/-2)

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Max Reitz <mreitz@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20200924152717.287415-2-kwolf@redhat.com>
  Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* nbd: Use ERRP_GUARD()  (Vladimir Sementsov-Ogievskiy, 2020-07-10; 1 file changed, +1/-0)

  If we want to check error after errp-function call, we need to introduce local_err and then propagate it to errp. Instead, use the ERRP_GUARD() macro, benefits are:

  1. No need of explicit error_propagate call
  2. No need of explicit local_err variable: use errp directly
  3. ERRP_GUARD() leaves errp as is if it's not NULL or &error_fatal, this means that we don't break error_abort (we'll abort on error_set, not on error_propagate)

  If we want to add some info to errp (by error_prepend() or error_append_hint()), we must use the ERRP_GUARD() macro. Otherwise, this info will not be added when errp == &error_fatal (the program will exit prior to the error_append_hint() or error_prepend() call). Fix several such cases, e.g. in nbd_read().

  This commit is generated by command

      sed -n '/^Network Block Device (NBD)$/,/^$/{s/^F: //p}' \
          MAINTAINERS | \
      xargs git ls-files | grep '\.[hc]$' | \
      xargs spatch \
          --sp-file scripts/coccinelle/errp-guard.cocci \
          --macro-file scripts/cocci-macro-file.h \
          --in-place --no-show-diff --max-width 80

  Reported-by: Kevin Wolf <kwolf@redhat.com>
  Reported-by: Greg Kurz <groug@kaod.org>
  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Reviewed-by: Markus Armbruster <armbru@redhat.com>
  [Commit message tweaked]
  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Message-Id: <20200707165037.1026246-8-armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  [ERRP_AUTO_PROPAGATE() renamed to ERRP_GUARD(), and auto-propagated-errp.cocci to errp-guard.cocci. Commit message tweaked again.]

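  The pattern the macro enables looks roughly like this (an illustrative sketch; do_something() is a placeholder, see include/qapi/error.h for the authoritative documentation):

      /* Without ERRP_GUARD(): a local Error and explicit propagation are needed. */
      void old_style(Error **errp)
      {
          Error *local_err = NULL;

          do_something(&local_err);     /* do_something() is hypothetical */
          if (local_err) {
              error_propagate(errp, local_err);
              return;
          }
      }

      /* With ERRP_GUARD(): errp can be dereferenced and amended directly. */
      void new_style(Error **errp)
      {
          ERRP_GUARD();

          do_something(errp);
          if (*errp) {
              error_prepend(errp, "Failed to do something: ");
              return;
          }
      }
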
* qemu-storage-daemon: Add --nbd-server option  (Kevin Wolf, 2020-03-06; 1 file changed, +1/-0)

  Add a --nbd-server option to qemu-storage-daemon to start the built-in NBD server right away. It maps the arguments for nbd-server-start to the command line, with the exception that it uses SocketAddress instead of SocketAddressLegacy: New interfaces shouldn't use legacy types, and the additional nesting would be nasty on the command line.

  Example (only with required options):

      --nbd-server addr.type=inet,addr.host=localhost,addr.port=10809

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Message-Id: <20200224143008.13362-10-kwolf@redhat.com>
  Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* nbd: Don't send oversize strings  (Eric Blake, 2019-11-18; 1 file changed, +4/-4)

  Qemu as server currently won't accept export names larger than 256 bytes, nor create dirty bitmap names longer than 1023 bytes, so most uses of qemu as client or server have no reason to get anywhere near the NBD spec maximum of a 4k limit per string.

  However, we weren't actually enforcing things, ignoring when the remote side violates the protocol on input, and also having several code paths where we send oversize strings on output (for example, qemu-nbd --description could easily send more than 4k). Tighten things up as follows:

  client:
  - Perform bounds check on export name and dirty bitmap request prior to handing it to server
  - Validate that copied server replies are not too long (ignoring NBD_INFO_* replies that are not copied is not too bad)

  server:
  - Perform bounds check on export name and description prior to advertising it to client
  - Reject client name or metadata query that is too long
  - Adjust things to allow full 4k name limit rather than previous 256 byte limit

  Signed-off-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20191114024635.11363-4-eblake@redhat.com>
  Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
  Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

* nbd/server: Prefer heap over stack for parsing client names  (Eric Blake, 2019-11-18; 1 file changed, +5/-5)

  As long as we limit NBD names to 256 bytes (the bare minimum permitted by the standard), stack-allocation works for parsing a name received from the client. But as mentioned in a comment, we eventually want to permit up to the 4k maximum of the NBD standard, which is too large for stack allocation; so switch everything in the server to use heap allocation. For now, there is no change in actually supported name length.

  Signed-off-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20191114024635.11363-2-eblake@redhat.com>
  [eblake: fix uninit variable compile failure]
  Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
  Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

* nbd: Grab aio context lock in more places  (Eric Blake, 2019-09-24; 1 file changed, +1/-0)

  When iothreads are in use, the failure to grab the aio context results in an assertion failure when trying to unlock things during blk_unref, when trying to unlock a mutex that was not locked. In short, all calls to nbd_export_put need to be done while within the correct aio context. But since nbd_export_put can recursively reach itself via nbd_export_close, and recursively grabbing the context would deadlock, we can't do the context grab directly in those functions, but must do so in their callers.

  Hoist the use of the correct aio_context from nbd_export_new() to its caller qmp_nbd_server_add(). Then tweak qmp_nbd_server_remove(), nbd_eject_notifier(), and nbd_export_close_all() to grab the right context, so that all callers during qemu now own the context before nbd_export_put() can call blk_unref(). Remaining uses in qemu-nbd don't matter (since that use case does not support iothreads).

  Suggested-by: Kevin Wolf <kwolf@redhat.com>
  Signed-off-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20190917023917.32226-1-eblake@redhat.com>
  Reviewed-by: Sergio Lopez <slp@redhat.com>

* nbd: Prepare for NBD_CMD_FLAG_FAST_ZERO  (Eric Blake, 2019-09-05; 1 file changed, +4/-0)

  Commit fe0480d6 and friends added BDRV_REQ_NO_FALLBACK as a way to avoid wasting time on a preliminary write-zero request that will later be rewritten by actual data, if it is known that the write-zero request will use a slow fallback; but in doing so, could not optimize for NBD.

  The NBD specification is now considering an extension that will allow passing on those semantics; this patch updates the new protocol bits and 'qemu-nbd --list' output to recognize the bit, as well as the new errno value possible when using the new flag; while upcoming patches will improve the client to use the feature when present, and the server to advertise support for it.

  The NBD spec recommends (but does not require) that ENOTSUP be avoided for all but failures of a fast zero (the only time it is mandatory to avoid an ENOTSUP failure is when fast zero is supported but not requested during write zeroes; the questionable use is for ENOTSUP to other actions like a normal write request). However, clients that get an unexpected ENOTSUP will either already be treating it the same as EINVAL, or may appreciate the extra bit of information. We were equally loose for returning EOVERFLOW in more situations than recommended by the spec, so if it turns out to be a problem in practice, a later patch can tighten handling for both error codes.

  Signed-off-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20190823143726.27062-3-eblake@redhat.com>
  Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  [eblake: tweak commit message, also handle EOPNOTSUPP]

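  The protocol additions being prepared here amount to a few new constants, roughly as below (values follow the NBD spec's bit assignments; treat this as an illustrative sketch rather than the exact header contents):

      #define NBD_FLAG_SEND_FAST_ZERO  (1 << 11) /* transmission flag: server accepts the fast-zero hint */
      #define NBD_CMD_FLAG_FAST_ZERO   (1 << 4)  /* request flag for NBD_CMD_WRITE_ZEROES */
      #define NBD_ENOTSUP              95        /* reply error: fast zero request could not be honored */
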
* nbd: Improve per-export flag handling in server  (Eric Blake, 2019-09-05; 1 file changed, +1/-1)

  When creating a read-only image, we are still advertising support for TRIM and WRITE_ZEROES to the client, even though the client should not be issuing those commands. But seeing this requires looking across multiple functions: All callers to nbd_export_new() passed a single flag based solely on whether the export allows writes. Later, we then pass a constant set of flags to nbd_negotiate_options() (namely, the set of flags which we always support, at least for writable images), which is then further dynamically modified with NBD_FLAG_SEND_DF based on client requests for structured options. Finally, when processing NBD_OPT_EXPORT_NAME or NBD_OPT_EXPORT_GO we bitwise-or the original caller's flag with the runtime set of flags we've built up over several functions.

  Let's refactor things to instead compute a baseline of flags as soon as possible which gets shared between multiple clients, in nbd_export_new(), and changing the signature for the callers to pass in a simpler bool rather than having to figure out flags. We can then get rid of the 'myflags' parameter to various functions, and instead refer to client for everything we need (we still have to perform a bitwise-OR for NBD_FLAG_SEND_DF during NBD_OPT_EXPORT_NAME and NBD_OPT_EXPORT_GO, but it's easier to see what is being computed).

  This lets us quit advertising senseless flags for read-only images, as well as making the next patch for exposing FAST_ZERO support easier to write.

  Signed-off-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20190823143726.27062-2-eblake@redhat.com>
  Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  [eblake: improve commit message, update iotest 223]

* nbd: Advertise multi-conn for shared read-only connections  (Eric Blake, 2019-09-05; 1 file changed, +1/-1)

  The NBD specification defines NBD_FLAG_CAN_MULTI_CONN, which can be advertised when the server promises cache consistency between simultaneous clients (basically, rules that determine what FUA and flush from one client are able to guarantee for reads from another client). When we don't permit simultaneous clients (such as qemu-nbd without -e), the bit makes no sense; and for writable images, we probably have a lot more work before we can declare that actions from one client are cache-consistent with actions from another. But for read-only images, where flush isn't changing any data, we might as well advertise multi-conn support. What's more, advertisement of the bit makes it easier for clients to determine if 'qemu-nbd -e' was in use, where a second connection will succeed rather than hang until the first client goes away.

  This patch affects qemu as server in advertising the bit. We may want to consider patches to qemu as client to attempt parallel connections for higher throughput by spreading the load over those connections when a server advertises multi-conn, but for now sticking to one connection per nbd:// BDS is okay.

  See also: https://bugzilla.redhat.com/1708300

  Signed-off-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20190815185024.7010-1-eblake@redhat.com>
  [eblake: tweak blockdev-nbd.c to not request shared when writable, fix iotest 233]
  Reviewed-by: John Snow <jsnow@redhat.com>

* block/nbd: use non-blocking io channel for nbd negotiation  (Vladimir Sementsov-Ogievskiy, 2019-08-15; 1 file changed, +2/-1)

  No reason to use blocking channel for negotiation and we'll benefit in further reconnect feature, as qio_channel reads and writes will do qemu_coroutine_yield while waiting for io completion.

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20190618114328.55249-3-vsementsov@virtuozzo.com>
  Signed-off-by: Eric Blake <eblake@redhat.com>

* qemu-nbd: Look up flag names in array  (Max Reitz, 2019-05-07; 1 file changed, +26/-12)

  The existing code to convert flag bits into strings looks a bit strange now, and if we ever add more flags, it will look even stranger. Prevent that from happening by making it look up the flag names in an array.

  Signed-off-by: Max Reitz <mreitz@redhat.com>
  Message-Id: <20190405191635.25740-1-mreitz@redhat.com>
  Signed-off-by: Eric Blake <eblake@redhat.com>

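  The table-driven approach amounts to something like the following (a generic sketch with hypothetical identifiers, not the actual qemu-nbd code; bit positions follow the NBD spec's transmission flags):

      #include <stddef.h>
      #include <stdint.h>
      #include <stdio.h>

      static const char *const flag_names[] = {
          [0] = "has flags", [1] = "read-only",  [2] = "send flush",
          [3] = "send FUA",  [4] = "rotational", [5] = "send trim",
      };

      static void print_flags(uint16_t flags)
      {
          /* Print the name of every known bit that is set. */
          for (size_t bit = 0; bit < sizeof(flag_names) / sizeof(flag_names[0]); bit++) {
              if ((flags & (1u << bit)) && flag_names[bit]) {
                  printf(" %s", flag_names[bit]);
              }
          }
          printf("\n");
      }
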
* nbd: allow authorization with nbd-server-start QMP command  (Daniel P. Berrange, 2019-03-06; 1 file changed, +1/-1)

  As with the previous patch to qemu-nbd, the nbd-server-start QMP command also needs to be able to specify authorization when enabling TLS encryption.

  First the client must create a QAuthZ object instance using the 'object-add' command:

      { 'execute': 'object-add',
        'arguments': { 'qom-type': 'authz-list',
                       'id': 'authz0',
                       'parameters': { 'policy': 'deny',
                                       'rules': [ { 'match': '*CN=fred',
                                                    'policy': 'allow' } ] } } }

  They can then reference this in the new 'tls-authz' parameter when executing the 'nbd-server-start' command:

      { 'execute': 'nbd-server-start',
        'arguments': { 'addr': { 'type': 'inet',
                                 'host': '127.0.0.1',
                                 'port': '9000' },
                       'tls-creds': 'tls0',
                       'tls-authz': 'authz0' } }

  Reviewed-by: Eric Blake <eblake@redhat.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
  Message-Id: <20190227162035.18543-3-berrange@redhat.com>
  Signed-off-by: Eric Blake <eblake@redhat.com>

* qemu-nbd: add support for authorization of TLS clients  (Daniel P. Berrange, 2019-03-06; 1 file changed, +1/-1)

  Currently any client which can complete the TLS handshake is able to use the NBD server. The server admin can turn on the 'verify-peer' option for the x509 creds to require the client to provide a x509 certificate. This means the client will have to acquire a certificate from the CA before they are permitted to use the NBD server. This is still a fairly low bar to cross.

  This adds a '--tls-authz OBJECT-ID' option to the qemu-nbd command which takes the ID of a previously added 'QAuthZ' object instance. This will be used to validate the client's x509 distinguished name. Clients failing the authorization check will not be permitted to use the NBD server.

  For example, to set up authorization that only allows connection from a client whose x509 certificate distinguished name is CN=laptop.example.com,O=Example Org,L=London,ST=London,C=GB escape the commas in the name and use:

      qemu-nbd --object tls-creds-x509,id=tls0,dir=/home/berrange/qemutls,\
                   endpoint=server,verify-peer=yes \
               --object 'authz-simple,id=authz0,identity=CN=laptop.example.com,,\
                   O=Example Org,,L=London,,ST=London,,C=GB' \
               --tls-creds tls0 \
               --tls-authz authz0 \
               ....other qemu-nbd args...

  NB: a real shell command line would not have leading whitespace after the line continuation, it is just included here for clarity.

  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
  Message-Id: <20190227162035.18543-2-berrange@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  [eblake: split long line in --help text, tweak 233 to show that whitespace after ,, in identity= portion is actually okay]
  Signed-off-by: Eric Blake <eblake@redhat.com>

* nbd: Use low-level QIOChannel API in nbd_read_eof()  (Kevin Wolf, 2019-02-25; 1 file changed, +2/-2)

  Instead of using the convenience wrapper qio_channel_read_all_eof(), use the lower level QIOChannel API. This means duplicating some code, but we'll need this because this coroutine yield is special: We want it to be interruptible so that nbd_client_attach_aio_context() can correctly reenter the coroutine.

  This moves the bdrv_dec/inc_in_flight() pair into nbd_read_eof(), so that connection_co will always sit in this exact qio_channel_yield() call when bdrv_drain() returns.

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>

* nbd: Move nbd_read_eof() to nbd/client.c  (Kevin Wolf, 2019-02-25; 1 file changed, +2/-1)

  The only caller of nbd_read_eof() is nbd_receive_reply(), so it doesn't have to live in the header file, but can move next to its caller. Also add the missing coroutine_fn to the function and its caller.

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>

* nbd: generalize usage of nbd_read  (Vladimir Sementsov-Ogievskiy, 2019-02-04; 1 file changed, +30/-2)

  We generally do very similar things around nbd_read: error_prepend specifying what we have tried to read, and be_to_cpu conversion of integers.

  So, it seems reasonable to move common things to helper functions, which:
  1. simplify code a bit
  2. generalize nbd_read error descriptions, all starting with "Failed to read"
  3. make it more difficult to forget to convert things from BE

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20190128165830.165170-1-vsementsov@virtuozzo.com>
  [eblake: rename macro to DEF_NBD_READ_N and formatting tweaks; checkpatch has false positive complaint]
  Signed-off-by: Eric Blake <eblake@redhat.com>

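  The resulting helpers are generated by a small macro in include/block/nbd.h, roughly as below (a simplified sketch; the 2021 fix at the top of this log later changed the failure return to -EIO):

      #define DEF_NBD_READ_N(bits)                                        \
      static inline int nbd_read##bits(QIOChannel *ioc,                   \
                                       uint##bits##_t *val,               \
                                       const char *desc, Error **errp)    \
      {                                                                   \
          /* read the raw big-endian value, with a descriptive error */   \
          if (nbd_read(ioc, val, sizeof(*val), desc, errp) < 0) {         \
              return -1;                                                  \
          }                                                               \
          *val = be##bits##_to_cpu(*val); /* convert to host byte order */\
          return 0;                                                       \
      }

      DEF_NBD_READ_N(16) /* defines nbd_read16() */
      DEF_NBD_READ_N(32) /* defines nbd_read32() */
      DEF_NBD_READ_N(64) /* defines nbd_read64() */
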
* nbd/client: Add meta contexts to nbd_receive_export_list()  (Eric Blake, 2019-01-21; 1 file changed, +2/-0)

  We want to be able to detect whether a given qemu NBD server is exposing the right export(s) and dirty bitmaps, at least for regression testing. We could use 'nbd-client -l' from the upstream NBD project to list exports, but it's annoying to rely on out-of-tree binaries; furthermore, nbd-client doesn't necessarily know about all of the qemu NBD extensions. Thus, we plan on adding a new mode to qemu-nbd that merely sniffs all possible information from the server during handshake phase, then disconnects and dumps the information.

  This patch continues the work of the previous patch, by adding the ability to track the list of available meta contexts into NBDExportInfo. It benefits from the recent refactoring patches with a new nbd_list_meta_contexts() that reuses much of the same framework as setting a meta context.

  Note: a malicious server could exhaust memory of a client by feeding an unending loop of contexts; perhaps we could place a limit on how many we are willing to receive. But this is no different from our earlier analysis on a server sending an unending list of exports, and the death of a client due to memory exhaustion when the client was going to exit soon anyways is not really a denial of service attack.

  Signed-off-by: Eric Blake <eblake@redhat.com>
  Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
  Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Message-Id: <20190117193658.16413-19-eblake@redhat.com>

* nbd/client: Add nbd_receive_export_list()  (Eric Blake, 2019-01-21; 1 file changed, +13/-2)

  We want to be able to detect whether a given qemu NBD server is exposing the right export(s) and dirty bitmaps, at least for regression testing. We could use 'nbd-client -l' from the upstream NBD project to list exports, but it's annoying to rely on out-of-tree binaries; furthermore, nbd-client doesn't necessarily know about all of the qemu NBD extensions. Thus, we plan on adding a new mode to qemu-nbd that merely sniffs all possible information from the server during handshake phase, then disconnects and dumps the information.

  This patch adds the low-level client code for grabbing the list of exports. It benefits from the recent refactoring patches, in order to share as much code as possible when it comes to doing validation of server replies. The resulting information is stored in an array of NBDExportInfo which has been expanded to also hold any description string, along with a convenience function for freeing the list.

  Note: a malicious server could exhaust memory of a client by feeding an unending loop of exports; perhaps we should place a limit on how many we are willing to receive. But note that a server could reasonably be serving an export for every file in a large directory, where an arbitrary limit in the client means we can't list anything from such a server; the same happens if we just run until the client fails to malloc() and thus dies by an abort(), where the limit is no longer arbitrary but determined by available memory. Since the client is already planning on being short-lived, it's hard to call this a denial of service attack that would starve off other uses, so it does not appear to be a security issue.

  Signed-off-by: Eric Blake <eblake@redhat.com>
  Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
  Message-Id: <20190117193658.16413-18-eblake@redhat.com>
  Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

* nbd/client: Change signature of nbd_negotiate_simple_meta_context()  (Eric Blake, 2019-01-21; 1 file changed, +1/-1)

  Pass 'info' instead of three separate parameters related to info, when requesting the server to set the meta context. Update the NBDExportInfo struct to rename the received id field to match the fact that we are currently overloading the field to match whatever context the user supplied through the x-dirty-bitmap hack, as well as adding a TODO comment to remind future patches about a desire to request two contexts at once.

  Signed-off-by: Eric Blake <eblake@redhat.com>
  Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
  Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Message-Id: <20190117193658.16413-11-eblake@redhat.com>

* nbd/client: Move export name into NBDExportInfo  (Eric Blake, 2019-01-21; 1 file changed, +4/-4)

  Refactor the 'name' parameter of nbd_receive_negotiate() from being a separate parameter into being part of the in-out 'info'. This also spills over to a simplification of nbd_opt_go().

  The main driver for this refactoring is that an upcoming patch would like to add support to qemu-nbd to list information about all exports available on a server, where the name(s) will be provided by the server instead of the client. But another benefit is that we can now allow the client to explicitly specify the empty export name "" even when connecting to an oldstyle server (even if qemu is no longer such a server after commit 7f7dfe2a).

  Signed-off-by: Eric Blake <eblake@redhat.com>
  Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
  Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Message-Id: <20190117193658.16413-10-eblake@redhat.com>

* nbd/server: Favor [u]int64_t over off_t  (Eric Blake, 2019-01-21; 1 file changed, +2/-2)

  Although our compile-time environment is set up so that we always support long files with 64-bit off_t, we have no guarantee whether off_t is the same type as int64_t. This requires casts when printing values, and prevents us from directly using qemu_strtoi64() (which will be done in the next patch). Let's just flip to uint64_t where possible, and stick to int64_t for detecting failure of blk_getlength(); we also keep the assertions added in the previous patch that the resulting values fit in 63 bits.

  The overflow check in nbd_co_receive_request() was already sane (request->from is validated to fit in 63 bits, and request->len is 32 bits, so the addition can't overflow 64 bits), but rewrite it in a form easier to recognize as a typical overflow check.

  Rename the variable 'description' to keep line lengths reasonable.

  Suggested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Signed-off-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20190117193658.16413-7-eblake@redhat.com>
  Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

* nbd: Merge nbd_export_bitmap into nbd_export_new  (Eric Blake, 2019-01-14; 1 file changed, +3/-6)

  We only have one caller that wants to export a bitmap name, which it does right after creation of the export. But there is still a brief window of time where an NBD client could see the export but not the dirty bitmap, which a robust client would have to interpret as meaning the entire image should be treated as dirty. Better is to eliminate the window entirely, by inlining nbd_export_bitmap() into nbd_export_new(), and refusing to create the bitmap in the first place if the requested bitmap can't be located. We also no longer need logic for setting a different bitmap name compared to the bitmap being exported.

  Signed-off-by: Eric Blake <eblake@redhat.com>
  Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Message-Id: <20190111194720.15671-8-eblake@redhat.com>

* nbd: Merge nbd_export_set_name into nbd_export_new  (Eric Blake, 2019-01-14; 1 file changed, +1/-2)

  The existing NBD code had a weird split where nbd_export_new() created an export but did not add it to the list of exported names until a later nbd_export_set_name() came along and grabbed a second reference on the object; later, the first call to nbd_export_close() drops the second reference while removing the export from the list. This is in part because the QAPI NbdServerRemoveMode enum documents the possibility of adding a mode where we could do a soft disconnect: preventing new clients, but waiting for existing clients to gracefully quit, based on the mode used when calling nbd_export_close().

  But in spite of all that, note that we never change the name of an NBD export while it is exposed, which means it is easier to just inline the process of setting the name as part of creating the export. Inline the contents of nbd_export_set_name() and nbd_export_set_description() into the two points in an export lifecycle where they matter, then adjust both callers to pass the name up front. Note that for creation, all callers pass a non-NULL name (passing NULL at creation was for old style servers, but we removed support for that in commit 7f7dfe2a), so we can add an assert and do things unconditionally; but for cleanup, because of the dual nature of nbd_export_close(), we still have to be careful to avoid use-after-free. Along the way, add a comment reminding ourselves of the potential of adding a middle mode disconnect.

  Signed-off-by: Eric Blake <eblake@redhat.com>
  Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Message-Id: <20190111194720.15671-5-eblake@redhat.com>

* nbd: publish _lookup functions  (Vladimir Sementsov-Ogievskiy, 2019-01-05; 1 file changed, +5/-0)

  These functions are used for formatting pretty trace points. We are going to add some in block/nbd-client, so let's publish all these functions at once. Note that nbd_reply_type_lookup is already published, and the constants "named" by these functions live in include/block/nbd.h too.

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Message-Id: <20181102151152.288399-3-vsementsov@virtuozzo.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Signed-off-by: Eric Blake <eblake@redhat.com>

* nbd: fix NBD_FLAG_SEND_CACHE value  (Denis V. Lunev, 2018-10-04; 1 file changed, +3/-1)

  Commit bc37b06a5 added NBD_CMD_CACHE support, but used the wrong value for NBD_FLAG_SEND_CACHE flag for negotiation. That commit picked bit 8, which had already been assigned by the NBD specification to mean NBD_FLAG_CAN_MULTI_CONN, and which was already implemented in the Linux kernel as a part of stable userspace-kernel API since 4.10:

      "bit 8, NBD_FLAG_CAN_MULTI_CONN: Indicates that the server operates entirely without cache, or that the cache it uses is shared among all connections to the given device. In particular, if this flag is present, then the effects of NBD_CMD_FLUSH and NBD_CMD_FLAG_FUA MUST be visible across all connections when the server sends its reply to that command to the client. In the absence of this flag, clients SHOULD NOT multiplex their commands over more than one connection to the export.
      ...
      bit 10, NBD_FLAG_SEND_CACHE: documents that the server understands NBD_CMD_CACHE; however, note that server implementations exist which support the command without advertising this bit, and conversely that this bit does not guarantee that the command will succeed or have an impact."

  Consequences:
  - a client trying to use NBD_CMD_CACHE per the NBD spec will not see the feature as available from a qemu 3.0 server (not fatal, clients already have to be prepared for caching to not exist)
  - a client accidentally coded to the qemu 3.0 bit value instead of following the spec may interpret NBD_CMD_CACHE as being available when it is not (probably not fatal, the spec says the server should gracefully fail unknown commands, and that clients of NBD_CMD_CACHE should be prepared for failure even when the feature is advertised); such clients are unlikely (perhaps only in unreleased Virtuozzo code), and will disappear over time
  - a client prepared to use multiple connections based on NBD_FLAG_CAN_MULTI_CONN may cause data corruption when it assumes that caching is consistent when in reality qemu 3.0 did not have a consistent cache. Partially mitigated by using read-only connections (where nothing needs to be flushed, so caching is indeed consistent) or when using qemu-nbd with the default -e 1 (at most one client at a time); visible only when using -e 2 or more for a writable export.

  Thus the commit fixes the negotiation flag in QEMU according to the specification.

  Signed-off-by: Denis V. Lunev <den@openvz.org>
  CC: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  CC: Valery Vdovin <valery.vdovin@acronis.com>
  CC: Eric Blake <eblake@redhat.com>
  CC: Paolo Bonzini <pbonzini@redhat.com>
  CC: qemu-stable@nongnu.org
  Message-Id: <20181004100313.4253-1-den@openvz.org>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  [eblake: enhance commit message, add defines for unimplemented flags]
  Signed-off-by: Eric Blake <eblake@redhat.com>

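  After the fix, the relevant transmission flag bits line up with the specification roughly as follows (an illustrative sketch; see include/block/nbd.h for the authoritative list):

      #define NBD_FLAG_CAN_MULTI_CONN  (1 << 8)   /* multiple clients may share a consistent cache */
      #define NBD_FLAG_SEND_RESIZE     (1 << 9)   /* defined by the spec, not implemented by qemu */
      #define NBD_FLAG_SEND_CACHE      (1 << 10)  /* server understands NBD_CMD_CACHE */
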
* nbd/server: drop old-style negotiation  (Vladimir Sementsov-Ogievskiy, 2018-10-03; 1 file changed, +1/-2)

  After the previous commit, nbd_client_new's first parameter is always NULL. Let's drop it, along with the corresponding old-style negotiation code path, which is now unreachable.

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Message-Id: <20181003170228.95973-3-vsementsov@virtuozzo.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  [eblake: re-wrap short line]
  Signed-off-by: Eric Blake <eblake@redhat.com>

* nbd/client: Add x-dirty-bitmap to query bitmap from server  (Eric Blake, 2018-07-02; 1 file changed, +1/-0)

  In order to test that the NBD server is properly advertising dirty bitmaps, we need a bare minimum client that can request and read the context. Since feature freeze for 3.0 is imminent, this is the smallest workable patch, which replaces the qemu block status report with the results of the NBD server's dirty bitmap (making it very easy to use 'qemu-img map --output=json' to learn where the dirty portions are). Note that the NBD protocol defines a dirty section with the same bit but opposite sense that normal "base:allocation" uses to report an allocated section; so in qemu-img map output, "data":true corresponds to clean, "data":false corresponds to dirty.

  A more complete solution that allows dirty bitmaps to be queried at the same time as normal block status will be required before this addition can lose the x- prefix. Until then, the fact that this replaces normal status with dirty status means actions like 'qemu-img convert' will likely misbehave due to treating dirty regions of the file as if they are unallocated.

  The next patch adds an iotest to exercise this new code.

  Signed-off-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180702191458.28741-2-eblake@redhat.com>

* nbd/server: introduce NBD_CMD_CACHE  (Vladimir Sementsov-Ogievskiy, 2018-06-21; 1 file changed, +2/-1)

  Handle nbd CACHE command. Just do read, without sending read data back. Cache mechanism should be done by exported node driver chain.

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Message-Id: <20180413143156.11409-1-vsementsov@virtuozzo.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  [eblake: fix two missing case labels in switch statements]
  Signed-off-by: Eric Blake <eblake@redhat.com>

* nbd/server: implement dirty bitmap export  (Vladimir Sementsov-Ogievskiy, 2018-06-21; 1 file changed, +6/-2)

  Handle a new NBD meta namespace: "qemu", and corresponding queries: "qemu:dirty-bitmap:<export bitmap name>".

  With the new metadata context negotiated, BLOCK_STATUS query will reply with dirty-bitmap data, converted to extents. The new public function nbd_export_bitmap selects which bitmap to export. For now, only one bitmap may be exported.

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Message-Id: <20180609151758.17343-5-vsementsov@virtuozzo.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  [eblake: wording tweaks, minor cleanups, additional tracing]
  Signed-off-by: Eric Blake <eblake@redhat.com>

* nbd: BLOCK_STATUS for standard get_block_status function: client part  (Vladimir Sementsov-Ogievskiy, 2018-03-13; 1 file changed, +3/-0)

  Minimal realization: only one extent in server answer is supported. Flag NBD_CMD_FLAG_REQ_ONE is used to force this behavior.

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Message-Id: <20180312152126.286890-6-vsementsov@virtuozzo.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  [eblake: grammar tweaks, fix min_block check and 32-bit cap, use -1 instead of errno on failure in nbd_negotiate_simple_meta_context, ensure that block status makes progress on success]
  Signed-off-by: Eric Blake <eblake@redhat.com>

* Include less of the generated modular QAPI headers  (Markus Armbruster, 2018-03-02; 1 file changed, +1/-1)

  In my "build everything" tree, a change to the types in qapi-schema.json triggers a recompile of about 4800 out of 5100 objects.

  The previous commit split up qmp-commands.h, qmp-event.h, qmp-visit.h, qapi-types.h. Each of these headers still includes all its shards. Reduce compile time by including just the shards we actually need.

  To illustrate the benefits: adding a type to qapi/migration.json now recompiles some 2300 instead of 4800 objects. The next commit will improve it further.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Message-Id: <20180211093607.27351-24-armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
  [eblake: rebase to master]
  Signed-off-by: Eric Blake <eblake@redhat.com>

* nbd: BLOCK_STATUS constants  (Vladimir Sementsov-Ogievskiy, 2018-03-01; 1 file changed, +31/-0)

  Expose the new constants and structs that will be used by both server and client implementations of NBD_CMD_BLOCK_STATUS (the command is currently experimental at https://github.com/NetworkBlockDevice/nbd/blob/extension-blockstatus/doc/proto.md but will hopefully be stabilized soon).

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Message-Id: <1518702707-7077-4-git-send-email-vsementsov@virtuozzo.com>
  [eblake: split from larger patch on server implementation]
  Signed-off-by: Eric Blake <eblake@redhat.com>

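  A representative subset of what gets added looks roughly like this (an illustrative sketch, not the complete or verbatim header):

      #define NBD_OPT_SET_META_CONTEXT     (10)  /* negotiate metadata contexts during handshake */
      #define NBD_REPLY_TYPE_BLOCK_STATUS  (5)   /* structured reply chunk carrying extents */

      #define NBD_STATE_HOLE  (1 << 0)   /* extent is unallocated */
      #define NBD_STATE_ZERO  (1 << 1)   /* extent reads as zeroes */

      typedef struct NBDExtent {
          uint32_t length;
          uint32_t flags;   /* NBD_STATE_* */
      } NBDExtent;
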
* nbd: change indenting in nbd.h  (Vladimir Sementsov-Ogievskiy, 2018-03-01; 1 file changed, +11/-11)

  Prepared indenting for the following patch.

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Message-Id: <1518702707-7077-3-git-send-email-vsementsov@virtuozzo.com>
  Signed-off-by: Eric Blake <eblake@redhat.com>

* Move include qemu/option.h from qemu-common.h to actual users  (Markus Armbruster, 2018-02-09; 1 file changed, +0/-2)

  qemu-common.h includes qemu/option.h, but most places that include the former don't actually need the latter. Drop the include, and add it to the places that actually need it.

  While there, drop superfluous includes of both headers, and separate #include from file comment with a blank line.

  This cleanup makes the number of objects depending on qemu/option.h drop from 4545 (out of 4743) to 284 in my "build everything" tree.

  Reviewed-by: Eric Blake <eblake@redhat.com>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Message-Id: <20180201111846.21846-20-armbru@redhat.com>
  [Semantic conflict with commit bdd6a90a9e in block/nvme.c resolved]

* qapi: add nbd-server-remove  (Vladimir Sementsov-Ogievskiy, 2018-01-26; 1 file changed, +1/-0)

  Add command for removing an export. It is needed for cases when we don't want to keep the export after the operation on it was completed. The other example is a temporary node, created with blockdev-add. If we want to delete it we should firstly remove any corresponding NBD export.

  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Message-Id: <20180119135719.24745-3-vsementsov@virtuozzo.com>
  [eblake: drop dead nb_clients code]
  Signed-off-by: Eric Blake <eblake@redhat.com>