author    Vladimir Sementsov-Ogievskiy  2020-09-03 21:02:58 +0200
committer Eric Blake                    2020-10-09 22:04:32 +0200
commit    8c517de24a8a1dcbeb54e7e12b5b0fda42a90ace (patch)
tree      7c54adb44497c050210bccb6d1239b21fada6449 /block
parent    nbd: silence maybe-uninitialized warnings (diff)
block/nbd: fix drain deadlock because of nbd reconnect-delay
We pause the reconnect process during a drained section. So, if we
have requests waiting for a reconnect, we should cancel them;
otherwise they deadlock the drained section.

How to reproduce:

1. Create an image:
   qemu-img create -f qcow2 xx 100M

2. Start an NBD server:
   qemu-nbd xx

3. Start a VM with a second NBD disk on node2, like this:

   ./build/x86_64-softmmu/qemu-system-x86_64 -nodefaults -drive \
      file=/work/images/cent7.qcow2 -drive \
      driver=nbd,server.type=inet,server.host=192.168.100.5,server.port=10809,reconnect-delay=60 \
      -vnc :0 -m 2G -enable-kvm -vga std

4. Access the VM through VNC (or some other way?), and check that the
   NBD drive works:

   dd if=/dev/sdb of=/dev/null bs=1M count=10

   The command should succeed.

5. Now kill the NBD server, and run dd in the guest again:

   dd if=/dev/sdb of=/dev/null bs=1M count=10

Now QEMU is trying to reconnect, and the dd-generated requests are
waiting for the connection (they will wait up to 60 seconds, per the
reconnect-delay option above, and then fail). But suddenly the VM may
hang completely in the deadlock. You may need to increase the
reconnect-delay period to catch the deadlock.

The VM doesn't respond because the drain deadlock happens in the CPU
thread with the global mutex taken. That is not a good thing by
itself, and it is not fixed by this commit (the true fix is using
iothreads). Still, this commit fixes the drain deadlock itself.

Note: we could probably continue to reconnect during the drained
section instead. To achieve this, we could move negotiation to the
connect thread to make it independent of the bs aio context. But
expanding the drained section doesn't seem good anyway. So, let's fix
the bug in the simplest way for now.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20200903190301.367620-2-vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
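For context: the requests that this patch cancels are the ones parked
on s->free_sema while the client is in NBD_CLIENT_CONNECTING_WAIT. The
following condensed sketch of that request-side wait is modeled on
nbd_co_send_request() in block/nbd.c of this period; it is
illustrative rather than a verbatim excerpt (request accounting and
the actual send are elided).

/* Condensed, illustrative sketch; not a verbatim excerpt of block/nbd.c. */
static int coroutine_fn nbd_co_send_request_sketch(BlockDriverState *bs,
                                                   NBDRequest *request)
{
    BDRVNBDState *s = (BDRVNBDState *)bs->opaque;

    qemu_co_mutex_lock(&s->send_mutex);
    /*
     * While the client is in NBD_CLIENT_CONNECTING_WAIT, requests park
     * on free_sema until the reconnect succeeds or reconnect-delay
     * expires. Inside a drained section nothing wakes them, hence the
     * deadlock, unless drain_begin demotes the state to NOWAIT and
     * restarts the queue, as this patch does.
     */
    while (s->in_flight == MAX_NBD_REQUESTS || nbd_client_connecting_wait(s)) {
        qemu_co_queue_wait(&s->free_sema, &s->send_mutex);
    }

    if (s->state != NBD_CLIENT_CONNECTED) {
        /* Woken in the NOWAIT state: fail instead of blocking the drain. */
        qemu_co_mutex_unlock(&s->send_mutex);
        return -EIO;
    }

    /* ... account the request in s->in_flight and send it ... */
    qemu_co_mutex_unlock(&s->send_mutex);
    return 0;
}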
Diffstat (limited to 'block')
-rw-r--r--  block/nbd.c  5
1 file changed, 5 insertions, 0 deletions
diff --git a/block/nbd.c b/block/nbd.c
index 9daf003bea..912ea27be7 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -242,6 +242,11 @@ static void coroutine_fn nbd_client_co_drain_begin(BlockDriverState *bs)
     }
 
     nbd_co_establish_connection_cancel(bs, false);
+
+    if (s->state == NBD_CLIENT_CONNECTING_WAIT) {
+        s->state = NBD_CLIENT_CONNECTING_NOWAIT;
+        qemu_co_queue_restart_all(&s->free_sema);
+    }
 }
 
 static void coroutine_fn nbd_client_co_drain_end(BlockDriverState *bs)