author     Stefano Garzarella       2021-07-21 11:42:11 +0200
committer  Stefan Hajnoczi          2021-07-21 14:47:50 +0200
commit     d7ddd0a1618a75b31dc308bb37365ce1da972154
tree       246183bcceead8342ab50bbd359c5cc284da534a
parent     iothread: add aio-max-batch parameter
linux-aio: limit the batch size using `aio-max-batch` parameter
When there are multiple queues attached to the same AIO context, some requests may experience high latency, since in the worst case the AIO engine queue is only flushed when it is full (MAX_EVENTS) or when there are no more queues plugged.

Commit 2558cb8dd4 ("linux-aio: increasing MAX_EVENTS to a larger hardcoded value") changed MAX_EVENTS from 128 to 1024 to increase the number of in-flight requests, but it also increased the potential maximum batch to 1024 elements. When a single queue is attached to the AIO context, the issue is mitigated by laio_io_unplug(), which flushes the queue every time it is invoked, since no other queue can be plugged.

Let's use the new `aio-max-batch` IOThread parameter to mitigate this issue by limiting the number of requests in a batch. We also define a default value (32): it was obtained by running some benchmarks and represents a good tradeoff between the latency increase while a request sits in the queue and the cost of the io_submit(2) system call.

Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Message-id: 20210721094211.69853-4-sgarzare@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
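The following is a minimal, self-contained sketch (not the actual QEMU code) of the batching behaviour the message describes: requests are queued while the device is plugged and the queue is flushed either when the batch limit is reached or when the last unplug happens. The names LaioQueue, laio_submit, laio_io_plug and laio_io_unplug are illustrative only, and the flush is simulated instead of calling io_submit(2).

    #include <stdio.h>

    #define MAX_EVENTS        1024   /* AIO engine queue depth */
    #define DEFAULT_MAX_BATCH 32     /* default used when aio-max-batch is 0 */

    typedef struct {
        unsigned int in_queue;   /* requests waiting to be submitted */
        unsigned int plugged;    /* nesting count of plug/unplug */
        unsigned int max_batch;  /* aio-max-batch; 0 means "use the default" */
    } LaioQueue;

    /* Effective batch limit: aio-max-batch (or the default) capped by MAX_EVENTS. */
    static unsigned int laio_max_batch(const LaioQueue *q)
    {
        unsigned int limit = q->max_batch ? q->max_batch : DEFAULT_MAX_BATCH;
        return limit < MAX_EVENTS ? limit : MAX_EVENTS;
    }

    static void laio_flush(LaioQueue *q)
    {
        if (q->in_queue == 0) {
            return;
        }
        /* In the real engine this would be io_submit(2). */
        printf("io_submit: %u requests\n", q->in_queue);
        q->in_queue = 0;
    }

    static void laio_submit(LaioQueue *q)
    {
        q->in_queue++;
        /*
         * Flush early once the batch limit is reached, instead of waiting
         * for MAX_EVENTS requests or for the last unplug.  This is the
         * latency mitigation the commit message describes.
         */
        if (!q->plugged || q->in_queue >= laio_max_batch(q)) {
            laio_flush(q);
        }
    }

    static void laio_io_plug(LaioQueue *q)
    {
        q->plugged++;
    }

    static void laio_io_unplug(LaioQueue *q)
    {
        if (--q->plugged == 0) {
            laio_flush(q);
        }
    }

    int main(void)
    {
        LaioQueue q = { .max_batch = DEFAULT_MAX_BATCH };

        laio_io_plug(&q);
        for (int i = 0; i < 100; i++) {
            laio_submit(&q);     /* flushes every 32 requests */
        }
        laio_io_unplug(&q);      /* flushes the remaining 4 */
        return 0;
    }

As a usage note, assuming the parameter added by the parent commit ("iothread: add aio-max-batch parameter"), the limit could be set when creating an IOThread, for example with -object iothread,id=iothread0,aio-max-batch=32; when the parameter is left unset, the Linux AIO engine falls back to the default of 32 described above.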