path: root/drivers/nvme/host/nvme.h
author		Jens Axboe	2018-06-21 17:49:37 +0200
committer	Christoph Hellwig	2018-06-21 18:59:46 +0200
commit		943e942e6266f22babee5efeb00f8f672fbff5bd (patch)
tree		9122de26af304afdf313020e689e9e4008de375c /drivers/nvme/host/nvme.h
parent		nvme-pci: move nvme_kill_queues to nvme_remove_dead_ctrl (diff)
nvme-pci: limit max IO size and segments to avoid high order allocations
nvme requires an sg table allocation for each request. If the request
is large, then the allocation can become quite large. For instance,
with our default software settings of 1280KB IO size, we'll need 10248
bytes of sg table. That turns into a 2nd order allocation, which we
can't always guarantee. If we fail the allocation, blk-mq will retry
it later. But there's no guarantee that we'll EVER be able to allocate
that much contiguous memory.

Limit the IO size such that we never need more than a single page of
memory. That's a lot faster and more reliable. Then back that
allocation with a mempool, so that we know we'll always be able to
succeed the allocation at some point.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
Acked-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Diffstat (limited to 'drivers/nvme/host/nvme.h')
-rw-r--r--	drivers/nvme/host/nvme.h	1
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 231807cbc849..0c4a33df3b2f 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -170,6 +170,7 @@ struct nvme_ctrl {
u64 cap;
u32 page_size;
u32 max_hw_sectors;
+ u32 max_segments;
u16 oncs;
u16 oacs;
u16 nssa;