path: root/lib/sg_pool.c
author    Ming Lei 2019-04-28 09:39:32 +0200
committer Martin K. Petersen 2019-06-20 21:21:33 +0200
commit    3dccdf53c2f38399b11085ded4447ce1467f006c (patch)
tree      d5415d9704bd5dc2151e36363e5b6cfa0a1dd087 /lib/sg_pool.c
parent    scsi: core: avoid preallocating big SGL for protection information (diff)
scsi: core: avoid preallocating big SGL for data
scsi_mq_setup_tags() preallocates a big buffer for the IO SGL. The size is based on scsi_mq_sgl_size(), which is derived from shost->sg_tablesize and SG_CHUNK_SIZE.

Modern DMA engines are often capable of dealing with very big segments, so the resulting scsi_mq_sgl_size() is often too big. SG_CHUNK_SIZE results in a static 4KB SGL allocation per command.

If an HBA has lots of deep queues, preallocation for the sg list can consume substantial amounts of memory. For lpfc, nr_hw_queues can be 70 and each queue's depth 3781. This means the resulting preallocation for the data SGL is 70*3781*2K = 517MB.

Switch to runtime allocation of the SGL for lists longer than 2 entries. This is the approach used by NVMe PCI, so it should be reasonable for SCSI as well. Runtime SGL allocation has always been the case for the legacy I/O path, so this is nothing new.

[mkp: attempted to clarify commit desc]

Cc: Christoph Hellwig <hch@lst.de>
Cc: Bart Van Assche <bvanassche@acm.org>
Cc: Ewan D. Milne <emilne@redhat.com>
Cc: Hannes Reinecke <hare@suse.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
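To illustrate the scheme described above, here is a minimal kernel-style sketch of the per-command SGL setup: a tiny inline chunk is preallocated with the command, and anything longer is allocated at runtime through the chained SG-table helpers from lib/sg_pool.c. The struct and function names (example_cmd, example_init_sgtable) and the SCSI_INLINE_SG_CNT threshold are assumptions for illustration, not the literal patch.

/*
 * Sketch only, not the actual patch: short lists (<= 2 entries) live in
 * the inline chunk preallocated with the command; longer lists are
 * allocated at runtime from the sg_pool mempools and chained together.
 */
#include <linux/scatterlist.h>

#define SCSI_INLINE_SG_CNT	2	/* assumed inline threshold from the series */

struct example_cmd {				/* hypothetical per-command state */
	struct sg_table table;
	struct scatterlist inline_sg[SCSI_INLINE_SG_CNT];
};

static int example_init_sgtable(struct example_cmd *cmd, int nents)
{
	/* Falls back to runtime mempool allocation when nents > inline count. */
	return sg_alloc_table_chained(&cmd->table, nents,
				      cmd->inline_sg, SCSI_INLINE_SG_CNT);
}

static void example_free_sgtable(struct example_cmd *cmd)
{
	sg_free_table_chained(&cmd->table, SCSI_INLINE_SG_CNT);
}

The point of the 2-entry threshold is that small, common requests never touch the mempools at all, while the rare large request pays a runtime allocation instead of every command paying a 4KB preallocation up front.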
Diffstat (limited to 'lib/sg_pool.c')
0 files changed, 0 insertions, 0 deletions