path: root/crypto/chainiv.c
author    Santosh Shilimkar  2015-09-10 20:57:14 +0200
committer Santosh Shilimkar  2015-10-05 20:18:45 +0200
commit    4bebdd7a4d2960b2ff6c40b27156d041ea270765 (patch)
tree      9fdee787d45d2cdef6eb1aa58a4d7b0b19cc246e /crypto/chainiv.c
parent    RDS: Use per-bucket rw lock for bind hash-table (diff)
download  kernel-qcow2-linux-4bebdd7a4d2960b2ff6c40b27156d041ea270765.tar.gz
          kernel-qcow2-linux-4bebdd7a4d2960b2ff6c40b27156d041ea270765.tar.xz
          kernel-qcow2-linux-4bebdd7a4d2960b2ff6c40b27156d041ea270765.zip
RDS: defer the over_batch work to send worker
The current process gives up if its send work is over the batch limit; the work queue will get kicked to finish off any other requests. This fixes a remaining condition from commit 443be0e5affe ("RDS: make sure not to loop forever inside rds_send_xmit"). The restart condition is only for the case where we reached the over_batch code for some other reason, so just retry once more before giving up.

While at it, make sure we use the already available 'send_batch_count' parameter instead of a magic value. The batch count threshold of 1024 came via commit 443be0e5affe ("RDS: make sure not to loop forever inside rds_send_xmit"). The idea is to process as big a batch as we can, but at the same time not hold up other processes waiting to send. Hence we back off after the send_batch_count limit (1024) to avoid soft lockups.

Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
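The back-off pattern the message describes can be sketched as below. This is a minimal, hypothetical C illustration, not the actual net/rds code: the names send_queue, kick_send_worker and the simulated queue are assumptions made for the example; only send_batch_count and its 1024 default come from the commit message.

/*
 * Illustrative sketch (not the RDS implementation) of the pattern:
 * transmit at most send_batch_count messages per call, then defer
 * any remaining work to a worker instead of looping indefinitely.
 */
#include <stdio.h>

static int send_batch_count = 1024;   /* tunable limit, mirrors the parameter idea */

struct send_queue {
    int pending;                      /* messages still waiting to be sent */
};

/* Stand-in for queueing the leftover work on a send worker. */
static void kick_send_worker(struct send_queue *q)
{
    printf("deferring %d remaining messages to the send worker\n", q->pending);
}

/* Send up to send_batch_count messages, then back off. */
static void send_xmit(struct send_queue *q)
{
    int sent = 0;

    while (q->pending > 0 && sent < send_batch_count) {
        q->pending--;                 /* pretend one message was transmitted */
        sent++;
    }

    /* Over the batch limit with work left: hand off rather than spin. */
    if (q->pending > 0)
        kick_send_worker(q);
}

int main(void)
{
    struct send_queue q = { .pending = 1500 };

    send_xmit(&q);                    /* sends 1024, defers the remaining 476 */
    return 0;
}

The point of the hand-off is the same as in the commit message: the sender does a bounded amount of work so it cannot soft-lock the CPU, and anything beyond the batch limit is completed asynchronously.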
Diffstat (limited to 'crypto/chainiv.c')
0 files changed, 0 insertions, 0 deletions