path: root/include/linux/skbuff.h
author		Eric Dumazet	2018-03-31 21:58:58 +0200
committer	David S. Miller	2018-04-01 05:25:40 +0200
commit		bf66337140c64c27fa37222b7abca7e49d63fb57 (patch)
tree		21a458948982fa21406bf356534516789d435661 /include/linux/skbuff.h
parent		inet: frags: reorganize struct netns_frags (diff)
inet: frags: get rid of ipfrag_skb_cb/FRAG_CB
ip_defrag uses skb->cb[] to store the fragment offset, and unfortunately this integer is currently in a different cache line than skb->next, meaning that we use two cache lines per skb when finding the insertion point.

By aliasing skb->ip_defrag_offset and skb->dev, we pack all the fields in a single cache line and save precious memory bandwidth.

Note that after the fast path added by Changli Gao in commit d6bebca92c66 ("fragment: add fast path for in-order fragments") this change won't help the fast path, since we still need to access prev->len (2nd cache line), but it will show great benefits when the slow path is entered, since we perform a linear scan of a potentially long list.

Also note that this potentially long list is an attack vector; we might consider eventually using an rb-tree there instead.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'include/linux/skbuff.h')
-rw-r--r--	include/linux/skbuff.h	1
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 47082f54ec1f..9065477ed255 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -672,6 +672,7 @@ struct sk_buff {
 			 * UDP receive path is one user.
 			 */
 			unsigned long		dev_scratch;
+			int			ip_defrag_offset;
 		};
 	};
 	struct rb_node		rbnode; /* used in netem & tcp stack */