| author | David S. Miller | 2018-04-17 16:50:30 +0200 |
|---|---|---|
| committer | David S. Miller | 2018-04-17 17:17:58 +0200 |
| commit | 684009d4fdaf40f1e50b0589cff6e039e97058a4 (patch) | |
| tree | 1f162a8267f6647ca930ef202dfc9ad0b0245fae /net/core/filter.c | |
| parent | liquidio: Enhanced ethtool stats (diff) | |
| parent | xdp: avoid leaking info stored in frame data on page reuse (diff) | |
Merge branch 'XDP-redirect-memory-return-API'
Jesper Dangaard Brouer says:
====================
XDP redirect memory return API
Submitted against net-next, as it contains NIC driver changes.
This patchset works towards supporting different XDP RX-ring memory
allocators, as this will be needed by the AF_XDP zero-copy mode.
The patchset uses mlx5 as the sample driver, which gets XDP_REDIRECT
RX-mode implemented, but not ndo_xdp_xmit (as this API is subject to
change throughout the patchset).
A new struct xdp_frame is introduced (modeled after the cpumap xdp_pkt),
and both ndo_xdp_xmit and the new xdp_return_frame end up using it.
Support for a driver-supplied allocator is implemented, and a
refurbished version of page_pool is the first return-allocator type
introduced. This will be an integration point for AF_XDP zero-copy.
The mlx5 driver evolves into using the page_pool, and sees a performance
increase (with ndo_xdp_xmit out of the ixgbe driver) from 6 Mpps to 12 Mpps.
The patchset stops at 16 patches (one over the limit), but more API
changes are planned, specifically extending the ndo_xdp_xmit and
xdp_return_frame APIs to support bulking, as this will address some
known limits.
V2: Updated according to Tariq's feedback
V3: Updated based on feedback from Jason Wang and Alex Duyck
V4: Updated based on feedback from Tariq and Jason
V5: Fix SPDX license, add Tariq's reviews, improve patch desc for perf test
V6: Updated based on feedback from Eric Dumazet and Alex Duyck
V7: Adapt to i40e that got XDP_REDIRECT support in-between
V8:
Updated based on feedback from the kbuild test robot, and adjusted for mlx5 changes;
page_pool is only compiled into the kernel when a driver's Kconfig 'select's the feature
V9:
Remove some inline statements, let compiler decide what to inline
Fix return value in virtio_net driver
Adjust for mlx5 changes in-between submissions
V10:
Minor adjust for mlx5 requested by Tariq
Resubmit against net-next
V11: avoid leaking info stored in frame data on page reuse
====================
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
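
For orientation, the central object the cover letter introduces is struct xdp_frame: frame metadata stored in the packet's own headroom (at data_hard_start), so a frame can be handed to ndo_xdp_xmit or returned to its originating allocator via xdp_return_frame() without any extra allocation. The sketch below is an approximation of the definition this series adds to include/net/xdp.h, not a verbatim copy; field names and comments are illustrative.

```c
/* Approximate shape of the xdp_frame added by this series.
 * The struct lives at the start of the packet page (data_hard_start),
 * so converting an xdp_buff into an xdp_frame costs no allocation.
 * See include/net/xdp.h in the tree for the authoritative definition.
 */
struct xdp_frame {
	void *data;                /* start of packet data                  */
	u16 len;                   /* packet length                         */
	u16 headroom;              /* headroom between xdp_frame and data   */
	u16 metasize;              /* length of XDP metadata, if any        */
	struct xdp_mem_info mem;   /* which allocator the page belongs to   */
	struct net_device *dev_rx; /* RX device (used by cpumap)            */
};
```

Because the allocator identity travels with the frame in xdpf->mem, a consumer on another CPU (or a driver's TX-completion path) can return the page to the right pool, which is what the page_pool integration builds on.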
Diffstat (limited to 'net/core/filter.c')
-rw-r--r-- | net/core/filter.c | 25 |
1 file changed, 23 insertions(+), 2 deletions(-)
```diff
diff --git a/net/core/filter.c b/net/core/filter.c
index d31aff93270d..a374b8560bc4 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -2692,6 +2692,7 @@ static unsigned long xdp_get_metalen(const struct xdp_buff *xdp)
 
 BPF_CALL_2(bpf_xdp_adjust_head, struct xdp_buff *, xdp, int, offset)
 {
+	void *xdp_frame_end = xdp->data_hard_start + sizeof(struct xdp_frame);
 	unsigned long metalen = xdp_get_metalen(xdp);
 	void *data_start = xdp->data_hard_start + metalen;
 	void *data = xdp->data + offset;
@@ -2700,6 +2701,13 @@ BPF_CALL_2(bpf_xdp_adjust_head, struct xdp_buff *, xdp, int, offset)
 		     data > xdp->data_end - ETH_HLEN))
 		return -EINVAL;
 
+	/* Avoid info leak, when reusing area prev used by xdp_frame */
+	if (data < xdp_frame_end) {
+		unsigned long clearlen = xdp_frame_end - data;
+
+		memset(data, 0, clearlen);
+	}
+
 	if (metalen)
 		memmove(xdp->data_meta + offset, xdp->data_meta,
 			metalen);
@@ -2749,13 +2757,18 @@ static int __bpf_tx_xdp(struct net_device *dev,
 			struct xdp_buff *xdp,
 			u32 index)
 {
+	struct xdp_frame *xdpf;
 	int err;
 
 	if (!dev->netdev_ops->ndo_xdp_xmit) {
 		return -EOPNOTSUPP;
 	}
 
-	err = dev->netdev_ops->ndo_xdp_xmit(dev, xdp);
+	xdpf = convert_to_xdp_frame(xdp);
+	if (unlikely(!xdpf))
+		return -EOVERFLOW;
+
+	err = dev->netdev_ops->ndo_xdp_xmit(dev, xdpf);
 	if (err)
 		return err;
 	dev->netdev_ops->ndo_xdp_flush(dev);
@@ -2771,11 +2784,19 @@ static int __bpf_tx_xdp_map(struct net_device *dev_rx, void *fwd,
 
 	if (map->map_type == BPF_MAP_TYPE_DEVMAP) {
 		struct net_device *dev = fwd;
+		struct xdp_frame *xdpf;
 
 		if (!dev->netdev_ops->ndo_xdp_xmit)
 			return -EOPNOTSUPP;
 
-		err = dev->netdev_ops->ndo_xdp_xmit(dev, xdp);
+		xdpf = convert_to_xdp_frame(xdp);
+		if (unlikely(!xdpf))
+			return -EOVERFLOW;
+
+		/* TODO: move to inside map code instead, for bulk support
+		 * err = dev_map_enqueue(dev, xdp);
+		 */
+		err = dev->netdev_ops->ndo_xdp_xmit(dev, xdpf);
 		if (err)
 			return err;
 		__dev_map_insert_ctx(map, index);
```
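
Reading the hunks together: both redirect paths now build an xdp_frame from the xdp_buff before calling the driver, and bpf_xdp_adjust_head() zeroes any headroom that overlaps the region a previous xdp_frame may have occupied on a recycled page, closing the info leak named in the V11 note. Below is a condensed sketch of the transmit-side flow using the same names as the diff; the wrapper function name is a hypothetical stand-in and the body is simplified, not the literal kernel source.

```c
/* Condensed view of the __bpf_tx_xdp() change: convert, transmit, flush.
 * convert_to_xdp_frame() stores the xdp_frame in the buffer's own
 * headroom and returns NULL if there is not enough room, hence the
 * -EOVERFLOW error below.
 */
static int xdp_xmit_one(struct net_device *dev, struct xdp_buff *xdp)
{
	struct xdp_frame *xdpf;
	int err;

	if (!dev->netdev_ops->ndo_xdp_xmit)
		return -EOPNOTSUPP;              /* driver lacks XDP TX support */

	xdpf = convert_to_xdp_frame(xdp);
	if (unlikely(!xdpf))
		return -EOVERFLOW;               /* headroom too small for xdp_frame */

	err = dev->netdev_ops->ndo_xdp_xmit(dev, xdpf);
	if (err)
		return err;

	dev->netdev_ops->ndo_xdp_flush(dev);     /* kick the TX ring */
	return 0;
}
```

On the return side, whoever consumes the frame (the transmitting driver on TX completion, or cpumap after building an skb) is expected to hand the page back through xdp_return_frame(), which dispatches to the allocator recorded in the frame's mem info, for example a page_pool instance in drivers that registered one.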