author	David S. Miller	2019-06-15 03:52:14 +0200
committer	David S. Miller	2019-06-15 03:52:44 +0200
commit	4373a5e2606b4eda14fa096caf93dc2efc22689f (patch)
tree	d8fd9f39518e15bb5c6b3856043e3c14bbf412b9 /drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
parent	net: phy: Add more 1000BaseX support detection (diff)
parent	net/packet: introduce packet_rcv_try_clear_pressure() helper (diff)
Merge branch 'packet-DDOS'
Eric Dumazet says:

====================
net/packet: better behavior under DDOS

Using tcpdump (or another af_packet user) on a busy host can have catastrophic consequences, because suddenly, potentially all CPUs end up spinning on a contended spinlock. Both packet_rcv() and tpacket_rcv() grab the spinlock only to find there is no room for an additional packet.

This patch series aligns packet_rcv() and tpacket_rcv(): both now check whether the queue is full before grabbing the spinlock. If the queue is full, they increment a new atomic counter placed on a separate cache line, so readers can drain the queue faster.

There is still false sharing on this new atomic counter; we might make it per-cpu in the future if there is interest.
====================

Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c')
0 files changed, 0 insertions, 0 deletions