path: root/net/sched
Commit log, most recent first (subject, author, date, files changed, lines -/+):
* [PKT_SCHED]: Rework QoS and/or fair queueing configuration
  Thomas Graf, 2005-11-03 (1 file, -199/+195)
  Make "QoS and/or fair queueing" have its own menu, it's too big to be
  inlined into "Network options". Remove the obsolete NET_QOS option.
  Automatically select NET_CLS if needed. Do the same for NET_ESTIMATOR but
  allow it to be selected manually for statistical purposes. Add comments to
  separate queueing from classification. Fix dependencies and ordering of
  classifiers. Improve descriptions/help texts and remove outdated pieces.
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>

* [NET]: Disable NET_SCH_CLK_CPU for SMP x86 hosts
  Andi Kleen, 2005-10-13 (1 file, -1/+3)
  Opterons with frequency scaling have fully unsynchronized TSCs running at
  different frequencies, so using TSCs there is not a good idea. Also some
  other x86 boxes have this problem. gettimeofday should be good enough, so
  just disable it.
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [INET]: speedup inet (tcp/dccp) lookups
  Eric Dumazet, 2005-10-03 (1 file, -3/+3)
  Arnaldo and I agreed it could be applied now, because I have other pending
  patches depending on this one (Thank you Arnaldo). (The other important
  patch moves skc_refcnt into a separate cache line, so that SMP/NUMA
  performance doesn't suffer from cache line ping pongs.)

  1) First some performance data:

  tcp_v4_rcv() wastes a *lot* of time in __inet_lookup_established(). The
  most time critical code is:

      sk_for_each(sk, node, &head->chain) {
          if (INET_MATCH(sk, acookie, saddr, daddr, ports, dif))
              goto hit; /* You sunk my battleship! */
      }

  The sk_for_each() does use prefetch() hints but only the beginning of
  "struct sock" is prefetched. As the first INET_MATCH comparison uses
  inet_sk(__sk)->daddr, which is far away from the beginning of
  "struct sock", it has to bring a cold cache line into the CPU cache.
  Each iteration has to use at least 2 cache lines. This can be problematic
  if some chains are very long.

  2) The goal

  The idea is to change things so that INET_MATCH() may return FALSE in 99%
  of cases using only the data already in the CPU cache, i.e. one cache line
  per iteration.

  3) Description of the patch

  Adds a new 'unsigned int skc_hash' field in 'struct sock_common', filling
  a 32-bit hole on 64-bit platforms:

      struct sock_common {
              unsigned short          skc_family;
              volatile unsigned char  skc_state;
              unsigned char           skc_reuse;
              int                     skc_bound_dev_if;
              struct hlist_node       skc_node;
              struct hlist_node       skc_bind_node;
              atomic_t                skc_refcnt;
      +       unsigned int            skc_hash;
              struct proto            *skc_prot;
      };

  Store in this 32-bit field the full hash, not masked by (ehash_size - 1).
  Using this full hash as the first comparison done in INET_MATCH permits us
  to immediately skip the element without touching a second cache line in
  case of a miss.

  Suppress the sk_hashent/tw_hashent fields since skc_hash (aliased to
  sk_hash and tw_hash) already contains the slot number if we mask with
  (ehash_size - 1).

  In include/net/inet_hashtables.h, on 64-bit platforms:

      #define INET_MATCH(__sk, __hash, __cookie, __saddr, __daddr, __ports, __dif)\
          (((__sk)->sk_hash == (__hash))                              && \
           ((*((__u64 *)&(inet_sk(__sk)->daddr))) == (__cookie))      && \
           ((*((__u32 *)&(inet_sk(__sk)->dport))) == (__ports))       && \
           (!((__sk)->sk_bound_dev_if) || ((__sk)->sk_bound_dev_if == (__dif))))

  and on 32-bit platforms:

      #define TCP_IPV4_MATCH(__sk, __hash, __cookie, __saddr, __daddr, __ports, __dif)\
          (((__sk)->sk_hash == (__hash))                              && \
           (inet_sk(__sk)->daddr == (__saddr))                        && \
           (inet_sk(__sk)->rcv_saddr == (__daddr))                    && \
           (!((__sk)->sk_bound_dev_if) || ((__sk)->sk_bound_dev_if == (__dif))))

  - Adds a prefetch(head->chain.first) in
    __inet_lookup_established()/__tcp_v4_check_established() and
    __inet6_lookup_established()/__tcp_v6_check_established() and
    __dccp_v4_check_established() to bring the first element of the list
    into cache before the {read|write}_lock(&head->lock).

  Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
  Acked-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [PATCH] timer initialization cleanup: DEFINE_TIMER
  Ingo Molnar, 2005-09-09 (1 file, -1/+1)
  Clean up timer initialization by introducing DEFINE_TIMER, a'la
  DEFINE_SPINLOCK. Build- and boot-tested on x86. A similar patch has been
  in the -RT tree for some time.
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>

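  For illustration only (not part of the commit): assuming the 2.6-era
  DEFINE_TIMER(name, function, expires, data) form, the cleanup turns an
  open-coded static initializer into a one-line macro. foo_timeout and
  foo_timer below are hypothetical names.

      /* Hypothetical example of the cleanup; foo_timeout is an assumed handler. */
      static void foo_timeout(unsigned long data);

      /* before: open-coded static initializer */
      static struct timer_list foo_timer = TIMER_INITIALIZER(foo_timeout, 0, 0);

      /* after: DEFINE_TIMER(name, function, expires, data), a'la DEFINE_SPINLOCK */
      static DEFINE_TIMER(foo_timer, foo_timeout, 0, 0);
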
* [LIB]: Make TEXTSEARCH_BM plain tristate like the others
  David S. Miller, 2005-08-30 (1 file, -0/+1)
  And select it when the relevant modules are enabled.
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [NETLINK]: Convert netlink users to use group numbers instead of bitmasks
  Patrick McHardy, 2005-08-30 (3 files, -7/+7)
  Signed-off-by: Patrick McHardy <kaber@trash.net>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [NET]: Deinline netif_carrier_{on,off}().
  Denis Vlasenko, 2005-08-30 (1 file, -0/+16)

      # grep -r 'netif_carrier_o[nf]' linux-2.6.12 | wc -l
      246

      # size vmlinux.org vmlinux.carrier
         text    data     bss     dec    hex filename
      4339634 1054414  259296 5653344 564360 vmlinux.org
      4337710 1054414  259296 5651420 563bdc vmlinux.carrier

  And this ain't an allyesconfig kernel!
  Signed-off-by: David S. Miller <davem@davemloft.net>

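  For context, a sketch of what the de-inlined functions plausibly look like
  in net/sched/sch_generic.c of that era (reconstructed from memory rather
  than quoted from the patch; details may differ):

      #include <linux/module.h>
      #include <linux/netdevice.h>

      /* Approximate out-of-line versions of the former inlines. */
      void netif_carrier_on(struct net_device *dev)
      {
              if (test_and_clear_bit(__LINK_STATE_NOCARRIER, &dev->state))
                      linkwatch_fire_event(dev);      /* report link up */
              if (netif_running(dev))
                      __netdev_watchdog_up(dev);      /* restart the TX watchdog */
      }
      EXPORT_SYMBOL(netif_carrier_on);

      void netif_carrier_off(struct net_device *dev)
      {
              if (!test_and_set_bit(__LINK_STATE_NOCARRIER, &dev->state))
                      linkwatch_fire_event(dev);      /* report loss of carrier */
      }
      EXPORT_SYMBOL(netif_carrier_off);
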
* [NET]: Kill skb->tc_classid
  Patrick McHardy, 2005-08-30 (7 files, -12/+8)
  Signed-off-by: Patrick McHardy <kaber@trash.net>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [PKT_SCHED]: Fix missing qdisc_destroy() in qdisc_create_dflt()
  Thomas Graf, 2005-08-23 (1 file, -0/+1)
  qdisc_create_dflt() fails to destroy the newly allocated default qdisc if
  the initialization fails, resulting in leaks of all kinds. The only caller
  in mainline which may trigger this bug is sch_tbf.c in
  tbf_create_dflt_qdisc().
  Note: qdisc_create_dflt() doesn't fulfill the official locking
  requirements of qdisc_destroy(), but since the qdisc can never be seen by
  the outside world this doesn't matter, and it can stay as-is until the
  locking of pkt_sched is cleaned up.
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>

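  For illustration, a rough reconstruction (not the literal diff) of the
  shape qdisc_create_dflt() takes with the fix: the error path after a
  failed ->init() now destroys the half-built qdisc.

      /* Illustrative sketch of qdisc_create_dflt() with the leak fix applied. */
      struct Qdisc *qdisc_create_dflt(struct net_device *dev, struct Qdisc_ops *ops)
      {
              struct Qdisc *sch;

              sch = qdisc_alloc(dev, ops);
              if (IS_ERR(sch))
                      goto errout;

              if (!ops->init || ops->init(sch, NULL) == 0)
                      return sch;

              qdisc_destroy(sch);     /* the previously missing cleanup */
      errout:
              return NULL;
      }
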
* [EMATCH]: Remove feature ifdefs in meta ematch.
  Patrick McHardy, 2005-07-25 (1 file, -8/+8)
  Signed-off-by: Patrick McHardy <kaber@trash.net>
  Acked-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [PKT_SCHED]: em_meta: Kill TCF_META_ID_{INDEV,SECURITY,TCVERDICT}
  David S. Miller, 2005-07-22 (1 file, -25/+3)
  More unusable TCF_META_* match types that need to get eliminated before
  2.6.13 goes out the door.
  Signed-off-by: David S. Miller <davem@davemloft.net>
  Acked-by: Thomas Graf <tgraf@suug.ch>

* [PKT_SCHED]: Kill TCF_META_ID_REALDEV from meta ematch.
  David S. Miller, 2005-07-22 (1 file, -12/+0)
  It won't exist any longer when we shrink the SKB in 2.6.14, and we should
  kill this off before anyone in userspace starts using it.
  Signed-off-by: David S. Miller <davem@davemloft.net>
  Acked-by: Thomas Graf <tgraf@suug.ch>

* [EMATCH]: Kill TCF_META_ID_TCCLASSID reference from meta ematch as well.
  David S. Miller, 2005-07-19 (1 file, -6/+0)
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [PKT_SCHED]: Reduce branch mispredictions in pfifo_fast_dequeue
  Thomas Graf, 2005-07-18 (1 file, -4/+3)
  The current call to __qdisc_dequeue_head leads to a branch misprediction
  for every loop iteration; the fact that the most common priority is 2
  makes this even worse. This issue was brought up by Eric Dumazet
  <dada1@cosmosbay.com>, but unlike his solution, which was to manually
  unroll the loop, this approach preserves the possibility of increasing the
  number of bands at compile time.
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>

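  For illustration, a sketch (reconstructed, not quoted from the patch) of
  the resulting dequeue loop: each band is tested with skb_queue_empty()
  first, so the dequeue helper only runs on a band known to hold a packet.

      /* Illustrative sketch of pfifo_fast_dequeue() after the change. */
      static struct sk_buff *pfifo_fast_dequeue(struct Qdisc *qdisc)
      {
              int prio;
              struct sk_buff_head *list = qdisc_priv(qdisc);

              for (prio = 0; prio < PFIFO_FAST_BANDS; prio++) {
                      if (!skb_queue_empty(list + prio)) {
                              qdisc->q.qlen--;
                              return __qdisc_dequeue_head(qdisc, list + prio);
                      }
              }

              return NULL;
      }
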
* [PKT_SCHED]: Remove debugging leftover from textsearch ematch
  Thomas Graf, 2005-07-18 (1 file, -3/+0)
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [NET]: move config options out to individual protocols
  Sam Ravnborg, 2005-07-12 (1 file, -0/+37)
  Move the protocol specific config options out to the specific protocols.
  With this change net/Kconfig now starts to become readable and serves as a
  good basis for further re-structuring. The menu structure is left almost
  intact, except that indentation is fixed in most cases. Most visible are
  the INET changes, where several "depends on INET" are replaced with a
  single ifdef INET / endif pair. Several new files were created to
  accomplish this change - they are small but serve the purpose that config
  options are now distributed out where they belong.
  Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [NET]: Transform skb_queue_len() binary tests into skb_queue_empty()
  David S. Miller, 2005-07-08 (1 file, -1/+1)
  This is part of the grand scheme to eliminate the qlen member of
  skb_queue_head, and subsequently remove the 'list' member of sk_buff.
  Most users of skb_queue_len() want to know if the queue is empty or not,
  and that's trivially done with skb_queue_empty(), which doesn't use the
  skb_queue_head->qlen member and instead uses the emptiness of the queue
  list as the test.
  Signed-off-by: David S. Miller <davem@davemloft.net>

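  A made-up example of the kind of conversion this describes (not the
  actual hunk in net/sched; queue_has_work is a hypothetical helper):

      #include <linux/skbuff.h>

      /* Hypothetical helper showing the before/after of the conversion. */
      static int queue_has_work(struct sk_buff_head *q)
      {
              /* before: depends on the qlen counter */
              /*   return skb_queue_len(q) > 0;     */

              /* after: only inspects the list head itself */
              return !skb_queue_empty(q);
      }
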
* [PKT_SCHED]: Blackhole queueing discipline
  Thomas Graf, 2005-07-06 (2 files, -1/+55)
  Useful in combination with classful qdiscs to drop or temporarily disable
  certain flows, e.g. one could block specific ds flows with dsmark. Unlike
  the noop qdisc it can be controlled by the user and statistics accounting
  is done.
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>

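  The data path of such a qdisc is tiny; a hedged sketch of what the
  enqueue/dequeue pair in sch_blackhole.c plausibly looks like
  (reconstructed, details such as the exact return code may differ):

      /* Illustrative sketch of a blackhole qdisc's data path. */
      static int blackhole_enqueue(struct sk_buff *skb, struct Qdisc *sch)
      {
              qdisc_drop(skb, sch);           /* free the skb, count the drop */
              return NET_XMIT_SUCCESS;        /* caller sees a successful enqueue */
      }

      static struct sk_buff *blackhole_dequeue(struct Qdisc *sch)
      {
              return NULL;                    /* nothing ever comes back out */
      }
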
* [PKT_SCHED]: Report rate estimator configuration errors during qdisc allocation
  Thomas Graf, 2005-07-05 (1 file, -5/+17)
  Current behaviour is to not report an error if a rate estimator is created
  together with a qdisc and the configuration of the rate estimator is
  bogus. This leads to unexpected behaviour because the user is not
  notified. New behaviour is to report the error and let the whole qdisc
  creation operation fail so the user is able to fix his mistake.
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>

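  Roughly, this means the qdisc creation path stops ignoring the return
  value of the estimator setup. A hedged sketch of the shape of the change
  in qdisc_create() (attribute and helper names follow the 2.6 code of that
  era but are not quoted from the patch):

      /* Sketch: fail qdisc creation if the rate estimator cannot be set up. */
      if (tca[TCA_RATE-1]) {
              err = gen_new_estimator(&sch->bstats, &sch->rate_est,
                                      sch->stats_lock, tca[TCA_RATE-1]);
              if (err) {
                      /* previously ignored; now tear down and report it */
                      if (ops->destroy)
                              ops->destroy(sch);
                      goto err_out;
              }
      }
      return sch;
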
* [PKT_SCHED]: Cleanup qdisc creation and alignment macros
  Thomas Graf, 2005-07-05 (2 files, -43/+33)
  Adds qdisc_alloc() to share code between qdisc_create() and
  qdisc_create_dflt(). Hides the qdisc alignment behind macros and makes use
  of them.
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>

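  For reference, a sketch of what the alignment macros and the related
  qdisc_priv() helper plausibly look like (reconstructed from memory; the
  alignment constant of 32 is an assumption):

      /* Sketch of the qdisc alignment helpers referred to above. */
      #define QDISC_ALIGNTO           32
      #define QDISC_ALIGN(len)        (((len) + QDISC_ALIGNTO - 1) & ~(QDISC_ALIGNTO - 1))

      /* A qdisc's private data sits right after the aligned struct Qdisc. */
      static inline void *qdisc_priv(struct Qdisc *q)
      {
              return (char *)q + QDISC_ALIGN(sizeof(struct Qdisc));
      }
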
* [NET]: Remove unused security member in sk_buff
  Thomas Graf, 2005-07-05 (1 file, -6/+0)
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [NETLINK]: Missing padding fields in dumped structures
  Patrick McHardy, 2005-06-28 (2 files, -0/+2)
  Plug holes with padding fields and initialize them to zero.
  Signed-off-by: Patrick McHardy <kaber@trash.net>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [NETLINK]: Missing initializations in dumped data
  Patrick McHardy, 2005-06-28 (4 files, -1/+15)
  Mostly missing initialization of padding fields of 1 or 2 bytes length,
  two instances of uninitialized nlmsgerr->msg of 16 bytes length.
  Signed-off-by: Patrick McHardy <kaber@trash.net>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [PKT_SCHED]: Make TEXTSEARCH* options only selected.
  David S. Miller, 2005-06-25 (1 file, -2/+3)
  Do not present these confusing new options to the user unless he picked
  some facility that makes use of it, such as NET_EMATCH_TEXT.
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [PKT_SCHED]: Make NET_EMATCH_TEXT select TEXTSEARCH
  David S. Miller, 2005-06-24 (1 file, -0/+1)
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [PKT_SCHED]: Packet classification based on textsearch (ematch)
  Thomas Graf, 2005-06-24 (3 files, -0/+169)
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [PKT_SCHED]: noop/noqueue qdisc style cleanups
  Thomas Graf, 2005-06-19 (1 file, -11/+5)
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [PKT_SCHED]: Cleanup pfifo_fast qdisc and remove unnecessary code
  Thomas Graf, 2005-06-19 (1 file, -20/+14)
  Removes the skb trimming code which is not needed since we never touch the
  skb upon failure. Removes unnecessary initializers, and simplifies the
  code a bit.
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [PKT_SCHED]: Add and use prio2list() in the pfifo_fast qdisc
  Thomas Graf, 2005-06-19 (1 file, -8/+9)
  prio2list() returns the relevant sk_buff_head for the band specified by
  the priority for a given skb.
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>

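  A sketch of what such a helper plausibly looks like (an illustrative
  reconstruction; prio2band is pfifo_fast's existing priority-to-band
  lookup table):

      /* Sketch: map an skb's priority to the matching band's sk_buff_head. */
      static inline struct sk_buff_head *prio2list(struct sk_buff *skb,
                                                   struct Qdisc *qdisc)
      {
              struct sk_buff_head *list = qdisc_priv(qdisc);

              return list + prio2band[skb->priority & TC_PRIO_MAX];
      }
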
* [PKT_SCHED]: Transform pfifo_fast to use generic queue management interface
  Thomas Graf, 2005-06-19 (1 file, -14/+9)
  Gives pfifo_fast a byte-based backlog.
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [PKT_SCHED]: Cleanup fifo qdisc and remove unnecessary code
  Thomas Graf, 2005-06-19 (1 file, -38/+12)
  Removes the skb trimming code which is not needed since we never touch the
  skb upon failure. Removes unnecessary includes, initializers, and
  simplifies the code a bit.
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [PKT_SCHED]: Transform fifo qdisc to use generic queue management interface
  Thomas Graf, 2005-06-19 (1 file, -88/+14)
  The simplicity of the fifo qdisc allows several qdisc operations to be
  redirected to the relevant queue management function directly. Saves a lot
  of code lines and gives the pfifo a byte-based backlog.
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>

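  For illustration, a reconstruction (not the patch itself) of roughly how
  the byte- and packet-limited enqueue paths collapse onto the generic
  helpers:

      /* Sketch of fifo enqueue built on the generic queue management helpers. */
      static int bfifo_enqueue(struct sk_buff *skb, struct Qdisc *sch)
      {
              struct fifo_sched_data *q = qdisc_priv(sch);

              /* byte-based limit: compare the backlog in bytes */
              if (likely(sch->qstats.backlog + skb->len <= q->limit))
                      return qdisc_enqueue_tail(skb, sch);

              return qdisc_reshape_fail(skb, sch);
      }

      static int pfifo_enqueue(struct sk_buff *skb, struct Qdisc *sch)
      {
              struct fifo_sched_data *q = qdisc_priv(sch);

              /* packet-based limit: compare the queue length */
              if (likely(skb_queue_len(&sch->q) < q->limit))
                      return qdisc_enqueue_tail(skb, sch);

              return qdisc_reshape_fail(skb, sch);
      }
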
* [NETLINK]: Explicit typing
  Jamal Hadi Salim, 2005-06-19 (3 files, -15/+11)
  This patch converts "unsigned flags" to use more explicit types like u16
  instead and incrementally introduces NLMSG_NEW().
  Signed-off-by: Jamal Hadi Salim <hadi@cyberus.ca>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [PKT_SCHED]: Logic simplifications and codingstyle/whitespace cleanups
  Thomas Graf, 2005-06-19 (1 file, -86/+88)
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [PKT_SCHED]: Make dsmark use the new dumping macros
  Thomas Graf, 2005-06-19 (1 file, -28/+24)
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [PKT_SCHED]: Fix dsmark to apply changes consistently
  Thomas Graf, 2005-06-19 (1 file, -49/+82)
  Fixes dsmark to do all configuration sanity checks first and only apply
  the changes if all of them can be applied without any errors. Also fixes
  the weak sanity checks for DSMARK_VALUE and DSMARK_MASK.
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [NET]: Move the netdev list to vger.kernel.org.
  Ralf Baechle, 2005-06-13 (1 file, -1/+1)
  From: Ralf Baechle <ralf@linux-mips.org>
  There are archives of the old list at http://oss.sgi.com/archives/netdev
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [PKT_SCHED]: Fix numeric comparison in meta ematch
  Thomas Graf, 2005-06-09 (1 file, -2/+2)
  This patch is brought to you by the department of applied stupidity.
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [PKT_SCHED]: Dump classification result for basic classifier
  Thomas Graf, 2005-06-09 (1 file, -0/+3)
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [PKT_SCHED]: Allow socket attributes to be matched on via meta ematch
  Thomas Graf, 2005-06-09 (1 file, -24/+267)
  Adds meta collectors for all socket attributes that make sense to be
  filtered upon. Some of them are only useful for debugging but having them
  doesn't hurt.
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [PKT_SCHED]: Fix typo in NET_EMATCH_STACK help text
  Thomas Graf, 2005-06-09 (1 file, -1/+1)
  Spotted by Geert Uytterhoeven <geert@linux-m68k.org>.
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [PKT_SCHED]: Disable dsmark debugging messages by default
  Thomas Graf, 2005-06-01 (1 file, -1/+1)
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [PKT_SCHED]: make dsmark try using pfifo instead of noop while grafting
  Thomas Graf, 2005-06-01 (1 file, -2/+7)
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [PKT_SCHED]: Fix dsmark to count ignored indices while walking
  Thomas Graf, 2005-06-01 (1 file, -2/+3)
  Unused indices which are ignored while walking must still be counted to
  avoid dumping the same index twice.
  Signed-off-by: Thomas Graf <tgraf@suug.ch>
  Signed-off-by: David S. Miller <davem@davemloft.net>

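  A sketch of the pattern involved (an illustrative reconstruction of
  dsmark_walk(), not the literal diff): the walker count has to advance even
  for indices that are skipped, otherwise a resumed dump restarts at the
  wrong offset and repeats entries.

      /* Sketch: count every index, including the ones that are not dumped. */
      static void dsmark_walk(struct Qdisc *sch, struct qdisc_walker *walker)
      {
              struct dsmark_qdisc_data *p = qdisc_priv(sch);
              int i;

              if (walker->stop)
                      return;

              for (i = 0; i < p->indices; i++) {
                      if (p->mask[i] == 0xff && !p->value[i])
                              goto ignore;    /* unused entry, still counted below */
                      if (walker->count >= walker->skip &&
                          walker->fn(sch, i + 1, walker) < 0) {
                              walker->stop = 1;
                              break;
                      }
      ignore:
                      walker->count++;        /* the fix: increment for ignored indices too */
              }
      }
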
* [PKT_SCHED] netem: allow random reordering (with fix)
  Stephen Hemminger, 2005-05-26 (1 file, -12/+42)
  Here is a fixed-up version of the reorder feature of netem. It is the same
  as the earlier patch plus the bugfix from Julio merged in. It has the
  expected backwards-compatibility behaviour. Go ahead and merge this one;
  the TCP strangeness I was seeing was due to the reordering bug and a
  previous version of the TSO patch.
  Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [PKT_SCHED] netem: use only inner qdisc -- no private skbuff queue
  Stephen Hemminger, 2005-05-26 (1 file, -88/+36)
  Netem works better if packets are just queued in the inner discipline
  rather than having a separate delayed queue. Change to use the
  dequeue/requeue to peek, like TBF does. By doing this, potential qlen
  problems with the old method are avoided. The problems happened when the
  netem_run that moved packets from the inner discipline to the nested
  discipline failed (because the inner queue was full). This happened in
  dequeue, so the effective qlen of the netem would be decreased (because of
  the drop), but there was no way to keep the outer qdisc (caller of netem
  dequeue) in sync. The problem window is still there since this patch
  doesn't address the issue of requeue failing in netem_dequeue, but that
  shouldn't happen since the sequence dequeue/requeue should always work.
  The long-term correct fix is to implement qdisc->peek in all the qdiscs to
  allow for this (needed by several other qdiscs as well).
  Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>

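  A hedged sketch of the dequeue/requeue peek idiom described above, as a
  fragment of netem_dequeue() (reconstructed from memory of the 2.6 netem
  code; helper and field names such as PSCHED_TLESS and time_to_send are
  assumptions based on that era's API):

      /* Fragment sketch: "peek" at the inner qdisc by dequeueing and, if the
       * packet is not due yet, handing it straight back via ->requeue(). */
      skb = q->qdisc->dequeue(q->qdisc);
      if (skb) {
              const struct netem_skb_cb *cb = (const struct netem_skb_cb *)skb->cb;
              psched_time_t now;

              PSCHED_GET_TIME(now);
              if (!PSCHED_TLESS(now, cb->time_to_send)) {
                      sch->q.qlen--;
                      return skb;             /* its delay has elapsed, send it */
              }

              /* not ready: push it back into the inner qdisc and wait */
              if (q->qdisc->ops->requeue(skb, q->qdisc) != 0)
                      sch->qstats.drops++;    /* should not happen right after dequeue */
      }
      return NULL;
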
* [PKT_SCHED]: netem: reinsert for duplication
  Stephen Hemminger, 2005-05-26 (1 file, -24/+29)
  Handle duplication of packets in netem by re-inserting at the top of the
  qdisc tree. This avoids problems with qlen accounting with nested qdiscs.
  This recursion requires no additional locking but will potentially
  increase stack depth.
  Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>

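  An illustrative sketch of the re-insertion trick in netem_enqueue()
  (reconstructed, not quoted; get_crandom() and dup_cor are netem's
  correlated-random helpers and are assumptions here):

      /* Sketch: duplicate by cloning and enqueueing the clone at the root
       * qdisc, with duplication temporarily disabled so the clone itself is
       * not duplicated again. */
      if (q->duplicate && q->duplicate >= get_crandom(&q->dup_cor)) {
              struct sk_buff *skb2 = skb_clone(skb, GFP_ATOMIC);

              if (skb2) {
                      struct Qdisc *rootq = sch->dev->qdisc;
                      u32 dupsave = q->duplicate;

                      q->duplicate = 0;
                      rootq->enqueue(skb2, rootq);    /* recurse from the top of the tree */
                      q->duplicate = dupsave;
              }
      }
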
* [PKT_SCHED]: Action repeat
  J Hadi Salim, 2005-05-04 (1 file, -2/+2)
  Long-standing bug: the policy to repeat an action never worked.
  Signed-off-by: J Hadi Salim <hadi@cyberus.ca>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [PKT_SCHED]: netem: adjust parent qlen when duplicating
  Stephen Hemminger, 2005-05-04 (2 files, -5/+16)
  Fix qlen underrun when doing duplication with netem. If netem is used as a
  leaf discipline, then the parent needs to be tweaked when packets are
  duplicated.
  Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>

* [PKT_SCHED]: netem: make qdisc friendly to outer disciplines
  Stephen Hemminger, 2005-05-04 (1 file, -46/+67)
  Netem currently dumps packets into the queue when the timer expires. This
  patch makes it work by self-clocking (more like TBF). It fixes a bug when
  0 delay is requested (only doing loss or duplication).
  Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>