author     Markus Pargmann    2013-11-01 10:36:36 +0100
committer  Marc Kleine-Budde  2013-12-17 11:47:19 +0100
commit     4ce78a838c1c5482aeb47cfba9baf9df90400a25 (patch)
tree       5885bec1033265b6134bad75b53c7dc12c183f5e
parent     fddi: cleanup unsigned to unsigned int/short (diff)
can: c_can: Speed up rx_poll function
This patch speeds up the rx_poll function by reducing the number of
register reads.

Replace the 32 bit register read with a 16 bit register read. Currently
the 32 bit register read is implemented as two 16 bit reads; this is
inefficient because rx_poll only uses the lower 16 bit.

The for loop reads the pending interrupts in every iteration, which
leads to up to 16 reads of the pending-interrupt register. The patch
introduces a new outer loop that re-reads the pending interrupts only
as long as 'quota' is above 0. This reduces the total number of reads.

The third change is to replace the for loop with an ffs-based loop, so
only message objects with pending interrupts are visited.

Tested on AM335x. I removed all 'static' and 'inline' from c_can.c to
see the timings for all functions and used the function tracer with
trace_stats.

125kbit:
  Function           Hit     Time          Avg          s^2
  --------           ---     ----          ---          ---
  c_can_do_rx_poll   63960   10168178 us   158.977 us   1493056 us

  With patch:
  c_can_do_rx_poll   63941   3764057 us    58.867 us    776162.2 us

1Mbit:
  Function           Hit     Time          Avg          s^2
  --------           ---     ----          ---          ---
  c_can_do_rx_poll   69489   30049498 us   432.435 us   9271851 us

  With patch:
  c_can_do_rx_poll   207109  24322185 us   117.436 us   171469047 us

Signed-off-by: Markus Pargmann <mpa@pengutronix.de>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
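For illustration, a minimal self-contained C sketch of the loop structure
described above. read_pending16() and handle_message() are hypothetical
stand-ins for the driver's 16 bit pending-interrupt read and per-object
receive path, not the actual c_can API; the point is the outer quota loop
plus the inner ffs() walk over set bits.

/*
 * Standalone sketch of the rx_poll loop shape described in the commit
 * message. The helpers below are made up for illustration only.
 */
#include <stdio.h>
#include <stdint.h>
#include <strings.h>	/* ffs() */

/* Pretend 16 bit pending-interrupt register; each bit is one RX object. */
static uint16_t fake_intpnd = 0x0123;

static uint16_t read_pending16(void)
{
	uint16_t val = fake_intpnd;

	fake_intpnd = 0;	/* pretend the pending messages get consumed */
	return val;
}

static void handle_message(int obj)
{
	printf("received message object %d\n", obj);
}

static int rx_poll_sketch(int quota)
{
	int received = 0;
	uint16_t pending;

	/* Outer loop: one 16 bit read per batch, repeated while quota lasts. */
	while (quota > 0 && (pending = read_pending16())) {
		/* Inner loop: visit only the set bits via ffs(). */
		while (pending && quota > 0) {
			int obj = ffs(pending);	/* lowest pending object, 1-based */

			handle_message(obj);
			pending &= ~(1U << (obj - 1));
			received++;
			quota--;
		}
	}
	return received;
}

int main(void)
{
	printf("handled %d messages\n", rx_poll_sketch(64));
	return 0;
}

Compared with a fixed 16-step for loop that re-reads the pending register
on every iteration, this shape performs one read per batch and skips empty
message objects entirely, which is consistent with the reduction in
register accesses reflected in the timings above.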