author		Nick Piggin	2008-01-30 13:31:21 +0100
committer	Ingo Molnar	2008-01-30 13:31:21 +0100
commit		314cdbefd1fd0a7acf3780e9628465b77ea6a836
tree		2d2e743433ef61864728e4031e2d17be53efa3bc /include/asm-x86/paravirt.h
parent		spinlock: lockbreak cleanup
x86: FIFO ticket spinlocks
Introduce ticket lock spinlocks for x86, which are FIFO. The implementation
is described in the comments. The straight-line lock/unlock instruction
sequence is slightly slower than the dec-based locks on modern x86 CPUs;
however, the difference is quite small on Core2 and Opteron when working out
of cache, and becomes almost insignificant even on P4 when the lock misses
cache. trylock is more significantly slower, but trylock operations are
relatively rare.

On an 8-core (2-socket) Opteron, spinlock unfairness is extremely
noticeable: in a userspace test, per-thread runtime differed by up to 2x,
and some threads were starved or "unfairly" granted the lock up to
1 000 000 (!) times. After this patch, all threads appear to finish at
exactly the same time.

The memory ordering of the lock conforms to x86 standards, and the
implementation has been reviewed by Intel and AMD engineers.

The algorithm also tells us how many CPUs are contending for the lock, so
lockbreak becomes trivial and we no longer have to waste 4 bytes per
spinlock for it.

After this, we can no longer spin on any locks with preempt enabled and
cannot reenable interrupts when spinning on an irq-safe lock, because at
that point we have already taken a ticket and would deadlock if the same
CPU tried to take the lock again. These were questionable anyway: if the
lock happens to be taken under a preempt- or interrupt-disabled section,
it will just have the same latency problems. The real fix is to keep
critical sections short, and to ensure locks are reasonably fair (which
this patch does).

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
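The FIFO behaviour comes from the ticket scheme: each locker atomically
grabs the next ticket number, then spins until the "now serving" counter
reaches its ticket; unlock simply advances that counter. A minimal
user-space sketch of the idea using GCC atomic builtins rather than the
patch's hand-written x86 assembly (the names ticket_lock_t, next, owner and
ticket_is_contended are illustrative, not taken from the patch):

	typedef struct {
		unsigned short next;	/* next ticket to hand out */
		unsigned short owner;	/* ticket currently being served */
	} ticket_lock_t;

	static inline void ticket_lock(ticket_lock_t *lock)
	{
		/* Atomically take a ticket; on x86 this is one "lock xaddw". */
		unsigned short me = __sync_fetch_and_add(&lock->next, 1);

		/* Spin until our number comes up: strict FIFO order. */
		while (__atomic_load_n(&lock->owner, __ATOMIC_ACQUIRE) != me)
			;	/* the kernel would use cpu_relax() here */
	}

	static inline void ticket_unlock(ticket_lock_t *lock)
	{
		/* Only the holder writes owner, so a plain increment
		 * published with release ordering hands over the lock. */
		__atomic_store_n(&lock->owner, lock->owner + 1,
				 __ATOMIC_RELEASE);
	}

	/* next - owner is the number of outstanding tickets, so the
	 * contention check that lockbreak needs falls out for free. */
	static inline int ticket_is_contended(ticket_lock_t *lock)
	{
		return (unsigned short)(lock->next - lock->owner) > 1;
	}

This also shows why the separate lockbreak word can go: the ticket pair
already encodes how many CPUs are queued, so no extra per-lock state is
needed.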
Diffstat (limited to 'include/asm-x86/paravirt.h')
-rw-r--r--	include/asm-x86/paravirt.h	21
1 file changed, 0 insertions, 21 deletions
diff --git a/include/asm-x86/paravirt.h b/include/asm-x86/paravirt.h
index 4f23f434a1f3..24406703007f 100644
--- a/include/asm-x86/paravirt.h
+++ b/include/asm-x86/paravirt.h
@@ -1077,27 +1077,6 @@ static inline unsigned long __raw_local_irq_save(void)
return f;
}
-#define CLI_STRING \
- _paravirt_alt("pushl %%ecx; pushl %%edx;" \
- "call *%[paravirt_cli_opptr];" \
- "popl %%edx; popl %%ecx", \
- "%c[paravirt_cli_type]", "%c[paravirt_clobber]")
-
-#define STI_STRING \
- _paravirt_alt("pushl %%ecx; pushl %%edx;" \
- "call *%[paravirt_sti_opptr];" \
- "popl %%edx; popl %%ecx", \
- "%c[paravirt_sti_type]", "%c[paravirt_clobber]")
-
-#define CLI_STI_CLOBBERS , "%eax"
-#define CLI_STI_INPUT_ARGS \
- , \
- [paravirt_cli_type] "i" (PARAVIRT_PATCH(pv_irq_ops.irq_disable)), \
- [paravirt_cli_opptr] "m" (pv_irq_ops.irq_disable), \
- [paravirt_sti_type] "i" (PARAVIRT_PATCH(pv_irq_ops.irq_enable)), \
- [paravirt_sti_opptr] "m" (pv_irq_ops.irq_enable), \
- paravirt_clobber(CLBR_EAX)
-
/* Make sure as little as possible of this mess escapes. */
#undef PARAVIRT_CALL
#undef __PVOP_CALL
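The hunk above deletes the CLI_STRING/STI_STRING machinery: these built
paravirt-patchable cli/sti sequences whose only user was the old spinlock
slowpath, which briefly reenabled interrupts while spinning on an irq-safe
lock. Ticket locks forbid that trick (a spinning CPU already holds a
ticket), so the macros become dead code. A rough user-space illustration of
the indirect-call pattern they wrapped, with made-up names (pv_ops,
native_irq_disable) standing in for pv_irq_ops and friends:

	#include <stdio.h>

	/* Hypothetical op table standing in for pv_irq_ops. */
	struct pv_irq_ops {
		void (*irq_disable)(void);
		void (*irq_enable)(void);
	};

	static void native_irq_disable(void) { puts("cli"); }
	static void native_irq_enable(void)  { puts("sti"); }

	/* Default: route through the indirect call, as CLI_STRING/STI_STRING
	 * did via "call *%[paravirt_cli_opptr]"; a hypervisor would install
	 * its own handlers here instead. */
	static struct pv_irq_ops pv_ops = {
		.irq_disable = native_irq_disable,
		.irq_enable  = native_irq_enable,
	};

	int main(void)
	{
		pv_ops.irq_disable();	/* the call site the kernel can later
					 * patch to an inline "cli" */
		pv_ops.irq_enable();
		return 0;
	}

The %c[paravirt_cli_type] annotations in the deleted macros recorded which
operation each call site performs, so that at boot the kernel can patch the
indirect call down to the native instruction when no hypervisor is present.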