path: root/kernel/locking/semaphore.c
author    Waiman Long <Waiman.Long@hp.com>  2014-08-06 19:22:01 +0200
committer Ingo Molnar <mingo@kernel.org>    2014-08-13 10:33:34 +0200
commit    f0bab73cb539fb803c4d419951e8d28aa4964f8f (patch)
tree      0b9616befbd8a892eae3755fb389e2c4296e3d9d /kernel/locking/semaphore.c
parent    locking/spinlocks: Always evaluate the second argument of spin_lock_nested() (diff)
locking/lockdep: Restrict the use of recursive read_lock() with qrwlock
Unlike the original unfair rwlock implementation, the queued rwlock grants the lock in the chronological order of the lock requests, except when the lock requester is in interrupt context. Consequently, recursive read_lock() calls will now hang the process if there is a write_lock() call somewhere in between the read_lock() calls.

This patch updates the lockdep implementation to look for such recursive read_lock() calls. A new read state (3) is used to mark those read_lock() calls that cannot be made recursively except in interrupt context. The new read state exhausts the 2 bits available in the held_lock:read bit field; the addition of any new read state in the future may require a redesign of how all those bits are squeezed together in the held_lock structure.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Scott J Norton <scott.norton@hp.com>
Cc: Fengguang Wu <fengguang.wu@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1407345722-61615-2-git-send-email-Waiman.Long@hp.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
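As background (not part of the patch itself), here is a minimal sketch of the interleaving the commit message warns about; the lock name and thread functions are hypothetical, and the behavior assumes an rwlock_t backed by the queued rwlock implementation:

    #include <linux/spinlock.h>	/* rwlock_t, read_lock(), write_lock() */

    static DEFINE_RWLOCK(demo_lock);	/* hypothetical lock, for illustration */

    /* Thread A, process context */
    static void thread_a(void)
    {
    	read_lock(&demo_lock);	/* step 1: outer read lock granted */
    	/* step 2 happens on another CPU: thread_b()'s write_lock() queues */
    	read_lock(&demo_lock);	/* step 3: the queued rwlock places this
    				 * reader behind the waiting writer, which
    				 * itself waits for step 1's read_unlock():
    				 * deadlock. The old unfair rwlock would
    				 * have granted this lock immediately. */
    	read_unlock(&demo_lock);
    	read_unlock(&demo_lock);
    }

    /* Thread B, process context */
    static void thread_b(void)
    {
    	write_lock(&demo_lock);	/* step 2: blocks until thread_a() drops
    				 * its outer read lock */
    	write_unlock(&demo_lock);
    }

Only lock requesters in interrupt context are exempt from this chronological ordering, which is why the new read state marks process-context read_lock() call sites: lockdep can then flag the recursion instead of the system deadlocking at runtime.

For reference, the "read state" here is the read argument passed to lock_acquire() by the wrapper macros in <linux/lockdep.h>, stored in the 2-bit held_lock:read field the message refers to. Per the existing lockdep convention, plus the state this patch adds:

    /* 0: exclusive (write) acquisition  -- lock_acquire_exclusive()        */
    /* 1: shared (read), non-recursive   -- lock_acquire_shared()           */
    /* 2: shared, recursion allowed      -- lock_acquire_shared_recursive() */
    /* 3: shared, recursion allowed only in interrupt context (this patch)  */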
Diffstat (limited to 'kernel/locking/semaphore.c')
0 files changed, 0 insertions, 0 deletions