author	David Sterba	2018-04-04 02:11:50 +0200
committer	David Sterba	2019-02-25 14:13:28 +0100
commit	970e74d961db61eed18e33d8ecd644ee8ef7da04 (patch)
tree	3d77e75a8e04f1207ca504d7d3a3504cb6e843d9 /fs/btrfs/locking.c
parent	btrfs: open code now trivial btrfs_set_lock_blocking (diff)
btrfs: simplify waiting loop in btrfs_tree_lock
Currently, the number of readers and writers is checked and, in case there are any, we wait and redo the locks. There's some duplication before the branches go back to the again label, e.g. calling wait_event on blocking_readers twice. The sequence being transformed is:

loop:
* wait for readers
* wait for writers
* write_lock
* check readers, unlock and wait for readers, loop
* check writers, unlock and wait for writers, loop

The new sequence is not exactly the same due to the simplification; for readers it's slightly faster. For the writers, the original code does:

* wait for writers
* (loop) wait for readers
* wait for writers -- again

while the new code goes directly to the reader check. This should behave the same on a contended lock with multiple writers and readers, but can reduce the number of times we're waiting on something.

Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: David Sterba <dsterba@suse.com>
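For illustration, a minimal sketch of how the rewritten waiting loop reads after this change, reconstructed from the hunk below (only the retry path of btrfs_tree_lock is shown; the surrounding bookkeeping is omitted):

again:
	/* wait until there are no blocking readers or writers */
	wait_event(eb->read_lock_wq, atomic_read(&eb->blocking_readers) == 0);
	wait_event(eb->write_lock_wq, atomic_read(&eb->blocking_writers) == 0);
	write_lock(&eb->lock);
	/*
	 * A blocking reader or writer may have appeared while we slept;
	 * a single combined check now covers both cases, dropping the
	 * lock and retrying instead of waiting on each queue separately.
	 */
	if (atomic_read(&eb->blocking_readers) ||
	    atomic_read(&eb->blocking_writers)) {
		write_unlock(&eb->lock);
		goto again;
	}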
Diffstat (limited to 'fs/btrfs/locking.c')
-rw-r--r--	fs/btrfs/locking.c	11
1 file changed, 2 insertions, 9 deletions
diff --git a/fs/btrfs/locking.c b/fs/btrfs/locking.c
index 7f89ca6f1fbc..82b84e4daad1 100644
--- a/fs/btrfs/locking.c
+++ b/fs/btrfs/locking.c
@@ -237,16 +237,9 @@ again:
 	wait_event(eb->read_lock_wq, atomic_read(&eb->blocking_readers) == 0);
 	wait_event(eb->write_lock_wq, atomic_read(&eb->blocking_writers) == 0);
 	write_lock(&eb->lock);
-	if (atomic_read(&eb->blocking_readers)) {
+	if (atomic_read(&eb->blocking_readers) ||
+	    atomic_read(&eb->blocking_writers)) {
 		write_unlock(&eb->lock);
-		wait_event(eb->read_lock_wq,
-			   atomic_read(&eb->blocking_readers) == 0);
-		goto again;
-	}
-	if (atomic_read(&eb->blocking_writers)) {
-		write_unlock(&eb->lock);
-		wait_event(eb->write_lock_wq,
-			   atomic_read(&eb->blocking_writers) == 0);
 		goto again;
 	}
 	WARN_ON(atomic_read(&eb->spinning_writers));