author | Nick Piggin | 2009-09-15 21:34:12 +0200
---|---|---
committer | Jens Axboe | 2009-09-16 15:18:52 +0200
commit | 77fad5e625e56eb31a343ae1d489979fdc61a2aa |
tree | e0b881a38be27d0c4d8523289f51b70ffb98c080 /fs |
parent | writeback: remove smp_mb(), it's not needed with list_add_tail_rcu() |
writeback: improve scalability of bdi writeback work queues
If you're going to do an atomic RMW on each list entry, there's not much
point in all the RCU complexities of the list walking. This is only going
to help the multi-thread case I guess, but it doesn't hurt to do now.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Diffstat (limited to 'fs')
-rw-r--r-- | fs/fs-writeback.c | 3 |
1 file changed, 2 insertions(+), 1 deletion(-)
```diff
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 59c99e729187..6bca6f8176f0 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -772,8 +772,9 @@ static struct bdi_work *get_next_work_item(struct backing_dev_info *bdi,
 	rcu_read_lock();
 	list_for_each_entry_rcu(work, &bdi->work_list, list) {
-		if (!test_and_clear_bit(wb->nr, &work->seen))
+		if (!test_bit(wb->nr, &work->seen))
 			continue;
+		clear_bit(wb->nr, &work->seen);
 		ret = work;
 		break;
```