path: root/arch/arc/include/asm/atomic.h
author	Mark Rutland	2018-06-21 14:13:08 +0200
committer	Ingo Molnar	2018-06-21 14:22:33 +0200
commit	bef828204a1bc7a0fd3a24551c4265e9c2ab95ed (patch)
tree	bb35ac3bc791547c1345aaa737406f9b315a9e63	/arch/arc/include/asm/atomic.h
parent	atomics: Make conditional ops return 'bool' (diff)
parent	atomics: Make conditional ops return 'bool' (diff)
atomics/treewide: Make atomic64_inc_not_zero() optional
We define a trivial fallback for atomic_inc_not_zero(), but don't do the
same for atomic64_inc_not_zero(), leading most architectures to define
the same boilerplate.

Let's add a fallback in <linux/atomic.h>, and remove the redundant
implementations. Note that atomic64_add_unless() is always defined in
<linux/atomic.h>, and promotes its arguments to the requisite types, so
we need not do this explicitly.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Palmer Dabbelt <palmer@sifive.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/lkml/20180621121321.4761-6-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'arch/arc/include/asm/atomic.h')
-rw-r--r--	arch/arc/include/asm/atomic.h	1
1 file changed, 0 insertions, 1 deletion
diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
index cecdf3403caf..1406825b5e7d 100644
--- a/arch/arc/include/asm/atomic.h
+++ b/arch/arc/include/asm/atomic.h
@@ -603,7 +603,6 @@ static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u)
#define atomic64_dec(v) atomic64_sub(1LL, (v))
#define atomic64_dec_return(v) atomic64_sub_return(1LL, (v))
#define atomic64_dec_and_test(v) (atomic64_dec_return((v)) == 0)
-#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1LL, 0LL)
#endif /* !CONFIG_GENERIC_ATOMIC64 */
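The fallback described in the commit message can be sketched as below. This is a stand-alone, single-threaded model for illustration only: it mirrors the kernel's naming (atomic64_add_unless(), atomic64_inc_not_zero()) but uses a plain struct rather than the kernel's atomic64_t, and performs no actual atomic operations.

```c
#include <stdbool.h>

/* Simplified stand-in for the kernel's 64-bit atomic type. */
typedef struct { long long counter; } atomic64_t;

/*
 * Non-atomic model of atomic64_add_unless(): add @a to @v unless
 * @v's value equals @u. Returns true if the add was performed.
 */
static bool atomic64_add_unless(atomic64_t *v, long long a, long long u)
{
	if (v->counter == u)
		return false;
	v->counter += a;
	return true;
}

/*
 * The shape of the generic fallback this patch series adds: if an
 * architecture does not provide its own atomic64_inc_not_zero(),
 * define it in terms of atomic64_add_unless(v, 1, 0). Because
 * atomic64_add_unless() promotes its arguments, the per-arch 1LL/0LL
 * spelling (as in the line removed above) is unnecessary here.
 */
#ifndef atomic64_inc_not_zero
#define atomic64_inc_not_zero(v)	atomic64_add_unless((v), 1, 0)
#endif
```

With this guard in the generic header, boilerplate like the deleted ARC line becomes redundant: the macro only takes effect when the architecture has not already supplied a definition.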