arch: Mass conversion of smp_mb__*()

Mostly scripted conversion of the smp_mb__* barriers.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/n/tip-55dhyhocezdw1dg7u19hmh1u@git.kernel.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-arch@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
commit 4e857c58ef
parent 1b15611e1c
Author: Peter Zijlstra <peterz@infradead.org>
Date:   2014-03-17 18:06:10 +01:00
Committer: Ingo Molnar
106 files changed, 284 insertions(+), 288 deletions(-)


@@ -80,7 +80,7 @@ static inline void set_bit(int nr, volatile unsigned long *addr)
  *
  * clear_bit() is atomic and may not be reordered.  However, it does
  * not contain a memory barrier, so if it is used for locking purposes,
- * you should call smp_mb__before_clear_bit() and/or smp_mb__after_clear_bit()
+ * you should call smp_mb__before_atomic() and/or smp_mb__after_atomic()
  * in order to ensure changes are visible on other processors.
  */
 static inline void clear_bit(int nr, volatile unsigned long *addr)
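
For context, the locking pattern this comment describes looks roughly like the
sketch below. It is illustrative only; the helper name, flag word, and bit
number are not from the patch, but smp_mb__before_atomic() and clear_bit() are
the real kernel primitives involved:

	/* Hedged sketch: releasing a bit-based lock with the renamed barrier. */
	static inline void example_bit_unlock(volatile unsigned long *flags)
	{
		/* Order all stores in the critical section before the clear. */
		smp_mb__before_atomic();
		clear_bit(0, flags);	/* atomic, but implies no memory barrier */
	}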


@@ -20,7 +20,7 @@
  */
 #define clear_bit_unlock(nr, addr)	\
 do {					\
-	smp_mb__before_clear_bit();	\
+	smp_mb__before_atomic();	\
 	clear_bit(nr, addr);		\
 } while (0)
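
A usage sketch of the macro above, pairing it with an acquire on the lock side.
The function names and bit number are illustrative assumptions;
test_and_set_bit_lock(), cpu_relax(), and clear_bit_unlock() are existing
kernel interfaces:

	/* Hedged sketch: a bit spinlock built on the primitives above. */
	static inline void example_lock(volatile unsigned long *word)
	{
		/* test_and_set_bit_lock() has acquire semantics on success. */
		while (test_and_set_bit_lock(0, word))
			cpu_relax();
	}

	static inline void example_unlock(volatile unsigned long *word)
	{
		clear_bit_unlock(0, word);	/* release: barrier, then clear_bit() */
	}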