locking: Remove ACCESS_ONCE() usage
With the new standardized functions, we can replace all ACCESS_ONCE() calls across relevant locking - this includes lockref and seqlock while at it.

ACCESS_ONCE() does not work reliably on non-scalar types. For example, gcc 4.6 and 4.7 might remove the volatile tag for such accesses during the SRA (scalar replacement of aggregates) step:

  https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145

Update the new calls regardless of whether the type is scalar; this is cleaner than having three alternatives.

Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/1424662301.6539.18.camel@stgolabs.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
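To illustrate the difference the commit message describes, here is a minimal userspace sketch. The ACCESS_ONCE() below is the classic kernel definition (a plain volatile cast); the READ_ONCE() is a simplified stand-in for the kernel macro, and read_once_size() and struct fake_lockref are hypothetical names invented for this example, not kernel code. The point is that the volatile cast applies to the whole aggregate, which gcc's SRA pass may decompose and thereby drop the volatile, while the simplified READ_ONCE() funnels the load through one explicitly sized scalar access that SRA cannot split. (The kernel builds with -fno-strict-aliasing; treat the casts accordingly.)

#include <stdint.h>
#include <stdio.h>

/* The classic ACCESS_ONCE(): a plain volatile cast.  For non-scalar
 * (aggregate) types, gcc 4.6/4.7 could drop the volatile qualifier
 * during SRA (gcc PR 58145), silently losing the "read it exactly
 * once" guarantee. */
#define ACCESS_ONCE(x)	(*(volatile typeof(x) *)&(x))

/* Simplified READ_ONCE() sketch (not the kernel's exact macro):
 * perform the load through an explicitly sized volatile scalar
 * access, which SRA cannot decompose. */
static inline void read_once_size(const volatile void *p, void *res, int size)
{
	switch (size) {
	case 1: *(uint8_t  *)res = *(const volatile uint8_t  *)p; break;
	case 2: *(uint16_t *)res = *(const volatile uint16_t *)p; break;
	case 4: *(uint32_t *)res = *(const volatile uint32_t *)p; break;
	case 8: *(uint64_t *)res = *(const volatile uint64_t *)p; break;
	}
}

#define READ_ONCE(x)							\
({									\
	union { typeof(x) val; char c[sizeof(x)]; } u;			\
	read_once_size(&(x), u.c, sizeof(x));				\
	u.val;								\
})

/* A lockref-like 8-byte aggregate: lock word plus reference count. */
struct fake_lockref {
	uint32_t lock;
	uint32_t count;
} __attribute__((aligned(8)));

int main(void)
{
	struct fake_lockref ref = { .lock = 0, .count = 3 };

	/* Snapshot the whole aggregate in one shot, the way
	 * lib/lockref.c snapshots lockref->lock_count. */
	struct fake_lockref old = READ_ONCE(ref);

	printf("lock=%u count=%u\n", old.lock, old.count);
	return 0;
}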
parent 2ae7902681
commit 4d3199e4ca

6 changed files with 23 additions and 23 deletions
lib/lockref.c

@@ -18,7 +18,7 @@
 #define CMPXCHG_LOOP(CODE, SUCCESS) do {					\
 	struct lockref old;							\
 	BUILD_BUG_ON(sizeof(old) != 8);						\
-	old.lock_count = ACCESS_ONCE(lockref->lock_count);			\
+	old.lock_count = READ_ONCE(lockref->lock_count);			\
 	while (likely(arch_spin_value_unlocked(old.lock.rlock.raw_lock))) {	\
 		struct lockref new = old, prev = old;				\
 		CODE								\
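The hunk above is the snapshot step of the CMPXCHG_LOOP() macro in lib/lockref.c. Below is a rough userspace sketch of the overall pattern under stated assumptions: union fake_lockref and fake_lockref_get() are hypothetical names for this example, GCC's __atomic builtins stand in for the kernel's READ_ONCE() and cmpxchg64_relaxed(), and a zero lock word stands in for arch_spin_value_unlocked(). The real macro also calls cpu_relax() between retries, and the lockref functions fall back to taking the spinlock when the fast path gives up.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for struct lockref: the lock word and the
 * reference count share one 64-bit word so both can be updated with a
 * single compare-and-swap. */
union fake_lockref {
	uint64_t lock_count;
	struct {
		uint32_t lock;		/* 0 == unlocked in this sketch */
		uint32_t count;
	};
};

/* Sketch of the CMPXCHG_LOOP() pattern: snapshot the whole word once,
 * apply the update to a private copy, then try to commit it atomically.
 * Keep retrying as long as the lock half still reads as unlocked. */
static bool fake_lockref_get(union fake_lockref *ref)
{
	union fake_lockref old, new;

	old.lock_count = __atomic_load_n(&ref->lock_count, __ATOMIC_RELAXED);

	while (old.lock == 0) {			/* lock not held */
		new = old;
		new.count++;			/* the "CODE" step */

		if (__atomic_compare_exchange_n(&ref->lock_count,
						&old.lock_count,
						new.lock_count,
						false,
						__ATOMIC_RELAXED,
						__ATOMIC_RELAXED))
			return true;		/* the "SUCCESS" step */
		/* CAS failed: old now holds the fresh value, retry. */
	}
	return false;	/* lock was taken; real code falls back to the spinlock */
}

int main(void)
{
	union fake_lockref ref;

	ref.lock = 0;
	ref.count = 1;

	if (fake_lockref_get(&ref))
		printf("reference count bumped to %u\n", ref.count);
	return 0;
}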