locking/pvqspinlock/x86: Use LOCK_PREFIX in __pv_queued_spin_unlock() assembly code
author: Waiman Long <longman@redhat.com>
Tue, 17 Jul 2018 20:16:00 +0000 (16:16 -0400)
committer: Ingo Molnar <mingo@kernel.org>
Wed, 25 Jul 2018 09:22:20 +0000 (11:22 +0200)
The LOCK_PREFIX macro should be used in the __raw_callee_save___pv_queued_spin_unlock()
assembly code, so that the lock prefix can be patched out on UP systems.

Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Joe Mario <jmario@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Link: http://lkml.kernel.org/r/1531858560-21547-1-git-send-email-longman@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
arch/x86/include/asm/qspinlock_paravirt.h

index 9ef5ee03d2d79268dc627dd8371653a7e2ec6c33..159622ee067488ed9baae2ba5627298b6844df32 100644
@@ -43,7 +43,7 @@ asm    (".pushsection .text;"
        "push  %rdx;"
        "mov   $0x1,%eax;"
        "xor   %edx,%edx;"
-       "lock cmpxchg %dl,(%rdi);"
+       LOCK_PREFIX "cmpxchg %dl,(%rdi);"
        "cmp   $0x1,%al;"
        "jne   .slowpath;"
        "pop   %rdx;"