locking/atomic, asm-generic, x86: Add comments for atomic instrumentation
author Dmitry Vyukov <dvyukov@google.com>
Mon, 29 Jan 2018 17:26:07 +0000 (18:26 +0100)
committer Ingo Molnar <mingo@kernel.org>
Mon, 12 Mar 2018 11:15:35 +0000 (12:15 +0100)
The comments are factored out from the code changes to make them
easier to read. Add them separately to explain some non-obvious
aspects.

Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: kasan-dev@googlegroups.com
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/cc595efc644bb905407012d82d3eb8bac3368e7a.1517246437.git.dvyukov@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
arch/x86/include/asm/atomic.h
include/asm-generic/atomic-instrumented.h

index 33afc966d6a988b865072a64b47128c7cde2c03b..0db6bec95489ebc703df2650e98c7f624cc7bff9 100644 (file)
  */
 static __always_inline int arch_atomic_read(const atomic_t *v)
 {
+       /*
+        * Note for KASAN: we deliberately don't use READ_ONCE_NOCHECK() here, as
+        * it is a non-inlined function that increases binary size and stack usage.
+        */
        return READ_ONCE((v)->counter);
 }
 
index 82e080505982a3796b4354098deffd614cc03484..ec07f23678ea6a507539b92d7df7577ccf54e0a0 100644 (file)
@@ -1,3 +1,15 @@
+/*
+ * This file provides wrappers with KASAN instrumentation for atomic operations.
+ * To use this functionality, an arch's atomic.h file needs to define all
+ * atomic operations with an arch_ prefix (e.g. arch_atomic_read()) and include
+ * this file at the end. This file then provides atomic_read() that forwards to
+ * arch_atomic_read() for the actual atomic operation.
+ * Note: if an arch atomic operation is implemented by means of other atomic
+ * operations (e.g. an atomic_read()/atomic_cmpxchg() loop), then it needs to
+ * use the arch_ variants (i.e. arch_atomic_read()/arch_atomic_cmpxchg()) to
+ * avoid double instrumentation.
+ */
+
 #ifndef _LINUX_ATOMIC_INSTRUMENTED_H
 #define _LINUX_ATOMIC_INSTRUMENTED_H
 
@@ -442,6 +454,15 @@ cmpxchg64_local_size(volatile u64 *ptr, u64 old, u64 new)
                (u64)(new)));                                           \
 })
 
+/*
+ * Originally we had the following code here:
+ *     __typeof__(p1) ____p1 = (p1);
+ *     kasan_check_write(____p1, 2 * sizeof(*____p1));
+ *     arch_cmpxchg_double(____p1, (p2), (o1), (o2), (n1), (n2));
+ * But it leads to compilation failures (see gcc issue 72873).
+ * So for now it's left non-instrumented.
+ * There are few callers of cmpxchg_double(), so it's not critical.
+ */
 #define cmpxchg_double(p1, p2, o1, o2, n1, n2)                         \
 ({                                                                     \
        arch_cmpxchg_double((p1), (p2), (o1), (o2), (n1), (n2));        \