Hillf Danton [Tue, 14 May 2019 08:34:19 +0000 (16:34 +0800)]
arm64: assembler: Update comment above cond_yield_neon() macro
Since commit 7faa313f05ca ("arm64: preempt: Fix big-endian when checking
preempt count in assembly"), both the preempt count and the 'need_resched'
flag are checked as part of a single 64-bit load in cond_yield_neon(),
so update the stale comment to reflect reality.
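In C terms, the combined check amounts to something like the following
sketch, assuming the arm64 thread_info layout in which the 32-bit preempt
count and the (inverted) need_resched flag share one 64-bit word;
cond_yield_neon() itself does this in assembly:

  static inline bool should_yield_neon(void)
  {
      /*
       * Zero iff preempt.count == 0 and a reschedule is pending:
       * one 64-bit load, one compare.
       */
      return READ_ONCE(current_thread_info()->preempt_count) == 0;
  }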
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Dave Martin <Dave.Martin@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Hillf Danton <hdanton@sina.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Mon, 13 May 2019 16:53:03 +0000 (17:53 +0100)]
drivers/perf: arm_spe: Don't error on high-order pages for aux buf
Since commit 5768402fd9c6 ("perf/ring_buffer: Use high order allocations
for AUX buffers optimistically"), the perf core tends to back aux buffer
allocations with high-order pages, with the order encoded in the
PagePrivate data. The Arm SPE driver explicitly rejects such pages,
causing the perf tool to fail with:
| failed to mmap with 12 (Cannot allocate memory)
In actual fact, we can simply treat these pages just like any other
since the perf core takes care to populate the page array appropriately.
In theory we could try to map with PMDs where possible, but for now,
let's just get things working again.
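A hedged sketch of the relaxed buffer setup loop (names loosely follow
the driver; this is illustrative, not the exact diff):

  static struct page **spe_fill_pglist(void **pages, int nr_pages)
  {
      struct page **pglist = kcalloc(nr_pages, sizeof(*pglist), GFP_KERNEL);
      int i;

      if (!pglist)
          return NULL;

      /*
       * The perf core has already populated pages[] with every
       * constituent page of a high-order allocation, so the old
       * PagePrivate() rejection is simply dropped.
       */
      for (i = 0; i < nr_pages; i++)
          pglist[i] = virt_to_page(pages[i]);

      return pglist;
  }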
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Fixes: 5768402fd9c6 ("perf/ring_buffer: Use high order allocations for AUX buffers optimistically")
Reported-by: Hanjun Guo <guohanjun@huawei.com>
Tested-by: Hanjun Guo <guohanjun@huawei.com>
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Christoph Hellwig [Tue, 30 Apr 2019 10:51:50 +0000 (06:51 -0400)]
arm64/iommu: handle non-remapped addresses in ->mmap and ->get_sgtable
DMA allocations that can't sleep may return non-remapped addresses, but
we do not properly handle them in the mmap and get_sgtable methods.
Resolve non-vmalloc addresses using virt_to_page to handle this corner
case.
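A minimal sketch of the resolution step (the helper name is hypothetical;
is_vmalloc_addr()/vmalloc_to_page() are the standard kernel APIs):

  static struct page *dma_cpu_addr_to_page(void *cpu_addr)
  {
      if (!is_vmalloc_addr(cpu_addr))
          return virt_to_page(cpu_addr);   /* non-remapped allocation */

      return vmalloc_to_page(cpu_addr);    /* remapped allocation */
  }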
Cc: <stable@vger.kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Fri, 3 May 2019 09:18:08 +0000 (10:18 +0100)]
Merge branch 'for-next/perf' of git://git./linux/kernel/git/will/linux into for-next/core
Will Deacon [Wed, 1 May 2019 14:45:36 +0000 (15:45 +0100)]
Merge branch 'for-next/timers' of git://git./linux/kernel/git/arm64/linux into for-next/core
Conflicts:
arch/arm64/Kconfig
arch/arm64/include/asm/arch_timer.h
Will Deacon [Wed, 1 May 2019 14:34:56 +0000 (15:34 +0100)]
Merge branch 'for-next/mitigations' of git://git./linux/kernel/git/arm64/linux into for-next/core
Will Deacon [Wed, 1 May 2019 14:34:17 +0000 (15:34 +0100)]
Merge branch 'for-next/futex' of git://git./linux/kernel/git/arm64/linux into for-next/core
Josh Poimboeuf [Sat, 13 Apr 2019 03:56:21 +0000 (22:56 -0500)]
Documentation: Add ARM64 to kernel-parameters.rst
Add ARM64 to the legend of architectures. It's already used in several
places in kernel-parameters.txt.
Suggested-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Josh Poimboeuf [Fri, 12 Apr 2019 20:39:32 +0000 (15:39 -0500)]
arm64/speculation: Support 'mitigations=' cmdline option
Configure arm64 runtime CPU speculation bug mitigations in accordance
with the 'mitigations=' cmdline option. This affects Meltdown, Spectre
v2, and Speculative Store Bypass.
The default behavior is unchanged.
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
[will: reorder checks so KASLR implies KPTI and SSBS is affected by cmdline]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Tue, 30 Apr 2019 15:58:56 +0000 (16:58 +0100)]
arm64: ssbs: Don't treat CPUs with SSBS as unaffected by SSB
SSBS provides a relatively cheap mitigation for SSB, but it is still a
mitigation and its presence does not indicate that the CPU is unaffected
by the vulnerability.
Tweak the mitigation logic so that we report the correct string in sysfs.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Mian Yousaf Kaukab [Mon, 15 Apr 2019 21:21:29 +0000 (16:21 -0500)]
arm64: enable generic CPU vulnerabilities support
Enable the CPU vulnerability show functions for spectre_v1, spectre_v2,
meltdown and store-bypass.
Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Jeremy Linton [Mon, 15 Apr 2019 21:21:28 +0000 (16:21 -0500)]
arm64: add sysfs vulnerability show for speculative store bypass
Return the status based on ssbd_state and __ssb_safe. If the
mitigation is disabled, or the firmware isn't responding, then
return the expected machine state based on a whitelist of known
good cores.
Given a heterogeneous machine, the overall machine vulnerability
defaults to safe but is reset to unsafe when we miss the whitelist
and the firmware doesn't explicitly tell us the core is safe.
To make that work, we delay transitioning to vulnerable until we know
the firmware isn't responding, which avoids the case where we miss the
whitelist but the firmware nevertheless reports that the core is not
vulnerable. If all the cores in the machine have SSBS, then __ssb_safe
will remain true.
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Arun KS [Tue, 30 Apr 2019 10:35:04 +0000 (16:05 +0530)]
arm64: Fix size of __early_cpu_boot_status
__early_cpu_boot_status is of type long. Use the .quad assembler
directive to allocate the proper size.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Arun KS <arunks@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Marc Zyngier [Mon, 8 Apr 2019 15:49:07 +0000 (16:49 +0100)]
clocksource/arm_arch_timer: Use arch_timer_read_counter to access stable counters
Instead of always going via arch_counter_get_cntvct_stable to access the
counter workaround, let's have arch_timer_read_counter point to the
right method.
For that, we need to track whether any CPU in the system has a
workaround for the counter. This is done by having an atomic variable
tracking this.
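A hedged sketch of the steering logic (symbol names follow the driver;
bodies are illustrative):

  u64 (*arch_timer_read_counter)(void) = arch_counter_get_cntvct;

  /* set once any CPU registers a counter workaround */
  static atomic_t timer_unstable_counter_workaround_in_use = ATOMIC_INIT(0);

  static void enable_counter_workaround(void)
  {
      atomic_set(&timer_unstable_counter_workaround_in_use, 1);
      arch_timer_read_counter = arch_counter_get_cntvct_stable;
  }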
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Marc Zyngier [Mon, 8 Apr 2019 15:49:06 +0000 (16:49 +0100)]
clocksource/arm_arch_timer: Remove use of workaround static key
The use of a static key in a hotplug path has proved to be a real
nightmare, and makes it impossible to have a scream-free lockdep
kernel.
Let's remove the static key altogether, and focus on something saner.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Marc Zyngier [Mon, 8 Apr 2019 15:49:05 +0000 (16:49 +0100)]
clocksource/arm_arch_timer: Drop use of static key in arch_timer_reg_read_stable
Let's start with the removal of the arch_timer_read_ool_enabled
static key in arch_timer_reg_read_stable. It is not a fast path,
and we can simplify things a bit.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Marc Zyngier [Mon, 8 Apr 2019 15:49:04 +0000 (16:49 +0100)]
clocksource/arm_arch_timer: Directly assign set_next_event workaround
When a given timer is affected by an erratum and requires an
alternative implementation of set_next_event, we do a rather
complicated dance to detect and call the workaround on each
set_next_event call.
This is clearly idiotic, as we can perfectly detect whether
this CPU requires a workaround while setting up the clock event
device.
This only requires the CPU-specific detection to be done a bit
earlier, and we can then safely override the set_next_event pointer
if we have a workaround associated with that CPU.
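In outline, the override becomes a one-time assignment at clockevent
setup (the descriptor and field names here are illustrative):

  static void erratum_override_sne(struct clock_event_device *clk)
  {
      const struct arch_timer_erratum_workaround *wa =
          __this_cpu_read(timer_unstable_counter_workaround);

      /* detection has already run for this CPU; just repoint the hook */
      if (wa && wa->set_next_event_virt)
          clk->set_next_event = wa->set_next_event_virt;
  }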
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Marc Zyngier [Mon, 8 Apr 2019 15:49:03 +0000 (16:49 +0100)]
arm64: Use arch_timer_read_counter instead of arch_counter_get_cntvct
Only arch_timer_read_counter will guarantee that workarounds are
applied. So let's use this one instead of arch_counter_get_cntvct.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Marc Zyngier [Mon, 8 Apr 2019 15:49:02 +0000 (16:49 +0100)]
watchdog/sbsa: Use arch_timer_read_counter instead of arch_counter_get_cntvct
Only arch_timer_read_counter will guarantee that workarounds are
applied. So let's use this one instead of arch_counter_get_cntvct.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Marc Zyngier [Mon, 8 Apr 2019 15:49:01 +0000 (16:49 +0100)]
ARM: vdso: Remove dependency with the arch_timer driver internals
The VDSO code uses the kernel helper that was originally designed
to abstract the access between 32-bit and 64-bit systems. It has worked
so far because this function is declared as 'inline'.
As we're about to revamp that part of the code, the VDSO would break.
Let's fix it by doing what should have been done from the start: a
proper system register access.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Marc Zyngier [Mon, 15 Apr 2019 12:03:54 +0000 (13:03 +0100)]
arm64: Apply ARM64_ERRATUM_1188873 to Neoverse-N1
Neoverse-N1 is also affected by ARM64_ERRATUM_1188873, so let's
add it to the list of affected CPUs.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[will: Update silicon-errata.txt]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Marc Zyngier [Mon, 15 Apr 2019 12:03:53 +0000 (13:03 +0100)]
arm64: Add part number for Neoverse N1
New CPU, new part number. You know the drill.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Marc Zyngier [Mon, 15 Apr 2019 12:03:52 +0000 (13:03 +0100)]
arm64: Make ARM64_ERRATUM_1188873 depend on COMPAT
Since ARM64_ERRATUM_1188873 only affects AArch32 EL0, it makes some
sense that it should depend on COMPAT.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Marc Zyngier [Mon, 15 Apr 2019 12:03:51 +0000 (13:03 +0100)]
arm64: Restrict ARM64_ERRATUM_1188873 mitigation to AArch32
We currently deal with ARM64_ERRATUM_1188873 by always trapping EL0
accesses for both instruction sets. Although nothing wrong comes out
of that, people trying to squeeze the last drop of performance from
buggy HW find this over the top. Oh well.
Let's change the mitigation by flipping the counter enable bit
on return to userspace. Non-broken HW gets an extra branch on
the fast path, which is hopefully not the end of the world.
The arch timer workaround is also removed.
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Qian Cai [Mon, 29 Apr 2019 17:37:02 +0000 (13:37 -0400)]
arm64: mm: Remove pte_unmap_nested()
As of commit ece0e2b6406a ("mm: remove pte_*map_nested()"),
pte_unmap_nested() is no longer used and can be removed from the arm64
code.
Signed-off-by: Qian Cai <cai@lca.pw>
[will: also remove pte_offset_map_nested()]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Qian Cai [Mon, 29 Apr 2019 17:37:01 +0000 (13:37 -0400)]
arm64: Fix compiler warning from pte_unmap() with -Wunused-but-set-variable
When building with -Wunused-but-set-variable, the compiler shouts about
a number of pte_unmap() users, since this expands to an empty macro on
arm64:
| mm/gup.c: In function 'gup_pte_range':
| mm/gup.c:1727:16: warning: variable 'ptem' set but not used
| [-Wunused-but-set-variable]
| mm/gup.c: At top level:
| mm/memory.c: In function 'copy_pte_range':
| mm/memory.c:821:24: warning: variable 'orig_dst_pte' set but not used
| [-Wunused-but-set-variable]
| mm/memory.c:821:9: warning: variable 'orig_src_pte' set but not used
| [-Wunused-but-set-variable]
| mm/swap_state.c: In function 'swap_ra_info':
| mm/swap_state.c:641:15: warning: variable 'orig_pte' set but not used
| [-Wunused-but-set-variable]
| mm/madvise.c: In function 'madvise_free_pte_range':
| mm/madvise.c:318:9: warning: variable 'orig_pte' set but not used
| [-Wunused-but-set-variable]
Rewrite pte_unmap() as a static inline function, which silences the
warnings.
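The fix amounts to something like this sketch:

  /* previously an empty macro: #define pte_unmap(pte) do { } while (0) */
  static inline void pte_unmap(pte_t *pte)
  {
      /* the argument is now "used", so the warning no longer fires */
  }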
Signed-off-by: Qian Cai <cai@lca.pw>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Vincenzo Frascino [Mon, 29 Apr 2019 17:27:13 +0000 (18:27 +0100)]
arm64: compat: Reduce address limit for 64K pages
With the introduction of the config option that allows the kuser
helpers to be disabled, it is now possible to reduce TASK_SIZE_32 when
they are disabled and 64K pages are enabled. This extends compliance
with section 6.5.8 of the C standard (C99).
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Mon, 29 Apr 2019 16:26:22 +0000 (17:26 +0100)]
arm64: arch_timer: Ensure counter register reads occur with seqlock held
When executing clock_gettime(), either in the vDSO or via a system call,
we need to ensure that the read of the counter register occurs within
the seqlock reader critical section. This ensures that updates to the
clocksource parameters (e.g. the multiplier) are consistent with the
counter value and therefore avoids the situation where time appears to
go backwards across multiple reads.
Extend the vDSO logic so that the seqlock critical section covers the
read of the counter register as well as accesses to the data page. Since
reads of the counter system registers are not ordered by memory barrier
instructions, introduce dependency ordering from the counter read to a
subsequent memory access so that the seqlock memory barriers apply to
the counter access in both the vDSO and the system call paths.
Cc: <stable@vger.kernel.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Link: https://lore.kernel.org/linux-arm-kernel/alpine.DEB.2.21.1902081950260.1662@nanos.tec.linutronix.de/
Reported-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Xiongfeng Wang [Fri, 26 Apr 2019 02:16:36 +0000 (10:16 +0800)]
firmware: arm_sdei: Prohibit probing in '_sdei_handler'
Functions called in '_sdei_handler' are needed to be marked as
'nokprobe'. Because these functions are called in NMI context and
neither the arch-code's debug infrastructure nor kprobes core supports
this.
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Boyang Zhou [Mon, 29 Apr 2019 14:27:19 +0000 (15:27 +0100)]
arm64: mmap: Ensure file offset is treated as unsigned
The file offset argument to the arm64 sys_mmap() implementation is
scaled from bytes to pages by shifting right by PAGE_SHIFT.
Unfortunately, the offset is passed in as a signed 'off_t' type and
therefore large offsets (i.e. with the top bit set) are incorrectly
sign-extended by the shift. This has been observed to cause false mmap()
failures when mapping GPU doorbells on an arm64 server part.
Change the type of the file offset argument to sys_mmap() from 'off_t'
to 'unsigned long' so that the shifting scales the value as expected.
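A sketch of the fixed wrapper, assuming the generic ksys_mmap_pgoff()
path that arm64 uses:

  SYSCALL_DEFINE6(mmap, unsigned long, addr, unsigned long, len,
                  unsigned long, prot, unsigned long, flags,
                  unsigned long, fd, unsigned long, off) /* was off_t */
  {
      if (offset_in_page(off))
          return -EINVAL;

      /* unsigned shift: no sign extension for offsets with the top bit set */
      return ksys_mmap_pgoff(addr, len, prot, flags, fd, off >> PAGE_SHIFT);
  }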
Cc: <stable@vger.kernel.org>
Signed-off-by: Boyang Zhou <zhouby_cn@126.com>
[will: rewrote commit message]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Mon, 29 Apr 2019 13:21:11 +0000 (14:21 +0100)]
arm64: Kconfig: Tidy up errata workaround help text
The nature of silicon errata means that the Kconfig help text for our
various software workarounds has been written by many different people.
Along the way, we've accumulated typos and inconsistencies which make
the options needlessly difficult to read.
Fix up minor issues with the help text.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Jeremy Linton [Mon, 15 Apr 2019 21:21:27 +0000 (16:21 -0500)]
arm64: Always enable ssb vulnerability detection
Ensure we are always able to detect whether or not the CPU is affected
by SSB, so that we can later advertise this to userspace.
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
[will: Use IS_ENABLED instead of #ifdef]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Jeremy Linton [Mon, 15 Apr 2019 21:21:26 +0000 (16:21 -0500)]
arm64: add sysfs vulnerability show for spectre-v2
Track whether all the cores in the machine are vulnerable to Spectre-v2,
and whether all the vulnerable cores have been mitigated. We then expose
this information to userspace via sysfs.
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Jeremy Linton [Mon, 15 Apr 2019 21:21:25 +0000 (16:21 -0500)]
arm64: Always enable spectre-v2 vulnerability detection
Ensure we are always able to detect whether or not the CPU is affected
by Spectre-v2, so that we can later advertise this to userspace.
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Marc Zyngier [Mon, 15 Apr 2019 21:21:24 +0000 (16:21 -0500)]
arm64: Use firmware to detect CPUs that are not affected by Spectre-v2
The SMCCC ARCH_WORKAROUND_1 service can indicate that although the
firmware knows about the Spectre-v2 mitigation, this particular
CPU is not vulnerable, and it is thus not necessary to call
the firmware on this CPU.
Let's use this information to our benefit.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Marc Zyngier [Mon, 15 Apr 2019 21:21:23 +0000 (16:21 -0500)]
arm64: Advertise mitigation of Spectre-v2, or lack thereof
We currently have a list of CPUs affected by Spectre-v2, for which
we check that the firmware implements ARCH_WORKAROUND_1. It turns
out that not all firmware implements the required mitigation,
and that we fail to let the user know about it.
Instead, let's slightly revamp our checks, and rely on a whitelist
of cores that are known to be non-vulnerable, and let the user know
the status of the mitigation in the kernel log.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Jeremy Linton [Mon, 15 Apr 2019 21:21:22 +0000 (16:21 -0500)]
arm64: add sysfs vulnerability show for meltdown
We implement page table isolation as a mitigation for meltdown.
Report this to userspace via sysfs.
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Mian Yousaf Kaukab [Mon, 15 Apr 2019 21:21:21 +0000 (16:21 -0500)]
arm64: Add sysfs vulnerability show for spectre-v1
spectre-v1 has been mitigated and the mitigation is always active.
Report this to userspace via sysfs.
Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Acked-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Jeremy Linton [Mon, 15 Apr 2019 21:21:20 +0000 (16:21 -0500)]
arm64: Provide a command line to disable spectre_v2 mitigation
There are various reasons, such as benchmarking, to disable the
spectre-v2 mitigation on a machine. Provide a command-line option to do so.
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: linux-doc@vger.kernel.org
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Wed, 10 Apr 2019 10:51:54 +0000 (11:51 +0100)]
futex: Update comments and docs about return values of arch futex code
The architecture implementations of 'arch_futex_atomic_op_inuser()' and
'futex_atomic_cmpxchg_inatomic()' are permitted to return only -EFAULT,
-EAGAIN or -ENOSYS in the case of failure.
Update the comments in the asm-generic/ implementation and also a stray
reference in the robust futex documentation.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Wed, 10 Apr 2019 10:49:11 +0000 (11:49 +0100)]
arm64: futex: Avoid copying out uninitialised stack in failed cmpxchg()
Returning an error code from futex_atomic_cmpxchg_inatomic() indicates
that the caller should not make any use of *uval, and should instead act
upon the value of the error code. Although this is implemented
correctly in our futex code, we needlessly copy uninitialised stack to
*uval in the error case, which can easily be avoided.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Mon, 8 Apr 2019 13:23:17 +0000 (14:23 +0100)]
arm64: futex: Bound number of LDXR/STXR loops in FUTEX_WAKE_OP
Our futex implementation makes use of LDXR/STXR loops to perform atomic
updates to user memory from atomic context. This can lead to latency
problems if we end up spinning around the LL/SC sequence at the expense
of doing something useful.
Rework our futex atomic operations so that we return -EAGAIN if we fail
to update the futex word after 128 attempts. The core futex code will
reschedule if necessary and we'll try again later.
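A hedged C-level sketch of the bounded retry structure; the real code
is an LDXR/STXR inline-asm loop, and ll()/sc() are hypothetical
stand-ins for those instructions:

  #define FUTEX_MAX_LOOPS 128 /* bound from this commit */

  static int futex_atomic_add_sketch(u32 __user *uaddr, u32 val, u32 *oval)
  {
      unsigned int loops = FUTEX_MAX_LOOPS;
      u32 old;

      do {
          old = ll(uaddr);                 /* hypothetical LDXR wrapper */
          if (!sc(uaddr, old + val)) {     /* hypothetical STXR wrapper */
              *oval = old;
              return 0;
          }
      } while (--loops);

      return -EAGAIN; /* let the core futex code reschedule and retry */
  }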
Cc: <stable@kernel.org>
Fixes: 6170a97460db ("arm64: Atomic operations")
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Thu, 28 Feb 2019 11:58:08 +0000 (11:58 +0000)]
locking/futex: Allow low-level atomic operations to return -EAGAIN
Some futex() operations, including FUTEX_WAKE_OP, require the kernel to
perform an atomic read-modify-write of the futex word via the userspace
mapping. These operations are implemented by each architecture in
arch_futex_atomic_op_inuser() and futex_atomic_cmpxchg_inatomic(), which
are called in atomic context with the relevant hash bucket locks held.
Although these routines may return -EFAULT in response to a page fault
generated when accessing userspace, they are expected to succeed (i.e.
return 0) in all other cases. This poses a problem for architectures
that do not provide bounded forward progress guarantees or fairness of
contended atomic operations and can lead to starvation in some cases.
In these problematic scenarios, we must return back to the core futex
code so that we can drop the hash bucket locks and reschedule if
necessary, much like we do in the case of a page fault.
Allow architectures to return -EAGAIN from their implementations of
arch_futex_atomic_op_inuser() and futex_atomic_cmpxchg_inatomic(), which
will cause the core futex code to reschedule if necessary and return
back to the architecture code later on.
Cc: <stable@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Mon, 8 Apr 2019 11:45:09 +0000 (12:45 +0100)]
arm64: futex: Fix FUTEX_WAKE_OP atomic ops with non-zero result value
Rather embarrassingly, our futex() FUTEX_WAKE_OP implementation doesn't
explicitly set the return value on the non-faulting path and instead
leaves it holding the result of the underlying atomic operation. This
means that any FUTEX_WAKE_OP atomic operation which computes a non-zero
value will be reported as having failed. Regrettably, I wrote the buggy
code back in 2011 and it was upstreamed as part of the initial arm64
support in 2012.
The reasons we appear to get away with this are:
1. FUTEX_WAKE_OP is rarely used and therefore doesn't appear to get
exercised by futex() test applications
2. If the result of the atomic operation is zero, the system call
behaves correctly
3. Prior to version 2.25, the only operation used by GLIBC set the
futex to zero, and therefore worked as expected. From 2.25 onwards,
FUTEX_WAKE_OP is not used by GLIBC at all.
Fix the implementation by ensuring that the return value is either 0
to indicate that the atomic operation completed successfully, or -EFAULT
if we encountered a fault when accessing the user mapping.
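In outline, the fixed epilogue behaves like this sketch, where
do_futex_op() is a hypothetical stand-in for the inline-asm loop:

  static int wake_op_sketch(int op, int oparg, int *oval, u32 __user *uaddr)
  {
      int oldval = 0, ret;

      ret = do_futex_op(op, oparg, &oldval, uaddr); /* hypothetical */

      /*
       * ret is now 0 on success or -EFAULT on fault; it no longer
       * leaks the arithmetic result of the atomic operation.
       */
      if (!ret)
          *oval = oldval;

      return ret;
  }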
Cc: <stable@kernel.org>
Fixes: 6170a97460db ("arm64: Atomic operations")
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Fri, 26 Apr 2019 12:32:20 +0000 (13:32 +0100)]
Merge branch 'core/speculation' of git://git./linux/kernel/git/tip/tip into for-next/mitigations
Pull in core support for the "mitigations=" cmdline option from Thomas
Gleixner via -tip, which we can build on top of when we expose our
mitigation state via sysfs.
Kees Cook [Wed, 24 Apr 2019 16:55:37 +0000 (09:55 -0700)]
arm64: sysreg: Make mrs_s and msr_s macros work with Clang and LTO
Clang's integrated assembler does not allow assembly macros defined
in one inline asm block using the .macro directive to be used across
separate asm blocks. LLVM developers consider this a feature and not a
bug, recommending code refactoring:
https://bugs.llvm.org/show_bug.cgi?id=19749
As binutils doesn't allow macros to be redefined, this change uses
UNDEFINE_MRS_S and UNDEFINE_MSR_S to define the corresponding macros
in-place and work around gcc and clang limitations on redefining macros
across different assembler blocks.
Specifically, the current state after preprocessing looks like this:
asm volatile(".macro mXX_s ... .endm");
void f()
{
asm volatile("mXX_s a, b");
}
With GCC, it gives macro redefinition error because sysreg.h is included
in multiple source files, and assembler code for all of them is later
combined for LTO (I've seen an intermediate file with hundreds of
identical definitions).
With clang, it gives macro undefined error because clang doesn't allow
sharing macros between inline asm statements.
I also seem to remember catching another sort of undefined error with
GCC due to reordering of the macro definition asm statement and the
generated asm code for a function that uses the macro.
The solution with defining and undefining for each use, while certainly
not elegant, satisfies both GCC and clang, LTO and non-LTO.
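A hedged sketch of the resulting pattern, where MRS_S_BODY stands in
for the real instruction-encoding string in sysreg.h:

  #define DEFINE_MRS_S   ".macro mrs_s, rt, sreg\n" MRS_S_BODY "\n.endm\n"
  #define UNDEFINE_MRS_S ".purgem mrs_s\n"

  #define read_sysreg_s(r) ({                          \
      u64 __val;                                       \
      asm volatile(DEFINE_MRS_S                        \
                   "mrs_s %0, " __stringify(r) "\n"    \
                   UNDEFINE_MRS_S                      \
                   : "=r" (__val));                    \
      __val;                                           \
  })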
Co-developed-by: Alex Matveev <alxmtvv@gmail.com>
Co-developed-by: Yury Norov <ynorov@caviumnetworks.com>
Co-developed-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Dave Martin [Thu, 18 Apr 2019 17:41:38 +0000 (18:41 +0100)]
arm64: Expose SVE2 features for userspace
This patch provides support for reporting the presence of SVE2 and
its optional features to userspace.
This will also enable visibility of SVE2 for guests, when KVM
support for SVE-enabled guests is available.
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Tue, 23 Apr 2019 13:37:24 +0000 (14:37 +0100)]
arm64: Kconfig: Make CONFIG_COMPAT a menuconfig entry
Make CONFIG_COMPAT a menuconfig entry so that we can place
CONFIG_KUSER_HELPERS and CONFIG_ARMV8_DEPRECATED underneath it.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Vincenzo Frascino [Mon, 15 Apr 2019 09:49:37 +0000 (10:49 +0100)]
arm64: compat: Add KUSER_HELPERS config option
When kuser helpers are enabled, the kernel maps the associated code at
a fixed address (0xffff0000). Making the option to disable them
configurable means that the kernel can remove this mapping, and any
access to this memory area then results in a segmentation fault.
Add a KUSER_HELPERS config option that can be used to disable the
mapping when it is turned off.
This option can be turned off if and only if the applications are
designed specifically for the platform and do not make use of the
kuser helpers code.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
[will: Use IS_ENABLED() instead of #ifdef]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Vincenzo Frascino [Mon, 15 Apr 2019 09:49:36 +0000 (10:49 +0100)]
arm64: compat: Refactor aarch32_alloc_vdso_pages()
aarch32_alloc_vdso_pages() needs to be refactored to make it
easier to disable kuser helpers.
Divide the function into aarch32_alloc_kuser_vdso_page() and
aarch32_alloc_sigreturn_vdso_page().
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
[will: Inlined sigpage allocation to simplify error paths]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Vincenzo Frascino [Mon, 15 Apr 2019 09:49:35 +0000 (10:49 +0100)]
arm64: compat: Split kuser32
To make it possible to disable kuser helpers in aarch32, we need to
separate the kuser and sigreturn functionalities.
Split the current version of kuser32 into kuser32 (for the kuser
helpers) and sigreturn32 (for the sigreturn helpers).
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Vincenzo Frascino [Mon, 15 Apr 2019 09:49:34 +0000 (10:49 +0100)]
arm64: compat: Alloc separate pages for vectors and sigpage
For AArch32 tasks, we install a special "[vectors]" page that contains
the sigreturn trampolines and kuser helpers, which is mapped at a fixed
address specified by the kuser helpers ABI.
Having the sigreturn trampolines in the same page as the kuser helpers
makes it impossible to disable the kuser helpers independently.
Follow the Arm implementation, by moving the signal trampolines out of
the "[vectors]" page and into their own "[sigpage]".
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
[will: tweaked comments and fixed sparse warning]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Robin Murphy [Tue, 16 Apr 2019 15:24:25 +0000 (16:24 +0100)]
perf/arm-ccn: Clean up CPU hotplug handling
Like arm-cci, arm-ccn has the same issue of disabling preemption around
operations which can take mutexes. Again, remove the definite bug by
simply not trying to fight the theoretical races. And since we are
touching the hotplug handling code, take the opportunity to streamline
it, as there's really no need to store a full-sized cpumask to keep
track of a single CPU ID.
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Robin Murphy [Tue, 16 Apr 2019 15:24:24 +0000 (16:24 +0100)]
perf/arm-cci: Remove broken race mitigation
Uncore PMU drivers face an awkward cyclic dependency wherein:
- They have to pick a valid online CPU to associate with before
registering the PMU device, since it will get exposed to userspace
immediately.
- The PMU registration has to be at least partly complete before
hotplug events can be handled, since trying to migrate an
uninitialised context would be bad.
- The hotplug handler has to be ready as soon as a CPU is chosen, lest
it go offline without the user-visible cpumask value getting updated.
The arm-cci driver has tried to solve this by using get_cpu() to pick
the current CPU and prevent it from disappearing while both
registrations are performed, but that results in taking mutexes with
preemption disabled, which makes certain configurations very unhappy:
[ 1.983337] BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:2004
[ 1.983340] in_atomic(): 1, irqs_disabled(): 0, pid: 1, name: swapper/0
[ 1.983342] Preemption disabled at:
[ 1.983353] [<ffffff80089801f4>] cci_pmu_probe+0x1dc/0x488
[ 1.983360] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.18.20-rt8-yocto-preempt-rt #1
[ 1.983362] Hardware name: ZynqMP ZCU102 Rev1.0 (DT)
[ 1.983364] Call trace:
[ 1.983369] dump_backtrace+0x0/0x158
[ 1.983372] show_stack+0x24/0x30
[ 1.983378] dump_stack+0x80/0xa4
[ 1.983383] ___might_sleep+0x138/0x160
[ 1.983386] __might_sleep+0x58/0x90
[ 1.983391] __rt_mutex_lock_state+0x30/0xc0
[ 1.983395] _mutex_lock+0x24/0x30
[ 1.983400] perf_pmu_register+0x2c/0x388
[ 1.983404] cci_pmu_probe+0x2bc/0x488
[ 1.983409] platform_drv_probe+0x58/0xa8
It is not feasible to resolve all the possible races outside of the perf
core itself, so address the immediate bug by following the example of
nearly every other PMU driver and not even trying to do so. Registering
the hotplug notifier first should minimise the window in which things
can go wrong, so that's about as much as we can reasonably do here. This
also revealed an additional race in assigning the global pointer too
late relative to the hotplug notifier, which gets fixed in the process.
Reported-by: Li, Meng <Meng.Li@windriver.com>
Tested-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Josh Poimboeuf [Fri, 12 Apr 2019 20:39:31 +0000 (15:39 -0500)]
s390/speculation: Support 'mitigations=' cmdline option
Configure s390 runtime CPU speculation bug mitigations in accordance
with the 'mitigations=' cmdline option. This affects Spectre v1 and
Spectre v2.
The default behavior is unchanged.
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Jiri Kosina <jkosina@suse.cz> (on x86)
Reviewed-by: Jiri Kosina <jkosina@suse.cz>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: linux-s390@vger.kernel.org
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-arch@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Tyler Hicks <tyhicks@canonical.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Steven Price <steven.price@arm.com>
Cc: Phil Auld <pauld@redhat.com>
Link: https://lkml.kernel.org/r/e4a161805458a5ec88812aac0307ae3908a030fc.1555085500.git.jpoimboe@redhat.com
Josh Poimboeuf [Fri, 12 Apr 2019 20:39:30 +0000 (15:39 -0500)]
powerpc/speculation: Support 'mitigations=' cmdline option
Configure powerpc CPU runtime speculation bug mitigations in accordance
with the 'mitigations=' cmdline option. This affects Meltdown, Spectre
v1, Spectre v2, and Speculative Store Bypass.
The default behavior is unchanged.
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Jiri Kosina <jkosina@suse.cz> (on x86)
Reviewed-by: Jiri Kosina <jkosina@suse.cz>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: linux-s390@vger.kernel.org
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-arch@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Tyler Hicks <tyhicks@canonical.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Steven Price <steven.price@arm.com>
Cc: Phil Auld <pauld@redhat.com>
Link: https://lkml.kernel.org/r/245a606e1a42a558a310220312d9b6adb9159df6.1555085500.git.jpoimboe@redhat.com
Josh Poimboeuf [Fri, 12 Apr 2019 20:39:29 +0000 (15:39 -0500)]
x86/speculation: Support 'mitigations=' cmdline option
Configure x86 runtime CPU speculation bug mitigations in accordance with
the 'mitigations=' cmdline option. This affects Meltdown, Spectre v2,
Speculative Store Bypass, and L1TF.
The default behavior is unchanged.
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Jiri Kosina <jkosina@suse.cz> (on x86)
Reviewed-by: Jiri Kosina <jkosina@suse.cz>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: linux-s390@vger.kernel.org
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-arch@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Tyler Hicks <tyhicks@canonical.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Steven Price <steven.price@arm.com>
Cc: Phil Auld <pauld@redhat.com>
Link: https://lkml.kernel.org/r/6616d0ae169308516cfdf5216bedd169f8a8291b.1555085500.git.jpoimboe@redhat.com
Josh Poimboeuf [Fri, 12 Apr 2019 20:39:28 +0000 (15:39 -0500)]
cpu/speculation: Add 'mitigations=' cmdline option
Keeping track of the number of mitigations for all the CPU speculation
bugs has become overwhelming for many users. It's getting more and more
complicated to decide which mitigations are needed for a given
architecture. Complicating matters is the fact that each arch tends to
have its own custom way to mitigate the same vulnerability.
Most users fall into a few basic categories:
a) they want all mitigations off;
b) they want all reasonable mitigations on, with SMT enabled even if
it's vulnerable; or
c) they want all reasonable mitigations on, with SMT disabled if
vulnerable.
Define a set of curated, arch-independent options, each of which is an
aggregation of existing options:
- mitigations=off: Disable all mitigations.
- mitigations=auto: [default] Enable all the default mitigations, but
leave SMT enabled, even if it's vulnerable.
- mitigations=auto,nosmt: Enable all the default mitigations, disabling
SMT if needed by a mitigation.
Currently, these options are placeholders which don't actually do
anything. They will be fleshed out in upcoming patches.
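The plumbing added for this looks roughly like the following sketch
(names follow the series; details are illustrative):

  enum cpu_mitigations {
      CPU_MITIGATIONS_OFF,
      CPU_MITIGATIONS_AUTO,
      CPU_MITIGATIONS_AUTO_NOSMT,
  };

  static enum cpu_mitigations cpu_mitigations __ro_after_init =
      CPU_MITIGATIONS_AUTO;

  static int __init mitigations_parse_cmdline(char *arg)
  {
      if (!strcmp(arg, "off"))
          cpu_mitigations = CPU_MITIGATIONS_OFF;
      else if (!strcmp(arg, "auto"))
          cpu_mitigations = CPU_MITIGATIONS_AUTO;
      else if (!strcmp(arg, "auto,nosmt"))
          cpu_mitigations = CPU_MITIGATIONS_AUTO_NOSMT;

      return 0;
  }
  early_param("mitigations", mitigations_parse_cmdline);

  /* helper the per-arch code can query */
  bool cpu_mitigations_off(void)
  {
      return cpu_mitigations == CPU_MITIGATIONS_OFF;
  }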
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Jiri Kosina <jkosina@suse.cz> (on x86)
Reviewed-by: Jiri Kosina <jkosina@suse.cz>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: linux-s390@vger.kernel.org
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-arch@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Tyler Hicks <tyhicks@canonical.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Steven Price <steven.price@arm.com>
Cc: Phil Auld <pauld@redhat.com>
Link: https://lkml.kernel.org/r/b07a8ef9b7c5055c3a4637c87d07c296d5016fe0.1555085500.git.jpoimboe@redhat.com
Kefeng Wang [Mon, 8 Apr 2019 15:21:12 +0000 (23:21 +0800)]
ACPI/IORT: Reject platform device creation on NUMA node mapping failure
In a system where, through IORT firmware mappings, the SMMU device is
mapped to a NUMA node that is not online, the kernel bootstrap results
in the following crash:
Unable to handle kernel paging request at virtual address 0000000000001388
Mem abort info:
ESR = 0x96000004
Exception class = DABT (current EL), IL = 32 bits
SET = 0, FnV = 0
EA = 0, S1PTW = 0
Data abort info:
ISV = 0, ISS = 0x00000004
CM = 0, WnR = 0
[0000000000001388] user address but active_mm is swapper
Internal error: Oops: 96000004 [#1] SMP
Modules linked in:
CPU: 5 PID: 1 Comm: swapper/0 Not tainted 5.0.0 #15
pstate: 80c00009 (Nzcv daif +PAN +UAO)
pc : __alloc_pages_nodemask+0x13c/0x1068
lr : __alloc_pages_nodemask+0xdc/0x1068
...
Process swapper/0 (pid: 1, stack limit = 0x(____ptrval____))
Call trace:
__alloc_pages_nodemask+0x13c/0x1068
new_slab+0xec/0x570
___slab_alloc+0x3e0/0x4f8
__slab_alloc+0x60/0x80
__kmalloc_node_track_caller+0x10c/0x478
devm_kmalloc+0x44/0xb0
pinctrl_bind_pins+0x4c/0x188
really_probe+0x78/0x2b8
driver_probe_device+0x64/0x110
device_driver_attach+0x74/0x98
__driver_attach+0x9c/0xe8
bus_for_each_dev+0x84/0xd8
driver_attach+0x30/0x40
bus_add_driver+0x170/0x218
driver_register+0x64/0x118
__platform_driver_register+0x54/0x60
arm_smmu_driver_init+0x24/0x2c
do_one_initcall+0xbc/0x328
kernel_init_freeable+0x304/0x3ac
kernel_init+0x18/0x110
ret_from_fork+0x10/0x1c
Code: f90013b5 b9410fa1 1a9f0694 b50014c2 (b9400804)
---[ end trace dfeaed4c373a32da ]---
Change the dev_set_proximity() hook prototype so that it returns a
value and make it return failure if the PXM->NUMA-node mapping
corresponds to an offline node, fixing the crash.
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Link: https://lore.kernel.org/linux-arm-kernel/20190315021940.86905-1-wangkefeng.wang@huawei.com/
Signed-off-by: Will Deacon <will.deacon@arm.com>
Vincenzo Frascino [Tue, 16 Apr 2019 16:14:30 +0000 (17:14 +0100)]
arm64: vdso: Fix clock_getres() for CLOCK_REALTIME
clock_getres() in the vDSO library has to preserve the same behaviour
as posix_get_hrtimer_res().
In particular, posix_get_hrtimer_res() does:
sec = 0;
ns = hrtimer_resolution;
where 'hrtimer_resolution' depends on whether or not high resolution
timers are enabled, which is a runtime decision.
The vDSO incorrectly returns the constant CLOCK_REALTIME_RES. Fix this
by exposing 'hrtimer_resolution' in the vDSO datapage and returning that
instead.
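In C terms (the actual vDSO routine is assembly), the fix behaves like
this sketch; clock_getres_fallback() is a hypothetical syscall fallback:

  int __kernel_clock_getres(clockid_t clock, struct timespec *res)
  {
      if (clock == CLOCK_REALTIME || clock == CLOCK_MONOTONIC) {
          if (res) {
              res->tv_sec = 0;
              /* runtime value published in the vDSO data page */
              res->tv_nsec = vdso_data->hrtimer_res;
          }
          return 0;
      }

      return clock_getres_fallback(clock, res); /* hypothetical */
  }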
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
[will: Use WRITE_ONCE(), move adr off COARSE path, renumber labels, use 'w' reg]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Nishad Kamdar [Tue, 16 Apr 2019 15:19:35 +0000 (20:49 +0530)]
arm64: Use the correct style for SPDX License Identifier
This patch corrects the SPDX License Identifier style
in the arm64 Hardware Architecture related files.
Suggested-by: Joe Perches <joe@perches.com>
Signed-off-by: Nishad Kamdar <nishadkamdar@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Mark Rutland [Tue, 9 Apr 2019 11:13:13 +0000 (12:13 +0100)]
arm64: instrument smp_{load_acquire,store_release}
Our __smp_store_release() and __smp_load_acquire() macros use inline
assembly, which is opaque to kasan. This means that kasan can't catch
erroneous use of these.
This patch adds kasan instrumentation to both.
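A hedged sketch of the instrumented wrappers; the uninstrumented
__smp_* variants remain inline asm:

  #define smp_load_acquire(p)                  \
  ({                                           \
      typeof(p) __p = (p);                     \
      kasan_check_read(__p, sizeof(*p));       \
      __smp_load_acquire(__p);                 \
  })

  #define smp_store_release(p, v)              \
  do {                                         \
      typeof(p) __p = (p);                     \
      kasan_check_write(__p, sizeof(*p));      \
      __smp_store_release(__p, v);             \
  } while (0)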
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[will: consistently use *p as argument to sizeof]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Miles Chen [Mon, 15 Apr 2019 17:36:36 +0000 (01:36 +0800)]
arm64: mm: check virtual addr in virt_to_page() if CONFIG_DEBUG_VIRTUAL=y
This change uses the original virt_to_page() (the one with __pa()) to
check the given virtual address if CONFIG_DEBUG_VIRTUAL=y.
Recently, I worked on a bug: a driver passes a symbol address to
dma_map_single() and the virt_to_page() (called by dma_map_single())
does not work for non-linear addresses after commit 9f2875912dac
("arm64: mm: restrict virt_to_page() to the linear mapping").
I tried to trap the bug by enabling CONFIG_DEBUG_VIRTUAL but it
did not work - because the commit removes the __pa() from
virt_to_page() but CONFIG_DEBUG_VIRTUAL checks the virtual address
in __pa()/__virt_to_phys().
A simple solution is to use the original virt_to_page()
(the one with __pa()) if CONFIG_DEBUG_VIRTUAL=y.
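A hedged sketch of the resulting definition (the non-debug branch is
illustrative):

  #ifdef CONFIG_DEBUG_VIRTUAL
  /* goes via __pa(), whose __virt_to_phys() checks the address */
  #define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
  #else
  /* fast path restricted to the linear mapping, no checking */
  #define virt_to_page(kaddr) pfn_to_page(virt_to_pfn(kaddr))
  #endif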
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Miles Chen <miles.chen@mediatek.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Andrew Murray [Tue, 9 Apr 2019 09:52:45 +0000 (10:52 +0100)]
arm64: Advertise ARM64_HAS_DCPODP cpu feature
Advertise ARM64_HAS_DCPODP when both DC CVAP and DC CVADP are supported.
Even though we don't use this feature now, we provide it for consistency
with DCPOP and anticipate it being used in the future.
Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Andrew Murray [Tue, 9 Apr 2019 09:52:44 +0000 (10:52 +0100)]
arm64: add CVADP support to the cache maintenance helper
Allow users of dcache_by_line_op to specify cvadp as an op.
Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Andrew Murray [Tue, 9 Apr 2019 09:52:43 +0000 (10:52 +0100)]
arm64: Expose DC CVADP to userspace
ARMv8.5 builds upon the ARMv8.2 DC CVAP instruction by introducing a DC
CVADP instruction which cleans the data cache to the point of deep
persistence. Let's expose this support via the arm64 ELF hwcaps.
Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Andrew Murray [Tue, 9 Apr 2019 09:52:42 +0000 (10:52 +0100)]
arm64: Handle trapped DC CVADP
The ARMv8.5 DC CVADP instruction may be trapped to EL1 via
SCTLR_EL1.UCI, so let's provide a handler for it.
Just like the CVAP instruction we use a 'sys' instruction instead of
the 'dc' alias to avoid build issues with older toolchains.
Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Andrew Murray [Tue, 9 Apr 2019 09:52:41 +0000 (10:52 +0100)]
arm64: HWCAP: encapsulate elf_hwcap
The introduction of AT_HWCAP2 introduced accessors which ensure that
hwcap features are set and tested appropriately.
Let's now mandate access to elf_hwcap via these accessors by making
elf_hwcap static within cpufeature.c.
Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Andrew Murray [Tue, 9 Apr 2019 09:52:40 +0000 (10:52 +0100)]
arm64: HWCAP: add support for AT_HWCAP2
As we will exhaust the first 32 bits of AT_HWCAP, let's start
exposing AT_HWCAP2 to userspace to give us up to 64 caps.
Whilst it's possible to use the remaining 32 bits of AT_HWCAP, we
prefer to expand into AT_HWCAP2 in order to provide a consistent
view to userspace between ILP32 and LP64. However internal to the
kernel we prefer to continue to use the full space of elf_hwcap.
To reduce complexity and allow for future expansion, we now
represent hwcaps in the kernel as ordinals and use a
KERNEL_HWCAP_ prefix. This allows us to support automatic feature
based module loading for all our hwcaps.
We introduce cpu_set_feature to set hwcaps which complements the
existing cpu_have_feature helper. These helpers allow us to clean
up existing direct uses of elf_hwcap and reduce any future effort
required to move beyond 64 caps.
For convenience we also introduce cpu_{have,set}_named_feature which
makes use of the cpu_feature macro to allow providing a hwcap name
without a {KERNEL_}HWCAP_ prefix.
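A hedged sketch of the accessor pattern, assuming MAX_CPU_FEATURES is
64:

  static inline void cpu_set_feature(unsigned int num)
  {
      WARN_ON(num >= MAX_CPU_FEATURES);
      elf_hwcap |= BIT(num);
  }

  static inline bool cpu_have_feature(unsigned int num)
  {
      WARN_ON(num >= MAX_CPU_FEATURES);
      return elf_hwcap & BIT(num);
  }

  /* name-based convenience wrappers */
  #define cpu_set_named_feature(name)  cpu_set_feature(cpu_feature(name))
  #define cpu_have_named_feature(name) cpu_have_feature(cpu_feature(name))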
Signed-off-by: Andrew Murray <andrew.murray@arm.com>
[will: use const_ilog2() and tweak documentation]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Masami Hiramatsu [Fri, 12 Apr 2019 14:22:01 +0000 (23:22 +0900)]
arm64: ptrace: Add function argument access API
Add regs_get_argument(), which returns the Nth argument of the
function call. On arm64, it supports up to the 8th argument.
Note that this chooses the most probable assignment; in some cases
it can be incorrect (e.g. when passing a data structure or floating
point arguments).
This enables ftrace kprobe events to access kernel function
arguments via the $argN syntax.
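A sketch following the AAPCS64 convention that the first eight integer
arguments are passed in registers x0-x7:

  /* AAPCS64: the first eight integer arguments live in x0-x7 */
  #define NR_REG_ARGUMENTS 8

  static inline unsigned long regs_get_kernel_argument(struct pt_regs *regs,
                                                       unsigned int n)
  {
      if (n < NR_REG_ARGUMENTS)
          return pt_regs_read_reg(regs, n);
      return 0;
  }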
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
[will: tidied up the comment a bit]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Masahiro Yamada [Thu, 11 Apr 2019 09:30:15 +0000 (18:30 +0900)]
arm64: vdso: use $(LD) instead of $(CC) to link VDSO
We use $(LD) to link vmlinux, modules, decompressors, etc.
VDSO is the only exceptional case where $(CC) is used as the linker
driver, but I do not know why we need to do so. VDSO uses a special
linker script, and does not link standard libraries at all.
I changed the Makefile to use $(LD) rather than $(CC). I tested this,
and VDSO worked for me.
Users will be able to use their favorite linker (e.g. lld instead of
bfd) by passing LD= from the command line.
My plan is to rewrite all VDSO Makefiles to use $(LD), then delete
cc-ldoption.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Raphael Gault [Thu, 11 Apr 2019 16:16:46 +0000 (17:16 +0100)]
arm64: perf_event: Remove wrongfully used inline
The functions armv8pmu_read_counter() and armv8pmu_write_counter()
are `static inline` while they are only referenced when assigned
to a function pointer field in a `struct arm_pmu` instance.
The inline keyword is thus counter-intuitive and shouldn't be used.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Raphael Gault <raphael.gault@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Vincenzo Frascino [Mon, 1 Apr 2019 11:30:14 +0000 (12:30 +0100)]
arm64: compat: Reduce address limit
Currently, compat tasks running on arm64 can allocate memory up to
TASK_SIZE_32 (UL(0x100000000)).
This means that mmap() allocations, if we treat them as returning an
array, are not compliant with section 6.5.8 of the C standard
(C99) which states that: "If the expression P points to an element of
an array object and the expression Q points to the last element of the
same array object, the pointer expression Q+1 compares greater than P".
Redefine TASK_SIZE_32 to address the issue.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Jann Horn <jannh@google.com>
Cc: <stable@vger.kernel.org>
Reported-by: Jann Horn <jannh@google.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
[will: fixed typo in comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Jean-Philippe Brucker [Mon, 8 Apr 2019 17:17:19 +0000 (18:17 +0100)]
arm64: Save and restore OSDLR_EL1 across suspend/resume
When the CPU comes out of suspend, the firmware may have modified the OS
Double Lock Register. Save it in an unused slot of cpu_suspend_ctx, and
restore it on resume.
Cc: <stable@vger.kernel.org>
Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Jean-Philippe Brucker [Mon, 8 Apr 2019 17:17:18 +0000 (18:17 +0100)]
arm64: Clear OSDLR_EL1 on CPU boot
Some firmwares may reboot CPUs with OS Double Lock set. Make sure that
it is unlocked, in order to use debug exceptions.
Cc: <stable@vger.kernel.org>
Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Mon, 8 Apr 2019 10:23:48 +0000 (11:23 +0100)]
arm64: mm: Consolidate early page table allocation
The logic for early allocation of page tables is duplicated between
pgd_kernel_pgtable_alloc() and pgd_pgtable_alloc(). Drop the duplication
by calling one from the other and renaming pgd_kernel_pgtable_alloc()
accordingly.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Yu Zhao [Tue, 12 Mar 2019 00:57:49 +0000 (18:57 -0600)]
arm64: mm: enable per pmd page table lock
Switch from a per-mm_struct to a per-pmd page table lock by enabling
ARCH_ENABLE_SPLIT_PMD_PTLOCK. This provides better granularity for
large systems.
I'm not sure if there is contention on mm->page_table_lock. Given
that the option comes at no cost (apart from initializing more spin
locks), why not enable it now?
We only do so when pmd is not folded, so we don't mistakenly call
pgtable_pmd_page_ctor() on pud or p4d in pgd_pgtable_alloc().
Signed-off-by: Yu Zhao <yuzhao@google.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Anshuman Khandual [Tue, 12 Mar 2019 13:25:45 +0000 (18:55 +0530)]
KVM: ARM: Remove pgtable page standard functions from stage-2 page tables
ARM64 standard pgtable functions are going to use pgtable_page_[ctor|dtor]
or pgtable_pmd_page_[ctor|dtor] constructs. At present, KVM guest stage-2
PUD|PMD|PTE level page table pages are allocated with __get_free_page()
via mmu_memory_cache_alloc(), but released with the standard pud|pmd_free()
or pte_free_kernel() helpers. These will fail once they start calling into
pgtable_[pmd]_page_dtor() for pages which never originally went through the
respective constructor functions. Hence convert all stage-2 page table page
release functions to call the buddy allocator directly when freeing pages.
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Yu Zhao <yuzhao@google.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Yu Zhao [Tue, 12 Mar 2019 00:57:47 +0000 (18:57 -0600)]
arm64: mm: don't call page table ctors for init_mm
init_mm doesn't require the page table lock to be initialized at
any level. Add a separate page table allocator for it, and the
new one skips the page table ctors.
The ctors allocate memory when ALLOC_SPLIT_PTLOCKS is set. Not
calling them avoids a memory leak in case we call pte_free_kernel()
on init_mm.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Yu Zhao <yuzhao@google.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Yu Zhao [Tue, 12 Mar 2019 00:57:46 +0000 (18:57 -0600)]
arm64: mm: use appropriate ctors for page tables
For pte page, use pgtable_page_ctor(); for pmd page, use
pgtable_pmd_page_ctor(); and for the rest (pud, p4d and pgd),
don't use any.
For now, we don't select ARCH_ENABLE_SPLIT_PMD_PTLOCK, so
pgtable_pmd_page_ctor() is a no-op. When we do select it in patch 3,
we make sure pmd is not folded so that we won't mistakenly call
pgtable_pmd_page_ctor() on pud or p4d.
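A sketch of selecting the ctor by table level, as the description implies (the control flow is assumed):
struct page *page = phys_to_page(pa);

switch (shift) {
case PAGE_SHIFT:        /* pte level */
        BUG_ON(!pgtable_page_ctor(page));
        break;
case PMD_SHIFT:         /* pmd level */
        BUG_ON(!pgtable_pmd_page_ctor(page));
        break;
}                       /* pud/p4d/pgd pages take no ctor */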
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Yu Zhao <yuzhao@google.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Tue, 26 Feb 2019 15:39:47 +0000 (15:39 +0000)]
arm64: debug: Clean up brk_handler()
brk_handler() now looks pretty strange and can be refactored to drop its
funny 'handler_found' local variable altogether.
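A hedged sketch of the refactored shape, returning early once a hook claims the exception (the exact error handling is an assumption):
static int brk_handler(unsigned long unused, unsigned int esr,
                       struct pt_regs *regs)
{
        if (call_break_hook(regs, esr) == DBG_HOOK_HANDLED)
                return 0;

        if (!user_mode(regs))
                return -EFAULT;

        send_user_sigtrap(TRAP_BRKPT);
        return 0;
}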
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Tue, 26 Feb 2019 15:06:42 +0000 (15:06 +0000)]
arm64: probes: Move magic BRK values into brk-imm.h
kprobes and uprobes reserve some BRK immediates for installing their
probes. Define these along with the other reservations in brk-imm.h
and rename the ESR definitions to be consistent with the others that we
already have.
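Illustrative additions to brk-imm.h (the exact immediate values here are assumptions):
#define KPROBES_BRK_IMM         0x004
#define UPROBES_BRK_IMM         0x005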
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Tue, 26 Feb 2019 15:37:09 +0000 (15:37 +0000)]
arm64: debug: Remove redundant user_mode(regs) checks from debug handlers
Now that the debug hook dispatching code takes the triggering exception
level into account, there's no need for the hooks themselves to poke
around with user_mode(regs).
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Tue, 26 Feb 2019 14:35:00 +0000 (14:35 +0000)]
arm64: kprobes: Avoid calling kprobes debug handlers explicitly
Kprobes bypasses our debug hook registration code so that it doesn't
get tangled up with recursive debug exceptions from things like lockdep:
http://lists.infradead.org/pipermail/linux-arm-kernel/2015-February/324385.html
However, since then, (a) the hook list has become RCU protected and (b)
the kprobes hooks were found not to filter out exceptions from userspace
correctly. On top of that, the step handler is invoked directly from
single_step_handler(), which *does* use the debug hook list, so it's
clearly not the end of the world.
For now, have kprobes use the debug hook registration API like everybody
else. We can revisit this in the future if this is found to limit
coverage significantly.
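A sketch of what using the registration API looks like here (the handler name is assumed):
static struct break_hook kprobes_break_hook = {
        .imm    = KPROBES_BRK_IMM,
        .fn     = kprobe_breakpoint_handler,
};

/* at subsystem init: */
register_kernel_break_hook(&kprobes_break_hook);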
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Tue, 26 Feb 2019 12:52:47 +0000 (12:52 +0000)]
arm64: debug: Separate debug hooks based on target exception level
Mixing kernel and user debug hooks together is highly error-prone as it
relies on all of the hooks to figure out whether the exception came from
kernel or user, and then to act accordingly.
Make our debug hook code a little more robust by maintaining separate
hook lists for user and kernel, with separate registration functions
to force callers to be explicit about the exception levels that they
care about.
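The registration API implied by the split (the prototypes are an assumption):
void register_user_break_hook(struct break_hook *hook);
void register_kernel_break_hook(struct break_hook *hook);
void register_user_step_hook(struct step_hook *hook);
void register_kernel_step_hook(struct step_hook *hook);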
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Mon, 25 Feb 2019 13:22:17 +0000 (13:22 +0000)]
arm64: debug: Remove meaningless comment
The comment next to the definition of our 'break_hook' list head is
at best wrong but mainly just meaningless. Rip it out.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Mon, 25 Feb 2019 12:42:26 +0000 (12:42 +0000)]
arm64: debug: Rename addr parameter for non-watchpoint exception hooks
Since the 'addr' parameter contains an UNKNOWN value for non-watchpoint
debug exceptions, rename it to 'unused' for those hooks so we don't get
tempted to use it in the future.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Mon, 25 Feb 2019 12:06:43 +0000 (12:06 +0000)]
arm64: debug: Remove unused return value from do_debug_exception()
do_debug_exception() goes out of its way to return a value that isn't
ever used, so just make the thing void.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Torsten Duwe [Fri, 8 Feb 2019 15:10:14 +0000 (16:10 +0100)]
kasan: Makefile: Replace -pg with CC_FLAGS_FTRACE
In preparation for arm64 supporting ftrace built with other compiler
options, let's have Makefiles remove the $(CC_FLAGS_FTRACE) flags,
whatever these may be, rather than assuming '-pg'.
There should be no functional change as a result of this patch.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Signed-off-by: Torsten Duwe <duwe@suse.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Torsten Duwe [Fri, 8 Feb 2019 15:10:11 +0000 (16:10 +0100)]
efi/arm/arm64: Makefile: Replace -pg with CC_FLAGS_FTRACE
In preparation for arm64 supporting ftrace built with other compiler
options, let's have Makefiles remove the $(CC_FLAGS_FTRACE) flags,
whatever these may be, rather than assuming '-pg'.
While at it, fix arm32 as well. There should be no functional change as
a result of this patch.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Torsten Duwe <duwe@suse.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Torsten Duwe [Fri, 8 Feb 2019 15:10:07 +0000 (16:10 +0100)]
arm64: Makefile: Replace -pg with CC_FLAGS_FTRACE
In preparation for arm64 supporting ftrace built with other compiler
options, let's have the arm64 Makefiles remove the $(CC_FLAGS_FTRACE)
flags, whatever these may be, rather than assuming '-pg'.
There should be no functional change as a result of this patch.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Torsten Duwe <duwe@suse.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Alexandru Elisei [Fri, 5 Apr 2019 10:20:12 +0000 (11:20 +0100)]
arm64: Use defines instead of magic numbers
The following assembly code is not trivial; make it slightly easier to
read by replacing some of the magic numbers with the defines already
present in sysreg.h.
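An illustrative before/after using bits that sysreg.h already names (this hunk is an example of the technique, not a quote of the patch):
write_sysreg(0x1005, sctlr_el1);        /* before: opaque magic number */
write_sysreg(SCTLR_ELx_I | SCTLR_ELx_C | SCTLR_ELx_M, sctlr_el1);       /* after */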
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Shameer Kolothum [Tue, 26 Mar 2019 15:17:53 +0000 (15:17 +0000)]
perf/smmuv3: Enable HiSilicon Erratum 162001800 quirk
HiSilicon erratum 162001800 describes a limitation of the SMMUv3 PMCG
implementation on HiSilicon Hip08 platforms. On these platforms, the
PMCG event counter registers (SMMU_PMCG_EVCNTRn) are read-only, and as
a result it is not possible to set the initial counter period value on
event monitor start.
To work around this, the current value of the counter is read and used
for delta calculations. OEM information from the ACPI header is used
to identify the affected hardware platforms.
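A sketch of the delta-based workaround (helper and field names are illustrative):
/* counters are read-only on Hip08: snapshot rather than program */
u64 now = smmu_pmu_counter_get_value(smmu_pmu, idx);
local64_set(&event->hw.prev_count, now);        /* delta base, not 0 */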
Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
[will: update silicon-errata.txt and add reason string to acpi match]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Shameer Kolothum [Tue, 26 Mar 2019 15:17:52 +0000 (15:17 +0000)]
perf/smmuv3: Add MSI irq support
This adds support for an MSI-based counter overflow interrupt.
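A sketch of the allocation path, assuming the generic platform-MSI helpers (the callback name is illustrative):
if (platform_msi_domain_alloc_irqs(dev, 1, smmu_pmu_write_msi_msg))
        return;         /* fall back to the wired overflow interrupt */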
Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Neil Leeder [Tue, 26 Mar 2019 15:17:51 +0000 (15:17 +0000)]
perf/smmuv3: Add arm64 smmuv3 pmu driver
Add a new driver to support the SMMUv3 PMU and plug it into the
perf events framework.
Each SMMU node may have multiple PMUs associated with it, each of
which may support different events.
SMMUv3 PMCG devices are named smmuv3_pmcg_<phys_addr_page>, where
<phys_addr_page> is the physical page address of the SMMU PMCG,
wrapped to a 4K boundary. For example, the PMCG at 0xff88840000 is
named smmuv3_pmcg_ff88840.
Filtering by stream id is done by specifying filtering parameters
with the event. The options are:
filter_enable    - 0 = no filtering, 1 = filtering enabled
filter_span      - 0 = exact match, 1 = pattern match
filter_stream_id - pattern to filter against
Example:
perf stat -e smmuv3_pmcg_ff88840/transaction,filter_enable=1,filter_span=1,filter_stream_id=0x42/ -a netperf
This applies filter pattern 0x42 to transaction events, which means
events matching stream IDs 0x42 and 0x43 are counted, as only the
upper StreamID bits are required to match the given filter. Further
filtering information is available in the SMMU documentation.
SMMU events are not attributable to a CPU, so task mode and sampling
are not supported.
Signed-off-by: Neil Leeder <nleeder@codeaurora.org>
Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
[will: fold in review feedback from Robin]
[will: rewrite Kconfig text and allow building as a module]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Neil Leeder [Tue, 26 Mar 2019 15:17:50 +0000 (15:17 +0000)]
ACPI/IORT: Add support for PMCG
Add support for the SMMU Performance Monitor Counter Group
information from ACPI. This is in preparation for its use
in the SMMUv3 PMU driver.
Signed-off-by: Neil Leeder <nleeder@codeaurora.org>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Masahiro Yamada [Wed, 3 Apr 2019 08:48:22 +0000 (17:48 +0900)]
arm64: vdso: fix and clean-up Makefile
- $(call if_changed,...) must have FORCE as a prerequisite
- vdso.lds is a generated file, so it should be prefixed with
$(obj)/ instead of $(src)/.
- cmd_vdsosym is a one-liner rule, so the assignment with '='
is simpler.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Wed, 3 Apr 2019 16:58:39 +0000 (17:58 +0100)]
arm64: mm: Ensure we ignore the initrd if it is placed out of range
If the initrd payload isn't completely accessible via the linear map,
then we print a warning during boot and nobble the virtual address of
the payload so that we ignore it later on.
Unfortunately, since commit c756c592e442 ("arm64: Utilize
phys_initrd_start/phys_initrd_size"), the virtual address isn't
initialised until later anyway, so we need to nobble the size of the
payload to ensure that we don't try to use it later on.
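Roughly the shape of the fix (the check shown is illustrative; the key line is zeroing the size):
if (!memblock_is_region_memory(phys_initrd_start, phys_initrd_size)) {
        pr_err("initrd not within memory, ignoring it\n");
        phys_initrd_size = 0;   /* initrd_start is assigned later anyway */
}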
Fixes: c756c592e442 ("arm64: Utilize phys_initrd_start/phys_initrd_size")
Reported-by: Pierre Kuo <vichy.kuo@gmail.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Wen Yang [Tue, 5 Mar 2019 11:34:05 +0000 (19:34 +0800)]
arm64: cpu_ops: fix a leaked reference by adding missing of_node_put
The call to of_get_next_child() returns a node pointer with its
refcount incremented, so it must be explicitly decremented after the
last use.
Detected by coccinelle with the following warnings:
./arch/arm64/kernel/cpu_ops.c:102:1-7: ERROR: missing of_node_put;
acquired a node pointer with refcount incremented on line 69, but
without a corresponding object release within this function.
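The shape of the fix (context simplified; the variable names are illustrative):
struct device_node *dn = of_get_next_child(of_node, NULL);

if (dn) {
        /* ... inspect the child node ... */
        of_node_put(dn);        /* balance the get from of_get_next_child() */
}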
Signed-off-by: Wen Yang <wen.yang99@zte.com.cn>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Wed, 3 Apr 2019 12:36:54 +0000 (13:36 +0100)]
arm64: mm: Make show_pte() a static function
show_pte() doesn't have any external callers, so make it static.
Signed-off-by: Will Deacon <will.deacon@arm.com>