Michael Ellerman [Thu, 26 Jul 2018 13:07:15 +0000 (23:07 +1000)]
powerpc/64s: Remove PSERIES naming from the MASKABLE macros
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Thu, 26 Jul 2018 13:07:14 +0000 (23:07 +1000)]
powerpc/64s: Drop _MASKABLE_RELON_EXCEPTION_PSERIES()
_MASKABLE_RELON_EXCEPTION_PSERIES() does nothing useful, update all
callers to use __MASKABLE_RELON_EXCEPTION_PSERIES() directly.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Thu, 26 Jul 2018 13:07:13 +0000 (23:07 +1000)]
powerpc/64s: Drop _MASKABLE_EXCEPTION_PSERIES()
_MASKABLE_EXCEPTION_PSERIES() does nothing useful, update all callers
to use __MASKABLE_EXCEPTION_PSERIES() directly.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Thu, 26 Jul 2018 13:07:12 +0000 (23:07 +1000)]
powerpc/64s: Rename EXCEPTION_PROLOG_PSERIES to EXCEPTION_PROLOG
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Thu, 26 Jul 2018 13:07:11 +0000 (23:07 +1000)]
powerpc/64s: Rename EXCEPTION_RELON_PROLOG_PSERIES
To just EXCEPTION_RELON_PROLOG().
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Thu, 26 Jul 2018 13:07:10 +0000 (23:07 +1000)]
powerpc/64s: Rename EXCEPTION_RELON_PROLOG_PSERIES_1
The EXCEPTION_RELON_PROLOG_PSERIES_1() macro does the same job as
EXCEPTION_PROLOG_2 (which we just recently created), except for
"RELON" (relocation on) exceptions.
So rename it as such.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Thu, 26 Jul 2018 13:07:09 +0000 (23:07 +1000)]
powerpc/64s: Remove PSERIES from the NORI macros
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Thu, 26 Jul 2018 13:07:08 +0000 (23:07 +1000)]
powerpc/64s: Rename EXCEPTION_PROLOG_PSERIES_1 to EXCEPTION_PROLOG_2
As with the other patches in this series, we are removing the
"PSERIES" from the name as it's no longer meaningful.
In this case it's not simply a case of removing the "PSERIES" as that
would result in a clash with the existing EXCEPTION_PROLOG_1.
Instead we name this one EXCEPTION_PROLOG_2, as it's usually used in
sequence after 0 and 1.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Thu, 26 Jul 2018 13:07:07 +0000 (23:07 +1000)]
powerpc/64s: Rename STD_RELON_EXCEPTION_PSERIES_OOL to STD_RELON_EXCEPTION_OOL
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Thu, 26 Jul 2018 13:07:06 +0000 (23:07 +1000)]
powerpc/64s: Rename STD_RELON_EXCEPTION_PSERIES to STD_RELON_EXCEPTION
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Thu, 26 Jul 2018 13:07:05 +0000 (23:07 +1000)]
powerpc/64s: Rename STD_EXCEPTION_PSERIES_OOL to STD_EXCEPTION_OOL
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Thu, 26 Jul 2018 13:07:04 +0000 (23:07 +1000)]
powerpc/64s: Rename STD_EXCEPTION_PSERIES to STD_EXCEPTION
The "PSERIES" in STD_EXCEPTION_PSERIES is to differentiate the macros
from the legacy iSeries versions, which are called
STD_EXCEPTION_ISERIES. It is not anything to do with pseries vs
powernv or powermac etc.
We removed the legacy iSeries code in 2012, in commit 8ee3e0d69623
("powerpc: Remove the main legacy iSerie platform code").
So remove "PSERIES" from the macros.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Thu, 26 Jul 2018 13:07:03 +0000 (23:07 +1000)]
powerpc/64s: Move SET_SCRATCH0() into EXCEPTION_RELON_PROLOG_PSERIES()
EXCEPTION_RELON_PROLOG_PSERIES() only has two users,
STD_RELON_EXCEPTION_PSERIES() and STD_RELON_EXCEPTION_HV() both of
which "call" SET_SCRATCH0(), so just move SET_SCRATCH0() into
EXCEPTION_RELON_PROLOG_PSERIES().
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Thu, 26 Jul 2018 13:07:02 +0000 (23:07 +1000)]
powerpc/64s: Move SET_SCRATCH0() into EXCEPTION_PROLOG_PSERIES()
EXCEPTION_PROLOG_PSERIES() only has two users, STD_EXCEPTION_PSERIES()
and STD_EXCEPTION_HV() both of which "call" SET_SCRATCH0(), so just
move SET_SCRATCH0() into EXCEPTION_PROLOG_PSERIES().
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Darren Stevens [Wed, 25 Jul 2018 20:55:18 +0000 (21:55 +0100)]
powerpc/pasemi: Search for PCI root bus by compatible property
Pasemi arch code finds the root of the PCI-e bus by searching the
device-tree for a node called 'pxp'. But the root bus has a compatible
property of 'pasemi,rootbus' so search for that instead.
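For illustration, a minimal sketch of the new lookup, assuming the standard OF helper (the real pasemi setup code may structure this differently):
#include <linux/of.h>

/* Illustrative only: find the root bus node by its "pasemi,rootbus"
 * compatible string instead of relying on the node being named 'pxp'.
 */
static struct device_node *find_pasemi_root_node(void)
{
	return of_find_compatible_node(NULL, NULL, "pasemi,rootbus");
}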
Signed-off-by: Darren Stevens <darren@stevens-zone.net>
Acked-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Wed, 1 Aug 2018 09:01:16 +0000 (09:01 +0000)]
selftests/powerpc: Update strlen() test to test the new assembly function for PPC32
This patch adds a test for the new assembly strlen() on PPC32.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[mpe: Fix 64-bit build]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Wed, 1 Aug 2018 09:01:14 +0000 (09:01 +0000)]
powerpc/lib: Implement strlen() in assembly for PPC32
The generic implementation of strlen() reads strings byte by byte.
This patch implements strlen() in assembly based on reading entire
words, in the same spirit as what some other arches and glibc do.
On an 8xx the time spent in strlen() is reduced by 3/4 for long strings.
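For illustration, a minimal userspace C sketch of the word-at-a-time idea (this is not the PPC32 assembly; the zero-byte test is the classic bit trick, and reading a whole aligned word may look past the terminator but stays within that word):
#include <stddef.h>
#include <stdint.h>

static size_t strlen_word_at_a_time(const char *s)
{
	const char *p = s;
	const uint32_t *w;

	/* Walk byte by byte until p is 4-byte aligned. */
	while ((uintptr_t)p & 3) {
		if (*p == '\0')
			return p - s;
		p++;
	}

	/* A word v contains a zero byte iff this expression is non-zero. */
	w = (const uint32_t *)p;
	while (!(((*w - 0x01010101u) & ~*w) & 0x80808080u))
		w++;

	/* Find the exact terminating byte within the last word. */
	p = (const char *)w;
	while (*p)
		p++;
	return p - s;
}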
strlen() selftest on an 8xx provides the following values:
Before the patch (ie with the generic strlen() in lib/string.c):
len 256 : time = 1.195055
len 016 : time = 0.083745
len 008 : time = 0.046828
len 004 : time = 0.028390
After the patch:
len 256 : time = 0.272185 ==> 78% improvement
len 016 : time = 0.040632 ==> 51% improvement
len 008 : time = 0.033060 ==> 29% improvement
len 004 : time = 0.029149 ==> 2% degradation
On a 832x:
Before the patch:
len 256 : time = 0.236125
len 016 : time = 0.018136
len 008 : time = 0.011000
len 004 : time = 0.007229
After the patch:
len 256 : time = 0.094950 ==> 60% improvement
len 016 : time = 0.013357 ==> 26% improvement
len 008 : time = 0.010586 ==> 4% improvement
len 004 : time = 0.008784
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Wed, 1 Aug 2018 09:01:12 +0000 (09:01 +0000)]
selftests/powerpc: Add test for strlen()
This patch adds a test for strlen().
string.c contains a copy of strlen() from lib/string.c.
The test first checks the correctness of strlen() by comparing the
result with the libc strlen(). It tests all cases of alignment.
It then tests the duration of an aligned strlen() on a 4-byte string,
on a 16-byte string and on a 256-byte string.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[mpe: Drop change log from copy of string.c]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Wed, 1 Aug 2018 09:01:10 +0000 (09:01 +0000)]
selftests/powerpc: Add test for 32 bits memcmp
This patch renames the memcmp test to memcmp_64 and adds a memcmp_32
test for testing the 32-bit version of memcmp().
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[mpe: Fix 64-bit build by adding build_32bit test]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Mahesh Salgaonkar [Wed, 4 Jul 2018 17:57:21 +0000 (23:27 +0530)]
powerpc/pseries: Defer the logging of rtas error to irq work queue.
rtas_log_buf is a buffer to hold RTAS event data that is communicated
to the kernel by the hypervisor. This buffer is then used to pass RTAS
event data to userspace through procfs. This buffer is allocated from
the vmalloc (non-linear mapping) area.
On a machine check interrupt, register r3 points to the RTAS extended
event log passed by the hypervisor, which contains the MCE event. The
pseries machine check handler then logs this error into rtas_log_buf.
Since rtas_log_buf is a vmalloc-ed (non-linear) buffer, we end up
taking a page fault (vector 0x300) while accessing it. The machine
check interrupt handler runs in NMI context, so we cannot afford to
take any page fault. Page faults are not honored in NMI context and
cause a kernel panic. Apart from that, as Nick pointed out,
pSeries_log_error() also takes a spin_lock while logging the error,
which is not safe in NMI context. It may end up in a deadlock if we
get another MCE before releasing the lock. Fix this by deferring the
logging of the rtas error to an irq work queue.
The current implementation uses two different buffers to hold the rtas
error log depending on whether an extended log is provided or not.
This makes it a bit difficult to identify which buffer has valid data
that needs to be logged later in irq work. Simplify this by using a
single buffer, one per paca, and copy the rtas log to it irrespective
of whether an extended log is provided or not. Allocate this buffer
below the RMA region so that it can be accessed in the real mode MCE
handler.
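A rough sketch of that deferral pattern, with illustrative names (the real pseries code uses its per-paca buffer and hands the copy to the existing RTAS logging path):
#include <linux/init.h>
#include <linux/irq_work.h>
#include <linux/kernel.h>
#include <linux/string.h>

static char mce_rtas_log[2048];		/* in the real code: one per paca, below the RMA */
static struct irq_work mce_log_work;

/* Runs later, outside NMI context: taking spinlocks and touching the
 * vmalloc'ed rtas_log_buf is safe here. The real code would pass
 * mce_rtas_log to the existing logging path at this point.
 */
static void mce_log_work_fn(struct irq_work *work)
{
	pr_info("deferred RTAS error log is ready to be logged\n");
}

/* Called from the machine check handler in NMI context: only copy into
 * the pre-allocated buffer and queue the irq_work, nothing else.
 */
static void mce_defer_rtas_log(const char *rtas_log, unsigned long len)
{
	memcpy(mce_rtas_log, rtas_log,
	       min_t(unsigned long, len, sizeof(mce_rtas_log)));
	irq_work_queue(&mce_log_work);
}

static int __init mce_log_work_init(void)
{
	init_irq_work(&mce_log_work, mce_log_work_fn);
	return 0;
}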
Fixes: b96672dd840f ("powerpc: Machine check interrupt is a non-maskable interrupt")
Cc: stable@vger.kernel.org # v4.14+
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Mahesh Salgaonkar [Wed, 4 Jul 2018 17:57:02 +0000 (23:27 +0530)]
powerpc/pseries: Avoid using the size greater than RTAS_ERROR_LOG_MAX.
The global MCE data buffer that is used to copy the rtas error log is
2048 (RTAS_ERROR_LOG_MAX) bytes in size. Before the copy we read
extended_log_length from the rtas error log header, then use the max of
extended_log_length and RTAS_ERROR_LOG_MAX as the size of the data to
be copied.
Ideally the platform (phyp) will never send an extended error log with
size > 2048. But if that happens, then we risk a buffer overrun and
corruption. Fix this by using min_t instead.
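The shape of the fix, as a small sketch (buffer and variable names follow the description above; the surrounding code is illustrative):
#include <linux/kernel.h>
#include <linux/string.h>

#define RTAS_ERROR_LOG_MAX 2048

static char global_mce_data_buf[RTAS_ERROR_LOG_MAX];

/* Clamp the copy length to the buffer size with min_t() instead of
 * widening it with max(): an oversized extended_log_length reported by
 * the hypervisor can then no longer overrun the 2048-byte buffer.
 */
static void copy_rtas_log(const char *log, int extended_log_length)
{
	int len = min_t(int, extended_log_length, RTAS_ERROR_LOG_MAX);

	memcpy(global_mce_data_buf, log, len);
}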
Fixes: d368514c3097 ("powerpc: Fix corruption when grabbing FWNMI data")
Reported-by: Michal Suchanek <msuchanek@suse.com>
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Benjamin Herrenschmidt [Wed, 11 Apr 2018 05:18:01 +0000 (15:18 +1000)]
powerpc/xive: Remove xive_kexec_teardown_cpu()
It's identical to xive_teardown_cpu(), so just use the latter.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Benjamin Herrenschmidt [Wed, 11 Apr 2018 05:18:00 +0000 (15:18 +1000)]
powerpc/xive: Remove now useless pr_debug statements
Those overly verbose statements in the setup of the pool VP aren't
particularly useful (especially considering we don't actually use the
pool; we only configure it because the HW requires it). So remove
them, which improves code readability.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Wed, 25 Jul 2018 09:54:28 +0000 (19:54 +1000)]
powerpc/64s: free page table caches at exit_mmap time
The kernel page table caches are tied to init_mm, so there is no
more need for them after userspace is finished.
destroy_context() gets called when we drop the last reference for an
mm, which can be much later than the task exit due to other lazy mm
references to it. We can free the page table cache pages on task exit
because they only cache the userspace page tables and kernel threads
should not access user space addresses.
The mapping for kernel threads itself is maintained in init_mm and
page table cache for that is attached to init_mm.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
[mpe: Merge change log additions from Aneesh]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Wed, 25 Jul 2018 13:58:06 +0000 (23:58 +1000)]
powerpc/64s/radix: tlb do not flush on page size when fullmm
When the mm is being torn down there will be a full PID flush so
there is no need to flush the TLB on page size changes.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Thu, 26 Jul 2018 12:24:59 +0000 (22:24 +1000)]
selftests/powerpc: Give some tests longer to run
Some of these long-running tests can time out on heavily loaded
systems, so give them longer to run.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Thu, 26 Jul 2018 12:24:58 +0000 (22:24 +1000)]
selftests/powerpc: Only run some tests on ppc64le
These tests are currently failing on (some) big endian systems. Until
we can fix that, skip them unless we're on ppc64le.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Thu, 26 Jul 2018 12:24:57 +0000 (22:24 +1000)]
selftests/powerpc: Add a helper for checking if we're on ppc64le
Some of our selftests have only been tested on ppc64le and crash or
behave weirdly on ppc64/ppc32. So add a helper for checking the UTS
machine.
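Such a helper can be as simple as the sketch below (the real selftest helper may differ in name and return convention):
#include <string.h>
#include <sys/utsname.h>

/* Return non-zero when the UTS machine string reports ppc64le. */
static int uts_machine_is_ppc64le(void)
{
	struct utsname uts;

	if (uname(&uts) != 0)
		return 0;

	return strcmp(uts.machine, "ppc64le") == 0;
}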
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Tue, 24 Jul 2018 14:03:46 +0000 (00:03 +1000)]
powerpc: Add a checkpatch wrapper with our preferred settings
This makes it easy to run checkpatch with settings that I like.
Usage is eg:
$ ./arch/powerpc/tools/checkpatch.sh -g origin/master..
To check all commits since origin/master.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Russell Currey <ruscur@russell.cc>
Michael Ellerman [Mon, 6 Aug 2018 10:30:49 +0000 (20:30 +1000)]
powerpc/64: Disable irq restore warning for now
We recently added a warning in arch_local_irq_restore() to check that
the soft masking state matches reality.
Unfortunately it trips in a few places, which are not entirely trivial
to fix. The key problem is if we're doing function_graph tracing of
restore_math(), the warning pops and then seems to recurse. It's not
entirely clear because the system continuously oopses on all CPUs,
with the output interleaved and unreadable.
It's also been observed on a G5 coming out of idle.
Until we can fix those cases, disable the warning for now.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reza Arbab [Fri, 3 Aug 2018 04:03:36 +0000 (23:03 -0500)]
powerpc/powernv: Fix concurrency issue with npu->mmio_atsd_usage
We've encountered a performance issue when multiple processors stress
{get,put}_mmio_atsd_reg(). These functions contend for
mmio_atsd_usage, an unsigned long used as a bitmask.
The accesses to mmio_atsd_usage are done using test_and_set_bit_lock()
and clear_bit_unlock(). As implemented, both of these will require
a (successful) stwcx to that same cache line.
What we end up with is thread A, attempting to unlock, being slowed by
other threads repeatedly attempting to lock. A's stwcx instructions
fail and retry because the memory reservation is lost every time a
different thread beats it to the punch.
There may be a long-term way to fix this at a larger scale, but for
now resolve the immediate problem by gating our call to
test_and_set_bit_lock() with one to test_bit(), which is obviously
implemented without using a store.
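A sketch of the resulting pattern, with illustrative names (not the npu code itself):
#include <linux/bitops.h>

static unsigned long usage_mask;

/* Read the bit with test_bit() first and only attempt the locked
 * test-and-set when it looks free. The plain read issues no store, so
 * contending CPUs stop stealing the reservation from the CPU that is
 * doing the clear_bit_unlock().
 */
static int get_reg(unsigned int nregs)
{
	unsigned int i;

	for (;;) {
		for (i = 0; i < nregs; i++) {
			if (!test_bit(i, &usage_mask) &&
			    !test_and_set_bit_lock(i, &usage_mask))
				return i;
		}
	}
}

static void put_reg(unsigned int i)
{
	clear_bit_unlock(i, &usage_mask);
}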
Fixes: 1ab66d1fbada ("powerpc/powernv: Introduce address translation services for Nvlink2")
Signed-off-by: Reza Arbab <arbab@linux.ibm.com>
Acked-by: Alistair Popple <alistair@popple.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christoph Hellwig [Mon, 30 Jul 2018 07:37:21 +0000 (09:37 +0200)]
powerpc: Do not redefine NEED_DMA_MAP_STATE
kernel/dma/Kconfig already defines NEED_DMA_MAP_STATE, just select it
from CONFIG_PPC using the same condition as an if guard.
Signed-off-by: Christoph Hellwig <hch@lst.de>
[mpe: Move it under PPC]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Guenter Roeck [Tue, 31 Jul 2018 01:44:14 +0000 (18:44 -0700)]
powerpc/4xx: Fix error return path in ppc4xx_msi_probe()
An arbitrary error in ppc4xx_msi_probe() quite likely results in a
crash similar to the following, seen after dma_alloc_coherent()
returned an error.
Unable to handle kernel paging request for data at address 0x00000000
Faulting instruction address: 0xc001bff0
Oops: Kernel access of bad area, sig: 11 [#1]
BE Canyonlands
Modules linked in:
CPU: 0 PID: 1 Comm: swapper Tainted: G W 4.18.0-rc6-00010-gff33d1030a6c #1
NIP:  c001bff0 LR: c001c418 CTR: c01faa7c
REGS: cf82db40 TRAP: 0300  Tainted: G W (4.18.0-rc6-00010-gff33d1030a6c)
MSR:  00029000 <CE,EE,ME>  CR: 28002024  XER: 00000000
DEAR: 00000000 ESR: 00000000
GPR00: c001c418 cf82dbf0 cf828000 cf8de400 00000000 00000000 000000c4 000000c4
GPR08: c0481ea4 00000000 00000000 000000c4 22002024 00000000 c00025e8 00000000
GPR16: 00000000 00000000 00000000 00000000 00000000 00000000 c0492380 0000004a
GPR24: 00029000 0000000c 00000000 cf8de410 c0494d60 c0494d60 cf8bebc0 00000001
NIP [c001bff0] ppc4xx_of_msi_remove+0x48/0xa0
LR [c001c418] ppc4xx_msi_probe+0x294/0x3b8
Call Trace:
[cf82dbf0] [00029000] 0x29000 (unreliable)
[cf82dc10] [c001c418] ppc4xx_msi_probe+0x294/0x3b8
[cf82dc70] [c0209fbc] platform_drv_probe+0x40/0x9c
[cf82dc90] [c0208240] driver_probe_device+0x2a8/0x350
[cf82dcc0] [c0206204] bus_for_each_drv+0x60/0xac
[cf82dcf0] [c0207e88] __device_attach+0xe8/0x160
[cf82dd20] [c02071e0] bus_probe_device+0xa0/0xbc
[cf82dd40] [c02050c8] device_add+0x404/0x5c4
[cf82dd90] [c0288978] of_platform_device_create_pdata+0x88/0xd8
[cf82ddb0] [c0288b70] of_platform_bus_create+0x134/0x220
[cf82de10] [c0288bcc] of_platform_bus_create+0x190/0x220
[cf82de70] [c0288cf4] of_platform_bus_probe+0x98/0xec
[cf82de90] [c0449650] __machine_initcall_canyonlands_ppc460ex_device_probe+0x38/0x54
[cf82dea0] [c0002404] do_one_initcall+0x40/0x188
[cf82df00] [c043daec] kernel_init_freeable+0x130/0x1d0
[cf82df30] [c0002600] kernel_init+0x18/0x104
[cf82df40] [c000c23c] ret_from_kernel_thread+0x14/0x1c
Instruction dump:
90010024 813d0024 2f890000 83c30058 41bd0014 48000038 813d0024 7f89f800
409d002c 813e000c 57ea103a 3bff0001 <7c69502e> 2f830000 419effe0 4803b26d
---[ end trace 8cf551077ecfc42a ]---
Fix it up. Specifically,
- Return valid error codes from ppc4xx_setup_pcieh_hw(), have it clean
up after itself, and only access hardware after all possible error
conditions have been handled.
- Use devm_kzalloc() instead of kzalloc() in ppc4xx_msi_probe()
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Thu, 2 Aug 2018 15:39:51 +0000 (01:39 +1000)]
powernv/cpuidle: Fix idle states all being marked invalid
Commit 9c7b185ab2fe ("powernv/cpuidle: Parse dt idle properties into
global structure") parses dt idle states into structs, but never marks
them valid. This results in all idle states being lost.
Fixes: 9c7b185ab2fe ("powernv/cpuidle: Parse dt idle properties into global structure")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Akshay Adiga <akshay.adiga@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Sam Bobroff [Mon, 30 Jul 2018 01:59:14 +0000 (11:59 +1000)]
powerpc/pseries: fix EEH recovery of some IOV devices
EEH recovery currently fails on pSeries for some IOV capable PCI
devices, if CONFIG_PCI_IOV is on and the hypervisor doesn't provide
certain device tree properties for the device. (Found on an IOV
capable device using the ipr driver.)
Recovery fails in pci_enable_resources() at the check on r->parent,
because r->flags is set and r->parent is not. This state is due to
sriov_init() setting the start, end and flags members of the IOV BARs
but the parent not being set later in
pseries_pci_fixup_iov_resources(), because the
"ibm,open-sriov-vf-bar-info" property is missing.
Correct this by zeroing the resource flags for IOV BARs when they
can't be configured (this is the same method used by sriov_init() and
__pci_read_base()).
VFs cleared this way can't be enabled later, because that requires
another device tree property, "ibm,number-of-configurable-vfs" as well
as support for the RTAS function "ibm_map_pes". These are all part of
hypervisor support for IOV and it seems unlikely that a hypervisor
would ever partially, but not fully, support it. (None are currently
provided by QEMU/KVM.)
Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Reviewed-by: Bryant G. Ly <bryantly@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Shilpasri G Bhat [Tue, 24 Jul 2018 09:13:09 +0000 (14:43 +0530)]
hwmon: (ibmpowernv) Add attributes to enable/disable sensor groups
OPAL firmware provides the facility for some groups of sensors to be
enabled/disabled at runtime to give the user the option of using the
system resources for collecting these sensors or not.
For example, on POWER9 systems, the On Chip Controller (OCC) gathers
various system and chip level sensors and maintains their values in
main memory.
This patch provides support for enabling/disabling the sensor groups
like power, temperature, current and voltage.
Signed-off-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com>
[stewart@linux.vnet.ibm.com: Commit message]
Acked-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Shilpasri G Bhat [Tue, 24 Jul 2018 09:13:08 +0000 (14:43 +0530)]
powerpc/powernv: Add support to enable sensor groups
Adds support to enable/disable a sensor group at runtime. This can be
used to select the sensor groups that need to be copied to main memory
by the OCC. Sensor groups like power, temperature, current, voltage,
frequency and utilization can be enabled/disabled at runtime.
Signed-off-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Akshay Adiga [Thu, 5 Jul 2018 11:40:22 +0000 (17:10 +0530)]
powernv/cpuidle: Use parsed device tree values for cpuidle_init
Export pnv_idle_states and nr_pnv_idle_states so that they are
accessible to the cpuidle driver. Use properties from the
pnv_idle_states structure for powernv cpuidle_init.
Signed-off-by: Akshay Adiga <akshay.adiga@linux.vnet.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Akshay Adiga [Thu, 5 Jul 2018 11:40:21 +0000 (17:10 +0530)]
powernv/cpuidle: Parse dt idle properties into global structure
Device-tree parsing happens twice, once while deciding idle state to be
used for hotplug and once during cpuidle init. Hence, parsing the device
tree and caching it will reduce code duplication. Parsing code has been
moved to pnv_parse_cpuidle_dt() from pnv_probe_idle_states(). In addition
to the properties in the device tree the number of available states is
also required.
Signed-off-by: Akshay Adiga <akshay.adiga@linux.vnet.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Finn Thain [Mon, 2 Jul 2018 08:21:19 +0000 (04:21 -0400)]
macintosh/via-pmu: Disambiguate interrupt statistics
Some of the event counters are overloaded which makes it very
difficult to interpret their values.
Counter 0 is supposed to report CB1 interrupts but it can also count
PMU_INT_WAITING_CHARGER events.
Counter 1 is supposed to report GPIO interrupts but it can also count
other events (depending upon the value of the PMU_INT_ADB bit).
Disambiguate these statistics with dedicated counters for GPIO and
CB1 interrupts.
Comments in the MkLinux source code say that the type 0 and type 1
interrupts are model-specific. Label them as "unknown".
This change to the contents of /proc/pmu/interrupts is by necessity
visible in userland. However, packages which interact with the PMU
(that is, pbbuttonsd, pmac-utils and pmud) don't open this file.
AFAIK, user software has no need to poll these counters.
Tested-by: Stan Johnson <userm57@yahoo.com>
Signed-off-by: Finn Thain <fthain@telegraphics.com.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Finn Thain [Mon, 2 Jul 2018 08:21:19 +0000 (04:21 -0400)]
macintosh/via-pmu: Clean up interrupt statistics
Replace an open-coded ffs() with the function call.
Simplify an if-else cascade using a switch statement.
Correct a typo and an indentation issue.
Tested-by: Stan Johnson <userm57@yahoo.com>
Signed-off-by: Finn Thain <fthain@telegraphics.com.au>
Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Finn Thain [Mon, 2 Jul 2018 08:21:19 +0000 (04:21 -0400)]
macintosh/via-pmu: Replace via-pmu68k driver with via-pmu driver
Now that the PowerMac via-pmu driver supports m68k PowerBooks,
switch over to that driver and remove the via-pmu68k driver.
Tested-by: Stan Johnson <userm57@yahoo.com>
Signed-off-by: Finn Thain <fthain@telegraphics.com.au>
Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Finn Thain [Mon, 2 Jul 2018 08:21:19 +0000 (04:21 -0400)]
macintosh/via-pmu68k: Don't load driver on unsupported hardware
Don't load the via-pmu68k driver on early PowerBooks. The M50753 PMU
device found in those models was never supported by this driver.
Attempting to load the driver usually causes a boot hang.
Signed-off-by: Finn Thain <fthain@telegraphics.com.au>
Reviewed-by: Michael Schmitz <schmitzmic@gmail.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Finn Thain [Mon, 2 Jul 2018 08:21:19 +0000 (04:21 -0400)]
macintosh/via-pmu: Explicitly specify CONFIG_PPC_PMAC dependencies
At present, CONFIG_ADB_PMU depends on CONFIG_PPC_PMAC. When this gets
relaxed to CONFIG_PPC_PMAC || CONFIG_MAC, those Kconfig symbols with
implicit deps on PPC_PMAC will need explicit deps. Add them now.
No functional change.
Tested-by: Stan Johnson <userm57@yahoo.com>
Signed-off-by: Finn Thain <fthain@telegraphics.com.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Finn Thain [Mon, 2 Jul 2018 08:21:19 +0000 (04:21 -0400)]
macintosh/via-pmu: Add support for m68k PowerBooks
Put #ifdefs around the Open Firmware, xmon, interrupt dispatch,
battery and suspend code. Add the necessary interrupt handling to
support m68k PowerBooks.
The pmu_kind value is available to userspace using the
PMU_IOC_GET_MODEL ioctl. It is not clear yet what hardware classes
are needed to describe m68k PowerBook models, so pmu_kind is given
the provisional value PMU_UNKNOWN.
To find out about the hardware, user programs can use /proc/bootinfo
or /proc/hardware, or send the PMU_GET_VERSION command using /dev/adb.
Tested-by: Stan Johnson <userm57@yahoo.com>
Signed-off-by: Finn Thain <fthain@telegraphics.com.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Finn Thain [Mon, 2 Jul 2018 08:21:18 +0000 (04:21 -0400)]
macintosh/via-pmu: Replace via pointer with via1 and via2 pointers
On most PowerPC Macs, the PMU driver uses the shift register and
IO port B from a single VIA chip.
On 68k and early PowerPC PowerBooks, the driver uses the shift register
from one VIA chip together with IO port B from another.
Replace via with via1 and via2 to accommodate this. For the
CONFIG_PPC_PMAC case, set via1 = via2 so there is no change.
Tested-by: Stan Johnson <userm57@yahoo.com>
Signed-off-by: Finn Thain <fthain@telegraphics.com.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Finn Thain [Mon, 2 Jul 2018 08:21:18 +0000 (04:21 -0400)]
macintosh/via-pmu: Enhance state machine with new 'uninitialized' state
On 68k Macs, the via/vias pointer can't be used to determine whether
the PMU driver has been initialized. For portability, add a new state
to indicate that via_find_pmu() succeeded.
After via_find_pmu() executes, testing vias == NULL is equivalent to
testing via == NULL. Replace these tests with pmu_state == uninitialized
which is simpler and more consistent. No functional change.
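For example, the added state might look like the sketch below; only 'uninitialized' is the state this patch is about, the remaining names are purely illustrative:
enum pmu_state {
	uninitialized = 0,	/* via_find_pmu() has not (yet) succeeded */
	idle,			/* further states are illustrative only */
	sending,
	reading,
};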
Tested-by: Stan Johnson <userm57@yahoo.com>
Signed-off-by: Finn Thain <fthain@telegraphics.com.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Finn Thain [Mon, 2 Jul 2018 08:21:18 +0000 (04:21 -0400)]
macintosh/via-pmu: Don't clear shift register interrupt flag twice
The shift register interrupt flag gets cleared in via_pmu_interrupt()
and once again in pmu_sr_intr(). Fix this theoretical race condition.
Tested-by: Stan Johnson <userm57@yahoo.com>
Signed-off-by: Finn Thain <fthain@telegraphics.com.au>
Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Finn Thain [Mon, 2 Jul 2018 08:21:18 +0000 (04:21 -0400)]
macintosh/via-pmu: Add missing mmio accessors
Add missing in_8() accessors to init_pmu() and pmu_sr_intr().
This fixes several sparse warnings:
drivers/macintosh/via-pmu.c:536:29: warning: dereference of noderef expression
drivers/macintosh/via-pmu.c:537:33: warning: dereference of noderef expression
drivers/macintosh/via-pmu.c:1455:17: warning: dereference of noderef expression
drivers/macintosh/via-pmu.c:1456:69: warning: dereference of noderef expression
Tested-by: Stan Johnson <userm57@yahoo.com>
Signed-off-by: Finn Thain <fthain@telegraphics.com.au>
Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Finn Thain [Mon, 2 Jul 2018 08:21:18 +0000 (04:21 -0400)]
macintosh/via-pmu: Fix section mismatch warning
The pmu_init() function has the __init qualifier, but the ops struct
that holds a pointer to it does not. This causes a build warning.
The driver works fine because the pointer is only dereferenced early.
The function is so small that there's negligible benefit from using
the __init qualifier. Remove it to fix the warning, consistent with
the other ADB drivers.
Tested-by: Stan Johnson <userm57@yahoo.com>
Signed-off-by: Finn Thain <fthain@telegraphics.com.au>
Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Alexey Spirkov [Thu, 26 Jul 2018 12:52:50 +0000 (12:52 +0000)]
powerpc/44x: Mark mmu_init_secondary() as __init
mmu_init_secondary() calls ppc44x_pin_tlb() which is marked __init,
leading to a warning:
The function mmu_init_secondary() references
the function __init ppc44x_pin_tlb().
There's no CPU hotplug support on 44x so mmu_init_secondary() will
only be called at boot. Therefore we should mark it as __init.
Signed-off-by: Alexey Spirkov <alexeis@astrosoft.ru>
[mpe: Flesh out change log details]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Thu, 19 Jul 2018 14:33:16 +0000 (00:33 +1000)]
powerpc/mm: Don't report PUDs as memory leaks when using kmemleak
Paul Menzel reported that kmemleak was producing reports such as:
unreferenced object 0xc0000000f8b80000 (size 16384):
  comm "init", pid 1, jiffies 4294937416 (age 312.240s)
  hex dump (first 32 bytes):
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<00000000d997deb7>] __pud_alloc+0x80/0x190
    [<0000000087f2e8a3>] move_page_tables+0xbac/0xdc0
    [<00000000091e51c2>] shift_arg_pages+0xc0/0x210
    [<00000000ab88670c>] setup_arg_pages+0x22c/0x2a0
    [<0000000060871529>] load_elf_binary+0x41c/0x1648
    [<00000000ecd9d2d4>] search_binary_handler.part.11+0xbc/0x280
    [<0000000034e0cdd7>] __do_execve_file.isra.13+0x73c/0x940
    [<000000005f953a6e>] sys_execve+0x58/0x70
    [<000000009700a858>] system_call+0x5c/0x70
Indicating that a PUD was being leaked.
However what's really happening is that kmemleak is not able to
recognise the references from the PGD to the PUD, because they are not
fully qualified pointers.
We can confirm that in xmon, eg:
Find the task struct for pid 1 "init":
0:mon> P
task_struct ->thread.ksp PID PPID S P CMD
c0000001fe7c0000 c0000001fe803960 1 0 S 13 systemd
Dump virtual address 0 to find the PGD:
0:mon> dv 0 c0000001fe7c0000
pgd @ 0xc0000000f8b01000
Dump the memory of the PGD:
0:mon> d c0000000f8b01000
c0000000f8b01000 00000000f8b90000 0000000000000000 |................|
c0000000f8b01010 0000000000000000 0000000000000000 |................|
c0000000f8b01020 0000000000000000 0000000000000000 |................|
c0000000f8b01030 0000000000000000 00000000f8b80000 |................|
^^^^^^^^^^^^^^^^
There we can see the reference to our supposedly leaked PUD. But
because it's missing the leading 0xc, kmemleak won't recognise it.
We can confirm it's still in use by translating an address that is
mapped via it:
0:mon> dv 7fff94000000 c0000001fe7c0000
pgd @ 0xc0000000f8b01000
pgdp @ 0xc0000000f8b01038 = 0x00000000f8b80000 <--
pudp @ 0xc0000000f8b81ff8 = 0x00000000037c4000
pmdp @ 0xc0000000037c5ca0 = 0x00000000fbd89000
ptep @ 0xc0000000fbd89000 = 0xc0800001d5ce0386
Maps physical address = 0x00000001d5ce0000
Flags = Accessed Dirty Read Write
The fix is fairly simple. We need to tell kmemleak to ignore PUD
allocations and never report them as leaks. We can also tell it not to
scan the PGD, because it will never find pointers in there. However it
will still notice if we allocate a PGD and then leak it.
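A sketch of that approach using the standard kmemleak hooks (illustrative helpers, not the exact powerpc hunk):
#include <linux/kmemleak.h>
#include <linux/slab.h>

static void *pud_page_alloc(struct kmem_cache *cache, gfp_t gfp)
{
	void *pud = kmem_cache_alloc(cache, gfp);

	if (pud)
		kmemleak_ignore(pud);	/* never report this object as a leak */
	return pud;
}

static void *pgd_page_alloc(struct kmem_cache *cache, gfp_t gfp)
{
	void *pgd = kmem_cache_alloc(cache, gfp);

	if (pgd)
		kmemleak_no_scan(pgd);	/* entries are not real pointers, don't scan */
	return pgd;
}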
Reported-by: Paul Menzel <pmenzel@molgen.mpg.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Tested-by: Paul Menzel <pmenzel@molgen.mpg.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Thu, 5 Jul 2018 16:25:21 +0000 (16:25 +0000)]
powerpc: split asm/tlbflush.h
Split asm/tlbflush.h into:
asm/nohash/tlbflush.h
asm/book3s/32/tlbflush.h
asm/book3s/64/tlbflush.h (already existing)
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Thu, 5 Jul 2018 16:25:19 +0000 (16:25 +0000)]
powerpc: remove unnecessary inclusion of asm/tlbflush.h
asm/tlbflush.h is only needed for:
- using functions xxx_flush_tlb_xxx()
- using MMU_NO_CONTEXT
- including asm-generic/pgtable.h
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Thu, 5 Jul 2018 16:25:13 +0000 (16:25 +0000)]
powerpc/44x: remove page.h from mmu-44x.h
mmu-44x.h doesn't need asm/page.h if PAGE_SHIFT is replaced by CONFIG_PPC_XX_PAGES.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Thu, 5 Jul 2018 16:25:11 +0000 (16:25 +0000)]
powerpc/nohash: fix hash related comments in pgtable.h
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Thu, 5 Jul 2018 16:25:09 +0000 (16:25 +0000)]
powerpc: fix includes in asm/processor.h
Remove superfluous includes and add missing ones.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Thu, 5 Jul 2018 16:25:07 +0000 (16:25 +0000)]
powerpc/book3s: Remove PPC_PIN_SIZE
PPC_PIN_SIZE is specific to the 44x and is defined in mmu.h
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Thu, 5 Jul 2018 16:25:05 +0000 (16:25 +0000)]
powerpc: declare set_breakpoint() static
set_breakpoint() is only used in process.c so make it static
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Thu, 5 Jul 2018 16:25:03 +0000 (16:25 +0000)]
powerpc: remove superfluous inclusions of asm/fixmap.h
Files not using fixmap consts or functions don't need asm/fixmap.h
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Thu, 5 Jul 2018 16:25:01 +0000 (16:25 +0000)]
powerpc: clean inclusions of asm/feature-fixups.h
Files not using feature fixups don't need asm/feature-fixups.h;
files using feature fixups do need asm/feature-fixups.h.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Thu, 5 Jul 2018 16:24:59 +0000 (16:24 +0000)]
powerpc: clean the inclusion of stringify.h
Only include linux/stringify.h in files using __stringify().
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Thu, 5 Jul 2018 16:24:57 +0000 (16:24 +0000)]
powerpc: move ASM_CONST and stringify_in_c() into asm-const.h
This patch moves ASM_CONST() and stringify_in_c() into
dedicated asm-const.h, then cleans all related inclusions.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[mpe: asm-compat.h should include asm-const.h]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Thu, 5 Jul 2018 16:24:55 +0000 (16:24 +0000)]
powerpc/405: move PPC405_ERR77 in asm-405.h
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Thu, 5 Jul 2018 16:24:53 +0000 (16:24 +0000)]
powerpc: remove unneeded inclusions of cpu_has_feature.h
Files not using cpu_has_feature() don't need cpu_has_feature.h
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Thu, 5 Jul 2018 16:24:51 +0000 (16:24 +0000)]
powerpc: remove kdump.h from page.h
page.h doesn't need kdump.h
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Mon, 30 Apr 2018 14:55:52 +0000 (00:55 +1000)]
tty: hvc: remove unexplained "just in case" spin delay
This delay was in the very first OPAL console commit 6.5 years ago,
and came from the vio hvc driver. The firmware console has hardened
sufficiently to remove it.
Reviewed-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Mon, 30 Apr 2018 14:55:51 +0000 (00:55 +1000)]
powerpc/powernv: implement opal_put_chars_atomic
The RAW console does not need writes to be atomic, so relax
opal_put_chars to be able to do partial writes, and implement an
_atomic variant which does not take a spinlock. This API is used
in xmon, so the less locking that is used, the better chance there
is that a crash can be debugged.
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Mon, 30 Apr 2018 14:55:50 +0000 (00:55 +1000)]
powerpc/powernv: move opal console flushing to udbg
OPAL console writes do not have to synchronously flush firmware /
hardware buffers unless they are going through the udbg path.
Remove the unconditional flushing from opal_put_chars. Flush if
there was no space in the buffer as an optimisation (callers loop
waiting for success in that case). udbg flushing is moved to
udbg_opal_putc.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Mon, 30 Apr 2018 14:55:49 +0000 (00:55 +1000)]
powerpc/powernv: Remove OPALv1 support from opal console driver
opal_put_chars deals with partial writes because in OPALv1,
opal_console_write_buffer_space did not work correctly. That firmware
is not supported.
This reworks the opal_put_chars code to no longer deal with partial
writes by turning them into full writes. Partial write handling is still
supported in terms of what gets returned to the caller, but it may not
go to the console atomically. A warning message is printed in this
case.
This allows console flushing to be moved out of the opal_write_lock
spinlock. That could cause the lock to be held for long periods if the
console is busy (especially if it was being spammed by firmware),
which is dangerous because the lock is taken by xmon to debug the
system. Flushing outside the lock improves the situation a bit.
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Mon, 30 Apr 2018 14:55:48 +0000 (00:55 +1000)]
powerpc/powernv: Implement and use opal_flush_console
A new console flushing firmware API was introduced to replace event
polling loops, and implemented in opal-kmsg with affddff69c55e
("powerpc/powernv: Add a kmsg_dumper that flushes console output on
panic"), to flush the console in the panic path.
The OPAL console driver has other situations where interrupts are off
and it needs to flush the console synchronously. These still use a
polling loop.
So move the opal-kmsg flush code to opal_flush_console, and use the
new function in opal-kmsg and opal_put_chars.
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Mon, 30 Apr 2018 14:55:47 +0000 (00:55 +1000)]
powerpc/powernv: opal-kmsg use flush fallback from console code
Use the more refined and tested event polling loop from opal_put_chars
as the fallback console flush in the opal-kmsg path. This loop is used
by the console driver today, whereas the opal-kmsg fallback is not
likely to have been used for years.
Use WARN_ONCE rather than a printk when the fallback is invoked to
prepare for moving the console flush into a common function.
Reviewed-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Mon, 30 Apr 2018 14:55:46 +0000 (00:55 +1000)]
powerpc/powernv: opal-kmsg standardise OPAL_BUSY handling
OPAL_CONSOLE_FLUSH is documented as being able to return OPAL_BUSY,
so implement the standard OPAL_BUSY handling for it.
Reviewed-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Mon, 30 Apr 2018 14:55:45 +0000 (00:55 +1000)]
powerpc/powernv: Fix OPAL console driver OPAL_BUSY loops
The OPAL console driver does not delay in case it gets OPAL_BUSY or
OPAL_BUSY_EVENT from firmware.
It can't yet be made to sleep because it is called under spinlock,
but it can be changed to the standard OPAL_BUSY loop form, and a
delay added to keep it from hitting the firmware too frequently.
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Mon, 30 Apr 2018 14:55:44 +0000 (00:55 +1000)]
powerpc/powernv: opal_put_chars partial write fix
The intention here is to consume and discard the remaining buffer
upon error. This works if there has not been a previous partial write.
If there has been, then total_len is no longer the total number of
bytes to copy. total_len is always "bytes left to copy", so it should
be added to the written bytes.
This code may not be exercised any more if partial writes will not be
hit, but this is a small bugfix before a larger change.
Reviewed-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Mukesh Ojha [Mon, 20 Feb 2017 13:22:11 +0000 (18:52 +0530)]
powerpc/powernv/opal-dump : Use IRQ_HANDLED instead of numbers in interrupt handler
Fixes: 8034f715f ("powernv/opal-dump: Convert to irq domain")
Convert all the explicit numeric returns to the more proper
IRQ_HANDLED, which is the correct thing for an interrupt handler to
return. This also removes the "nobody cared" error message that was
being triggered by returning -1 or 0 from the handler.
Signed-off-by: Mukesh Ojha <mukesh02@linux.vnet.ibm.com>
Reviewed-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Mukesh Ojha [Mon, 20 Feb 2017 13:22:10 +0000 (18:52 +0530)]
powerpc/powernv/opal-dump : Handles opal_dump_info properly
Move the return value check of 'opal_dump_info' to the proper place;
previously all the dump info was unnecessarily filled in even on
failure.
Signed-off-by: Mukesh Ojha <mukesh02@linux.vnet.ibm.com>
Acked-by: Stewart Smith <stewart@linux.vnet.ibm.com>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Cyril Bur [Thu, 1 Feb 2018 01:07:46 +0000 (12:07 +1100)]
powerpc/tm: Remove struct thread_info param from tm_reclaim_thread()
Since commit dc3106690b20 ("powerpc: tm: Always use fp_state and
vr_state to store live registers") tm_reclaim_thread() doesn't use the
parameter anymore, yet both callers have to bother getting it even
though they have no need for a struct thread_info either.
Just remove it and adjust the callers.
Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Cyril Bur [Mon, 5 Feb 2018 05:17:16 +0000 (16:17 +1100)]
powerpc/tm: Update function prototype comment
In commit eb5c3f1c8647 ("powerpc: Always save/restore checkpointed
regs during treclaim/trecheckpoint"), __tm_recheckpoint was modified
to no longer take the second parameter 'unsigned long orig_msr' as
part of a TM rewrite to simplify the reclaiming/recheckpointing
process.
There is a comment in the asm file where the function is declared
which has an incorrect prototype with the 'orig_msr' parameter.
This patch corrects the comment.
Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Simon Guo [Thu, 7 Jun 2018 01:57:55 +0000 (09:57 +0800)]
selftests/powerpc: Update memcmp_64 selftest for VMX implementation
This patch reworks the memcmp_64 selftest so that it covers more test
cases.
It adds testcases for:
- memcmp over 4K bytes in size.
- s1/s2 with different/random offsets on 16-byte boundaries.
- enter/exit_vmx_ops pairing.
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
[mpe: Add -maltivec to fix build on some toolchains]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Simon Guo [Thu, 7 Jun 2018 01:57:54 +0000 (09:57 +0800)]
powerpc/64: add 32 bytes prechecking before using VMX optimization on memcmp()
This patch is based on the previous VMX patch on memcmp().
To optimize ppc64 memcmp() with VMX instruction, we need to think about
the VMX penalty brought with: If kernel uses VMX instruction, it needs
to save/restore current thread's VMX registers. There are 32 x 128 bits
VMX registers in PPC, which means 32 x 16 = 512 bytes for load and store.
The major concern regarding memcmp() performance in the kernel is KSM,
which uses memcmp() frequently to merge identical pages. So it makes
sense to take some measures/enhancements for KSM to see whether any
improvement can be done here. Cyril Bur indicated in the mail below
that the memcmp() for KSM has a higher probability of failing
(mismatching) early, in the first bytes:
https://patchwork.ozlabs.org/patch/817322/#1773629
This patch is a follow-up on that.
Testing shows that KSM memcmp() will fail early, within the first 32
bytes. More specifically:
- 76% cases will fail/unmatch before 16 bytes;
- 83% cases will fail/unmatch before 32 bytes;
- 84% cases will fail/unmatch before 64 bytes;
So 32 bytes looks like a better choice than other sizes for
pre-checking.
The early failure also holds for memcmp() in the non-KSM case: with a
non-typical call load, ~73% of cases fail within the first 32 bytes.
This patch adds a 32-byte pre-check before jumping into VMX
operations, to avoid the unnecessary VMX penalty. It is not limited to
the KSM case. Testing shows ~20% improvement in average memcmp()
execution time with this patch.
Note that the 32-byte pre-check is only performed when the compare
size is long enough (>= 4K currently) to allow VMX operation.
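As a C illustration of the control flow (the real code is ppc64 assembly; memcmp_rest() is a hypothetical stand-in for the remainder of the routine):
#include <stddef.h>
#include <string.h>

int memcmp_rest(const void *s1, const void *s2, size_t n);	/* hypothetical: rest of the routine */

int memcmp_with_precheck(const void *s1, const void *s2, size_t n)
{
	if (n >= 4096) {			/* VMX is only considered for >= 4K */
		int ret = memcmp(s1, s2, 32);	/* cheap 32-byte scalar pre-check */

		if (ret)
			return ret;		/* early mismatch: VMX setup avoided */
	}
	return memcmp_rest(s1, s2, n);
}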
The detail data and analysis is at:
https://github.com/justdoitqd/publicFiles/blob/master/memcmp/README.md
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Simon Guo [Thu, 7 Jun 2018 01:57:53 +0000 (09:57 +0800)]
powerpc/64: enhance memcmp() with VMX instruction for long bytes comparision
This patch adds VMX primitives to do memcmp() when the compare size
is equal to or greater than 4K bytes. The KSM feature can benefit from
this.
Test result with the following test program:
------
# cat tools/testing/selftests/powerpc/stringloops/memcmp.c
#include <malloc.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include "utils.h"

#define SIZE (1024 * 1024 * 900)
#define ITERATIONS 40

int test_memcmp(const void *s1, const void *s2, size_t n);

static int testcase(void)
{
	char *s1;
	char *s2;
	unsigned long i;

	s1 = memalign(128, SIZE);
	if (!s1) {
		perror("memalign");
		exit(1);
	}

	s2 = memalign(128, SIZE);
	if (!s2) {
		perror("memalign");
		exit(1);
	}

	for (i = 0; i < SIZE; i++) {
		s1[i] = i & 0xff;
		s2[i] = i & 0xff;
	}

	for (i = 0; i < ITERATIONS; i++) {
		int ret = test_memcmp(s1, s2, SIZE);

		if (ret) {
			printf("return %d at[%ld]! should have returned zero\n", ret, i);
			abort();
		}
	}

	return 0;
}

int main(void)
{
	return test_harness(testcase, "memcmp");
}
------
Without this patch (but with the first patch "powerpc/64: Align bytes
before fall back to .Lshort in powerpc64 memcmp()." in the series):
4.726728762 seconds time elapsed ( +- 3.54%)
With VMX patch:
4.234335473 seconds time elapsed ( +- 2.63%)
There is ~+10% improvement.
Testing with the unaligned and different-offset version (making s1
and s2 shift by a random offset within 16 bytes) can achieve a higher
improvement than 10%.
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Simon Guo [Thu, 7 Jun 2018 01:57:52 +0000 (09:57 +0800)]
powerpc: add vcmpequd/vcmpequb ppc instruction macro
Some old toolchains don't know about instructions like vcmpequd.
This patch adds .long macros for vcmpequd and vcmpequb, in
preparation for optimizing ppc64 memcmp() with VMX instructions.
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Simon Guo [Thu, 7 Jun 2018 01:57:51 +0000 (09:57 +0800)]
powerpc/64: Align bytes before fall back to .Lshort in powerpc64 memcmp()
Currently the 64-byte version of memcmp() in powerpc will fall back to
.Lshort (compare per byte mode) if either the src or dst address is
not 8-byte aligned. It can be optimized in 2 situations:
1) If both addresses have the same offset from an 8-byte boundary:
memcmp() can compare the unaligned bytes within the 8-byte boundary
first and then compare the rest of the 8-byte-aligned content with
.Llong mode.
2) If the src/dst addresses have different offsets from an 8-byte
boundary: memcmp() can align the src addr to 8 bytes, increment the
dst addr accordingly, then load src with aligned mode and load dst
with unaligned mode.
This patch optimizes memcmp() behavior in the above 2 situations.
Tested with both little/big endian. The performance results below are
based on little endian.
Following is the test result for the case where src/dst have the same
offset (a similar result was observed when src/dst have different
offsets):
(1) 256 bytes
Test with the existing tools/testing/selftests/powerpc/stringloops/memcmp:
- without patch
29.773018302 seconds time elapsed ( +- 0.09% )
- with patch
16.485568173 seconds time elapsed ( +- 0.02% )
-> There is ~+80% improvement
(2) 32 bytes
To observe performance impact on < 32 bytes, modify
tools/testing/selftests/powerpc/stringloops/memcmp.c with following:
-------
#include <string.h>
#include "utils.h"
-#define SIZE 256
+#define SIZE 32
#define ITERATIONS 10000
int test_memcmp(const void *s1, const void *s2, size_t n);
--------
- Without patch
0.244746482 seconds time elapsed ( +- 0.36%)
- with patch
0.215069477 seconds time elapsed ( +- 0.51%)
-> There is ~+13% improvement
(3) 0~8 bytes
To observe <8 bytes performance impact, modify
tools/testing/selftests/powerpc/stringloops/memcmp.c with following:
-------
#include <string.h>
#include "utils.h"
-#define SIZE 256
-#define ITERATIONS 10000
+#define SIZE 8
+#define ITERATIONS 1000000
int test_memcmp(const void *s1, const void *s2, size_t n);
-------
- Without patch
1.845642503 seconds time elapsed ( +- 0.12% )
- With patch
1.849767135 seconds time elapsed ( +- 0.26% )
-> They are nearly the same. (-0.2%)
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Fri, 29 Jun 2018 08:39:04 +0000 (14:09 +0530)]
powerpc/pseries/mm: Improve error reporting on HCALL failures
This patch adds error reporting to the H_ENTER and H_READ hcalls. A
failure of either of these hcalls is mostly fatal, and it would be
good to log the failure reason.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
[mpe: Split out of larger patch]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Fri, 29 Jun 2018 08:39:04 +0000 (14:09 +0530)]
powerpc/pseries: Use pr_xxx() in lpar.c
Switch from printk to pr_fmt() / pr_xxx().
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
[mpe: Split out of larger patch]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Fri, 29 Jun 2018 08:36:31 +0000 (14:06 +0530)]
powerpc/mm/hash: Reduce contention on hpte lock
We already do this in part. This patch makes sure we always try to
search for the hpte without holding the lock, and redo the compare
with the lock held once a match is found.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Fri, 29 Jun 2018 08:36:30 +0000 (14:06 +0530)]
powerpc/mm/hash: Add hpte_get_old_v and use that instead of opencoding
No functional change
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Fri, 29 Jun 2018 08:36:29 +0000 (14:06 +0530)]
powerpc/mm/hash: Remove the superfluous bitwise operation when find hpte group
When computing the starting slot number for a hash page table group we
used to do this:
hpte_group = ((hash & htab_hash_mask) * HPTES_PER_GROUP) & ~0x7UL;
Multiplying by 8 (HPTES_PER_GROUP) implies the last three bits are 0,
hence we really don't need to clear them separately.
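A throwaway userspace check of that arithmetic (the mask value is arbitrary, chosen just for the test):
#include <assert.h>

#define HPTES_PER_GROUP 8

int main(void)
{
	unsigned long hash, htab_hash_mask = 0x7ffffUL;

	/* Multiplying by 8 is a left shift by 3, so the low three bits of
	 * the product are already zero and "& ~0x7UL" never changes it.
	 */
	for (hash = 0; hash < (1UL << 20); hash++) {
		unsigned long masked = ((hash & htab_hash_mask) * HPTES_PER_GROUP) & ~0x7UL;
		unsigned long plain  = (hash & htab_hash_mask) * HPTES_PER_GROUP;

		assert(masked == plain);
	}
	return 0;
}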
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Thu, 21 Jun 2018 08:31:58 +0000 (14:01 +0530)]
powerpc/mm: Increase MAX_PHYSMEM_BITS to 128TB with SPARSEMEM_VMEMMAP config
We do this only with the VMEMMAP config so that our
page_to_[nid/section] etc. are not impacted.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Thu, 21 Jun 2018 08:31:57 +0000 (14:01 +0530)]
powerpc/mm: Check memblock_add against MAX_PHYSMEM_BITS range
With the SPARSEMEM config enabled, we make sure that we don't add
sections beyond the MAX_PHYSMEM_BITS range. This results in not
building the vmemmap mapping for ranges beyond the max range. But our
memblock layer looks at the device tree and creates mappings for the
full memory range. Prevent this by checking against MAX_PHYSMEM_BITS
when doing memblock_add.
We don't do a similar check for memblock_reserve. If a reserved range
is beyond MAX_PHYSMEM_BITS we expect it to be configured with 'nomap'.
Any other reserved range should come from existing memblock ranges
which we already filtered while adding.
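A sketch of the guard described above (illustrative, not the exact powerpc hook; MAX_PHYSMEM_BITS comes from the arch sparsemem headers):
#include <linux/init.h>
#include <linux/memblock.h>

static int __init add_ram_range_checked(u64 base, u64 size)
{
	u64 limit = 1ULL << MAX_PHYSMEM_BITS;

	if (base >= limit)
		return 0;			/* entirely out of range: skip it */
	if (base + size > limit)
		size = limit - base;		/* trim the tail beyond the limit */

	return memblock_add(base, size);
}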
This avoids a crash like the one below when running on a system with
a system RAM config above MAX_PHYSMEM_BITS:
Unable to handle kernel paging request for data at address 0xc00a001000000440
Faulting instruction address: 0xc000000001034118
cpu 0x0: Vector: 300 (Data Access) at [c00000000124fb30]
    pc: c000000001034118: __free_pages_bootmem+0xc0/0x1c0
    lr: c00000000103b258: free_all_bootmem+0x19c/0x22c
    sp: c00000000124fdb0
   msr: 9000000002001033
   dar: c00a001000000440
 dsisr: 40000000
  current = 0xc00000000120dd00
  paca    = 0xc000000001f60000  irqmask: 0x03  irq_happened: 0x01
    pid   = 0, comm = swapper
[c00000000124fe20] c00000000103b258 free_all_bootmem+0x19c/0x22c
[c00000000124fee0] c000000001010a68 mem_init+0x3c/0x5c
[c00000000124ff00] c00000000100401c start_kernel+0x298/0x5e4
[c00000000124ff90] c00000000000b57c start_here_common+0x1c/0x520
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Mon, 9 Jul 2018 14:24:26 +0000 (00:24 +1000)]
powerpc: Add ppc64le and ppc64_book3e allmodconfig targets
Similarly as we just did for 32-bit, add phony targets for generating
a little endian and Book3E allmodconfig. These aren't covered by the
regular allmodconfig, which is big endian and Book3S due to the way
the Kconfig symbols are structured.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Mon, 9 Jul 2018 14:24:25 +0000 (00:24 +1000)]
powerpc: Add ppc32_allmodconfig defconfig target
Because the allmodconfig logic just sets every symbol to M or Y, it
has the effect of always generating a 64-bit config, because
CONFIG_PPC64 becomes Y.
So to make it easier for folks to test 32-bit code, provide a phony
defconfig target that generates a 32-bit allmodconfig.
The 32-bit port has several mutually exclusive CPU types, we choose
the Book3S variants as that's what the help text in Kconfig says is
most common.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Mon, 9 Jul 2018 06:25:21 +0000 (16:25 +1000)]
powerpc/64s: Show ori31 availability in spectre_v1 sysfs file not v2
When I added the spectre_v2 information in sysfs, I included the
availability of the ori31 speculation barrier.
Although the ori31 barrier can be used to mitigate v2, it's primarily
intended as a spectre v1 mitigation. Spectre v2 is mitigated by
hardware changes.
So rework the sysfs files to show the ori31 information in the
spectre_v1 file, rather than v2.
Currently we display eg:
$ grep . spectre_v*
spectre_v1:Mitigation: __user pointer sanitization
spectre_v2:Mitigation: Indirect branch cache disabled, ori31 speculation barrier enabled
After:
$ grep . spectre_v*
spectre_v1:Mitigation: __user pointer sanitization, ori31 speculation barrier enabled
spectre_v2:Mitigation: Indirect branch cache disabled
Fixes: d6fbe1c55c55 ("powerpc/64s: Wire up cpu_show_spectre_v2()")
Cc: stable@vger.kernel.org # v4.17+
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Wed, 25 Apr 2018 05:17:59 +0000 (15:17 +1000)]
powerpc: NMI IPI make NMI IPIs fully synchronous
There is an asynchronous aspect to smp_send_nmi_ipi. The caller waits
for all CPUs to call in to the handler, but it does not wait for
completion of the handler. This is a needless complication, so remove
it and always wait synchronously.
The synchronous wait allows the caller to easily time out and clear
the wait for completion (zero nmi_ipi_busy_count) in the case of badly
behaved handlers. This would have prevented the recent smp_send_stop
NMI IPI bug from causing the system to hang.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sun, 3 Jun 2018 12:24:32 +0000 (22:24 +1000)]
powerpc/64s: make PACA_IRQ_HARD_DIS track MSR[EE] closely
When the masked interrupt handler clears MSR[EE] for an interrupt in
the PACA_IRQ_MUST_HARD_MASK set, it does not set PACA_IRQ_HARD_DIS.
This makes them get out of synch.
With that taken into account, it's only low level irq manipulation
(and interrupt entry before reconcile) where they can be out of synch.
This makes the code less surprising.
It also allows the IRQ replay code to rely on the IRQ_HARD_DIS value
and not have to mtmsrd again in this case (e.g., for an external
interrupt that has been masked). The bigger benefit might just be
that there is not such an element of surprise in these two bits of
state.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Ram Pai [Tue, 17 Jul 2018 13:51:10 +0000 (06:51 -0700)]
selftests/powerpc: Fix ptrace-pkey for default execute permission change
The test case assumes execute-permissions of unallocated keys are
enabled by default, which is incorrect.
Reviewed-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Ram Pai [Tue, 17 Jul 2018 13:51:09 +0000 (06:51 -0700)]
selftests/powerpc: Fix core-pkey for default execute permission change
Only when a key is allocated are its permissions enabled.
Reviewed-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Ram Pai [Tue, 17 Jul 2018 13:51:08 +0000 (06:51 -0700)]
powerpc/pkeys: make protection key 0 less special
Applications need the ability to associate an address-range with some
key and later revert to its initial default key. Pkey-0 comes close to
providing this function but falls short, because the current
implementation disallows applications from explicitly associating
pkey-0 with an address range.
Let's make pkey-0 less special and treat it almost like any other key.
Thus it can be explicitly associated with any address range, and can
be freed. This gives the application more flexibility and power. The
ability to free pkey-0 must be used responsibly, since pkey-0 is
associated with almost all address-ranges by default.
Even with this change pkey-0 continues to be slightly more special
from the following point of view.
(a) it is implicitly allocated.
(b) it is the default key assigned to any address-range.
(c) its permissions cannot be modified by userspace.
NOTE: (c) is specific to powerpc only. pkey-0 is associated by default
with all pages including kernel pages, and pkeys are also active in
kernel mode. If any permission is denied on pkey-0, the kernel running
in the context of the application will be unable to operate.
Tested on powerpc.
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
[mpe: Drop #define PKEY_0 0 in favour of plain old 0]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Ram Pai [Tue, 17 Jul 2018 13:51:07 +0000 (06:51 -0700)]
powerpc/pkeys: Preallocate execute-only key
The execute-only key is allocated dynamically. This is a problem. When
a thread implicitly creates an execute-only key and resets the UAMOR
for that key, the UAMOR value does not percolate to all the other
threads. Any other thread may ignorantly change the permissions on the
key. This can cause the key to no longer be execute-only for that
thread.
Preallocate the execute-only key and ensure that no thread can change
the permission of the key, by resetting the corresponding bit in
UAMOR.
Fixes: 5586cf61e108 ("powerpc: introduce execute-only pkey")
Cc: stable@vger.kernel.org # v4.16+
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>