Madhavan Srinivasan [Fri, 2 Dec 2016 00:35:00 +0000 (06:05 +0530)]
powerpc/perf: update attribute_group data structure
Rename the power_pmu and attribute_group variables that
support PowerISA v2.07. Add a CPU feature flag check to select
the PowerISA v2.07 format structures.
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Madhavan Srinivasan [Fri, 2 Dec 2016 00:34:59 +0000 (06:04 +0530)]
powerpc/perf: factor out the event format field
Factor out the format field structure for PowerISA v2.07.
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Alexey Kardashevskiy [Wed, 30 Nov 2016 06:52:05 +0000 (17:52 +1100)]
powerpc/mm/iommu, vfio/spapr: Put pages on VFIO container shutdown
At the moment the userspace tool is expected to request pinning of
the entire guest RAM when VFIO IOMMU SPAPR v2 driver is present.
When the userspace process finishes, all the pinned pages need to
be put; this is done as a part of the userspace memory context (MM)
destruction which happens on the very last mmdrop().
This approach has a problem: the MM of the userspace process
may live longer than the userspace process itself, as kernel threads
use the MMs of userspace processes which were running on the CPU where
the kernel thread was scheduled. If this happens, the MM remains
referenced until that exact kernel thread wakes up again
and releases the very last reference to the MM; on an idle system this
can take hours.
This moves preregistered region tracking from the MM to VFIO; instead of
using mm_iommu_table_group_mem_t::used, tce_container::prereg_list is
added so each container releases the regions which it has pre-registered.
This changes the userspace interface to return EBUSY if a memory
region is already registered in a container. However it should not
have any practical effect, as the only userspace tool available now
registers a memory region only once per container anyway.
As tce_iommu_register_pages/tce_iommu_unregister_pages are called
under container->lock, this does not need additional locking.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Alexey Kardashevskiy [Wed, 30 Nov 2016 06:52:04 +0000 (17:52 +1100)]
vfio/spapr: Reference mm in tce_container
In some situations the userspace memory context may live longer than
the userspace process itself, so if we need to do proper memory context
cleanup, we had better have tce_container take a reference to mm_struct and
use it later when the process is gone (@current or @current->mm is NULL).
This references mm and stores the pointer in the container; this is done
in a new helper - tce_iommu_mm_set() - when one of the following happens:
- a container is enabled (IOMMU v1);
- a first attempt to pre-register memory is made (IOMMU v2);
- a DMA window is created (IOMMU v2).
The @mm stays referenced till the container is destroyed.
This replaces current->mm with container->mm everywhere except debug
prints.
This adds a check that current->mm is the same as the one stored in
the container to prevent userspace from making changes to a memory
context of other processes.
DMA map/unmap ioctls() do not check for @mm as they already check
for @enabled which is set after tce_iommu_mm_set() is called.
This does not reference a task, as multiple threads within the same mm
are allowed to ioctl() to vfio; presumably they will have the same limits
and capabilities, and if they do not, we'll just fail with no harm done.
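A minimal sketch of the helper described above, assuming the names from the commit text (tce_container::mm, tce_iommu_mm_set()); the real implementation may differ in detail:

static long tce_iommu_mm_set(struct tce_container *container)
{
	if (container->mm) {
		/* Only the mm that first configured the container may proceed */
		return container->mm == current->mm ? 0 : -EPERM;
	}

	if (!current->mm)
		return -ESRCH;

	/* Hold the mm until the container is destroyed (released with mmdrop()) */
	container->mm = current->mm;
	atomic_inc(&container->mm->mm_count);

	return 0;
}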
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Alexey Kardashevskiy [Wed, 30 Nov 2016 06:52:03 +0000 (17:52 +1100)]
vfio/spapr: Postpone default window creation
We are going to allow userspace to configure a container in
one memory context and pass the container fd to another, so
we are postponing memory allocations that are accounted against
the locked memory limit. One of the previous patches took care of
it_userspace.
At the moment we create the default DMA window when the first group is
attached to a container; this is done for userspace which is not
DDW-aware but is familiar with the SPAPR TCE IOMMU v2 as far as memory
pre-registration goes - such a client expects the default DMA window to exist.
This postpones the default DMA window allocation until one of
the following happens:
1. the first map/unmap request arrives;
2. a new window is requested.
This adds a no-op for the case when userspace requests removal
of the default window which has not been created yet.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Alexey Kardashevskiy [Wed, 30 Nov 2016 06:52:02 +0000 (17:52 +1100)]
vfio/spapr: Add a helper to create default DMA window
There is already a helper to create a DMA window which allocates
a table and programs it into the IOMMU group. However
tce_iommu_take_ownership_ddw() did not use it and did these two calls
itself to simplify the error path.
Since we are going to delay the default window creation until
the default window is accessed/removed or a new window is added,
we need a helper to create a default window in all these cases.
This adds tce_iommu_create_default_window(). Since it relies on
a VFIO container to have at least one IOMMU group (for future use),
this changes tce_iommu_attach_group() to add a group to the container
first and then call the new helper.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Alexey Kardashevskiy [Wed, 30 Nov 2016 06:52:01 +0000 (17:52 +1100)]
vfio/spapr: Postpone allocation of userspace version of TCE table
The iommu_table struct manages a hardware TCE table and a vmalloc'd
table with corresponding userspace addresses. Both are allocated when
the default DMA window is created and this happens when the very first
group is attached to a container.
As we are going to allow userspace to configure a container in one
memory context and pass the container fd to another, we have to postpone
such allocations until the container fd is passed to the destination
user process, so that the locked memory limit is accounted against the
actual container user's constraints.
This postpones the it_userspace array allocation until it is first used
for mapping. The unmapping path already checks whether the array is
allocated.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Alexey Kardashevskiy [Wed, 30 Nov 2016 06:52:00 +0000 (17:52 +1100)]
powerpc/iommu: Stop using @current in mm_iommu_xxx
This changes the mm_iommu_xxx helpers to take mm_struct as a parameter
instead of getting it from @current, which in some situations may
not have a valid reference to mm.
This changes helpers to receive @mm and moves all references to @current
to the caller, including checks for !current and !current->mm;
checks in mm_iommu_preregistered() are removed as there is no caller
yet.
This moves the mm_iommu_adjust_locked_vm() call to the caller as
it receives mm_iommu_table_group_mem_t but it needs mm.
This should cause no behavioral change.
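As a hedged illustration of the change, one of the helpers might go from an implicit @current->mm to an explicit mm parameter (argument lists abridged; mm_iommu_get is used as the example):

/* before: mm implicitly taken from current */
long mm_iommu_get(unsigned long ua, unsigned long entries,
		  struct mm_iommu_table_group_mem_t **pmem);

/* after: the caller passes the mm it already holds a reference to */
long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
		  struct mm_iommu_table_group_mem_t **pmem);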
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Alexey Kardashevskiy [Wed, 30 Nov 2016 06:51:59 +0000 (17:51 +1100)]
powerpc/iommu: Pass mm_struct to init/cleanup helpers
We are going to get rid of @current references in mmu_context_book3s64.c
and cache mm_struct in the VFIO container. Since mm_context_t does not
have reference counting, we will be using mm_struct, which does have
a reference counter.
This changes mm_iommu_init/mm_iommu_cleanup to receive mm_struct rather
than mm_context_t (which is embedded into mm).
This should not cause any behavioral change.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Tue, 15 Nov 2016 10:59:38 +0000 (21:59 +1100)]
powerpc/64: Define ILLEGAL_POINTER_VALUE for 64-bit
This is used in poison.h to offset poison values so that they don't
point directly into user space.
The value we choose sits roughly between user and kernel space, which
means on their own the poison values don't point anywhere useful. If an
attacker can cause an access at some offset from the poison value then
we may still be in trouble, but by putting the poison values between
user and kernel space we maximise the required size of that offset.
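For context, this is the pattern in include/linux/poison.h that consumes the value (sketch; the powerpc value itself is set elsewhere and not reproduced here):

#ifdef CONFIG_ILLEGAL_POINTER_VALUE
# define POISON_POINTER_DELTA _AC(CONFIG_ILLEGAL_POINTER_VALUE, UL)
#else
# define POISON_POINTER_DELTA 0
#endif

/* Non-NULL, non-user addresses used to poison unlinked list entries */
#define LIST_POISON1  ((void *) 0x100 + POISON_POINTER_DELTA)
#define LIST_POISON2  ((void *) 0x200 + POISON_POINTER_DELTA)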
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Balbir Singh [Wed, 30 Nov 2016 06:45:09 +0000 (17:45 +1100)]
powerpc: Don't print misleading facility name in facility unavailable exception
The current facility_strings[] are correct when the trap address is
0xf80 (hypervisor facility unavailable). When the trap address is
0xf60 (facility unavailable), the IC (Interruption Cause), a.k.a. status in the
code, is undefined for values 0 and 1.
Add a check to prevent printing the (misleading) facility name for IC 0
and 1 when we came in via 0xf60. In all cases, print the actual IC
value, to avoid any confusion.
This hasn't been seen on real hardware, only on qemu, which was
misreporting an exception.
Signed-off-by: Balbir Singh <bsingharora@gmail.com>
[mpe: Fix indentation, combine printks(), massage change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Thu, 1 Dec 2016 09:50:46 +0000 (20:50 +1100)]
powerpc: Make selects of IBM_EMAC_* depend on IBM_EMAC
We have a bunch of Kconfig symbols which select various IBM_EMAC_*
symbols. These all cause warnings when IBM_EMAC is not selected.
e.g.:
warning: (PPC_CELL_NATIVE && BLUESTONE && CANYONLANDS && GLACIER &&
EIGER && 440EPX && 440GRX && 440GX && 460SX && 405EX) selects
IBM_EMAC_RGMII which has unmet direct dependencies (NETDEVICES &&
ETHERNET && NET_VENDOR_IBM)
So make them all depend on IBM_EMAC being enabled first.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Thu, 1 Dec 2016 09:50:45 +0000 (20:50 +1100)]
powerpc/cell: Drop select of MEMORY_HOTPLUG
SPU_FS selects MEMORY_HOTPLUG, which is problematic because
MEMORY_HOTPLUG is user selectable, meaning we can end up with a broken
.config where MEMORY_HOTPLUG is enabled but its dependencies are not,
leading to build breakages.
The select of MEMORY_HOTPLUG for SPU_FS was added back in 2006, in
commit 4da30d15b6d5 ("[POWERPC] spufs: fix memory hotplug dependency").
However we reworked the spufs code and removed the dependency on memory
hotplug in 2007 in commit 78bde53e351b ("[POWERPC] spufs: remove need
for struct page for SPEs").
So drop the select as it's no longer needed and causes problems.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nathan Fontenot [Mon, 28 Nov 2016 16:50:45 +0000 (11:50 -0500)]
powerpc/pseries: Use lmb_is_removable() to check removability
We should be using lmb_is_removable() to validate that enough LMBs
are available to remove when doing a remove by count. This will check
that the LMB is owned by the system and it is considered removable.
This patch also adds a pr_info() notification to report when the LMB count
to remove could not be satisfied.
What we do now is just check that there are enough LMBs owned by the
system when validating that there are enough LMBs to remove. This can
lead to situations where there are enough LMBs owned by the system
but not enough that are considered removable. This results in having
to bail out of the remove operation instead of just failing the request
that we should have known wouldn't succeed.
Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Wed, 30 Nov 2016 08:41:02 +0000 (19:41 +1100)]
powerpc/mm: Fix page table dump build on non-Book3S
In the recent commit 1515ab932156 ("powerpc/mm: Dump hash table") we
added code to dump the hash page table. Currently this can be selected
to build on any platform. However it breaks the build if we're building
for a non-Book3S platform, because none of the hash page table related
defines and so on exist. So restrict it to building only on Book3S.
Similarly in commit 8eb07b187000 ("powerpc/mm: Dump linux pagetables")
we added code to dump the Linux page tables, which uses some constants
that are only defined on Book3S - so guard those with an #ifdef.
Fixes: 1515ab932156 ("powerpc/mm: Dump hash table")
Fixes: 8eb07b187000 ("powerpc/mm: Dump linux pagetables")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Geoff Levand [Tue, 29 Nov 2016 18:47:32 +0000 (10:47 -0800)]
powerpc/ps3: Fix system hang with GCC 5 builds
GCC 5 generates different code for this bootwrapper null check that
causes the PS3 to hang very early in its bootup. This check is of
limited value, so just get rid of it.
Cc: stable@vger.kernel.org
Signed-off-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Fri, 18 Nov 2016 12:15:42 +0000 (23:15 +1100)]
powerpc/prom: Switch to using structs for ibm_architecture_vec
Now that we've defined structures to describe each of the client
architecture vectors, we can use those to construct the value we pass to
firmware.
This avoids the tricks we previously played with the W() macro, allows
us to properly endian annotate fields, and should help to avoid bugs
introduced by failing to have the correct number of zero pad bytes
between fields.
It also means we can avoid hard coding IBM_ARCH_VEC_NRCORES_OFFSET in
order to update the max_cpus value and instead just set it.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Fri, 18 Nov 2016 12:15:41 +0000 (23:15 +1100)]
powerpc/prom: Define structs for client architecture vectors
The "client architecture vectors" are a series of structures we pass to
firmware to define various things, such as what processors we support
and many other options.
Each structure is entirely different so we have to define a different
struct for each one, but that's OK.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Tue, 8 Nov 2016 06:08:06 +0000 (17:08 +1100)]
powerpc/pseries: add definitions for new H_SIGNAL_SYS_RESET hcall
This has not made its way to a PAPR release yet, but we have an hcall
number assigned.
H_SIGNAL_SYS_RESET = 0x380
Syntax:
hcall(uint64 H_SIGNAL_SYS_RESET, int64 target);
Generate a system reset NMI on the threads indicated by target.
Values for target:
-1 = target all online threads including the caller
-2 = target all online threads except for the caller
All other negative values: reserved
Positive values: The thread to be targeted, obtained from the value
of the "ibm,ppc-interrupt-server#s" property of the CPU in the OF
device tree.
Semantics:
- Invalid target: return H_Parameter.
- Otherwise: Generate a system reset NMI on target thread(s),
return H_Success.
This will be used by crash/debug code to get stuck CPUs into a known
state.
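A hedged sketch of how a caller might issue the hcall, using the number and target encodings quoted above; the wrapper and the *_ALL/*_ALL_OTHERS macro names are illustrative, not from the patch:

#define H_SIGNAL_SYS_RESET		0x380
#define H_SIGNAL_SYS_RESET_ALL		-1
#define H_SIGNAL_SYS_RESET_ALL_OTHERS	-2

static long signal_sys_reset(long target)
{
	/* plpar_hcall_norets() is the existing pseries hcall wrapper */
	return plpar_hcall_norets(H_SIGNAL_SYS_RESET, target);
}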
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Thiago Jung Bauermann [Tue, 29 Nov 2016 12:45:54 +0000 (23:45 +1100)]
powerpc: Enable CONFIG_KEXEC_FILE in powerpc server defconfigs.
Enable CONFIG_KEXEC_FILE in powernv_defconfig, ppc64_defconfig and
pseries_defconfig.
It depends on CONFIG_CRYPTO_SHA256=y, so add that as well.
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Thiago Jung Bauermann [Tue, 29 Nov 2016 12:45:53 +0000 (23:45 +1100)]
powerpc/kexec: Enable kexec_file_load() syscall
Define the Kconfig symbol so that the kexec_file_load() code can be
built, and wire up the syscall so that it can be called.
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Thiago Jung Bauermann [Tue, 29 Nov 2016 12:45:52 +0000 (23:45 +1100)]
powerpc: Add purgatory for kexec_file_load() implementation.
This purgatory implementation is based on the versions from kexec-tools
and kexec-lite, with additional changes.
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Thiago Jung Bauermann [Tue, 29 Nov 2016 12:45:51 +0000 (23:45 +1100)]
powerpc: Add support code for kexec_file_load()
This patch adds the support code needed for implementing
kexec_file_load() on powerpc.
This consists of functions to load the ELF kernel, either big or little
endian, and to set up the purgatory environment which switches from the first
kernel to the second kernel.
None of this code is built yet, as it depends on CONFIG_KEXEC_FILE which
we have not yet defined. Although we could define CONFIG_KEXEC_FILE in
this patch, we'd then have a window in history where the kconfig symbol
is present but the syscall is not, which would be awkward.
Signed-off-by: Josh Sklar <sklar@linux.vnet.ibm.com>
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Thiago Jung Bauermann [Tue, 29 Nov 2016 12:45:50 +0000 (23:45 +1100)]
powerpc: Change places using CONFIG_KEXEC to use CONFIG_KEXEC_CORE instead.
Commit 2965faa5e03d ("kexec: split kexec_load syscall from kexec core
code") introduced CONFIG_KEXEC_CORE so that CONFIG_KEXEC controls whether
the kexec_load system call is compiled in and CONFIG_KEXEC_FILE
controls whether the kexec_file_load system call is compiled in.
These options can be set independently from each other.
Since until now powerpc only supported kexec_load, CONFIG_KEXEC and
CONFIG_KEXEC_CORE were synonyms. That is not the case anymore, so we
need to make a distinction. Almost all places where CONFIG_KEXEC was
being used should be using CONFIG_KEXEC_CORE instead, since
kexec_file_load also needs that code compiled in.
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Thiago Jung Bauermann [Tue, 29 Nov 2016 12:45:49 +0000 (23:45 +1100)]
kexec_file: Factor out kexec_locate_mem_hole from kexec_add_buffer.
kexec_locate_mem_hole will be used by the PowerPC kexec_file_load
implementation to find free memory for the purgatory stack.
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
Acked-by: Dave Young <dyoung@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Thiago Jung Bauermann [Tue, 29 Nov 2016 12:45:48 +0000 (23:45 +1100)]
kexec_file: Change kexec_add_buffer to take kexec_buf as argument.
This is done to simplify the kexec_add_buffer argument list.
Adapt all callers to set up a kexec_buf to pass to kexec_add_buffer.
In addition, change the type of kexec_buf.buffer from char * to void *.
There is no particular reason for it to be a char *, and the change
allows us to get rid of 3 existing casts to char * in the code.
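A hedged sketch of the new calling convention: a caller fills in a struct kexec_buf and hands it to kexec_add_buffer(); the field values and the surrounding function are illustrative:

#include <linux/kexec.h>

static int load_segment(struct kimage *image, void *payload, unsigned long size)
{
	struct kexec_buf kbuf = {
		.image     = image,
		.buffer    = payload,	/* now a void *, so no char * casts */
		.bufsz     = size,
		.memsz     = size,
		.buf_align = PAGE_SIZE,
		.top_down  = true,
	};
	int ret = kexec_add_buffer(&kbuf);

	if (!ret)
		pr_debug("segment placed at 0x%lx\n", kbuf.mem);

	return ret;
}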
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
Acked-by: Dave Young <dyoung@redhat.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Thiago Jung Bauermann [Tue, 29 Nov 2016 12:45:47 +0000 (23:45 +1100)]
kexec_file: Allow arch-specific memory walking for kexec_add_buffer
Allow architectures to specify a different memory walking function for
kexec_add_buffer. x86 uses iomem to track reserved memory ranges, but
PowerPC uses the memblock subsystem.
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
Acked-by: Dave Young <dyoung@redhat.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Balbir Singh [Wed, 30 Nov 2016 00:35:36 +0000 (11:35 +1100)]
powerpc/mm: Fix no execute fault handling on pre-POWER5
Aneesh/Ben reported that the change to do_page_fault() we made in commit
1d18ad026844 ("powerpc/mm: Detect instruction fetch denied and report")
needs to handle the case where CPU_FTR_COHERENT_ICACHE is missing but we
have CPU_FTR_NOEXECUTE. In those cases the check added for
SRR1_ISI_N_OR_G might trigger a false positive.
This patch adds a check for CPU_FTR_COHERENT_ICACHE in addition to the
MSR value.
Fixes: 1d18ad026844 ("powerpc/mm: Detect instruction fetch denied and report")
Reported-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Mon, 21 Nov 2016 10:14:35 +0000 (21:14 +1100)]
powerpc/boot: Fix rebuild when changing kernel endian
Now that we don't set ARCH incorrectly when calling the boot Makefile,
we can use the generic cpp_lds_S rule for converting our zImage.lds.S
into zImage.lds.
The main advantage of using the generic rule is that it correctly uses
if_changed, which means we correctly regenerate the linker script when
switching endian. Fixing that means we are finally able to build one
endian and then rebuild the other endian without needing to clean
between builds.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Mon, 21 Nov 2016 10:14:34 +0000 (21:14 +1100)]
powerpc/boot: All uses of if_changed should depend on FORCE
If we're using if_changed then we must depend on FORCE, so that
if_changed gets a chance to check if something changed.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Mon, 21 Nov 2016 10:14:33 +0000 (21:14 +1100)]
powerpc: Stop passing ARCH=ppc64 to boot Makefile
Back in 2005 when the ppc/ppc64 merge started, we used to build the
kernel code in arch/powerpc but use the boot code from arch/ppc or
arch/ppc64 depending on whether we were building for 32 or 64-bit.
Originally we called the boot Makefile passing ARCH=$(OLDARCH), where
OLDARCH was ppc or ppc64.
In commit
20f629549b30 ("powerpc: Make building the boot image work for
both 32-bit and 64-bit") (2005-10-11) we split the call for 32/64-bit
using an ifeq check, because the two Makefiles took different targets,
and explicitly passed ARCH=ppc64 for the 64-bit case and ARCH=ppc for
the 32-bit case.
Then in commit
94b212c29f68 ("powerpc: Move ppc64 boot wrapper code over
to arch/powerpc") (2005-11-16) we moved the boot code into arch/powerpc
and dropped the ppc case, but kept passing ARCH=ppc64 to
arch/powerpc/boot/Makefile.
Since then there have been several more boot targets added, all of which
have copied the ARCH=ppc64 setting, such that now we have four targets
using it.
Currently it seems that nothing actually uses the ARCH value, but that's
basically just luck, and in particular it prevents us from using the
generic cpp_lds_S rule. It's also clearly wrong, ARCH=ppc64 is dead,
buried and cremated.
Fix it by dropping the setting of ARCH completely, the correct value is
exported by the top level Makefile.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Mon, 28 Nov 2016 06:17:04 +0000 (11:47 +0530)]
powerpc/mm: Batch tlb flush when invalidating pte entries
This will improve the task exit case by batching TLB invalidates.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Mon, 28 Nov 2016 06:17:03 +0000 (11:47 +0530)]
powerpc/mm: update radix__pte_update to not do full mm tlb flush
When we are updating a pte, we just need to flush the TLB mapping
for that pte. Right now we do a full mm flush because we don't track the page
size. Now that we have page size details in the pte, use them to do an
optimized flush.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Mon, 28 Nov 2016 06:17:02 +0000 (11:47 +0530)]
powerpc/mm: update radix__ptep_set_access_flag to not do full mm tlb flush
When we are updating a pte, we just need to flush the TLB mapping
for that pte. Right now we do a full mm flush because we don't track the page
size. Now that we have page size details in the pte, use them to do an
optimized flush.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Mon, 28 Nov 2016 06:17:01 +0000 (11:47 +0530)]
powerpc/mm: Add radix__tlb_flush_pte_p9_dd1()
Now that we have page size details encoded in the pte using software pte
bits, use them to find the page size needed for the TLB flush.
This function should only be used on P9 DD1, so give it a horrible name
to make that clear.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Mon, 28 Nov 2016 06:17:00 +0000 (11:47 +0530)]
powerpc/mm: Introduce _PAGE_LARGE software pte bits
This patch adds a new software-defined pte bit. We use the reserved
fields of the ISA 3.0 pte definition since we will only be using this on DD1
code paths. We can possibly look at removing this code later.
The software bit will be used to differentiate between 64K/4K and 2M
ptes. This helps in finding the page size mapped by a pte so that we
can do an efficient TLB flush.
We don't support 1G hugetlb pages yet, so we add a DEBUG WARN_ON to
catch wrong usage.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Mon, 28 Nov 2016 06:16:59 +0000 (11:46 +0530)]
powerpc/mm/hugetlb: Handle hugepage size supported by hash config
With respect to the hash page table config, we support 16MB and 16GB as
hugepage sizes. Update hstate_get_psize() to handle 16M and 16G.
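A hedged sketch of the kind of handling described, following the usual pattern of mapping a huge page shift to an MMU page size constant (the real function body may differ):

static inline int hstate_get_psize(struct hstate *hstate)
{
	unsigned long shift = huge_page_shift(hstate);

	if (shift == mmu_psize_defs[MMU_PAGE_16M].shift)
		return MMU_PAGE_16M;
	if (shift == mmu_psize_defs[MMU_PAGE_16G].shift)
		return MMU_PAGE_16G;

	WARN(1, "Unsupported huge page shift %lu\n", shift);
	return mmu_virtual_psize;	/* should not get here */
}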
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Mon, 28 Nov 2016 06:16:58 +0000 (11:46 +0530)]
powerpc/mm: Rename hugetlb-radix.h to hugetlb.h
We will start moving some book3s specific hugetlb functions there.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Wed, 23 Nov 2016 13:02:09 +0000 (00:02 +1100)]
powerpc/64e: Don't branch to dot symbols
This converts one that was missed by b1576fec7f4d ("powerpc: No need
to use dot symbols when branching to a function").
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Wed, 23 Nov 2016 13:02:07 +0000 (00:02 +1100)]
powerpc/64e: Convert cmpi to cmpwi in head_64.S
From
80f23935cadb ("powerpc: Convert cmp to cmpd in idle enter sequence"):
PowerPC's "cmp" instruction has four operands. Normally people write
"cmpw" or "cmpd" for the second cmp operand 0 or 1. But, frequently
people forget, and write "cmp" with just three operands.
With older binutils this is silently accepted as if this was "cmpw",
while often "cmpd" is wanted. With newer binutils GAS will complain
about this for 64-bit code. For 32-bit code it still silently assumes
"cmpw" is what is meant.
In this case, cmpwi is called for, so this is just a build fix for
new toolchains.
Cc: stable@vger.kernel.org # v3.0+
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Balbir Singh [Tue, 15 Nov 2016 06:56:16 +0000 (17:56 +1100)]
powerpc/mm/radix: Prevent kernel execution of user space
ISA 3 defines a new encoded access authority that allows instruction
access prevention in privileged mode while allowing normal access
in problem state. This patch just enables the IAMR (Instruction Authority
Mask Register); enabling the AMR would require more work.
I've tested this with a buggy driver and a simple payload. The payload
is specific to the build I've tested.
mpe: Also tested with LKDTM:
# echo EXEC_USERSPACE > /sys/kernel/debug/provoke-crash/DIRECT
lkdtm: Performing direct entry EXEC_USERSPACE
lkdtm: attempting ok execution at c0000000005bf560
lkdtm: attempting bad execution at 00003fff8d940000
Unable to handle kernel paging request for instruction fetch
Faulting instruction address: 0x3fff8d940000
Oops: Kernel access of bad area, sig: 11 [#1]
NIP: 00003fff8d940000 LR: c0000000005bfa58 CTR: 00003fff8d940000
REGS: c0000000f1fcf900 TRAP: 0400 Not tainted (4.9.0-rc5-compiler_gcc-6.2.0-00109-g956dbc06232a)
MSR: 9000000010009033 <SF,HV,EE,ME,IR,DR,RI,LE> CR: 48002222 XER: 00000000
...
Call Trace:
[c0000000f1fcfb80] lkdtm_EXEC_USERSPACE+0x104/0x120 (unreliable)
lkdtm_do_action+0x3c/0x80
direct_entry+0x100/0x1b0
full_proxy_write+0x94/0x100
__vfs_write+0x3c/0x1b0
vfs_write+0xcc/0x230
SyS_write+0x60/0x110
system_call+0x38/0xfc
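A hedged sketch of the IAMR setup this describes, assuming key 0's instruction-fetch bit; the exact value and call site in the patch may differ:

static void radix_init_iamr(void)
{
	/*
	 * Set the instruction-access bit for key 0 so the kernel cannot
	 * execute from pages mapped with the default (user) key.
	 */
	mtspr(SPRN_IAMR, 1ul << 62);
}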
Signed-off-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Balbir Singh [Tue, 15 Nov 2016 06:56:15 +0000 (17:56 +1100)]
powerpc/mm: Detect instruction fetch denied and report
ISA 3 allows for prevention of instruction fetch and execution
of user mode pages. If such an error occurs, SRR1 bit 35 reports the
error. We catch and report the error in do_page_fault().
Signed-off-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Balbir Singh [Tue, 15 Nov 2016 06:56:14 +0000 (17:56 +1100)]
powerpc/mm/radix: Setup AMOR in HV mode to allow key 0
Set up the AMOR (Authority Mask Override Register) in HV mode so that the
host and guest kernels can in turn set up the IAMR.
This allows us to enable key 0 in a following patch.
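A hedged sketch of the AMOR setup described: in HV mode allow all keys to be overridden, so the host and guest kernels can program the IAMR themselves (the value written is illustrative):

static void init_amor(void)
{
	mtspr(SPRN_AMOR, ~0ul);	/* permit override of every authority key */
}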
Reported-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Gautham R. Shenoy [Tue, 22 Nov 2016 18:06:40 +0000 (23:36 +0530)]
powernv: Clear SPRN_PSSCR when a POWER9 CPU comes online
Ensure that PSSCR is set to a safe value corresponding to no
state-loss each time a POWER9 CPU comes online.
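A hedged C-level sketch of what the CPU online path does per this description; the actual change lives in the low-level CPU setup code, so names and placement here are illustrative:

static void power9_clear_psscr(void)
{
	if (cpu_has_feature(CPU_FTR_ARCH_300))
		mtspr(SPRN_PSSCR, 0);	/* request no state loss in stop states */
}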
Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Acked-By: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Fri, 6 Nov 2015 02:21:17 +0000 (13:21 +1100)]
powerpc/xmon: Add 'dt' command to dump trace buffers
There is a nice interface for asking ftrace to dump all its tracing
buffers. The only downside for use in xmon is that it uses printk.
Depending on circumstances printk may not work when in xmon, but it also
may, so add a 'dt' command which dumps the ftrace buffers, and add a
note to the help to mention that it uses printk.
Calling this routine also disables tracing, which is problematic if you
return from xmon and expect the system to keep operating normally. So
after we do the dump, turn tracing back on.
Both functions already have nop versions defined for when ftrace is not
enabled, so we don't need any extra #ifdefs.
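A hedged sketch of what the new 'dt' command boils down to, per the description above:

static void xmon_dump_ftrace(void)
{
	/* Dump every ftrace buffer; this switches tracing off as a side effect */
	ftrace_dump(DUMP_ALL);

	/* Re-enable tracing so the system behaves normally after leaving xmon */
	tracing_on();
}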
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Geliang Tang [Wed, 23 Nov 2016 14:58:56 +0000 (22:58 +0800)]
powerpc/of_platform: Use builtin_platform_driver
Use builtin_platform_driver() helper to simplify the code.
Signed-off-by: Geliang Tang <geliangtang@gmail.com>
Acked-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Geliang Tang [Wed, 23 Nov 2016 15:27:38 +0000 (23:27 +0800)]
cxl: drop duplicate header sched.h
Drop duplicate header sched.h from native.c.
Signed-off-by: Geliang Tang <geliangtang@gmail.com>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Thu, 24 Nov 2016 06:08:11 +0000 (17:08 +1100)]
powerpc: Fix __cmpxchg() to take a volatile ptr again
In commit
d0563a1297e2 ("powerpc: Implement {cmp}xchg for u8 and u16")
we removed the volatile from __cmpxchg().
This is leading to warnings such as:
drivers/gpu/drm/drm_lock.c: In function ‘drm_lock_take’:
arch/powerpc/include/asm/cmpxchg.h:484:37: warning: passing argument 1
of ‘__cmpxchg’ discards ‘volatile’ qualifier from pointer target
(__typeof__(*(ptr))) __cmpxchg((ptr), (unsigned long)_o_, \
There doesn't seem to be consensus across architectures whether the
argument is volatile or not, so at least for now put the volatile back.
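Sketch of the prototype in question after the fix (later arguments abridged to the usual __cmpxchg shape):

/*
 * The first parameter carries the volatile qualifier again, so callers
 * such as drm_lock_take() that pass volatile pointers no longer warn.
 * (Before this fix the parameter had become a plain "void *ptr".)
 */
static __always_inline unsigned long
__cmpxchg(volatile void *ptr, unsigned long old, unsigned long new,
	  unsigned int size);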
Fixes: d0563a1297e2 ("powerpc: Implement {cmp}xchg for u8 and u16")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Thu, 24 Nov 2016 11:14:52 +0000 (22:14 +1100)]
Merge branch 'topic/ppc-kvm' into next
Merge the topic branch we're sharing with the kvm-ppc tree.
Andrew Donnellan [Tue, 22 Nov 2016 10:13:27 +0000 (21:13 +1100)]
cxl: Fix coccinelle warnings
Fix the following coccinelle warnings:
drivers/misc/cxl/debugfs.c:46:0-23: WARNING: fops_io_x64 should be
defined with DEFINE_DEBUGFS_ATTRIBUTE
drivers/misc/cxl/guest.c:890:5-26: WARNING: Comparison to bool
drivers/misc/cxl/irq.c:107:3-23: WARNING: Assignment of bool to 0/1
drivers/misc/cxl/native.c:57:2-3: Unneeded semicolon
drivers/misc/cxl/native.c:170:2-3: Unneeded semicolon
Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Reviewed-by: Matthew R. Ochs <mrochs@linux.vnet.ibm.com>
Acked-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 22 Nov 2016 10:49:32 +0000 (11:49 +0100)]
powerpc/32: Change the stack protector canary value per task
Partially copied from commit df0698be14c66 ("ARM: stack protector:
change the canary value per task").
A new random value for the canary is stored in the task struct whenever
a new task is forked. This is meant to allow for different canary values
per task. On powerpc, GCC expects the canary value to be found in a global
variable called __stack_chk_guard. So this variable has to be updated
with the value stored in the task struct whenever a task switch occurs.
Because the variable GCC expects is global, this cannot work on SMP
unfortunately. So, on SMP, the same initial canary value is kept
throughout, making this feature a bit less effective although it is still
useful.
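A hedged sketch of the scheme described, using the names from the commit text (__stack_chk_guard, task_struct::stack_canary); the surrounding helpers are illustrative:

/* GCC loads this symbol in every protected function's prologue/epilogue */
unsigned long __stack_chk_guard __read_mostly;

/* at fork: give the new task its own random canary */
static void set_task_stack_canary(struct task_struct *p)
{
	p->stack_canary = get_random_long();
}

/* at context switch (non-SMP only): expose the incoming task's canary */
static inline void switch_stack_canary(struct task_struct *next)
{
#ifndef CONFIG_SMP
	__stack_chk_guard = next->stack_canary;
#endif
}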
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 22 Nov 2016 10:49:30 +0000 (11:49 +0100)]
powerpc: Initial stack protector (-fstack-protector) support
Partially copied from commit c743f38013aef ("ARM: initial stack protector
(-fstack-protector) support").
This is the very basic stuff without the changing canary upon
task switch yet. Just the Kconfig option and a constant canary
value initialized at boot time.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Pan Xinhui [Wed, 27 Apr 2016 09:16:45 +0000 (17:16 +0800)]
powerpc: Implement {cmp}xchg for u8 and u16
Implement xchg{u8,u16}{local,relaxed}, and
cmpxchg{u8,u16}{,local,acquire,relaxed}.
It works on all ppc.
Remove the volatile qualifier from the first parameter of __cmpxchg_local and __cmpxchg.
Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
Acked-by: Boqun Feng <boqun.feng@gmail.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Lars-Peter Clausen [Sat, 19 Nov 2016 13:42:14 +0000 (14:42 +0100)]
powerpc/pseries/ibmebus: Remove legacy suspend/resume support
There are no ibmebus drivers that make use of legacy suspend/resume. This
patch removes the support for it from the ibmebus framework; new ibmebus
drivers (as unlikely as they are) wanting to use suspend/resume should
use dev_pm_ops.
Since there aren't any special bus-specific things to do during
suspend/resume, and since the PM core will automatically fall back
to using the device's PM ops if no bus PM ops are specified,
there is no need to have any special ibmebus PM ops at all.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Naveen N. Rao [Mon, 21 Nov 2016 17:06:41 +0000 (22:36 +0530)]
powerpc/kprobes: Invoke handlers directly
Invoke the kprobe handlers directly rather than through notify_die(), to
reduce the path taken for handling kprobes. Similar to commit 6f6343f53d13
("kprobes/x86: Call exception handlers directly from do_int3/do_debug").
While at it, rename post_kprobe_handler() to kprobe_post_handler() for
more uniform naming.
Reported-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Naveen N. Rao [Mon, 21 Nov 2016 17:06:40 +0000 (22:36 +0530)]
powerpc: Remove extraneous header from asm-prototypes.h
Commit 03465f899bda ("powerpc: Use kprobe blacklist for exception
handlers") removed the __kprobes annotation from some of the prototypes,
but left the kprobes header include directive unchanged. Remove it as it
is no longer needed.
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Neuling [Tue, 22 Nov 2016 23:44:09 +0000 (10:44 +1100)]
powerpc/powernv: Define and set POWER9 HFSCR doorbell bit
Define and set the POWER9 HFSCR doorbell bit so that guests can use
msgsndp.
ISA 3.0 calls this MSGP, so name it accordingly in the code.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Tue, 22 Nov 2016 03:50:39 +0000 (14:50 +1100)]
powerpc/reg: Add definition for LPCR_PECE_HVEE
ISA 3.0 defines a new PECE (Power-saving mode Exit Cause Enable) field
in the LPCR (Logical Partitioning Control Register), called
LPCR_PECE_HVEE (Hypervisor Virtualization Exit Enable).
KVM code will need to know about this bit, so add a definition for it.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Suraj Jitindar Singh [Mon, 21 Nov 2016 05:02:35 +0000 (16:02 +1100)]
powerpc/64: Define new ISA v3.00 logical PVR value and PCR register value
ISA 3.00 adds the logical PVR value 0x0f000005, so add a definition for
this.
Define PCR_ARCH_207 to reflect ISA 2.07 compatibility mode in the processor
compatibility register (PCR).
[paulus@ozlabs.org - moved dummy PCR_ARCH_300 value into next patch]
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Paul Mackerras [Mon, 21 Nov 2016 05:01:36 +0000 (16:01 +1100)]
powerpc/powernv: Define real-mode versions of OPAL XICS accessors
This defines real-mode versions of opal_int_get_xirr(), opal_int_eoi()
and opal_int_set_mfrr(), for use by KVM real-mode code.
It also exports opal_int_set_mfrr() so that the modular part of KVM
can use it to send IPIs.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Paul Mackerras [Mon, 21 Nov 2016 05:00:58 +0000 (16:00 +1100)]
powerpc/64: Provide functions for accessing POWER9 partition table
POWER9 requires the host to set up a partition table, which is a
table in memory indexed by logical partition ID (LPID) which
contains the pointers to page tables and process tables for the
host and each guest.
This factors out the initialization of the partition table into
a single function. This code was previously duplicated between
hash_utils_64.c and pgtable-radix.c.
This provides a function for setting a partition table entry,
which is used in early MMU initialization, and will be used by
KVM whenever a guest is created. This function includes a tlbie
instruction which will flush all TLB entries for the LPID and
all caches of the partition table entry for the LPID, across the
system.
This also moves a call to memblock_set_current_limit(), which was
in radix_init_partition_table(), but has nothing to do with the
partition table. By analogy with the similar code for hash, the
call gets moved to near the end of radix__early_init_mmu(). It
now gets called when running as a guest, whereas previously it
would only be called if the kernel is running as the host.
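A hedged sketch of using the helper this provides: the caller builds the two partition table doublewords and the helper writes them and performs the tlbie described above (helper name from the description; the doubleword encoding shown is illustrative):

void example_set_host_partition_entry(unsigned long pgd_phys, unsigned long prtb_phys)
{
	unsigned long dw0 = pgd_phys | PATB_HR;	/* host-radix bit, illustrative */
	unsigned long dw1 = prtb_phys;

	/* LPID 0 is the host; the helper flushes the old entry system-wide */
	mmu_partition_table_set_entry(0, dw0, dw1);
}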
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Tue, 15 Nov 2016 15:06:06 +0000 (20:36 +0530)]
powerpc/mm/coproc: Handle bad address on coproc slb fault
VSID 0 is a bad address. Don't create SLB entries on a coproc fault for
a bad address.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Reviewed-by: Balbir Singh <bsingharora@gmail.com>
Reviewed-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Russell Currey [Thu, 17 Nov 2016 05:07:47 +0000 (16:07 +1100)]
powerpc/eeh: Refactor EEH PE reset functions
eeh_pe_reset and eeh_reset_pe are two different functions in the same
file which do mostly the same thing. Not only is this confusing, but
it potentially causes discrepancies in functionality, notably in eeh_reset_pe
as it does not check return values for failure.
Refactor this into the following:
- eeh_pe_reset(): stays as is, performs a single operation, exported
- eeh_pe_reset_full(): new, full reset process that calls eeh_pe_reset()
- eeh_reset_pe(): removed and replaced by eeh_pe_reset_full()
- eeh_reset_pe_once(): removed
Signed-off-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Russell Currey [Wed, 16 Nov 2016 03:02:15 +0000 (14:02 +1100)]
powerpc/pci: Always print PHB and PE numbers as hexadecimal
PHB, PE (and by association MVE) numbers are printed as a mix of decimal
and hexadecimal throughout the kernel. This can be misleading, so make
them all hexadecimal.
Standardising on hex instead of dec because:
- PHB numbers are presented in hex in sysfs/debugfs (and lspci, etc)
- PE numbers are presented as hex in sysfs and parsed in hex in debugfs
The only place I think this could cause confusion is in the messages during
boot, i.e.
pci 000a:01 : [PE# 000] Secondary bus 1 associated with PE#0
which can be a quick way to check PE numbers. pe_level_printk() will
only print two characters instead of three, so the above would be
pci 000a:01 : [PE# 00] Secondary bus 1 associated with PE#0
which gives a hint it's in hex.
Signed-off-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Russell Currey [Wed, 16 Nov 2016 01:12:26 +0000 (12:12 +1100)]
powerpc/powernv: Don't warn on PE init if unfreeze is unsupported
Whenever a PE is initialised in powernv, opal_pci_eeh_freeze_clear() is
called. This is to remove any existing freeze, and has no negative side
effects if the PE is already in an unfrozen state. On PHB backends that
don't support this operation and return OPAL_UNSUPPORTED, this creates a
scary and misleading warning message.
Skip the warning message on init if OPAL_UNSUPPORTED is returned.
As far as I'm aware, this currently only affects NPUs.
Fixes: 313483d ("powerpc/powernv: Unfreeze PE on allocation")
Signed-off-by: Russell Currey <ruscur@russell.cc>
Acked-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Paul Mackerras [Mon, 21 Nov 2016 05:00:23 +0000 (16:00 +1100)]
powerpc/64: Add some more SPRs and SPR bits for POWER9
These definitions will be needed by KVM.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Fri, 28 Oct 2016 06:39:53 +0000 (17:39 +1100)]
powerpc/64: Used named initialisers for ibm_pa_features
The ibm_pa_features array consists of structures that describe which bit
and byte in the ibm,pa-features property toggles one or more flags in
either the CPU, MMU, or user visible feature flags.
Each one consists of 7 values, which are all unsigned long, int or char,
meaning the compiler gives us no warning if we assign the wrong values
to the wrong elements. In fact we have had a bug here in the past, where
we were setting incorrect bits, see commit 6997e57d693b ("powerpc:
scan_features() updates incorrect bits for REAL_LE").
So switch to using named initialisers for the structure elements, to
reduce the likelihood of future bugs, and hopefully improve readability
also.
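A hedged sketch of the before/after pattern; the field names are illustrative of the idea rather than the exact struct layout:

static struct ibm_pa_feature ibm_pa_features[] = {
	/* before: positional, easy to assign a value to the wrong element */
	/* { 0, 0, PPC_FEATURE_32, 0, 0, 0, 0 }, */

	/* after: named initialisers make each field explicit */
	{ .pabyte = 0, .pabit = 0, .cpu_user_ftrs = PPC_FEATURE_32 },
	{ .pabyte = 0, .pabit = 1, .cpu_user_ftrs = PPC_FEATURE_64 },
};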
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Balbir Singh <bsingharora@gmail.com>
Michael Ellerman [Wed, 2 Nov 2016 10:06:17 +0000 (21:06 +1100)]
powerpc/configs: Turn on PPC crypto implementations in the server defconfigs
These are the PPC optimised versions of various crypto algorithms, so we
should turn them on by default to get test coverage.
Suggested-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Tue, 15 Nov 2016 03:47:44 +0000 (14:47 +1100)]
powerpc/pseries: Disable IBMEBUS on little endian builds
The IBMEBUS code supports the GX bus found on Power7 and earlier CPUs.
On Power8 it has been replaced, and so we have no need for it.
We don't actually have a config symbol for Power8 vs Power7 etc., but
we only support booting little endian on Power8 or later, so use that as
a reasonable approximation.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Tue, 15 Nov 2016 03:47:43 +0000 (14:47 +1100)]
powerpc/pseries: Move ibmebus.c into platforms pseries
ibmebus.c is pseries only code, so move it in there.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Tue, 15 Nov 2016 03:47:42 +0000 (14:47 +1100)]
powerpc/pseries: Move vio.c into platforms pseries
vio.c is pseries only code, so move it in there.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Frederic Barrat [Fri, 18 Nov 2016 12:00:31 +0000 (23:00 +1100)]
cxl: Fix coredump generation when cxl_get_fd() is used
If a process dumps core while owning a cxl file descriptor obtained
from an AFU driver (e.g. cxlflash) through the cxl_get_fd() API, the
following error occurs:
[ 868.027591] Unable to handle kernel paging request for data at address ...
[ 868.027778] Faulting instruction address: 0xc00000000035edb0
cpu 0x8c: Vector: 300 (Data Access) at [c000003c688275e0]
    pc: c00000000035edb0: elf_core_dump+0xd60/0x1300
    lr: c00000000035ed80: elf_core_dump+0xd30/0x1300
    sp: c000003c68827860
   msr: 9000000100009033
   dar: c
 dsisr: 40000000
  current = 0xc000003c68780000
  paca    = 0xc000000001b73200  softe: 0  irq_happened: 0x01
    pid   = 46725, comm = hxesurelock
enter ? for help
[c000003c68827a60] c00000000036948c do_coredump+0xcec/0x11e0
[c000003c68827c20] c0000000000ce9e0 get_signal+0x540/0x7b0
[c000003c68827d10] c000000000017354 do_signal+0x54/0x2b0
[c000003c68827e00] c00000000001777c do_notify_resume+0xbc/0xd0
[c000003c68827e30] c000000000009838 ret_from_except_lite+0x64/0x68
--- Exception: 300 (Data Access) at 00003fff98ad2918
The root cause is that the address_space structure for the file
doesn't define a 'host' member.
When cxl allocates a file descriptor, it's using the anonymous inode
to back the file, but allocates a private address_space for each
context. The private address_space allows tracking memory allocation
for each context. cxl doesn't define the 'host' member of the address
space, i.e. the inode. We don't want to define it as the anonymous
inode, since there's no longer a 1-to-1 relation between address_space
and inode.
To fix it, instead of using the anonymous inode, we introduce a simple
pseudo filesystem so that cxl can allocate its own inodes. So we now
have one inode for each file and address_space. The pseudo filesystem
is only mounted on the first allocation of a file descriptor by
cxl_get_fd().
Tested with cxlflash.
Signed-off-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Reviewed-by: Matthew R. Ochs <mrochs@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Vaibhav Jain [Wed, 16 Nov 2016 14:09:33 +0000 (19:39 +0530)]
cxl: Do adapter fence check before handling afu interrupt
If an afu interrupt is in flight when an eeh error is triggered,
control still reaches the function native_irq_multiplexed and the
PE-Handle read from the CXL_PSL_PEHandle_An register is 0xffff. The
function then erroneously assumes that the interrupt belonged to a
detached context and generates a warning with full stack dump in the
kernel log complaining:
"Unable to demultiplex CXL PSL IRQ for PE 65535 DSISR
ffffffff DAR
ffffffff. (Possible AFU HW issue - was a term/remove acked with
outstanding transactions"
To fix this the patch adds new code to the function
native_irq_multiplexed function to compares the read value of register
CXL_PSL_PEHandle_An to ~0ULL. If true then logs a warning message
saying that the interrupt is being ignored and returns IRQ_HANDLED from
the irq handler.
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Acked-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Vaibhav Jain <vaibhav@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Jaillet [Sun, 30 Oct 2016 21:40:47 +0000 (22:40 +0100)]
cxl: Fix error handling in _cxl_pci_associate_default_context()
'cxl_dev_context_init()' returns an error pointer in case of error, not
NULL. So test it with IS_ERR.
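The fix pattern, sketched (the surrounding function is illustrative):

static int example_init_context(struct pci_dev *dev)
{
	struct cxl_context *ctx = cxl_dev_context_init(dev);

	if (IS_ERR(ctx))		/* ERR_PTR(), never NULL, on failure */
		return PTR_ERR(ctx);

	/* ... use ctx ... */
	return 0;
}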
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Acked-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Jaillet [Sun, 30 Oct 2016 21:34:51 +0000 (22:34 +0100)]
cxl: Fix error handling in _cxl_cx4_setup_msi_irqs()
'cxl_dev_context_init()' returns an error pointer in case of error, not
NULL. So test it with IS_ERR.
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Acked-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Jaillet [Sun, 30 Oct 2016 19:35:57 +0000 (20:35 +0100)]
cxl: Fix memory allocation failure test
'cxl_context_alloc()' does not return an error pointer. It is just a
shortcut for a call to 'kzalloc' with 'sizeof(struct cxl_context)' as the
size parameter.
So its return value should be compared with NULL.
While fixing it, simplify the code a bit.
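The corresponding pattern for this allocator, sketched (the surrounding function is illustrative):

static int example_alloc_context(struct cxl_context **out)
{
	struct cxl_context *ctx = cxl_context_alloc();	/* plain kzalloc() wrapper */

	if (!ctx)			/* NULL, not an ERR_PTR, signals failure */
		return -ENOMEM;

	*out = ctx;
	return 0;
}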
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Acked-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Tue, 8 Nov 2016 12:14:45 +0000 (23:14 +1100)]
powerpc: Fix second nested oops hang
When ending an oops, don't clear die_owner unless the nest count
went to zero. This prevents a second nested oops from hanging forever
on the die_lock.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Tue, 8 Nov 2016 12:14:44 +0000 (23:14 +1100)]
powerpc: Fix graceful debugger recovery
When exiting xmon with 'x' (exit and recover), oops_begin() bails
out immediately, but die() then calls __die() and oops_end(), which
causes a lot of bad things to happen. If the debugger was attached
and then went to graceful recovery, exit
from die() immediately.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Wed, 19 Oct 2016 03:15:59 +0000 (14:15 +1100)]
powerpc: Add option to use thin archives
Add an option to use thin archives to build the kernel.
Thin archives are explained in commit
a5967db9af51 ("kbuild: allow
architectures to use thin archives instead of ld -r").
This is a gradual way to introduce the option to testers.
Some change to the way we invoke ar is required so it can be used
by scripts/link-vmlinux.sh.
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Make it an explicit option not dependent on COMPILE_TEST]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Fri, 18 Nov 2016 00:51:14 +0000 (11:51 +1100)]
powerpc/lib: Fix randconfig build failure in sstep.c
Under some configs we need to explicitly include cpu_has_feature.h,
otherwise we fail with:
arch/powerpc/lib/sstep.c:1992:7: error: implicit declaration of function 'cpu_has_feature'
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nathan Fontenot [Thu, 17 Nov 2016 16:38:10 +0000 (11:38 -0500)]
powerpc/pseries: Correct possible read beyond dlpar sysfs buffer
The parsing of data written to the dlpar file in sysfs does not correctly
account for the possibility of reading past the end of the buffer. The code
assumes that all pieces of the command written to the sysfs file are present
in the form "<resource> <action> <id_type> <id>".
Correct this by updating the buffer parsing code to make a local copy and
use the strsep() and sysfs_streq() routines to parse the buffer. This patch
also separates the parsing code into subroutines for each piece of the
command.
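A hedged sketch of the approach described: copy the sysfs buffer, split it with strsep(), and compare each token with sysfs_streq(), so a short or malformed command cannot be read past its end (function and token names are illustrative):

static ssize_t dlpar_store_example(const char *buf, size_t count)
{
	char *args, *cursor, *resource;
	ssize_t rc = -EINVAL;

	/* work on a NUL-terminated local copy of the user's buffer */
	cursor = args = kstrndup(buf, count, GFP_KERNEL);
	if (!args)
		return -ENOMEM;

	resource = strsep(&cursor, " ");
	if (resource && sysfs_streq(resource, "memory"))
		rc = count;	/* ... dispatch to the memory handler ... */

	kfree(args);
	return rc;
}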
Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Tobias Klauser [Thu, 17 Nov 2016 16:19:21 +0000 (17:19 +0100)]
powerpc/mce: Remove unused but set variable
Remove the unused but set variable srr1 in save_mce_event() to
fix the following GCC warning when building with 'W=1':
arch/powerpc/kernel/mce.c:75:11: warning: variable 'srr1' set but not used
It has never been used.
Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Tobias Klauser [Thu, 17 Nov 2016 16:20:24 +0000 (17:20 +0100)]
powerpc: Fix old style declaration GCC warnings
Fix two [-Wold-style-declaration] GCC warnings by moving the inline
keyword before the return type.
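For illustration, the kind of declaration GCC flags and its fix:

/* warns with -Wold-style-declaration: 'inline' follows the return type */
static int inline old_style(void) { return 0; }

/* fixed: 'inline' precedes the return type */
static inline int new_style(void) { return 0; }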
Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Wed, 21 Sep 2016 14:49:27 +0000 (16:49 +0200)]
powerpc/64: get rid of MIN_HUGEPTE_SHIFT
MIN_HUGEPTE_SHIFT hasn't been used since commit
d1837cba5d5d5
("powerpc/mm: Cleanup initialization of hugepages on powerpc")
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Suraj Jitindar Singh [Wed, 9 Nov 2016 05:36:33 +0000 (16:36 +1100)]
powerpc/mm: Correct process and partition table max size
Version 3.00 of the ISA states that the PATS (partition table size) field
of the PTCR (partition table control register) and the PRTS (process table
size) field of the partition table entry must both be less than or equal
to 24. However the actual size of the partition and process tables is equal
to 2^(12 + PATS) and 2^(12 + PRTS) bytes respectively. This
means that the maximum allowable size of each of these tables is 2^36 bytes,
or 64GB, for both.
Thus when checking the size shift for each we should be checking for values
greater than 36 instead of the current checks for shifts larger than 24
and 23.
Fixes: 2bfd65e45e877fb5704730244da67c748d28a1b8
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: Balbir Singh <bsingharora@gmail.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Anshuman Khandual [Fri, 30 Sep 2016 02:33:02 +0000 (10:33 +0800)]
selftests/powerpc: Add ptrace tests for TM SPR registers
This patch adds ptrace interface test for TM SPR registers. This
also adds ptrace interface based helper functions related to TM
SPR registers access.
Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Anshuman Khandual [Fri, 30 Sep 2016 02:33:01 +0000 (10:33 +0800)]
selftests/powerpc: Add ptrace tests for VSX, VMX registers in suspended TM
This patch adds ptrace interface test for VSX, VMX registers
inside suspended TM context.
Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Anshuman Khandual [Fri, 30 Sep 2016 02:33:00 +0000 (10:33 +0800)]
selftests/powerpc: Add ptrace tests for VSX, VMX registers in TM
This patch adds ptrace interface test for VSX, VMX registers
inside TM context. This also adds ptrace interface based helper
functions related to checkpointed VSX, VMX register access.
Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Anshuman Khandual [Fri, 30 Sep 2016 02:32:59 +0000 (10:32 +0800)]
selftests/powerpc: Add ptrace tests for VSX, VMX registers
This patch adds ptrace interface test for VSX, VMX registers.
This also adds ptrace interface based helper functions related
to VSX, VMX registers access. This also adds some assembly
helper functions related to VSX and VMX registers.
Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Anshuman Khandual [Fri, 30 Sep 2016 02:32:58 +0000 (10:32 +0800)]
selftests/powerpc: Add ptrace tests for TAR, PPR, DSCR in suspended TM
This patch adds ptrace interface test for TAR, PPR, DSCR
registers inside suspended TM context.
Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Anshuman Khandual [Fri, 30 Sep 2016 02:32:57 +0000 (10:32 +0800)]
selftests/powerpc: Add ptrace tests for TAR, PPR, DSCR in TM
This patch adds ptrace interface test for TAR, PPR, DSCR
registers inside TM context. This also adds ptrace
interface based helper functions related to checkpointed
TAR, PPR, DSCR register access.
Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Anshuman Khandual [Fri, 30 Sep 2016 02:32:56 +0000 (10:32 +0800)]
selftests/powerpc: Add ptrace tests for TAR, PPR, DSCR registers
This patch adds ptrace interface test for TAR, PPR, DSCR
registers. This also adds ptrace interface based helper
functions related to TAR, PPR, DSCR register access.
Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Anshuman Khandual [Fri, 30 Sep 2016 02:32:55 +0000 (10:32 +0800)]
selftests/powerpc: Add ptrace tests for GPR/FPR registers in suspended TM
This patch adds ptrace interface test for GPR/FPR registers
inside suspended TM context.
Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Anshuman Khandual [Fri, 30 Sep 2016 02:32:54 +0000 (10:32 +0800)]
selftests/powerpc: Add ptrace tests for GPR/FPR registers in TM
This patch adds ptrace interface test for GPR/FPR registers
inside TM context. This adds ptrace interface based helper
functions related to checkpointed GPR/FPR access.
Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Anshuman Khandual [Fri, 30 Sep 2016 02:32:52 +0000 (10:32 +0800)]
selftests/powerpc: Add ptrace tests for GPR/FPR registers
This patch adds ptrace interface test for GPR/FPR registers.
This adds ptrace interface based helper functions related to
GPR/FPR access and some assembly helper functions related to
GPR/FPR registers.
Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
[mpe: Add #defines for the new note types when headers don't define them]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Simon Guo [Fri, 30 Sep 2016 02:32:51 +0000 (10:32 +0800)]
selftests/powerpc: Move shared headers into new include dir
There are some functions, especially register related, which can
be shared across multiple selftests/powerpc test directories.
This patch creates a new include directory to store those shared
files, so that the file layout becomes neater.
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
[mpe: Reworked to move the headers only]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Anshuman Khandual [Fri, 30 Sep 2016 02:32:50 +0000 (10:32 +0800)]
selftests/powerpc: Add more SPR numbers, TM & VMX instructions to 'reg.h'/'instructions.h'
This patch adds SPR numbers for the TAR, PPR and DSCR special
purpose registers. It also adds TM, VSX, VMX related
instructions which will then be used by patches later
in the series.
Now that the new DSCR register definitions (SPRN_DSCR_PRIV and
SPRN_DSCR) are defined outside this directory, use them instead.
Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Simon Guo <wei.guo.simon@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Rashmica Gupta [Fri, 27 May 2016 05:49:00 +0000 (15:49 +1000)]
powerpc/mm: Dump hash table
Useful to be able to dump the kernel hash page table to check
which pages are hashed along with their sizes and other details.
Add a debugfs file to check the hash page table. If radix is enabled
(and so there is no hash page table) then this file doesn't exist. To
use this the PPC_PTDUMP config option must be selected.
Signed-off-by: Rashmica Gupta <rashmicy@gmail.com>
[mpe: Fix build with SPARSEMEM_VMEMMAP=n & PSERIES=n]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Rashmica Gupta [Fri, 27 May 2016 05:48:59 +0000 (15:48 +1000)]
powerpc/mm: Dump linux pagetables
Useful to be able to dump the kernel's page tables to check permissions
and memory types - derived from arm64's implementation.
Add a debugfs file to check the page tables. To use this the PPC_PTDUMP
config option must be selected.
Signed-off-by: Rashmica Gupta <rashmicy@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Mon, 14 Nov 2016 05:28:10 +0000 (16:28 +1100)]
powerpc/pseries: Move CMO code from plapr_wrappers.h to platforms/pseries
Currently there's some CMO (Cooperative Memory Overcommit) code in
plpar_wrappers.h. Some of it is #ifdef CONFIG_PSERIES and some of it
isn't. The end result being if a file includes plpar_wrappers.h it won't
build with CONFIG_PSERIES=n.
Fix it by moving the CMO code into platforms/pseries. The two hcall
wrappers can just be moved into their only caller, cmm.c, and the
accessors can go in pseries.h.
Note we need the accessors because cmm.c can be built as a module, so
there needs to be a split between the built-in code vs the module, and
that's achieved by using those accessors.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>