Nicholas Piggin [Tue, 28 Aug 2018 08:11:27 +0000 (18:11 +1000)]
powerpc/64s/radix: Fix radix__flush_tlb_collapsed_pmd double flushing pmd
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Tue, 24 Jul 2018 05:53:22 +0000 (15:53 +1000)]
selftests/powerpc: Add a test of wild bctr
This tests that a bctr (Branch to Count Register), ie. an indirect
function call, to a wildly out-of-bounds address is handled correctly.
Some old kernel versions didn't handle it correctly, see eg:
"powerpc/slb: Force a full SLB flush when we insert for a bad EA"
https://lists.ozlabs.org/pipermail/linuxppc-dev/2017-April/157397.html
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Wed, 15 Aug 2018 11:29:45 +0000 (21:29 +1000)]
powerpc/mm: Fix page table dump to work on Radix
When we're running on Book3S with the Radix MMU enabled the page table
dump currently prints the wrong addresses because it uses the wrong
start address.
Fix it to use PAGE_OFFSET rather than KERN_VIRT_START.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Wed, 17 Oct 2018 12:53:38 +0000 (23:53 +1100)]
powerpc/mm/radix: Display if mappings are exec or not
At boot we print the ranges we've mapped for the linear mapping and
what page size we've used. Also track whether the range is mapped
executable or not and display that as well.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Tue, 14 Aug 2018 12:37:32 +0000 (22:37 +1000)]
powerpc/mm/radix: Simplify split mapping logic
If we look closely at the logic in create_physical_mapping(), when
we're doing STRICT_KERNEL_RWX, we do the following steps:
- determine the gap from where we are to the end of the range
- choose an appropriate mapping_size based on the gap
- check if that mapping_size would overlap the __init_begin
boundary, and if not choose an appropriate mapping_size
We can simplify the logic by taking the __init_begin boundary into
account when we calculate the initial gap.
So add a next_boundary() function which tells us what the next
boundary is, either the __init_begin boundary or end. In future we can
add more boundaries.
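A minimal sketch of the idea (illustrative, not the verbatim kernel code):
static unsigned long next_boundary(unsigned long addr, unsigned long end)
{
        unsigned long boundary = __pa_symbol(__init_begin);

        /* The only boundary today is __init_begin; more can be added later. */
        if (addr < boundary)
                return boundary;

        return end;
}
The gap then becomes next_boundary(addr, end) - addr, so the split at
__init_begin falls out of the normal mapping size selection.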
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Tue, 14 Aug 2018 12:01:44 +0000 (22:01 +1000)]
powerpc/mm/radix: Remove the retry in the split mapping logic
When we have CONFIG_STRICT_KERNEL_RWX enabled, we want to split the
linear mapping at the text/data boundary so we can map the kernel
text read only.
The current logic uses a goto inside the for loop, which works, but is
hard to reason about.
When we hit the goto retry case we set max_mapping_size to PMD_SIZE
and go back to the start.
Setting max_mapping_size means we skip the PUD case and go to the PMD
case.
We know we will pass the alignment and gap checks because the only
reason we are there is we hit the goto retry, and that is guarded by
mapping_size == PUD_SIZE, which means addr is PUD aligned and gap is
greater than or equal to PUD_SIZE.
So the only part of the check that can fail is the mmu_psize_defs
check for the 2M page size.
If we just duplicate that check we can avoid the goto, and we get the
same result.
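With that check duplicated, the size selection becomes a straight-line
if/else chain, roughly (a sketch of the shape, not the exact code):
if (IS_ALIGNED(addr, PUD_SIZE) && gap >= PUD_SIZE &&
    mmu_psize_defs[MMU_PAGE_1G].shift)
        mapping_size = PUD_SIZE;        /* 1G mapping */
else if (IS_ALIGNED(addr, PMD_SIZE) && gap >= PMD_SIZE &&
         mmu_psize_defs[MMU_PAGE_2M].shift)
        mapping_size = PMD_SIZE;        /* 2M mapping */
else
        mapping_size = PAGE_SIZE;       /* fall back to base pages */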
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Tue, 14 Aug 2018 11:05:20 +0000 (21:05 +1000)]
powerpc/mm/radix: Fix small page at boundary when splitting
When we have CONFIG_STRICT_KERNEL_RWX enabled, we want to split the
linear mapping at the text/data boundary so we can map the kernel
text read only.
Currently we always use a small page at the text/data boundary, even
when that's not necessary:
Mapped 0x0000000000000000-0x0000000000e00000 with 2.00 MiB pages
Mapped 0x0000000000e00000-0x0000000001000000 with 64.0 KiB pages
Mapped 0x0000000001000000-0x0000000040000000 with 2.00 MiB pages
This is because the check that the mapping crosses the __init_begin
boundary is too strict; it also returns true when we map exactly up to
the boundary.
So fix it to check that the mapping would actually map past
__init_begin, and with that we see:
Mapped 0x0000000000000000-0x0000000040000000 with 2.00 MiB pages
Mapped 0x0000000040000000-0x0000000100000000 with 1.00 GiB pages
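The change is essentially one comparison operator; sketched here (the
exact expression in the kernel may differ slightly):
/* Before: also true when the mapping ends exactly at the boundary */
if (addr < __pa_symbol(__init_begin) &&
    addr + mapping_size >= __pa_symbol(__init_begin))
        max_mapping_size = PMD_SIZE;

/* After: only true when the mapping would map past __init_begin */
if (addr < __pa_symbol(__init_begin) &&
    addr + mapping_size > __pa_symbol(__init_begin))
        max_mapping_size = PMD_SIZE;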
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Tue, 14 Aug 2018 10:48:22 +0000 (20:48 +1000)]
powerpc/mm/radix: Fix overuse of small pages in splitting logic
When we have CONFIG_STRICT_KERNEL_RWX enabled, we want to split the
linear mapping at the text/data boundary so we can map the kernel text
read only.
But the current logic uses small pages for the entire text section,
regardless of whether a larger page size would fit. eg. with the
boundary at 16M we could use 2M pages, but instead we use 64K pages up
to the 16M boundary:
Mapped 0x0000000000000000-0x0000000001000000 with 64.0 KiB pages
Mapped 0x0000000001000000-0x0000000040000000 with 2.00 MiB pages
Mapped 0x0000000040000000-0x0000000100000000 with 1.00 GiB pages
This is because the test is checking if addr is < __init_begin
and addr + mapping_size is >= _stext. But that is true for all pages
between _stext and __init_begin.
Instead what we want to check is if we are crossing the text/data
boundary, which is at __init_begin. With that fixed we see:
Mapped 0x0000000000000000-0x0000000000e00000 with 2.00 MiB pages
Mapped 0x0000000000e00000-0x0000000001000000 with 64.0 KiB pages
Mapped 0x0000000001000000-0x0000000040000000 with 2.00 MiB pages
Mapped 0x0000000040000000-0x0000000100000000 with 1.00 GiB pages
ie. we're correctly using 2MB pages below __init_begin, but we still
drop down to 64K pages unnecessarily at the boundary.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Fri, 10 Aug 2018 12:29:26 +0000 (22:29 +1000)]
powerpc/mm/radix: Fix off-by-one in split mapping logic
When we have CONFIG_STRICT_KERNEL_RWX enabled, we try to split the
kernel linear (1:1) mapping so that the kernel text is in a separate
page to kernel data, so we can mark the former read-only.
We could achieve that just by always using 64K pages for the linear
mapping, but we try to be smarter. Instead we use huge pages when
possible, and only switch to smaller pages when necessary.
However we have an off-by-one bug in that logic, which causes us to
calculate the wrong boundary between text and data.
For example with the end of the kernel text at 16M we see:
radix-mmu: Mapped 0x0000000000000000-0x0000000001200000 with 64.0 KiB pages
radix-mmu: Mapped 0x0000000001200000-0x0000000040000000 with 2.00 MiB pages
radix-mmu: Mapped 0x0000000040000000-0x0000000100000000 with 1.00 GiB pages
ie. we mapped from 0 to 18M with 64K pages, even though the boundary
between text and data is at 16M.
With the fix we see we're correctly hitting the 16M boundary:
radix-mmu: Mapped 0x0000000000000000-0x0000000001000000 with 64.0 KiB pages
radix-mmu: Mapped 0x0000000001000000-0x0000000040000000 with 2.00 MiB pages
radix-mmu: Mapped 0x0000000040000000-0x0000000100000000 with 1.00 GiB pages
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Naveen N. Rao [Tue, 16 Oct 2018 20:25:00 +0000 (01:55 +0530)]
powerpc/ftrace: Handle large kernel configs
Currently, we expect to be able to reach ftrace_caller() from all
ftrace-enabled functions through a single relative branch. With large
kernel configs, we see functions outside of 32MB of ftrace_caller()
causing ftrace_init() to bail.
In such configurations, gcc/ld emits two types of trampolines for mcount():
1. A long_branch, which has a single branch to mcount() for functions that
are one hop away from mcount():
c0000000019e8544 <00031b56.long_branch._mcount>:
c0000000019e8544: 4a 69 3f ac b c00000000007c4f0 <._mcount>
2. A plt_branch, for functions that are farther away from mcount():
c0000000051f33f8 <0008ba04.plt_branch._mcount>:
c0000000051f33f8: 3d 82 ff a4 addis r12,r2,-92
c0000000051f33fc: e9 8c 04 20 ld r12,1056(r12)
c0000000051f3400: 7d 89 03 a6 mtctr r12
c0000000051f3404: 4e 80 04 20 bctr
We can reuse those trampolines for ftrace if we can have those
trampolines go to ftrace_caller() instead. However, with ABIv2, we
cannot depend on r2 being valid. As such, we use only the long_branch
trampolines by patching those to instead branch to ftrace_caller or
ftrace_regs_caller.
In addition, we add trampolines around .text and .init.text
to catch locations that are covered by the plt branches. This allows
ftrace to work with most large kernel configurations.
For now, we always patch the trampolines to go to ftrace_regs_caller,
which is slightly inefficient. This can be optimized further at a later
point.
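A sketch of the patching step, using the powerpc code-patching helpers
patch_branch() and ppc_global_function_entry(); the real code also
verifies first that the trampoline is a known long_branch:
static int ftrace_setup_tramp(unsigned long tramp)
{
        unsigned long target;

        target = ppc_global_function_entry((void *)ftrace_regs_caller);

        /* rewrite the trampoline's single branch instruction in place */
        return patch_branch((unsigned int *)tramp, target, 0);
}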
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Sat, 13 Oct 2018 16:48:15 +0000 (22:18 +0530)]
powerpc/mm: Fix WARN_ON with THP NUMA migration
WARNING: CPU: 12 PID: 4322 at /arch/powerpc/mm/pgtable-book3s64.c:76 set_pmd_at+0x4c/0x2b0
Modules linked in:
CPU: 12 PID: 4322 Comm: qemu-system-ppc Tainted: G W 4.19.0-rc3-00758-g8f0c636b0542 #36
NIP: c0000000000872fc LR: c000000000484eec CTR: 0000000000000000
REGS: c000003fba876fe0 TRAP: 0700 Tainted: G W (4.19.0-rc3-00758-g8f0c636b0542)
MSR: 900000010282b033 <SF,HV,VEC,VSX,EE,FP,ME,IR,DR,RI,LE,TM[E]> CR: 24282884 XER: 00000000
CFAR: c000000000484ee8 IRQMASK: 0
GPR00: c000000000484eec c000003fba877268 c000000001f0ec00 c000003fbd229f80
GPR04: 00007c8fe8e00000 c000003f864c5a38 860300853e0000c0 0000000000000080
GPR08: 0000000080000000 0000000000000001 0401000000000080 0000000000000001
GPR12: 0000000000002000 c000003fffff5400 c000003fce292000 00007c9024570000
GPR16: 0000000000000000 0000000000ffffff 0000000000000001 c000000001885950
GPR20: 0000000000000000 001ffffc0004807c 0000000000000008 c000000001f49d05
GPR24: 00007c8fe8e00000 c0000000020f2468 ffffffffffffffff c000003fcd33b090
GPR28: 00007c8fe8e00000 c000003fbd229f80 c000003f864c5a38 860300853e0000c0
NIP [c0000000000872fc] set_pmd_at+0x4c/0x2b0
LR [c000000000484eec] do_huge_pmd_numa_page+0xb1c/0xc20
Call Trace:
[c000003fba877268] [c00000000045931c] mpol_misplaced+0x1bc/0x230 (unreliable)
[c000003fba8772c8] [c000000000484eec] do_huge_pmd_numa_page+0xb1c/0xc20
[c000003fba877398] [c00000000040d344] __handle_mm_fault+0x5e4/0x2300
[c000003fba8774d8] [c00000000040f400] handle_mm_fault+0x3a0/0x420
[c000003fba877528] [c0000000003ff6f4] __get_user_pages+0x2e4/0x560
[c000003fba877628] [c000000000400314] get_user_pages_unlocked+0x104/0x2a0
[c000003fba8776c8] [c000000000118f44] __gfn_to_pfn_memslot+0x284/0x6a0
[c000003fba877748] [c0000000001463a0] kvmppc_book3s_radix_page_fault+0x360/0x12d0
[c000003fba877838] [c000000000142228] kvmppc_book3s_hv_page_fault+0x48/0x1300
[c000003fba877988] [c00000000013dc08] kvmppc_vcpu_run_hv+0x1808/0x1b50
[c000003fba877af8] [c000000000126b44] kvmppc_vcpu_run+0x34/0x50
[c000003fba877b18] [c000000000123268] kvm_arch_vcpu_ioctl_run+0x288/0x2d0
[c000003fba877b98] [c00000000011253c] kvm_vcpu_ioctl+0x1fc/0x8c0
[c000003fba877d08] [c0000000004e9b24] do_vfs_ioctl+0xa44/0xae0
[c000003fba877db8] [c0000000004e9c44] ksys_ioctl+0x84/0xf0
[c000003fba877e08] [c0000000004e9cd8] sys_ioctl+0x28/0x80
We removed the pte_protnone check earlier with the understanding that we
mark the pte invalid before the set_pte/set_pmd usage. But the huge pmd
autonuma path still uses set_pmd_at directly. This is OK because a
protnone pte won't have a translation cached in the TLB.
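The debug check in set_pmd_at() then only needs to tolerate protnone
entries, roughly (a sketch of the predicate, not necessarily the exact
one merged):
/* a protnone pmd is safe to overwrite: it has no TLB entry to go stale */
WARN_ON(pte_hw_valid(pmd_pte(*pmdp)) && !pte_protnone(pmd_pte(*pmdp)));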
Fixes: da7ad366b497 ("powerpc/mm/book3s: Update pmd_present to look at _PAGE_PRESENT bit")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Thu, 18 Oct 2018 13:11:33 +0000 (00:11 +1100)]
selftests/powerpc: Fix out-of-tree build errors
Some of our Makefiles don't do the right thing when building the
selftests with O=, fix them up.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Thu, 2 Aug 2018 07:54:01 +0000 (07:54 +0000)]
powerpc/time: no steal_time when CONFIG_PPC_SPLPAR is not selected
If CONFIG_PPC_SPLPAR is not selected, steal_time will always
be zero, so accounting it is pointless.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Thu, 2 Aug 2018 07:53:59 +0000 (07:53 +0000)]
powerpc/time: Only set CONFIG_ARCH_HAS_SCALED_CPUTIME on PPC64
Scaled cputime is only meaningful when the processor has
SPURR and/or PURR, which means only on PPC64.
Removing it on PPC32 significantly reduces the size of
vtime_account_system() and vtime_account_idle() on an 8xx:
Before:
00000000 l F .text 000000a8 vtime_delta
00000280 g F .text 0000010c vtime_account_system
0000038c g F .text 00000048 vtime_account_idle
After:
(vtime_delta gets inlined inside the two functions)
000001d8 g F .text 000000a0 vtime_account_system
00000278 g F .text 00000038 vtime_account_idle
In terms of performance, we also get approximately 7% improvement on
task switch. The following small benchmark app is run with perf stat:
#include <pthread.h>
#include <stdlib.h>

void *thread(void *arg)
{
        int i;

        for (i = 0; i < atoi((char *)arg); i++)
                pthread_yield();

        return NULL;
}

int main(int argc, char **argv)
{
        pthread_t th1, th2;

        pthread_create(&th1, NULL, thread, argv[1]);
        pthread_create(&th2, NULL, thread, argv[1]);
        pthread_join(th1, NULL);
        pthread_join(th2, NULL);
        return 0;
}
Before the patch:
Performance counter stats for 'chrt -f 98 ./sched 100000' (50 runs):
8228.476465 task-clock (msec) # 0.954 CPUs utilized ( +- 0.23% )
200004 context-switches # 0.024 M/sec ( +- 0.00% )
After the patch:
Performance counter stats for 'chrt -f 98 ./sched 100000' (50 runs):
7649.070444 task-clock (msec) # 0.955 CPUs utilized ( +- 0.27% )
200004 context-switches # 0.026 M/sec ( +- 0.00% )
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Thu, 2 Aug 2018 07:53:57 +0000 (07:53 +0000)]
powerpc/time: isolate scaled cputime accounting in dedicated functions.
Scaled cputime is only meaningful when the processor has
SPURR and/or PURR, which means only on PPC64.
In preparation for the following patch that will remove
CONFIG_ARCH_HAS_SCALED_CPUTIME on PPC32, this patch moves
all scaled cputime accounting logic into dedicated functions.
This patch doesn't change any functionality. It's only code
reorganisation.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 18 Sep 2018 09:26:03 +0000 (09:26 +0000)]
powerpc/kgdb: add kgdb_arch_set/remove_breakpoint()
Generic implementation fails to remove breakpoints after init
when CONFIG_STRICT_KERNEL_RWX is selected:
[ 13.251285] KGDB: BP remove failed: c001c338
[ 13.259587] kgdbts: ERROR PUT: end of test buffer on 'do_fork_test' line 8 expected OK got $E14#aa
[ 13.268969] KGDB: re-enter exception: ALL breakpoints killed
[ 13.275099] CPU: 0 PID: 1 Comm: init Not tainted 4.18.0-g82bbb913ffd8 #860
[ 13.282836] Call Trace:
[ 13.285313] [c60e1ba0] [c0080ef0] kgdb_handle_exception+0x6f4/0x720 (unreliable)
[ 13.292618] [c60e1c30] [c000e97c] kgdb_handle_breakpoint+0x3c/0x98
[ 13.298709] [c60e1c40] [c000af54] program_check_exception+0x104/0x700
[ 13.305083] [c60e1c60] [c000e45c] ret_from_except_full+0x0/0x4
[ 13.310845] [c60e1d20] [c02a22ac] run_simple_test+0x2b4/0x2d4
[ 13.316532] [c60e1d30] [c0081698] put_packet+0xb8/0x158
[ 13.321694] [c60e1d60] [c00820b4] gdb_serial_stub+0x230/0xc4c
[ 13.327374] [c60e1dc0] [c0080af8] kgdb_handle_exception+0x2fc/0x720
[ 13.333573] [c60e1e50] [c000e928] kgdb_singlestep+0xb4/0xcc
[ 13.339068] [c60e1e70] [c000ae1c] single_step_exception+0x90/0xac
[ 13.345100] [c60e1e80] [c000e45c] ret_from_except_full+0x0/0x4
[ 13.350865] [c60e1f40] [c000e11c] ret_from_syscall+0x0/0x38
[ 13.356346] Kernel panic - not syncing: Recursive entry to debugger
This patch creates powerpc-specific versions of
kgdb_arch_set_breakpoint() and kgdb_arch_remove_breakpoint()
using patch_instruction().
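A hedged sketch of the two hooks (the struct kgdb_bkpt fields are the
generic ones; BREAK_INSTR stands for powerpc's breakpoint opcode):
int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt)
{
        unsigned int *addr = (unsigned int *)bpt->bpt_addr;

        /* save the original instruction so it can be restored later */
        memcpy(bpt->saved_instr, addr, BREAK_INSTR_SIZE);

        /* patch_instruction() copes with read-only (STRICT_KERNEL_RWX) text */
        return patch_instruction(addr, BREAK_INSTR);
}

int kgdb_arch_remove_breakpoint(struct kgdb_bkpt *bpt)
{
        unsigned int instr = *(unsigned int *)bpt->saved_instr;

        return patch_instruction((unsigned int *)bpt->bpt_addr, instr);
}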
Fixes: 1e0fc9d1eb2b ("powerpc/Kconfig: Enable STRICT_KERNEL_RWX for some configs")
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Mon, 27 Aug 2018 08:27:27 +0000 (08:27 +0000)]
powerpc/sysdev/ipic: check primary_ipic NULL pointer before using it
ipic_get_mcp_status() is used by targets implementing an NMI
watchdog in their target-specific machine check handler, in order
to know whether a machine check results from a watchdog
NMI reset.
In case of a very early machine check, the primary_ipic pointer
might not have been set yet, so ipic_get_mcp_status() needs
to check it for NULL before using it.
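Sketched, the guard is simply (the status register read is illustrative;
the register name is assumed, not taken from the patch):
u32 ipic_get_mcp_status(void)
{
        /* very early machine check: the IPIC may not be initialised yet */
        if (!primary_ipic)
                return 0;

        return ipic_read(primary_ipic->regs, IPIC_SERMR); /* register name assumed */
}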
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Thu, 2 Aug 2018 09:25:55 +0000 (09:25 +0000)]
powerpc/mm: fix always true/false warning in slice.c
This patch fixes the following warnings (obtained with make W=1).
arch/powerpc/mm/slice.c: In function 'slice_range_to_mask':
arch/powerpc/mm/slice.c:73:12: error: comparison is always true due to limited range of data type [-Werror=type-limits]
if (start < SLICE_LOW_TOP) {
^
arch/powerpc/mm/slice.c:81:20: error: comparison is always false due to limited range of data type [-Werror=type-limits]
if ((start + len) > SLICE_LOW_TOP) {
^
arch/powerpc/mm/slice.c: In function 'slice_mask_for_free':
arch/powerpc/mm/slice.c:136:17: error: comparison is always true due to limited range of data type [-Werror=type-limits]
if (high_limit <= SLICE_LOW_TOP)
^
arch/powerpc/mm/slice.c: In function 'slice_check_range_fits':
arch/powerpc/mm/slice.c:185:12: error: comparison is always true due to limited range of data type [-Werror=type-limits]
if (start < SLICE_LOW_TOP) {
^
arch/powerpc/mm/slice.c:195:39: error: comparison is always false due to limited range of data type [-Werror=type-limits]
if (SLICE_NUM_HIGH && ((start + len) > SLICE_LOW_TOP)) {
^
arch/powerpc/mm/slice.c: In function 'slice_scan_available':
arch/powerpc/mm/slice.c:306:11: error: comparison is always true due to limited range of data type [-Werror=type-limits]
if (addr < SLICE_LOW_TOP) {
^
arch/powerpc/mm/slice.c: In function 'get_slice_psize':
arch/powerpc/mm/slice.c:709:11: error: comparison is always true due to limited range of data type [-Werror=type-limits]
if (addr < SLICE_LOW_TOP) {
^
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Fri, 22 Jun 2018 13:49:48 +0000 (13:49 +0000)]
powerpc/mm: fix missing prototypes in slice.c
This patch fixes the following warnings (obtained with make W=1).
arch/powerpc/mm/slice.c: At top level:
arch/powerpc/mm/slice.c:682:15: error: no previous prototype for 'arch_get_unmapped_area' [-Werror=missing-prototypes]
unsigned long arch_get_unmapped_area(struct file *filp,
^
arch/powerpc/mm/slice.c:692:15: error: no previous prototype for 'arch_get_unmapped_area_topdown' [-Werror=missing-prototypes]
unsigned long arch_get_unmapped_area_topdown(struct file *filp,
^
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Wed, 21 Mar 2018 14:17:00 +0000 (15:17 +0100)]
powerpc/mm: Trace tlbia instruction
Add a trace point for tlbia (Translation Lookaside Buffer Invalidate
All) instruction.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Wed, 21 Mar 2018 14:16:58 +0000 (15:16 +0100)]
powerpc/mm: Add missing tracepoint for tlbie
Commit 0428491cba927 ("powerpc/mm: Trace tlbie(l) instructions")
added tracepoints for tlbie calls, but _tlbil_va() was forgotten.
Fixes: 0428491cba927 ("powerpc/mm: Trace tlbie(l) instructions")
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Mon, 15 Oct 2018 06:37:41 +0000 (06:37 +0000)]
powerpc/book3s64: fix dump_linuxpagetables "present" flag
Since commit bd0dbb73e013 ("powerpc/mm/books3s: Add new pte bit to
mark pte temporarily invalid."), _PAGE_PRESENT doesn't mean exactly
that a page is present. A page is also considered present when
_PAGE_INVALID is set.
This patch changes the meaning of "present" and adds a status "valid"
associated to the _PAGE_PRESENT flag.
Fixes: bd0dbb73e013 ("powerpc/mm/books3s: Add new pte bit to mark pte temporarily invalid.")
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aravinda Prasad [Tue, 16 Oct 2018 11:50:05 +0000 (17:20 +0530)]
powerpc/pseries: Export raw per-CPU VPA data via debugfs
This patch exports the raw per-CPU VPA data via debugfs.
A per-CPU file is created which exports the VPA data of
that CPU to help debug some of the VPA related issues or
to analyze the per-CPU VPA related statistics.
v3: Removed offline CPU check.
v2: Included offline CPU check and other review comments.
Signed-off-by: Aravinda Prasad <aravinda@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Naveen N. Rao [Mon, 21 May 2018 15:13:57 +0000 (20:43 +0530)]
selftests/powerpc: Add test to verify rfi flush across a system call
This adds a test to verify proper functioning of the rfi flush
capability implemented to mitigate Meltdown. The test works by
measuring the number of L1d cache misses encountered while loading
data from memory. Across a system call, since the L1d cache is flushed
when rfi_flush is enabled, the number of cache misses is expected to
be relative to the number of cachelines corresponding to the data
being loaded.
The current system setting is reflected via powerpc/rfi_flush under
debugfs (assumed to be /sys/kernel/debug/). This test verifies the
expected result with rfi_flush enabled as well as when it is disabled.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
[mpe: Add SPDX tags, clang format, skip if the debugfs is missing, use
__u64 and SANE_USERSPACE_TYPES to avoid printf() build errors.]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Naveen N. Rao [Mon, 21 May 2018 15:13:56 +0000 (20:43 +0530)]
selftests/powerpc: Move UCONTEXT_NIA() into utils.h
... so that it can be used by others.
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Naveen N. Rao [Tue, 29 May 2018 06:51:00 +0000 (12:21 +0530)]
powerpc64/module elfv1: Set opd addresses after module relocation
module_frob_arch_sections() is called before the module is moved to its
final location. The function descriptor section addresses we are setting
here are thus invalid. Fix this by processing the opd section during
module_finalize().
Fixes: 5633e85b2c313 ("powerpc64: Add .opd based function descriptor dereference")
Cc: stable@vger.kernel.org # v4.16
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Naveen N. Rao [Thu, 7 Jun 2018 09:52:02 +0000 (15:22 +0530)]
powerpc: Add support for function error injection
We implement regs_set_return_value() and override_function_with_return()
for this purpose.
On powerpc, a return from a function (blr) just branches to the location
contained in the link register. So, we can just update pt_regs rather
than redirecting execution to a dummy function that returns.
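On powerpc these end up tiny; a sketch consistent with the description
above (r3 carries the return value in the ABI, nip/link live in pt_regs):
static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
{
        regs->gpr[3] = rc;              /* r3 is the return-value register */
}

static inline void override_function_with_return(struct pt_regs *regs)
{
        regs->nip = regs->link;         /* "return" by resuming at the LR */
}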
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Reviewed-by: Samuel Mendoza-Jonas <sam@mendozajonas.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Wed, 17 Oct 2018 12:39:41 +0000 (23:39 +1100)]
powerpc/time: Fix clockevent_decrementer initalisation for PR KVM
In the recent commit 8b78fdb045de ("powerpc/time: Use
clockevents_register_device(), fixing an issue with large decrementer")
we changed the way we initialise the decrementer clockevent(s).
We no longer initialise the mult & shift values of
decrementer_clockevent itself.
This has the effect of breaking PR KVM, because it uses those values
in kvmppc_emulate_dec(). The symptom is guest kernels spin forever
mid-way through boot.
For now, fix it by assigning the mult and shift values back to
decrementer_clockevent.
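Sketched, assuming the per-cpu clockevent (here 'dec') has had its
mult/shift computed by the clockevents core:
/* copy the computed values back so kvmppc_emulate_dec() keeps working */
decrementer_clockevent.mult = dec->mult;
decrementer_clockevent.shift = dec->shift;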
Fixes: 8b78fdb045de ("powerpc/time: Use clockevents_register_device(), fixing an issue with large decrementer")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Mon, 15 Oct 2018 12:01:43 +0000 (23:01 +1100)]
powerpc/aout: Fix struct user definition to use user_pt_regs
I'm pretty sure this is dead code, it's only used by the a.out core
dump code, and we don't support a.out. We should remove it.
But while it's in the tree it should be using the ABI version of
pt_regs which is called user_pt_regs in the kernel, because the whole
struct is written to the core dump and so its size shouldn't change.
Note this isn't a uapi header so we don't need an ifdef.
Fixes: 002af9391bfb ("powerpc: Split user/kernel definitions of struct pt_regs")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Mon, 15 Oct 2018 12:01:42 +0000 (23:01 +1100)]
powerpc/uapi: Fix sigcontext definition to use user_pt_regs
My recent patch to split pt_regs between user and kernel missed
the usage in struct sigcontext.
Because this is a user visible struct it should be using the user
visible definition, which when we're building for the kernel is called
struct user_pt_regs.
As far as I can see this hasn't actually caused a bug (yet), because
we don't use the sizeof() of sigcontext->regs anywhere. But we should
still fix it to avoid confusion and future bugs.
Fixes: 002af9391bfb ("powerpc: Split user/kernel definitions of struct pt_regs")
Reported-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 16 Oct 2018 12:33:40 +0000 (12:33 +0000)]
powerpc/io: remove old GCC version implementation
GCC 4.6 is the minimum supported now.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Wed, 10 Oct 2018 05:13:06 +0000 (16:13 +1100)]
powerpc: Add -Werror at arch/powerpc level
Back when I added -Werror in commit ba55bd74360e ("powerpc: Add
configurable -Werror for arch/powerpc") I did it by adding it to most
of the arch Makefiles.
At the time we excluded math-emu, because apparently it didn't build
cleanly. But that seems to have been fixed somewhere in the interim.
So move the -Werror addition to the top-level of the arch, this saves
us from repeating it in every Makefile and means we won't forget to
add it to any new sub-dirs.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Wed, 10 Oct 2018 05:13:05 +0000 (16:13 +1100)]
powerpc: Move core kernel logic into arch/powerpc/Kbuild
This is a nice cleanup: arch/powerpc/Makefile is long and messy, so
moving this out helps a little.
It also allows us to do:
$ make arch/powerpc
Which can be helpful if you just want to compile test some changes to
arch code and not link everything.
Finally it also gives us a single place to do subdir-cc-flags
assignments which affect the whole of arch/powerpc, which we will do
in a future patch.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Benjamin Herrenschmidt [Mon, 15 Oct 2018 00:18:49 +0000 (11:18 +1100)]
macintosh/windfarm_smu_sat: Fix debug output
There's some antiquated debug output that's trying
to do a hand-made hexdump and turns into horrible
1-byte-per-line output these days.
Use print_hex_dump() instead.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Mon, 15 Oct 2018 07:38:10 +0000 (07:38 +0000)]
powerpc/traps: remove redundant in_interrupt panic in die()
do_exit() already includes a test that panics if in_interrupt().
This patch removes the powerpc one, which is redundant.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Benjamin Herrenschmidt [Mon, 15 Oct 2018 02:50:00 +0000 (13:50 +1100)]
powerpc/prom_init: Generate "phandle" instead of "linux,phandle"
When creating the boot-time FDT from an actual Open Firmware live
tree, let's generate "phandle" properties for the phandles instead
of the old deprecated "linux,phandle".
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[mpe: Unsplit warning printf()]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Benjamin Herrenschmidt [Mon, 15 Oct 2018 02:49:59 +0000 (13:49 +1100)]
powerpc: Check prom_init for disallowed sections
prom_init.c must not modify the kernel image outside
of the .bss.prominit section. Thus make sure that
prom_init.o doesn't have anything in any of these:
.data
.bss
.init.data
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Benjamin Herrenschmidt [Mon, 15 Oct 2018 02:49:58 +0000 (13:49 +1100)]
powerpc/prom_init: Move __prombss to its own section and store it in .bss
This makes __prombss its own section, and for now stores
it in .bss.
This will give us the ability later to store it elsewhere
and/or free it after boot (it's about 8KB).
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Benjamin Herrenschmidt [Mon, 15 Oct 2018 02:49:57 +0000 (13:49 +1100)]
powerpc/prom_init: Move a few remaining statics to appropriate sections
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Benjamin Herrenschmidt [Mon, 15 Oct 2018 02:49:56 +0000 (13:49 +1100)]
powerpc/prom_init: Move const structures to __initconst
As they are no longer used past the end of prom_init
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Benjamin Herrenschmidt [Mon, 15 Oct 2018 02:49:55 +0000 (13:49 +1100)]
powerpc/prom_init: Move ibm_arch_vec to __prombss
Make the existing initialized definition constant and copy
it to a __prombss copy
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Benjamin Herrenschmidt [Mon, 15 Oct 2018 02:49:54 +0000 (13:49 +1100)]
powerpc/prom_init: Move prom_radix_disable to __prombss
Initialize it dynamically instead of statically
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Benjamin Herrenschmidt [Mon, 15 Oct 2018 02:49:53 +0000 (13:49 +1100)]
powerpc/prom_init: Remove support for OPAL v2
We removed support for running under any OPAL version
earlier than v3 in 2015 (they never saw the light of day
anyway), but we kept some leftovers of this support in
prom_init.c, so let's take it out.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Benjamin Herrenschmidt [Mon, 15 Oct 2018 02:49:52 +0000 (13:49 +1100)]
powerpc/prom_init: Replace __initdata with __prombss when applicable
This replaces all occurrences of __initdata for uninitialized
data with a new __prombss.
Currently __prombss is defined to be __initdata but we'll
eventually change that.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Oliver O'Halloran [Sun, 14 Oct 2018 23:18:28 +0000 (10:18 +1100)]
powerpc/pseries: Add driver for PAPR SCM regions
Adds a driver that implements support for enabling and accessing PAPR
SCM regions. Unfortunately due to how the PAPR interface works we can't
use the existing of_pmem driver (yet) because:
a) The guest is required to use the H_SCM_BIND_MEM h-call to
add the SCM region to its physical address space, and
b) There is currently no mechanism for relating a bare of_pmem region
to the backing DIMM (or not-a-DIMM for our case).
Both of these are easily handled by rolling the functionality into a
separate driver so here we are...
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Oliver O'Halloran [Sun, 14 Oct 2018 23:18:27 +0000 (10:18 +1100)]
powerpc/pseries: PAPR persistent memory support
This patch implements support for discovering storage class memory
devices at boot and for handling hotplug of new regions via RTAS
hotplug events.
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
[mpe: Fix CONFIG_MEMORY_HOTPLUG=n build]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Mon, 15 Oct 2018 07:20:45 +0000 (07:20 +0000)]
powerpc/traps: fix machine check handlers to use pr_cont()
When printing the machine check cause, the cause appears on the
following line due to bad use of printk without \n:
[ 33.663993] Machine check in kernel mode.
[ 33.664011] Caused by (from SRR1=9032):
[ 33.664036] Data access error at address c90c8000
This patch fixes it by using pr_cont() for the second part:
[ 133.258131] Machine check in kernel mode.
[ 133.258146] Caused by (from SRR1=9032): Data access error at address c90c8000
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Wed, 17 Oct 2018 13:03:22 +0000 (13:03 +0000)]
powerpc/book3e: redefine pte_mkprivileged() and pte_mkuser()
Book3e defines both _PAGE_USER and _PAGE_PRIVILEGED, so the nohash
default pte_mkprivileged() and pte_mkuser() are not usable.
This patch redefines them for book3e.
In theory, only pte_mkprivileged() needs to be redefined because
_PAGE_USER includes _PAGE_PRIVILEGED, but it is less confusing
to redefine both.
Fixes: a0da4bc166f2 ("powerpc/mm: Allow platforms to redefine some helpers")
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Wed, 17 Oct 2018 09:39:21 +0000 (15:09 +0530)]
powerpc/mm: Make pte_pgprot return all pte bits
Other archs do the same: instead of adding back the required pte bits
(which got masked out) in __ioremap_at(), make sure we filter out only
the pfn bits.
Fixes: 26973fa5ac0e ("powerpc/mm: use pte helpers in generic code")
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Thu, 20 Sep 2018 08:33:58 +0000 (14:03 +0530)]
powerpc/mm: Increase the max addressable memory to 2PB
Currently we limit the max addressable memory to 128TB. This patch
increases the limit to 2PB. We can have devices like nvdimm which add
memory above the 512TB limit.
We still don't support regular system RAM above 512TB. One of the
challenges with that is the percpu allocator, which allocates per-node
memory and uses the max distance between nodes as the percpu offset.
This means that with a large gap in the address space (system RAM above
1PB) we will run out of vmalloc space to map the percpu allocation.
In order to support addressable memory above 512TB, the kernel should be
able to linearly map this range. To do that with hash translation we now
add 4 contexts to the kernel linear map region. Our per-context
addressable range is 512TB. We still keep the VMALLOC and VMEMMAP
regions at their old size. The SLB miss handlers are updated to validate
these limits.
We also limit this update to SPARSEMEM_VMEMMAP and SPARSEMEM_EXTREME.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Thu, 20 Sep 2018 08:33:57 +0000 (14:03 +0530)]
powerpc/mm/hash: Rename get_ea_context to get_user_context
We will be adding get_kernel_context() later. Update the function name
to indicate that this handles context allocation for user space
addresses.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Tue, 2 Oct 2018 14:27:59 +0000 (00:27 +1000)]
powerpc/64s/hash: Add some SLB debugging tests
This adds CONFIG_DEBUG_VM checks to ensure:
- The kernel stack is in the SLB after it's flushed and bolted.
- We don't insert an SLB entry for an address that is already in the SLB.
- The kernel SLB miss handler does not take an SLB miss.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Tue, 2 Oct 2018 14:27:58 +0000 (00:27 +1000)]
powerpc/64s/hash: Simplify slb_flush_and_rebolt()
slb_flush_and_rebolt() is misleading: it is called in virtual mode, so
it cannot possibly change the stack, and therefore it should not be
touching the shadow area. And since vmalloc is no longer bolted, it
should not change any bolted mappings at all.
Change the name to slb_flush_and_restore_bolted(), and have it just
load the kernel stack from what's currently in the shadow SLB area.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Fri, 14 Sep 2018 15:30:56 +0000 (01:30 +1000)]
powerpc/64s/hash: Add a SLB preload cache
When switching processes, currently all user SLBEs are cleared, and a
few (exec_base, pc, and stack) are preloaded. In trivial testing with
small apps, this tends to miss the heap and low 256MB segments, and it
will also miss commonly accessed segments on large memory workloads.
Add a simple round-robin preload cache that just inserts the last SLB
miss into the head of the cache and preloads those at context switch
time. Every 256 context switches, the oldest entry is removed from the
cache, to shrink it and require fewer slbmte instructions if entries go
unused.
Much more could go into this, including into the SLB entry reclaim
side to track some LRU information etc, which would require a study of
large memory workloads. But this is a simple thing we can do now that
is an obvious win for common workloads.
With the full series, process switching speed on the context_switch
benchmark on POWER9/hash (with kernel speculation security measures
disabled) increases from 140K/s to 178K/s (27%).
POWER8 does not change much (within 1%); it's unclear why it does not
see a big gain like POWER9.
Booting to busybox init with 256MB segments has SLB misses go down
from 945 to 69, and with 1T segments from 900 to 21. These could almost all
be eliminated by preloading a bit more carefully with ELF binary
loading.
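A simplified sketch of such a round-robin cache (field and constant
names hypothetical; the real patch keeps this state in thread_info):
#define SLB_PRELOAD_NR  16      /* cache capacity, illustrative */

struct slb_preload {
        u8  nr;                         /* entries currently cached */
        u8  tail;                       /* index of the oldest entry */
        u32 esid[SLB_PRELOAD_NR];       /* cached segment IDs */
};

/* record the latest SLB miss; overwrite the oldest entry when full */
static void slb_cache_add(struct slb_preload *pc, unsigned long ea)
{
        unsigned int idx = (pc->tail + pc->nr) % SLB_PRELOAD_NR;

        pc->esid[idx] = ea >> SID_SHIFT;
        if (pc->nr < SLB_PRELOAD_NR)
                pc->nr++;
        else
                pc->tail = (pc->tail + 1) % SLB_PRELOAD_NR;
}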
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Fri, 14 Sep 2018 15:30:55 +0000 (01:30 +1000)]
powerpc/64s/hash: Provide arch_setup_exec() hooks for hash slice setup
This will be used by the SLB code in the next patch, but for now this
sets the slb_addr_limit to the correct size for 32-bit tasks.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Fri, 14 Sep 2018 15:30:53 +0000 (01:30 +1000)]
powerpc/64s/hash: Add SLB allocation status bitmaps
Add 32-entry bitmaps to track the allocation status of the first 32
SLB entries, and whether they are user or kernel entries. These are
used to allocate free SLB entries first, before resorting to the round
robin allocator.
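Sketch of the allocation step, assuming two 32-bit bitmaps live in the
paca (names hypothetical):
static long slb_allocate_index(bool kernel)
{
        long index = ffz(local_paca->slb_used_bitmap);  /* first free slot */

        if (index < 32) {
                local_paca->slb_used_bitmap |= 1u << index;
                if (kernel)
                        local_paca->slb_kern_bitmap |= 1u << index;
                return index;
        }

        return slb_round_robin_index();                 /* hypothetical fallback */
}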
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Fri, 14 Sep 2018 15:30:51 +0000 (01:30 +1000)]
powerpc/64s/hash: Convert SLB miss handlers to C
This patch moves SLB miss handlers completely to C, using the standard
exception handler macros to set up the stack and branch to C.
This can be done because the segment containing the kernel stack is
always bolted, so accessing it with relocation on will not cause an
SLB exception.
Arbitrary kernel memory must not be accessed when handling kernel
space SLB misses, so care should be taken there. However user SLB
misses can access any kernel memory, which can be used to move some
fields out of the paca (in later patches).
User SLB misses could quite easily reconcile IRQs and set up a first
class kernel environment and exit via ret_from_except, however that
doesn't seem to be necessary at the moment, so we only do that if a
bad fault is encountered.
[ Credit to Aneesh for bug fixes, error checks, and improvements to
bad address handling, etc ]
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Disallow tracing for all of slb.c for now.]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Fri, 12 Oct 2018 13:15:16 +0000 (00:15 +1100)]
powerpc/64: Interrupts save PPR on stack rather than thread_struct
PPR is the odd register out when it comes to interrupt handling: it is
saved in current->thread.ppr while all others are saved on the stack.
The difficulty with this is that accessing thread.ppr can cause an SLB
fault, but the conversion of the SLB fault handlers to C had assumed
the normal exception entry handlers would not cause an SLB fault.
Fix this by allocating room in the interrupt stack to save PPR.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Fri, 12 Oct 2018 13:39:31 +0000 (00:39 +1100)]
powerpc/ptrace: Don't use sizeof(struct pt_regs) in ptrace code
Now that we've split the user & kernel versions of pt_regs we need to
be more careful in the ptrace code.
For now we've ensured the location of the fields in both structs is
the same, so most of the ptrace code doesn't need updating.
But there are a few places where we use sizeof(pt_regs), and these
will be wrong as soon as we increase the size of the kernel structure.
So flip them all to use sizeof(user_pt_regs).
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Fri, 12 Oct 2018 12:13:17 +0000 (23:13 +1100)]
powerpc: Split user/kernel definitions of struct pt_regs
We use a shared definition for struct pt_regs in uapi/asm/ptrace.h.
That means the layout of the structure is ABI, ie. we can't change it.
That would be fine if it was only used to describe the user-visible
register state of a process, but it's also the struct we use in the
kernel to describe the registers saved in an interrupt frame.
We'd like more flexibility in the content (and possibly layout) of the
kernel version of the struct, but currently that's not possible.
So split the definition into a user-visible definition which remains
unchanged, and a kernel internal one.
At the moment they're still identical, and we check that at build
time. That's because we have code (in ptrace etc.) that assumes that
they are the same. We will fix that code in future patches, and then
we can break the strict symmetry between the two structs.
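One way to keep the two layouts in lockstep while the split settles,
sketched here (the kernel uses an anonymous union along these lines,
plus build-time asserts):
struct pt_regs {
        union {
                struct user_pt_regs user_regs;  /* the ABI-stable view */
                struct {
                        /* kernel-internal names, same layout as user_regs */
                        unsigned long gpr[32];
                        unsigned long nip;
                        unsigned long msr;
                        /* ... remaining registers ... */
                };
        };
        /* kernel-only fields can be appended here later */
};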
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Benjamin Herrenschmidt [Thu, 31 May 2018 04:33:41 +0000 (14:33 +1000)]
powerpc/prom_init: Make "default_colors" const
It's never modified.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Benjamin Herrenschmidt [Thu, 31 May 2018 04:33:40 +0000 (14:33 +1000)]
powerpc/prom_init: Make "fake_elf" const
It is never modified
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Benjamin Herrenschmidt [Thu, 31 May 2018 04:33:39 +0000 (14:33 +1000)]
powerpc/prom_init: Make of_workarounds static
It's not used anywhere else.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 9 Oct 2018 13:52:20 +0000 (13:52 +0000)]
powerpc/book3s64: Avoid multiple endian conversion in pte helpers
In the same spirit as already done in pte query helpers,
this patch changes pte setting helpers to perform endian
conversions on the constants rather than on the pte value.
At the same time, it changes pte_access_permitted() to use
pte helpers for the same reason.
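For example, a setter can work on the raw big-endian word so the
conversion folds into the constant at compile time (a sketch using the
book3s/64 raw accessors):
static inline pte_t pte_mkdirty(pte_t pte)
{
        /* cpu_to_be64(_PAGE_DIRTY) is a compile-time constant; the pte
         * value itself is never byte-swapped at runtime */
        return __pte_raw(pte_raw(pte) | cpu_to_be64(_PAGE_DIRTY));
}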
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 9 Oct 2018 13:52:18 +0000 (13:52 +0000)]
powerpc/8xx: change name of a few page flags to avoid confusion
_PAGE_PRIVILEGED corresponds to the SH bit, which doesn't protect
against user access but only disables ASID verification on kernel
accesses. User access is controlled with the _PMD_USER flag.
Name it _PAGE_SH instead of _PAGE_PRIVILEGED.
_PAGE_HUGE corresponds to the SPS bit, which doesn't really tell
that it is a huge page but only that it is not a 4k page.
Name it _PAGE_SPS instead of _PAGE_HUGE.
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 9 Oct 2018 13:52:16 +0000 (13:52 +0000)]
powerpc/mm: Get rid of pte-common.h
Do not include pte-common.h in nohash/32/pgtable.h
As that was the last includer, get rid of pte-common.h
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 9 Oct 2018 13:52:14 +0000 (13:52 +0000)]
powerpc/mm: Define platform default caches related flags
Cache related flags like _PAGE_COHERENT and _PAGE_WRITETHRU
are defined on most platforms. The platforms not defining
them don't define any alternative, so we can simply define
them as zero for those platforms.
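Concretely, a platform header that lacks the bits just defines them as
zero, e.g.:
/* this platform has no coherence/write-through PTE bits */
#define _PAGE_COHERENT  0
#define _PAGE_WRITETHRU 0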
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 9 Oct 2018 13:52:12 +0000 (13:52 +0000)]
powerpc/mm: Allow platforms to redefine some helpers
The 40xx defines _PAGE_HWWRITE while others don't.
The 8xx defines _PAGE_RO instead of _PAGE_RW.
The 8xx defines _PAGE_PRIVILEGED instead of _PAGE_USER.
The 8xx defines _PAGE_HUGE and _PAGE_NA while others don't.
Let those platforms redefine pte_write(), pte_wrprotect() and
pte_mkwrite(), and get _PAGE_RO and _PAGE_HWWRITE off the common
helpers.
Let the 8xx redefine pte_user(), pte_mkprivileged() and pte_mkuser(),
and get rid of the _PAGE_PRIVILEGED and _PAGE_USER default values.
Let the 8xx redefine pte_mkhuge() and get rid of the _PAGE_HUGE
default value.
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 9 Oct 2018 13:52:10 +0000 (13:52 +0000)]
powerpc/nohash/64: do not include pte-common.h
nohash/64 only uses book3e PTE flags, so it doesn't need pte-common.h.
This also allows us to drop PAGE_SAO and H_PAGE_4K_PFN from
pte-common.h, as they are only used by PPC64.
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 9 Oct 2018 13:52:08 +0000 (13:52 +0000)]
powerpc/mm: Distribute platform specific PAGE and PMD flags and definitions
The base kernel PAGE_XXXX definition sets are more or less platform
specific. Let's distribute them close to the platform _PAGE_XXX flags
definitions, and customise them to their exact platform flags.
Also define _PAGE_PSIZE and _PTE_NONE_MASK for each platform,
although they are defined as 0.
Do the same with _PMD flags like _PMD_USER and _PMD_PRESENT_MASK.
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 9 Oct 2018 13:52:06 +0000 (13:52 +0000)]
powerpc/mm: Move pte_user() into nohash/pgtable.h
Now that pte-common.h is only for nohash platforms, let's
move the pte_user() helper out of pte-common.h to put it
together with the other helpers.
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 9 Oct 2018 13:52:04 +0000 (13:52 +0000)]
powerpc/book3s/32: do not include pte-common.h
As done for book3s/64, add the necessary flags/defines in
book3s/32/pgtable.h and do not include pte-common.h.
This also allows removing all related hash definitions from
pte-common.h, as well as the _PAGE_EXEC default, since
_PAGE_EXEC is defined on all platforms except book3s/32.
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 9 Oct 2018 13:52:02 +0000 (13:52 +0000)]
powerpc/mm: move __P and __S tables in the common pgtable.h
__P and __S flags are the same for all platforms and should remain
as is in the future, so avoid duplication.
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 9 Oct 2018 13:52:00 +0000 (13:52 +0000)]
powerpc/mm: drop unused page flags
The following page flags in pte-common.h can be dropped:
_PAGE_ENDIAN is only used in mm/fsl_booke_mmu.c and is defined in
asm/nohash/32/pte-fsl-booke.h
_PAGE_4K_PFN is nowhere defined nor used
_PAGE_READ, _PAGE_WRITE and _PAGE_PTE are only defined and used
in book3s/64
The following page flags in book3s/64/pgtable.h can be dropped as
they are not used on this platform nor by common code.
_PAGE_NA, _PAGE_RO, _PAGE_USER and _PAGE_PSIZE
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 9 Oct 2018 13:51:58 +0000 (13:51 +0000)]
powerpc/mm: Split dump_linuxpagetables flag_array table
To reduce the complexity of flag_array, and allow the removal of the
default 0 value for non-existing flags, let's have one flag_array
table for each platform family with only the flags that really exist.
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 9 Oct 2018 13:51:56 +0000 (13:51 +0000)]
powerpc/mm: use pte helpers in generic code
Get rid of platform specific _PAGE_XXXX in powerpc common code and
use helpers instead.
mm/dump_linuxpagetables.c will be handled separately
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 9 Oct 2018 13:51:54 +0000 (13:51 +0000)]
powerpc/mm: don't use _PAGE_EXEC for calling hash_preload()
The 'access' parameter of hash_preload() is either 0 or _PAGE_EXEC.
Among the two versions of hash_preload(), only the PPC64 one is
doing something with this 'access' parameter.
In order to remove the use of _PAGE_EXEC outside platform code,
the 'access' parameter is replaced by 'is_exec', which will be either
true or false, and the PPC64 version of hash_preload() creates
the access flag based on 'is_exec'.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 9 Oct 2018 13:51:52 +0000 (13:51 +0000)]
powerpc/mm: add pte helpers to query and change pte flags
In order to avoid using generic _PAGE_XXX flags in powerpc
core functions, define helpers for all needed flags:
- pte_mkuser() and pte_mkprivileged() to set/clear _PAGE_USER and/or
_PAGE_PRIVILEGED
- pte_hashpte() to check if _PAGE_HASHPTE is set.
- pte_ci() to check if the cache is inhibited (already existing on book3s/64)
- pte_exprotect() to protect against execution
- pte_exec() and pte_mkexec() to query and set page execution
- pte_mkpte() to set the _PAGE_PTE flag.
- pte_hw_valid() to check _PAGE_PRESENT, since pte_present() does
something different on book3s/64.
On book3s/32 there is no exec protection, so pte_mkexec() and
pte_exprotect() are nops and pte_exec() always returns true.
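As an illustration, on a platform using _PAGE_USER the first pair
reduces to (a sketch; book3s/64 uses _PAGE_PRIVILEGED with the sense
inverted):
static inline pte_t pte_mkprivileged(pte_t pte)
{
        return __pte(pte_val(pte) & ~_PAGE_USER);
}

static inline pte_t pte_mkuser(pte_t pte)
{
        return __pte(pte_val(pte) | _PAGE_USER);
}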
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 9 Oct 2018 13:51:50 +0000 (13:51 +0000)]
powerpc/mm: move some nohash pte helpers in nohash/[32:64]/pgtable.h
In order to allow their use in nohash/32/pgtable.h, we have to move the
following helpers in nohash/[32:64]/pgtable.h:
- pte_mkwrite()
- pte_mkdirty()
- pte_mkyoung()
- pte_wrprotect()
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 9 Oct 2018 13:51:47 +0000 (13:51 +0000)]
powerpc/mm: don't use _PAGE_EXEC in book3s/32
book3s/32 doesn't define _PAGE_EXEC, so there is no need to use it
there. All other platforms define _PAGE_EXEC, so there is no need to
check that it is non-zero when not on book3s/32.
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 9 Oct 2018 13:51:45 +0000 (13:51 +0000)]
powerpc: handover page flags with a pgprot_t parameter
In order to avoid multiple conversions, hand over a pgprot_t directly
to map_kernel_page(), as already done for radix.
Do the same for __ioremap_caller() and __ioremap_at().
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 9 Oct 2018 13:51:43 +0000 (13:51 +0000)]
powerpc/mm: properly set PAGE_KERNEL flags in ioremap()
Set PAGE_KERNEL directly in the caller and do not rely on a
hack adding PAGE_KERNEL flags when _PAGE_PRESENT is not set.
As already done for PPC64, use pgprot_cache() helpers instead of
_PAGE_XXX flags in PPC32 ioremap() derived functions.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 9 Oct 2018 13:51:41 +0000 (13:51 +0000)]
powerpc: don't use ioremap_prot() nor __ioremap() unless really needed.
In many places, ioremap_prot() and __ioremap() can be replaced with
higher level functions like ioremap(), ioremap_coherent(),
ioremap_cache(), ioremap_wc() ...
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 9 Oct 2018 13:51:39 +0000 (13:51 +0000)]
soc/fsl/qbman: use ioremap_cache() instead of ioremap_prot(0)
ioremap_prot() with flags set to 0 relies on a hack in
__ioremap_caller() which adds the PAGE_KERNEL flags when the
handed flags don't look like a valid set of flags
(ie. don't include _PAGE_PRESENT).
Since the intention is to map cached memory, use ioremap_cache() instead.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 9 Oct 2018 13:51:37 +0000 (13:51 +0000)]
drivers/block/z2ram: use ioremap_wt() instead of __ioremap(_PAGE_WRITETHRU)
_PAGE_WRITETHRU is a target specific flag. Prefer generic functions.
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 9 Oct 2018 13:51:35 +0000 (13:51 +0000)]
drivers/video/fbdev: use ioremap_wc/wt() instead of __ioremap()
_PAGE_NO_CACHE is a platform specific flag. In addition, this flag
is misleading because one would think it requests a noncached page,
whereas a noncached page is _PAGE_NO_CACHE | _PAGE_GUARDED.
_PAGE_NO_CACHE alone means a write-combined noncached page, so let's
use ioremap_wc() instead.
_PAGE_WRITETHRU is also a platform specific flag. Use ioremap_wt()
instead.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 9 Oct 2018 13:51:33 +0000 (13:51 +0000)]
powerpc/32: Add ioremap_wt() and ioremap_coherent()
Other arches have ioremap_wt() to map IO areas write-through.
Implement it on PPC as well in order to avoid drivers using
__ioremap(_PAGE_WRITETHRU)
Also implement ioremap_coherent() to avoid drivers using
__ioremap(_PAGE_COHERENT)
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Gautham R. Shenoy [Mon, 1 Oct 2018 10:40:39 +0000 (16:10 +0530)]
powerpc/rtas: Fix a potential race between CPU-Offline & Migration
Live Partition Migrations require all the present CPUs to execute the
H_JOIN call, and hence rtas_ibm_suspend_me() onlines any offline CPUs
before initiating the migration for this purpose.
Commit 85a88cabad57 ("powerpc/pseries: Disable CPU hotplug across migrations")
disables any CPU-hotplug operations once all the offline CPUs are
brought online to prevent any further state change. Once the
CPU-Hotplug operation is disabled, the code assumes that all the CPUs
are online.
However, there is a minor window in rtas_ibm_suspend_me() between
onlining the offline CPUs and disabling CPU-Hotplug when a concurrent
CPU-offline operations initiated by the userspace can succeed thereby
nullifying the the aformentioned assumption. In this unlikely case
these offlined CPUs will not call H_JOIN, resulting in a system hang.
Fix this by verifying that all the present CPUs are actually online
after CPU-Hotplug has been disabled, failing which we restore the
state of the offline CPUs in rtas_ibm_suspend_me() and return an
-EBUSY.
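The added verification amounts to something like this (a sketch of the
idea, not the exact code):

  cpu_hotplug_disable();

  /*
   * A racing CPU-offline may have snuck in between onlining the
   * CPUs and disabling hotplug, so re-verify before suspending.
   */
  if (!cpumask_equal(cpu_online_mask, cpu_present_mask)) {
          cpu_hotplug_enable();
          /* re-offline the CPUs that were onlined earlier, then fail */
          return -EBUSY;
  }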
Cc: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Cc: Tyrel Datwyler <tyreld@linux.vnet.ibm.com>
Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Reviewed-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Gautham R. Shenoy [Thu, 11 Oct 2018 05:33:03 +0000 (11:03 +0530)]
powerpc/cacheinfo: Report the correct shared_cpu_map on big-cores
Currently on POWER9 SMT8-core systems, in sysfs, we report the
shared_cpu_map for L1 caches (both data and instruction) to be the
cpu-ids of all the threads in the SMT8 core. This is incorrect, since
on POWER9 SMT8 cores there are two groups of threads, each of which
shares its own L1 cache.
This patch addresses this by reporting the shared_cpu_map correctly in
sysfs for L1 caches.
Before the patch:
/sys/devices/system/cpu/cpu0/cache/index0/shared_cpu_map : 000000ff
/sys/devices/system/cpu/cpu0/cache/index1/shared_cpu_map : 000000ff
/sys/devices/system/cpu/cpu1/cache/index0/shared_cpu_map : 000000ff
/sys/devices/system/cpu/cpu1/cache/index1/shared_cpu_map : 000000ff
After the patch:
/sys/devices/system/cpu/cpu0/cache/index0/shared_cpu_map : 00000055
/sys/devices/system/cpu/cpu0/cache/index1/shared_cpu_map : 00000055
/sys/devices/system/cpu/cpu1/cache/index0/shared_cpu_map : 000000aa
/sys/devices/system/cpu/cpu1/cache/index1/shared_cpu_map : 000000aa
(0x55 is the bitmask of CPUs 0,2,4,6 and 0xaa of CPUs 1,3,5,7, ie. the
two four-thread groups within the SMT8 core.)
Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Gautham R. Shenoy [Thu, 11 Oct 2018 05:33:02 +0000 (11:03 +0530)]
powerpc: Use cpu_smallcore_sibling_mask at SMT level on bigcores
POWER9 SMT8 cores consist of two groups of threads, where the threads
in each group share an L1 cache. The scheduler is not aware of this
distinction, as the current sched-domain hierarchy defines all the
threads of the core at the SMT domain:
SMT [Thread siblings of the SMT8 core]
DIE [CPUs in the same die]
NUMA [All the CPUs in the system]
Due to this, we can observe run-to-run variance when a multi-threaded
benchmark is bound to a single core, depending on how the scheduler
spreads the software threads across the two groups in the core.
Fix this by defining each group of threads that shares an L1 cache to
be the SMT level, and the full group of threads in the SMT8 core to be
the CACHE level. The sched-domain hierarchy after this patch is:
SMT [Thread siblings in the core that share L1 cache]
CACHE [Thread siblings that are in the SMT8 core]
DIE [CPUs in the same die]
NUMA [All the CPUs in the system]
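In code, the override might look roughly like this (a sketch; the
helper and array names are illustrative, assuming the has_big_cores
flag and cpu_smallcore_mask() introduced by the patch below):

  /* Hedged sketch: on big-cores, the SMT level uses the L1-sharing mask */
  static const struct cpumask *smallcore_smt_mask(int cpu)
  {
          return cpu_smallcore_mask(cpu);
  }

  static void __init fixup_topology(void)         /* illustrative name */
  {
          if (has_big_cores)
                  powerpc_topology[0].mask = smallcore_smt_mask;
  }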
Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Gautham R. Shenoy [Thu, 11 Oct 2018 05:33:01 +0000 (11:03 +0530)]
powerpc: Detect the presence of big-cores via "ibm,thread-groups"
On IBM POWER9, the device tree exposes a property array identified by
"ibm,thread-groups" which indicates which groups of threads share a
particular set of resources.
As of today we only have one form of grouping, identifying the group
of threads in the core that share the L1 cache, translation cache and
instruction data flow.
This patch adds helper functions to parse the contents of
"ibm,thread-groups" and populate a per-cpu variable to cache
information about the siblings of each CPU that share the L1,
translation cache and instruction data flow.
It also defines a new global variable, "has_big_cores", which
indicates whether the cores on this configuration have multiple groups
of threads that share an L1 cache.
For each online CPU, it maintains a cpu_smallcore_mask which indicates
the online siblings that share an L1 cache with it.
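A sketch of the bookkeeping this implies (the accessor's exact shape
is an assumption):

  /* Per-cpu mask of the online siblings sharing an L1 with this CPU */
  DEFINE_PER_CPU(cpumask_var_t, cpu_smallcore_map);
  bool has_big_cores;

  const struct cpumask *cpu_smallcore_mask(int cpu)
  {
          return per_cpu(cpu_smallcore_map, cpu);
  }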
Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Fri, 12 Oct 2018 11:09:09 +0000 (22:09 +1100)]
powerpc: Fix stackprotector detection for non-glibc toolchains
If GCC is not built with glibc support then we must explicitly tell it
which register to use for the TLS-based stack protector guard
(eg. -mstack-protector-guard-reg=r13 on 64-bit), otherwise it will
error out and the cc-option check will fail.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Fri, 12 Oct 2018 02:58:52 +0000 (13:58 +1100)]
powerpc/xmon: Show the stack protector canary in xmon
This is helpful for debugging stack protector crashes.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Joel Stanley [Fri, 12 Oct 2018 02:44:06 +0000 (13:14 +1030)]
powerpc: Use SWITCH_FRAME_SIZE for prom and rtas entry
Commit 6c1719942e19 ("powerpc/of: Remove useless register save/restore
when calling OF back") removed the saving of srr0 and srr1 when calling
into OpenFirmware. Commit e31aa453bbc4 ("powerpc: Use LOAD_REG_IMMEDIATE
only for constants on 64-bit") did the same for rtas.
This means we don't need to save the extra stack space and can use
the common SWITCH_FRAME_SIZE.
There were already no users of _SRR0 and _SRR1 so we can remove them
too.
Link: https://github.com/linuxppc/linux/issues/83
Signed-off-by: Joel Stanley <joel@jms.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Bringmann [Tue, 9 Oct 2018 20:12:14 +0000 (15:12 -0500)]
powerpc/pseries/mobility: Extend start/stop topology update scope
The powerpc mobility code may receive RTAS requests to perform PRRN
(Platform Resource Reassignment Notification) topology changes at any
time, including during LPAR migration operations.
In some configurations where the affinity of CPUs or memory is being
changed, the PRRN requests may refer to outdated information until the
update of the device-tree is complete.
This patch extends the window during which topology updates are
suppressed during LPAR migrations, from just the rtas_ibm_suspend_me()
/ 'ibm,suspend-me' call(s) to the entire migration_store() operation,
so that all changes to the device-tree are applied before any PRRN
requests are accepted.
For tracking purposes, pr_info() notices are added to the functions
start_topology_update() and stop_topology_update() of numa.c.
Signed-off-by: Michael Bringmann <mwb@linux.vnet.ibm.com>
Reviewed-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Joel Stanley [Thu, 11 Oct 2018 02:43:03 +0000 (13:13 +1030)]
powerpc/Makefile: Fix PPC_BOOK3S_64 ASFLAGS
Ever since commit 15a3204d24a3 ("powerpc/64s: Set assembler machine
type to POWER4") we force -mpower4 to be passed to the assembler
irrespective of the CFLAGS used (for Book3S 64).
When building a powerpc64 kernel with clang, clang will not add -many
to the assembler flags, so any instructions that the compiler has
generated that are not available on power4 will cause an error:
/usr/bin/as -a64 -mppc64 -mlittle-endian -mpower8 \
-I ./arch/powerpc/include -I ./arch/powerpc/include/generated \
-I ./include -I ./arch/powerpc/include/uapi \
-I ./arch/powerpc/include/generated/uapi -I ./include/uapi \
-I ./include/generated/uapi -I arch/powerpc -I arch/powerpc \
-maltivec -mpower4 -o init/do_mounts.o /tmp/do_mounts-3b0a3d.s
/tmp/do_mounts-51ce54.s:748: Error: unrecognized opcode: `isel'
GCC does include -many, so the GCC-driven gas call will succeed:
as -v -I ./arch/powerpc/include -I ./arch/powerpc/include/generated -I
./include -I ./arch/powerpc/include/uapi
-I ./arch/powerpc/include/generated/uapi -I ./include/uapi
-I ./include/generated/uapi -I arch/powerpc -I arch/powerpc
-a64 -mpower8 -many -mlittle -maltivec -mpower4 -o init/do_mounts.o
Note that isel is power7 and above for IBM CPUs. GCC only generates it
for Power9 and above, but the above test was run against the
clang-generated assembly.
Peter Bergner explains:
When using -many -mpower4, gas will first try and find a matching
power4 mnemonic and failing that, it will then allow any valid
mnemonic that gas knows about. GCC's use of -many predates me
though.
IIRC, Alan looked at trying to remove it, but I forget why he
didn't. Could be either a gcc or gas issue at the time. I'm not sure
whether the issue still exists or not. He and I have modified how gas
works internally a fair amount since he tried removing gcc's use of
-many.
I will also note that when using -many, gas will choose the first
mnemonic that matches in the mnemonic table and we have (mostly)
sorted the table so that server mnemonics show up earlier in the
table than other mnemonics, so they'll be seen/chosen first.
By explicitly setting -many we can build with Clang and GCC while
retaining the -mpower4 option.
Signed-off-by: Joel Stanley <joel@jms.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
YueHaibing [Tue, 9 Oct 2018 13:59:13 +0000 (21:59 +0800)]
powerpc/pseries/memory-hotplug: Fix return value type of find_aa_index
The variable 'aa_index' is defined as an unsigned value in
update_lmb_associativity_index(), but find_aa_index() may return -1
when dlpar_clone_property() fails. So change find_aa_index() to return
a bool, which indicates whether 'aa_index' was found or not.
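The new shape of the function is roughly (a sketch; the parameter list
is an assumption based on the description above):

  /* Hedged sketch: report success, return the index via a pointer */
  static bool find_aa_index(struct device_node *dr_node,
                            struct property *ala_prop,
                            const u32 *lmb_assoc, u32 *aa_index)
  {
          /* ... search/extend the associativity lookup array ... */
          return false;   /* true once *aa_index has been set */
  }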
Fixes: c05a5a40969e ("powerpc/pseries: Dynamic add entires to associativity lookup array")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
[mpe: Tweak changelog, rename is_found to just found]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Sam Bobroff [Wed, 12 Sep 2018 01:23:33 +0000 (11:23 +1000)]
powerpc/eeh: Cleanup control flow in eeh_handle_normal_event()
Rather than mixing "if (state)" blocks and gotos, convert entirely to
"if (state)" blocks to make the state machine behaviour clearer.
Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Sam Bobroff [Wed, 12 Sep 2018 01:23:32 +0000 (11:23 +1000)]
powerpc/eeh: Cleanup eeh_ops.wait_state()
The wait_state member of eeh_ops does not need to be platform
dependent; it's just logic around eeh_ops.get_state(). Therefore,
merge the two (slightly different!) platform versions into a new
function, eeh_wait_state(), and remove the eeh_ops member.
While doing this, also correct:
* The wait logic, so that it never waits longer than max_wait.
* The wait logic, so that it never waits less than
EEH_STATE_MIN_WAIT_TIME.
* One call site where the result is treated like a bit field before
it's checked for negative error values.
* In pseries_eeh_get_state(), rename the "state" parameter to "delay"
because that's what it is.
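The merged wait loop behaves roughly like this (a sketch of the
described behaviour, not the verbatim function):

  int eeh_wait_state(struct eeh_pe *pe, int max_wait)
  {
          int ret, mwait;

          while (1) {
                  ret = eeh_ops->get_state(pe, &mwait);
                  if (ret != EEH_STATE_UNAVAILABLE)
                          return ret;

                  if (max_wait <= 0)      /* never wait past max_wait */
                          return EEH_STATE_NOT_SUPPORT;

                  if (mwait < EEH_STATE_MIN_WAIT_TIME)
                          mwait = EEH_STATE_MIN_WAIT_TIME;
                  if (mwait > max_wait)
                          mwait = max_wait;

                  max_wait -= mwait;
                  msleep(mwait);
          }
  }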
Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Sam Bobroff [Wed, 12 Sep 2018 01:23:31 +0000 (11:23 +1000)]
powerpc/eeh: Cleanup eeh_pe_state_mark()
Currently, eeh_pe_state_mark() marks a PE (and its children) with a
state and then performs additional processing if that state includes
EEH_PE_ISOLATED.
The state parameter is always a constant at the call site, so
rearrange eeh_pe_state_mark() into two functions and just call the
appropriate one at each site.
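The resulting split might look like this (a sketch;
eeh_pe_mark_isolated() is an assumed name for the isolation-specific
variant):

  /* Hedged sketch: one plain marker, one isolation-specific helper */
  void eeh_pe_state_mark(struct eeh_pe *pe, int state);  /* just set the state */
  void eeh_pe_mark_isolated(struct eeh_pe *pe);          /* mark + isolation work */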
Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>