6 years agoKVM: x86: Add CPUID support for new instruction WBNOINVD
Robert Hoo [Wed, 19 Dec 2018 13:51:43 +0000 (21:51 +0800)]
KVM: x86: Add CPUID support for new instruction WBNOINVD

Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agokvm: selftests: ucall: fix exit mmio address guessing
Andrew Jones [Fri, 21 Dec 2018 11:22:22 +0000 (12:22 +0100)]
kvm: selftests: ucall: fix exit mmio address guessing

Fix two more bugs in the exit_mmio address guessing.
The first bug was that the start and step calculations were
wrong since they were dividing the number of address bits instead
of the address space. The second bug was that the guessing
algorithm wasn't considering the valid physical and virtual address
ranges correctly for an identity map.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoRevert "compiler-gcc: disable -ftracer for __noclone functions"
Sean Christopherson [Thu, 20 Dec 2018 20:25:18 +0000 (12:25 -0800)]
Revert "compiler-gcc: disable -ftracer for __noclone functions"

The -ftracer optimization was disabled in __noclone as a workaround to
GCC duplicating a blob of inline assembly that happened to define a
global variable.  It has been pointed out that no amount of workarounds
can guarantee the compiler won't duplicate inline assembly[1], and that
disabling the -ftracer optimization has several unintended and nasty
side effects[2][3].

Now that the offending KVM code which required the workaround has
been properly fixed and no longer uses __noclone, remove the -ftracer
optimization tweak from __noclone.

[1] https://lore.kernel.org/lkml/ri6y38lo23g.fsf@suse.cz/T/#u
[2] https://lore.kernel.org/lkml/20181218140105.ajuiglkpvstt3qxs@treble/T/#u
[3] https://patchwork.kernel.org/patch/8707981/#21817015

This reverts commit 95272c29378ee7dc15f43fa2758cb28a5913a06d.

Suggested-by: Andi Kleen <ak@linux.intel.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Nadav Amit <namit@vmware.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Martin Jambor <mjambor@suse.cz>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Miroslav Benes <mbenes@suse.cz>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Reviewed-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM: VMX: Move VM-Enter + VM-Exit handling to non-inline sub-routines
Sean Christopherson [Thu, 20 Dec 2018 20:25:17 +0000 (12:25 -0800)]
KVM: VMX: Move VM-Enter + VM-Exit handling to non-inline sub-routines

Transitioning to/from a VMX guest requires KVM to manually save/load
the bulk of CPU state that the guest is allowed to directly access,
e.g. XSAVE state, CR2, GPRs, etc...  For obvious reasons, loading the
guest's GPR snapshot prior to VM-Enter and saving the snapshot after
VM-Exit is done via handcoded assembly.  The assembly blob is written
as inline asm so that it can easily access KVM-defined structs that
are used to hold guest state, e.g. moving the blob to a standalone
assembly file would require generating defines for struct offsets.

The other relevant aspect of VMX transitions in KVM is the handling of
VM-Exits.  KVM doesn't employ a separate VM-Exit handler per se, but
rather treats the VMX transition as a mega instruction (with many side
effects), i.e. sets the VMCS.HOST_RIP to a label immediately following
VMLAUNCH/VMRESUME.  The label is then exposed to C code via a global
variable definition in the inline assembly.

Because of the global variable, KVM takes steps to (attempt to) ensure
only a single instance of the owning C function, e.g. vmx_vcpu_run, is
generated by the compiler.  The earliest approach placed the inline
assembly in a separate noinline function[1].  Later, the assembly was
folded back into vmx_vcpu_run() and tagged with __noclone[2][3], which
is still used today.

After moving to __noclone, an edge case was encountered where GCC's
-ftracer optimization resulted in the inline assembly blob being
duplicated.  This was "fixed" by explicitly disabling -ftracer in the
__noclone definition[4].

Recently, it was found that disabling -ftracer causes build warnings
for unsuspecting users of __noclone[5], and more importantly for KVM,
prevents the compiler from properly optimizing vmx_vcpu_run()[6].  And
perhaps most importantly of all, it was pointed out that there is no
way to prevent duplication of a function with 100% reliability[7],
i.e. more edge cases may be encountered in the future.

So to summarize, the only way to prevent the compiler from duplicating
the global variable definition is to move the variable out of inline
assembly, which has been suggested several times over[1][7][8].

Resolve the aforementioned issues by moving the VMLAUNCH+VMRESUME and
VM-Exit "handler" to standalone assembly sub-routines.  Moving only
the core VMX transition code allows the struct indexing to remain as
inline assembly and also allows the sub-routines to be used by
nested_vmx_check_vmentry_hw().  Reusing the sub-routines has a happy
side-effect of eliminating two VMWRITEs in the nested_early_check path
as there is no longer a need to dynamically change VMCS.HOST_RIP.

Note that callers to vmx_vmenter() must account for the CALL modifying
RSP, e.g. must subtract op-size from RSP when synchronizing RSP with
VMCS.HOST_RSP and "restore" RSP prior to the CALL.  There are no great
alternatives to fudging RSP.  Saving RSP in vmx_vmenter() is difficult
because doing so requires a second register (VMWRITE does not provide
an immediate encoding for the VMCS field and KVM supports Hyper-V's
memory-based eVMCS ABI).  The other more drastic alternative would be
to eschew VMCS.HOST_RSP and manually save/load RSP using a per-cpu
variable (which can be encoded as e.g. gs:[imm]).  But because a valid
stack is needed at the time of VM-Exit (NMIs aren't blocked and a user
could theoretically insert INT3/INT1ICEBRK at the VM-Exit handler), a
dedicated per-cpu VM-Exit stack would be required.  A dedicated stack
isn't difficult to implement, but it would require at least one page
per CPU and knowledge of the stack in the dumpstack routines.  And in
most cases there is essentially zero overhead in dynamically updating
VMCS.HOST_RSP, e.g. the VMWRITE can be avoided for all but the first
VMLAUNCH unless nested_early_check=1, which is not a fast path.  In
other words, avoiding the VMCS.HOST_RSP by using a dedicated stack
would only make the code marginally less ugly while requiring at least
one page per CPU and forcing the kernel to be aware (and approve) of
the VM-Exit stack shenanigans.
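
For reference, the RSP handling in the caller boils down to something
like the following inline-asm fragment (illustrative sketch only; the
real code uses _ASM_SP and passes the HOST_RSP field encoding in a
register operand, here assumed to be RDX):

  "sub $8, %%rsp \n\t"        /* account for the return RIP pushed by CALL  */
  "vmwrite %%rsp, %%rdx \n\t" /* VMCS.HOST_RSP = RSP as it will be at VM-Exit */
  "add $8, %%rsp \n\t"        /* "restore" RSP prior to the CALL             */
  /* ... */
  "call vmx_vmenter \n\t"     /* the CALL re-consumes the word accounted above */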

[1] cea15c24ca39 ("KVM: Move KVM context switch into own function")
[2] a3b5ba49a8c5 ("KVM: VMX: add the __noclone attribute to vmx_vcpu_run")
[3] 104f226bfd0a ("KVM: VMX: Fold __vmx_vcpu_run() into vmx_vcpu_run()")
[4] 95272c29378e ("compiler-gcc: disable -ftracer for __noclone functions")
[5] https://lkml.kernel.org/r/20181218140105.ajuiglkpvstt3qxs@treble
[6] https://patchwork.kernel.org/patch/8707981/#21817015
[7] https://lkml.kernel.org/r/ri6y38lo23g.fsf@suse.cz
[8] https://lkml.kernel.org/r/20181218212042.GE25620@tassilo.jf.intel.com

Suggested-by: Andi Kleen <ak@linux.intel.com>
Suggested-by: Martin Jambor <mjambor@suse.cz>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Nadav Amit <namit@vmware.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Martin Jambor <mjambor@suse.cz>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Miroslav Benes <mbenes@suse.cz>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM: VMX: Explicitly reference RCX as the vmx_vcpu pointer in asm blobs
Sean Christopherson [Thu, 20 Dec 2018 20:25:16 +0000 (12:25 -0800)]
KVM: VMX: Explicitly reference RCX as the vmx_vcpu pointer in asm blobs

Use '%% " _ASM_CX"' instead of '%0' to dereference RCX, i.e. the
'struct vcpu_vmx' pointer, in the VM-Enter asm blobs of vmx_vcpu_run()
and nested_vmx_check_vmentry_hw().  Referencing the pointer via the
numbered operand '%0' means that adding/removing an output parameter
requires "rewriting" almost all of the asm blob, which makes it nearly
impossible to understand what's being changed in even the most minor
patches.

Opportunistically improve the code comments.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoMerge tag 'kvm-ppc-next-4.21-2' of git://git.kernel.org/pub/scm/linux/kernel/git...
Paolo Bonzini [Fri, 21 Dec 2018 10:48:41 +0000 (11:48 +0100)]
Merge tag 'kvm-ppc-next-4.21-2' of git://git./linux/kernel/git/paulus/powerpc into kvm-next

Second PPC KVM update for 4.21

This has 5 commits that fix page dirty tracking when running nested
HV KVM guests, from Suraj Jitindar Singh.

6 years agoKVM: x86: Use jmp to invoke kvm_spurious_fault() from .fixup
Sean Christopherson [Thu, 20 Dec 2018 22:21:08 +0000 (14:21 -0800)]
KVM: x86: Use jmp to invoke kvm_spurious_fault() from .fixup

____kvm_handle_fault_on_reboot() provides a generic exception fixup
handler that is used to cleanly handle faults on VMX/SVM instructions
during reboot (or at least try to).  If there isn't a reboot in
progress, ____kvm_handle_fault_on_reboot() treats any exception as
fatal to KVM and invokes kvm_spurious_fault(), which in turn generates
a BUG() to get a stack trace and die.

When it was originally added by commit 4ecac3fd6dc2 ("KVM: Handle
virtualization instruction #UD faults during reboot"), the "call" to
kvm_spurious_fault() was handcoded as PUSH+JMP, where the PUSH'd value
is the RIP of the faulting instruction.

The PUSH+JMP trickery is necessary because the exception fixup handler
code lies outside of its associated function, e.g. right after the
function.  An actual CALL from the .fixup code would show a slightly
bogus stack trace, e.g. an extra "random" function would be inserted
into the trace, as the return RIP on the stack would point to no known
function (and the unwinder will likely try to guess who owns the RIP).

Unfortunately, the JMP was replaced with a CALL when the macro was
reworked to not spin indefinitely during reboot (commit b7c4145ba2eb
"KVM: Don't spin on virt instruction faults during reboot").  This
causes the aforementioned behavior where a bogus function is inserted
into the stack trace, e.g. my builds like to blame free_kvm_area().

Revert the CALL back to a JMP.  The changelog for commit b7c4145ba2eb
("KVM: Don't spin on virt instruction faults during reboot") contains
nothing that indicates the switch to CALL was deliberate.  This is
backed up by the fact that the PUSH <insn RIP> was left intact.

Note that an alternative to the PUSH+JMP magic would be to JMP back
to the "real" code and CALL from there, but that would require adding
a JMP in the non-faulting path to avoid calling kvm_spurious_fault()
and would add no value, i.e. the stack trace would be the same.
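
After the revert to JMP, the fixup pattern looks roughly as follows
(abbreviated sketch of the macro; label numbers and the kvm_rebooting
check shown for context):

  #define ____kvm_handle_fault_on_reboot(insn, cleanup_insn)          \
          "666: " insn "\n\t"                                          \
          "668: \n\t"                                                  \
          ".pushsection .fixup, \"ax\" \n"                             \
          "667: \n\t"                                                  \
          cleanup_insn "\n\t"                                          \
          "cmpb $0, kvm_rebooting \n\t"                                \
          "jne 668b \n\t"       /* reboot in progress, eat the fault */ \
          "push $666b \n\t"     /* "return" RIP = the faulting insn  */ \
          "jmp kvm_spurious_fault \n\t"      /* JMP, not CALL        */ \
          ".popsection \n\t"                                           \
          _ASM_EXTABLE(666b, 667b)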

Using CALL:

------------[ cut here ]------------
kernel BUG at /home/sean/go/src/kernel.org/linux/arch/x86/kvm/x86.c:356!
invalid opcode: 0000 [#1] SMP
CPU: 4 PID: 1057 Comm: qemu-system-x86 Not tainted 4.20.0-rc6+ #75
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
RIP: 0010:kvm_spurious_fault+0x5/0x10 [kvm]
Code: <0f> 0b 66 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 41 55 49 89 fd 41
RSP: 0018:ffffc900004bbcc8 EFLAGS: 00010046
RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffffffffffffffff
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffff888273fd8000 R08: 00000000000003e8 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000784 R12: ffffc90000371fb0
R13: 0000000000000000 R14: 000000026d763cf4 R15: ffff888273fd8000
FS:  00007f3d69691700(0000) GS:ffff888277800000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055f89bc56fe0 CR3: 0000000271a5a001 CR4: 0000000000362ee0
Call Trace:
 free_kvm_area+0x1044/0x43ea [kvm_intel]
 ? vmx_vcpu_run+0x156/0x630 [kvm_intel]
 ? kvm_arch_vcpu_ioctl_run+0x447/0x1a40 [kvm]
 ? kvm_vcpu_ioctl+0x368/0x5c0 [kvm]
 ? kvm_vcpu_ioctl+0x368/0x5c0 [kvm]
 ? __set_task_blocked+0x38/0x90
 ? __set_current_blocked+0x50/0x60
 ? __fpu__restore_sig+0x97/0x490
 ? do_vfs_ioctl+0xa1/0x620
 ? __x64_sys_futex+0x89/0x180
 ? ksys_ioctl+0x66/0x70
 ? __x64_sys_ioctl+0x16/0x20
 ? do_syscall_64+0x4f/0x100
 ? entry_SYSCALL_64_after_hwframe+0x44/0xa9
Modules linked in: vhost_net vhost tap kvm_intel kvm irqbypass bridge stp llc
---[ end trace 9775b14b123b1713 ]---

Using JMP:

------------[ cut here ]------------
kernel BUG at /home/sean/go/src/kernel.org/linux/arch/x86/kvm/x86.c:356!
invalid opcode: 0000 [#1] SMP
CPU: 6 PID: 1067 Comm: qemu-system-x86 Not tainted 4.20.0-rc6+ #75
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
RIP: 0010:kvm_spurious_fault+0x5/0x10 [kvm]
Code: <0f> 0b 66 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 41 55 49 89 fd 41
RSP: 0018:ffffc90000497cd0 EFLAGS: 00010046
RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffffffffffffffff
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffff88827058bd40 R08: 00000000000003e8 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000784 R12: ffffc90000369fb0
R13: 0000000000000000 R14: 00000003c8fc6642 R15: ffff88827058bd40
FS:  00007f3d7219e700(0000) GS:ffff888277900000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f3d64001000 CR3: 0000000271c6b004 CR4: 0000000000362ee0
Call Trace:
 vmx_vcpu_run+0x156/0x630 [kvm_intel]
 ? kvm_arch_vcpu_ioctl_run+0x447/0x1a40 [kvm]
 ? kvm_vcpu_ioctl+0x368/0x5c0 [kvm]
 ? kvm_vcpu_ioctl+0x368/0x5c0 [kvm]
 ? __set_task_blocked+0x38/0x90
 ? __set_current_blocked+0x50/0x60
 ? __fpu__restore_sig+0x97/0x490
 ? do_vfs_ioctl+0xa1/0x620
 ? __x64_sys_futex+0x89/0x180
 ? ksys_ioctl+0x66/0x70
 ? __x64_sys_ioctl+0x16/0x20
 ? do_syscall_64+0x4f/0x100
 ? entry_SYSCALL_64_after_hwframe+0x44/0xa9
Modules linked in: vhost_net vhost tap kvm_intel kvm irqbypass bridge stp llc
---[ end trace f9daedb85ab3ddba ]---

Fixes: b7c4145ba2eb ("KVM: Don't spin on virt instruction faults during reboot")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoMAINTAINERS: Add arch/x86/kvm sub-directories to existing KVM/x86 entry
Sean Christopherson [Thu, 20 Dec 2018 20:53:20 +0000 (12:53 -0800)]
MAINTAINERS: Add arch/x86/kvm sub-directories to existing KVM/x86 entry

A series currently sitting in KVM's queue for 4.21 moves the bulk of
KVM's VMX code to a dedicated VMX sub-directory[1].  As a result,
get_maintainers.pl doesn't get any hits on the newly relocated VMX
files when the script is run with --pattern-depth=1.

Add all arch/x86/kvm sub-directories to the existing MAINTAINERS entry
for KVM/x86 instead of arch/x86/kvm/vmx as other code, e.g. SVM, may
get similar treatment in the near future.

[1] https://patchwork.kernel.org/cover/10710751/

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM/x86: Use SVM assembly instruction mnemonics instead of .byte streams
Uros Bizjak [Mon, 26 Nov 2018 16:00:08 +0000 (17:00 +0100)]
KVM/x86: Use SVM assembly instruction mnemonics instead of .byte streams

Recently the minimum required version of binutils was changed to 2.20,
which supports all SVM instruction mnemonics. The patch removes
all .byte #defines and uses real instruction mnemonics instead.

Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM/MMU: Flush tlb directly in the kvm_zap_gfn_range()
Lan Tianyu [Thu, 6 Dec 2018 13:21:13 +0000 (21:21 +0800)]
KVM/MMU: Flush tlb directly in the kvm_zap_gfn_range()

Originally, the TLB flush is done by slot_handle_level_range(). This patch
moves the flush directly to kvm_zap_gfn_range() when range flush is
available, so that only the requested range can be flushed.
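
Roughly (simplified from the patch; helper names per the range-flush
patches in this series):

  /* in kvm_zap_gfn_range(), after zapping [gfn_start, gfn_end) */
  if (flush && kvm_available_flush_tlb_with_range())
          kvm_flush_remote_tlbs_with_address(kvm, gfn_start,
                                             gfn_end - gfn_start);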

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM/MMU: Flush tlb directly in kvm_set_pte_rmapp()
Lan Tianyu [Thu, 6 Dec 2018 13:21:12 +0000 (21:21 +0800)]
KVM/MMU: Flush tlb directly in kvm_set_pte_rmapp()

This patch flushes the TLB directly in the kvm_set_pte_rmapp()
function when Hyper-V remote TLB flush is available, returning 0
so that kvm_mmu_notifier_change_pte() does not flush again.

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM/MMU: Move tlb flush in kvm_set_pte_rmapp() to kvm_mmu_notifier_change_pte()
Lan Tianyu [Thu, 6 Dec 2018 13:21:11 +0000 (21:21 +0800)]
KVM/MMU: Move tlb flush in kvm_set_pte_rmapp() to kvm_mmu_notifier_change_pte()

This patch moves the TLB flush from kvm_set_pte_rmapp() to
kvm_mmu_notifier_change_pte() in order to avoid a redundant TLB flush.

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM: Make kvm_set_spte_hva() return int
Lan Tianyu [Thu, 6 Dec 2018 13:21:10 +0000 (21:21 +0800)]
KVM: Make kvm_set_spte_hva() return int

Make kvm_set_spte_hva() return int so that the caller can check the
return value to determine whether or not to flush the TLB.

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
Acked-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM: Replace old tlb flush function with new one to flush a specified range.
Lan Tianyu [Thu, 6 Dec 2018 13:21:09 +0000 (21:21 +0800)]
KVM: Replace old tlb flush function with new one to flush a specified range.

This patch replaces kvm_flush_remote_tlbs() with
kvm_flush_remote_tlbs_with_address() in some functions, with no change
in logic.

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM/MMU: Add tlb flush with range helper function
Lan Tianyu [Thu, 6 Dec 2018 13:21:08 +0000 (21:21 +0800)]
KVM/MMU: Add tlb flush with range helper function

This patch adds wrapper functions for the tlb_remote_flush_with_range
callback and flushes the TLB directly in kvm_mmu_zap_collapsible_spte().
kvm_mmu_zap_collapsible_spte() returns a flush request to
slot_handle_leaf(), and the latter flushes on demand. When range flush
is available, make kvm_mmu_zap_collapsible_spte() flush the TLB with a
range directly instead of returning the range back to
slot_handle_leaf().

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM/VMX: Add hv tlb range flush support
Lan Tianyu [Thu, 6 Dec 2018 13:21:07 +0000 (21:21 +0800)]
KVM/VMX: Add hv tlb range flush support

This patch registers the tlb_remote_flush_with_range callback, backed
by the Hyper-V TLB range flush interface.

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agox86/hyper-v: Add HvFlushGuestAddressList hypercall support
Lan Tianyu [Thu, 6 Dec 2018 13:21:05 +0000 (21:21 +0800)]
x86/hyper-v: Add HvFlushGuestAddressList hypercall support

Hyper-V provides the HvFlushGuestAddressList() hypercall to flush the
EPT TLB for specified ranges. This patch adds support for the hypercall.

Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM: Add tlb_remote_flush_with_range callback in kvm_x86_ops
Lan Tianyu [Thu, 6 Dec 2018 13:21:04 +0000 (21:21 +0800)]
KVM: Add tlb_remote_flush_with_range callback in kvm_x86_ops

Add a range-flush callback to kvm_x86_ops so that a platform can use it
to register its associated function. The "kvm_tlb_range" parameter
accepts a single range, and the flush list contains a list of ranges.
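
A sketch of the new hook (simplified; field and member names per this
description):

  struct kvm_tlb_range {
          u64 start_gfn;
          u64 pages;
  };

  /* in struct kvm_x86_ops */
  int (*tlb_remote_flush_with_range)(struct kvm *kvm,
                                     struct kvm_tlb_range *range);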

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM: x86: Disable Intel PT when VMXON in L1 guest
Luwei Kang [Wed, 24 Oct 2018 08:05:16 +0000 (16:05 +0800)]
KVM: x86: Disable Intel PT when VMXON in L1 guest

Currently, Intel Processor Trace does not support tracing in L1 guest
VMX operation (IA32_VMX_MISC[bit 14] is 0). As mentioned in the SDM,
on these types of processors, execution of the VMXON instruction
clears IA32_RTIT_CTL.TraceEn, and any attempt to write IA32_RTIT_CTL
causes a general-protection exception (#GP).

Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM: x86: Set intercept for Intel PT MSRs read/write
Chao Peng [Wed, 24 Oct 2018 08:05:15 +0000 (16:05 +0800)]
KVM: x86: Set intercept for Intel PT MSRs read/write

To save performance overhead, disable interception of Intel PT MSR
reads/writes when Intel PT is enabled in the guest.
MSR_IA32_RTIT_CTL is an exception and will always be intercepted.

Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM: x86: Implement Intel PT MSRs read/write emulation
Chao Peng [Wed, 24 Oct 2018 08:05:14 +0000 (16:05 +0800)]
KVM: x86: Implement Intel PT MSRs read/write emulation

This patch implements Intel Processor Trace MSR read/write
emulation.
Intel PT MSR reads/writes need to be emulated when the Intel PT
MSRs are intercepted in the guest and during live migration.

Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM: x86: Introduce a function to initialize the PT configuration
Luwei Kang [Wed, 24 Oct 2018 08:05:13 +0000 (16:05 +0800)]
KVM: x86: Introduce a function to initialize the PT configuration

Initialize the Intel PT configuration when the CPUID is updated.
This includes the CPUID information, the rtit_ctl bit mask and the
number of address ranges.

Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM: x86: Add Intel PT context switch for each vcpu
Chao Peng [Wed, 24 Oct 2018 08:05:12 +0000 (16:05 +0800)]
KVM: x86: Add Intel PT context switch for each vcpu

Load/store the Intel Processor Trace registers on context switch.
MSR IA32_RTIT_CTL is loaded/stored automatically from the VMCS.
In Host-Guest mode, we need to load/restore the PT MSRs only when PT
is enabled in the guest.

Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM: x86: Add Intel Processor Trace cpuid emulation
Chao Peng [Wed, 24 Oct 2018 08:05:11 +0000 (16:05 +0800)]
KVM: x86: Add Intel Processor Trace cpuid emulation

Expose Intel Processor Trace to the guest only when
PT works in Host-Guest mode.

Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM: x86: Add Intel PT virtualization work mode
Chao Peng [Wed, 24 Oct 2018 08:05:10 +0000 (16:05 +0800)]
KVM: x86: Add Intel PT virtualization work mode

Intel Processor Trace virtualization can work in one
of 2 possible modes:

a. System-Wide mode (default):
   When the host configures Intel PT to collect trace packets
   of the entire system, it can leave the relevant VMX controls
   clear to allow VMX-specific packets to provide information
   across VMX transitions.
   The KVM guest will not be aware of this feature in this mode,
   and both the host and KVM guest traces are output to the host
   buffer.

b. Host-Guest mode:
   The host can configure trace-packet generation while in
   VMX non-root operation for guests and in root operation
   for normal native execution.
   Intel PT is exposed to the KVM guest in this mode, and the
   trace is output to the respective buffers of host and guest.
   In this mode, the status of PT will be saved and PT disabled
   before VM-entry and restored after VM-exit when tracing a
   virtual machine.

Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoperf/x86/intel/pt: add new capability for Intel PT
Luwei Kang [Wed, 24 Oct 2018 08:05:09 +0000 (16:05 +0800)]
perf/x86/intel/pt: add new capability for Intel PT

This adds support for the "output to Trace Transport subsystem"
capability of Intel PT. It means that PT can output its
trace to an MMIO address range rather than a system memory buffer.

Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoperf/x86/intel/pt: Add new bit definitions for PT MSRs
Luwei Kang [Wed, 24 Oct 2018 08:05:08 +0000 (16:05 +0800)]
perf/x86/intel/pt: Add new bit definitions for PT MSRs

Add bit definitions for Intel PT MSRs to support trace output
directed to the memory subsystem and to hold a count of packet
bytes that have been sent out.

These are required by the upcoming PT support in KVM guests
for MSR read/write emulation.

Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoperf/x86/intel/pt: Introduce intel_pt_validate_cap()
Luwei Kang [Wed, 24 Oct 2018 08:05:07 +0000 (16:05 +0800)]
perf/x86/intel/pt: Introduce intel_pt_validate_cap()

intel_pt_validate_hw_cap() validates whether a given PT capability is
supported by the hardware. It checks the PT capability array which
reflects the capabilities of the hardware on which the code is executed.

For setting up PT for KVM guests this is not correct as the capability
array for the guest can be different from the host array.

Provide a new function to check against a given capability array.

Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoperf/x86/intel/pt: Export pt_cap_get()
Chao Peng [Wed, 24 Oct 2018 08:05:06 +0000 (16:05 +0800)]
perf/x86/intel/pt: Export pt_cap_get()

pt_cap_get() is required by the upcoming PT support in KVM guests.

Export it and move the capabilities enum to a global header.

For global functions, the "pt_*" prefix is already used for ptrace and
other things, so it makes sense to use "intel_pt_*" as a prefix.

Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoperf/x86/intel/pt: Move Intel PT MSRs bit defines to global header
Chao Peng [Wed, 24 Oct 2018 08:05:05 +0000 (16:05 +0800)]
perf/x86/intel/pt: Move Intel PT MSRs bit defines to global header

The Intel Processor Trace (PT) MSR bit defines are in a private
header. The upcoming support for PT virtualization requires these defines
to be accessible from KVM code.

Move them to the global MSR header file.

Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agokvm: selftests: aarch64: dirty_log_test: support greater than 40-bit IPAs
Andrew Jones [Tue, 6 Nov 2018 13:57:12 +0000 (14:57 +0100)]
kvm: selftests: aarch64: dirty_log_test: support greater than 40-bit IPAs

When KVM has KVM_CAP_ARM_VM_IPA_SIZE we can test with > 40-bit IPAs by
using the 'type' field of KVM_CREATE_VM.
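
In terms of the raw ioctl API, picking the largest supported IPA size
looks roughly like this (error handling omitted; the selftest framework
wraps this differently):

  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  static int create_vm_with_max_ipa(void)
  {
          int kvm_fd = open("/dev/kvm", O_RDWR);
          int ipa = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_VM_IPA_SIZE);
          unsigned long type = ipa > 0 ? KVM_VM_TYPE_ARM_IPA_SIZE(ipa) : 0;

          return ioctl(kvm_fd, KVM_CREATE_VM, type);
  }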

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years agokvm: selftests: add pa-48/va-48 VM modes
Andrew Jones [Tue, 6 Nov 2018 13:57:11 +0000 (14:57 +0100)]
kvm: selftests: add pa-48/va-48 VM modes

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years agokvm: selftests: dirty_log_test: improve mode param management
Andrew Jones [Tue, 6 Nov 2018 13:57:10 +0000 (14:57 +0100)]
kvm: selftests: dirty_log_test: improve mode param management

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years agokvm: selftests: dirty_log_test: reset guest test phys offset
Andrew Jones [Tue, 6 Nov 2018 13:57:09 +0000 (14:57 +0100)]
kvm: selftests: dirty_log_test: reset guest test phys offset

We need to reset the offset for each mode as it will change
depending on the number of guest physical address bits.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years agokvm: selftests: dirty_log_test: always use -t
Andrew Jones [Tue, 6 Nov 2018 13:57:08 +0000 (14:57 +0100)]
kvm: selftests: dirty_log_test: always use -t

There's no reason not to always test the topmost physical
addresses, and if the user wants to try lower addresses
then '-p' (used to be '-o' before this patch) can be used.
Let's remove the '-t' option and just always do what it did.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years agokvm: selftests: dirty_log_test: don't identity map the test mem
Andrew Jones [Tue, 6 Nov 2018 13:57:07 +0000 (14:57 +0100)]
kvm: selftests: dirty_log_test: don't identity map the test mem

It isn't necessary and can even cause problems when testing high
guest physical addresses. This patch leaves the test memory
identity-mapped by default, but when using '-t' the test memory virtual
addresses stay the same even though the physical addresses switch
to the topmost valid addresses.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years agokvm: selftests: x86_64: dirty_log_test: fix -t
Andrew Jones [Tue, 6 Nov 2018 13:57:06 +0000 (14:57 +0100)]
kvm: selftests: x86_64: dirty_log_test: fix -t

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years agoKVM: fix some typos
Wei Yang [Mon, 5 Nov 2018 06:45:03 +0000 (14:45 +0800)]
KVM: fix some typos

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
[Preserved the iff and a probably intentional weird bracket notation.
 Also dropped the style change to make a single-purpose patch. - Radim]
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years agox86/kvmclock: convert to SPDX identifiers
Peng Hao [Fri, 2 Nov 2018 09:05:17 +0000 (17:05 +0800)]
x86/kvmclock: convert to SPDX identifiers

Update the verbose license text with the matching SPDX
license identifier.

Signed-off-by: Peng Hao <peng.hao2@zte.com.cn>
[Changed deprecated GPL-2.0+ to GPL-2.0-or-later. - Radim]
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years agoKVM: x86: Remove KF() macro placeholder
Sean Christopherson [Mon, 5 Nov 2018 18:44:32 +0000 (10:44 -0800)]
KVM: x86: Remove KF() macro placeholder

Although well-intentioned, keeping the KF() definition as a hint for
handling scattered CPUID features may be counter-productive.  Simply
redefining the bit position only works for directly manipulating the
guest's CPUID leafs, e.g. it doesn't make guest_cpuid_has() magically
work.  Taking an alternative approach, e.g. ensuring the bit position
is identical between the Linux-defined and hardware-defined features,
may be a simpler and/or more effective method of exposing scattered
features to the guest.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years agokvm: vmx: Allow guest read access to IA32_TSC
Jim Mattson [Fri, 9 Nov 2018 17:35:11 +0000 (09:35 -0800)]
kvm: vmx: Allow guest read access to IA32_TSC

Let the guest read the IA32_TSC MSR with the generic RDMSR instruction
as well as the specific RDTSC(P) instructions. Note that the hardware
applies the TSC multiplier and offset (when applicable) to the result of
RDMSR(IA32_TSC), just as it does to the result of RDTSC(P).

Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Peter Shier <pshier@google.com>
Reviewed-by: Marc Orr <marcorr@google.com>
Reviewed-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years agokvm: nVMX: NMI-window and interrupt-window exiting should wake L2 from HLT
Jim Mattson [Mon, 26 Nov 2018 19:22:32 +0000 (11:22 -0800)]
kvm: nVMX: NMI-window and interrupt-window exiting should wake L2 from HLT

According to the SDM, "NMI-window exiting" VM-exits wake a logical
processor from the same inactive states as would an NMI and
"interrupt-window exiting" VM-exits wake a logical processor from the
same inactive states as would an external interrupt. Specifically, they
wake a logical processor from the shutdown state and from the states
entered using the HLT and MWAIT instructions.

Fixes: 6dfacadd5858 ("KVM: nVMX: Add support for activity state HLT")
Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Peter Shier <pshier@google.com>
Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
[Squashed comments of two Jim's patches and used the simplified code
 hunk provided by Sean. - Radim]
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years agoKVM: nSVM: Fix nested guest support for PAUSE filtering.
Tambe, William [Tue, 13 Nov 2018 16:51:20 +0000 (16:51 +0000)]
KVM: nSVM: Fix nested guest support for PAUSE filtering.

Currently, the nested guest's PAUSE intercept intentions are not being
honored.  Instead, since the L0 hypervisor's pause_filter_count and
pause_filter_thresh values are still in place, these values are used
instead of those programmed in the VMCB by the L1 hypervisor.

To honor the desired PAUSE intercept support of the L1 hypervisor, the L0
hypervisor must use the PAUSE filtering fields of the L1 hypervisor. This
requires saving and restoring of both the L0 and L1 hypervisor's PAUSE
filtering fields.

Signed-off-by: William Tambe <william.tambe@amd.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years agokvm: Change offset in kvm_write_guest_offset_cached to unsigned
Jim Mattson [Fri, 14 Dec 2018 22:34:43 +0000 (14:34 -0800)]
kvm: Change offset in kvm_write_guest_offset_cached to unsigned

Since the offset is added directly to the hva from the
gfn_to_hva_cache, a negative offset could result in an out of bounds
write. The existing BUG_ON only checks for addresses beyond the end of
the gfn_to_hva_cache, not for addresses before the start of the
gfn_to_hva_cache.

Note that all current call sites have non-negative offsets.
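
Conceptually the change is just the parameter type (sketch; the rest of
the function is unchanged):

  int kvm_write_guest_offset_cached(struct kvm *kvm,
                                    struct gfn_to_hva_cache *ghc,
                                    void *data,
                                    unsigned int offset,  /* was signed */
                                    unsigned long len);

With a signed offset, a negative value slips past the end-of-cache
BUG_ON yet makes ghc->hva + offset point before the start of the cached
region; as an unsigned value it becomes huge and trips the existing
check instead.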

Fixes: 4ec6e8636256 ("kvm: Introduce kvm_write_guest_offset_cached()")
Reported-by: Cfir Cohen <cfir@google.com>
Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Cfir Cohen <cfir@google.com>
Reviewed-by: Peter Shier <pshier@google.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years agokvm: Disallow wraparound in kvm_gfn_to_hva_cache_init
Jim Mattson [Mon, 17 Dec 2018 21:53:33 +0000 (13:53 -0800)]
kvm: Disallow wraparound in kvm_gfn_to_hva_cache_init

Previously, in the case where (gpa + len) wrapped around, the entire
region was not validated, as the comment claimed. It doesn't actually
seem that wraparound should be allowed here at all.

Furthermore, since some callers don't check the return code from this
function, it seems prudent to clear ghc->memslot in the event of an
error.
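
The added check is roughly of the following shape (simplified; not the
exact diff):

  gfn_t start_gfn = gpa >> PAGE_SHIFT;
  gfn_t end_gfn = (gpa + len - 1) >> PAGE_SHIFT;

  /* gpa + len wrapped around: refuse to set up the cache */
  if (start_gfn > end_gfn) {
          ghc->memslot = NULL;    /* don't leave a stale memslot behind */
          return -EINVAL;
  }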

Fixes: 8f964525a121f ("KVM: Allow cross page reads and writes from cached translations.")
Reported-by: Cfir Cohen <cfir@google.com>
Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Cfir Cohen <cfir@google.com>
Reviewed-by: Marc Orr <marcorr@google.com>
Cc: Andrew Honig <ahonig@google.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years agoKVM: VMX: Remove duplicated include from vmx.c
YueHaibing [Tue, 18 Dec 2018 01:01:49 +0000 (01:01 +0000)]
KVM: VMX: Remove duplicated include from vmx.c

Remove duplicated include.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years agoselftests: kvm: report failed stage when exit reason is unexpected
Vitaly Kuznetsov [Wed, 19 Dec 2018 11:15:18 +0000 (12:15 +0100)]
selftests: kvm: report failed stage when exit reason is unexpected

When we get a report like

==== Test Assertion Failure ====
  x86_64/state_test.c:157: run->exit_reason == KVM_EXIT_IO
  pid=955 tid=955 - Success
     1 0x0000000000401350: main at state_test.c:154
     2 0x00007fc31c9e9412: ?? ??:0
     3 0x000000000040159d: _start at ??:?
  Unexpected exit reason: 8 (SHUTDOWN),

it is not obvious which particular stage failed. Add the info.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years agoKVM: x86: svm: report MSR_IA32_MCG_EXT_CTL as unsupported
Vitaly Kuznetsov [Wed, 19 Dec 2018 11:06:13 +0000 (12:06 +0100)]
KVM: x86: svm: report MSR_IA32_MCG_EXT_CTL as unsupported

AMD doesn't seem to implement MSR_IA32_MCG_EXT_CTL and the SVM code in
KVM knows nothing about it; however, this MSR is among emulated_msrs and
is thus returned by KVM_GET_MSR_INDEX_LIST. The consequent KVM_GET_MSRS,
of course, fails.

Report the MSR as unsupported to not confuse userspace.
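
The fix boils down to something like this (sketch, using the existing
has_emulated_msr hook):

  static bool svm_has_emulated_msr(int index)
  {
          switch (index) {
          case MSR_IA32_MCG_EXT_CTL:
                  return false;
          default:
                  break;
          }

          return true;
  }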

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years agoKVM: x86: fix size of x86_fpu_cache objects
Paolo Bonzini [Fri, 21 Dec 2018 10:25:59 +0000 (11:25 +0100)]
KVM: x86: fix size of x86_fpu_cache objects

The memory allocation in b666a4b69739 ("kvm: x86: Dynamically allocate
guest_fpu", 2018-11-06) is wrong, there are other members in struct fpu
before the fpregs_state union and the patch should be doing something
similar to the code in fpu__init_task_struct_size.  It's enough to run
a guest and then rmmod kvm to see slub errors which are actually caused
by memory corruption.

For now let's revert it to sizeof(struct fpu), which is conservative.
I have plans to move fsave/fxsave/xsave directly in KVM, without using
the kernel FPU helpers, and once it's done, the size of the object in
the cache will be something like kvm_xstate_size.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM: PPC: Book3S HV: Keep rc bits in shadow pgtable in sync with host
Suraj Jitindar Singh [Fri, 21 Dec 2018 03:28:43 +0000 (14:28 +1100)]
KVM: PPC: Book3S HV: Keep rc bits in shadow pgtable in sync with host

The rc bits contained in ptes are used to track whether a page has been
accessed and whether it is dirty. The accessed bit is used to age a page
and the dirty bit to track whether a page is dirty or not.

Now that we support nested guests there are three ptes which track the
state of the same page:
- The partition-scoped page table in the L1 guest, mapping L2->L1 address
- The partition-scoped page table in the host for the L1 guest, mapping
  L1->L0 address
- The shadow partition-scoped page table for the nested guest in the host,
  mapping L2->L0 address

The idea is to attempt to keep the rc state of these three ptes in sync,
both when setting and when clearing rc bits.

When setting the bits we achieve consistency by:
- Initially setting the bits in the shadow page table as the 'and' of the
  other two.
- When updating in software the rc bits in the shadow page table we
  ensure the state is consistent with the other two locations first, and
  update these before reflecting the change into the shadow page table.
  i.e. only set the bits in the L2->L0 pte if also set in both the
       L2->L1 and the L1->L0 pte.

When clearing the bits we achieve consistency by:
- The rc bits in the shadow page table are only cleared when discarding
  a pte, and we don't need to record this as if either bit is set then
  it must also be set in the pte mapping L1->L0.
- When L1 clears an rc bit in the L2->L1 mapping it __should__ issue a
  tlbie instruction
  - This means we will discard the pte from the shadow page table
    meaning the mapping will have to be setup again.
  - When we set up the pte again in the shadow page table we will
    ensure consistency with the L2->L1 pte.
- When the host clears an rc bit in the L1->L0 mapping we need to also
  clear the bit in any ptes in the shadow page table which map the same
  gfn so we will be notified if a nested guest accesses the page.
  This case is what this patch specifically concerns.
  - We can search the nest_rmap list for that given gfn and clear the
    same bit from all corresponding ptes in shadow page tables.
  - If a nested guest causes either of the rc bits to be set by software
    in future then we will update the L1->L0 pte and maintain consistency.

With the process outlined above we aim to maintain consistency of the 3
pte locations where we track rc for a given guest page.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
6 years agoKVM: PPC: Book3S HV: Introduce kvmhv_update_nest_rmap_rc_list()
Suraj Jitindar Singh [Fri, 21 Dec 2018 03:28:42 +0000 (14:28 +1100)]
KVM: PPC: Book3S HV: Introduce kvmhv_update_nest_rmap_rc_list()

Introduce a function kvmhv_update_nest_rmap_rc_list() which for a given
nest_rmap list will traverse it, find the corresponding pte in the shadow
page tables, and if it still maps the same host page update the rc bits
accordingly.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
6 years agoKVM: PPC: Book3S HV: Apply combination of host and l1 pte rc for nested guest
Suraj Jitindar Singh [Fri, 21 Dec 2018 03:28:41 +0000 (14:28 +1100)]
KVM: PPC: Book3S HV: Apply combination of host and l1 pte rc for nested guest

The shadow page table contains ptes for translations from nested guest
address to host address. Currently when creating these ptes we take the
rc bits from the pte for the L1 guest address to host address
translation. This is incorrect as we must also factor in the rc bits
from the pte for the nested guest address to L1 guest address
translation (as contained in the L1 guest partition table for the nested
guest).

By not calculating these bits correctly L1 may not have been correctly
notified when it needed to update its rc bits in the partition table it
maintains for its nested guest.

Modify the code so that the rc bits in the resultant pte for the L2->L0
translation are the 'and' of the rc bits in the L2->L1 pte and the L1->L0
pte, also accounting for whether this was a write access when setting
the dirty bit.
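
In pseudo-C, the combination described above amounts to (variable names
illustrative only):

  /* rc bits exposed in the resulting L2->L0 (shadow) pte */
  unsigned long rc = rc_l2_to_l1 & rc_l1_to_l0 &
                     (_PAGE_ACCESSED | _PAGE_DIRTY);

  /* only let the dirty bit through for an actual write access */
  if (!writing)
          rc &= ~_PAGE_DIRTY;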

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
6 years agoKVM: PPC: Book3S HV: Align gfn to L1 page size when inserting nest-rmap entry
Suraj Jitindar Singh [Fri, 21 Dec 2018 03:28:40 +0000 (14:28 +1100)]
KVM: PPC: Book3S HV: Align gfn to L1 page size when inserting nest-rmap entry

Nested rmap entries are used to store the translation from L1 gpa to L2
gpa when entries are inserted into the shadow (nested) page tables. This
rmap list is located by indexing the rmap array in the memslot by L1
gfn. When we come to search for these entries we only know the L1 page size
(which could be PAGE_SIZE, 2M or a 1G page) and so can only select a gfn
aligned to that size. This means that when we insert the entry, we need
to align the gfn we use to select the rmap list (in which the entry is
inserted) to the L1 page size as well, so that we can find it later.

By not doing this we were missing nested rmap entries when modifying L1
ptes which were for a page also passed through to an L2 guest.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
6 years agoKVM: PPC: Book3S HV: Hold kvm->mmu_lock across updating nested pte rc bits
Suraj Jitindar Singh [Fri, 21 Dec 2018 03:28:39 +0000 (14:28 +1100)]
KVM: PPC: Book3S HV: Hold kvm->mmu_lock across updating nested pte rc bits

We already hold the kvm->mmu_lock spin lock across updating the rc bits
in the pte for the L1 guest. Continue to hold the lock across updating
the rc bits in the pte for the nested guest as well to prevent
invalidations from occurring.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
6 years agoMerge tag 'kvm-ppc-next-4.21-1' of git://git.kernel.org/pub/scm/linux/kernel/git...
Radim Krčmář [Thu, 20 Dec 2018 13:54:09 +0000 (14:54 +0100)]
Merge tag 'kvm-ppc-next-4.21-1' of git://git./linux/kernel/git/paulus/powerpc

PPC KVM update for 4.21 from Paul Mackerras

The main new feature this time is support in HV nested KVM for passing
a device that is emulated by a level 0 hypervisor and presented to
level 1 as a PCI device through to a level 2 guest using VFIO.

Apart from that there are improvements for migration of radix guests
under HV KVM and some other fixes and cleanups.

6 years agoMerge tag 'kvm-s390-next-4.21-1' of git://git.kernel.org/pub/scm/linux/kernel/git...
Paolo Bonzini [Wed, 19 Dec 2018 21:17:09 +0000 (22:17 +0100)]
Merge tag 'kvm-s390-next-4.21-1' of git://git./linux/kernel/git/kvms390/linux into HEAD

KVM: s390: Fixes for 4.21

Just two small fixes.

6 years agoMerge tag 'kvmarm-for-v4.21' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm...
Paolo Bonzini [Wed, 19 Dec 2018 19:33:55 +0000 (20:33 +0100)]
Merge tag 'kvmarm-for-v4.21' of git://git./linux/kernel/git/kvmarm/kvmarm into HEAD

KVM/arm updates for 4.21

- Large PUD support for HugeTLB
- Single-stepping fixes
- Improved tracing
- Various timer and vgic fixups

6 years agoarm: KVM: Add S2_PMD_{MASK,SIZE} constants
Marc Zyngier [Wed, 19 Dec 2018 08:31:54 +0000 (08:31 +0000)]
arm: KVM: Add S2_PMD_{MASK,SIZE} constants

They were missing, and it turns out that we do need them now.

Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoarm/arm64: KVM: Add ARM_EXCEPTION_IS_TRAP macro
Marc Zyngier [Wed, 19 Dec 2018 08:28:38 +0000 (08:28 +0000)]
arm/arm64: KVM: Add ARM_EXCEPTION_IS_TRAP macro

32 and 64bit use different symbols to identify the traps.
32bit has a fine grained approach (prefetch abort, data abort and HVC),
while 64bit is pretty happy with just "trap".

This has been fine so far, except that we now need to decode some
of that in tracepoints that are common to both architectures.

Introduce ARM_EXCEPTION_IS_TRAP which abstracts the trap symbols
and make the tracepoint use it.

Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoarm64: KVM: Avoid setting the upper 32 bits of VTCR_EL2 to 1
Will Deacon [Thu, 13 Dec 2018 16:06:14 +0000 (16:06 +0000)]
arm64: KVM: Avoid setting the upper 32 bits of VTCR_EL2 to 1

Although bit 31 of VTCR_EL2 is RES1, we inadvertently end up setting all
of the upper 32 bits to 1 as well because we define VTCR_EL2_RES1 as
signed, which is sign-extended when assigning to kvm->arch.vtcr.

Lucky for us, the architecture currently treats these upper bits as RES0
so, whilst we've been naughty, we haven't set fire to anything yet.
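
The underlying C pitfall can be reproduced in isolation (stand-alone
example; the constant names are illustrative, not the kernel's):

  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  #define RES1_SIGNED   ((int32_t)0x80000000)   /* sign-extends on assignment */
  #define RES1_UNSIGNED UINT64_C(0x80000000)    /* does not */

  int main(void)
  {
          uint64_t vtcr_bad  = RES1_SIGNED;     /* 0xffffffff80000000 */
          uint64_t vtcr_good = RES1_UNSIGNED;   /* 0x0000000080000000 */

          printf("signed:   0x%016" PRIx64 "\n", vtcr_bad);
          printf("unsigned: 0x%016" PRIx64 "\n", vtcr_good);
          return 0;
  }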

Cc: <stable@vger.kernel.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: Fix unintended stage 2 PMD mappings
Christoffer Dall [Fri, 2 Nov 2018 07:53:22 +0000 (08:53 +0100)]
KVM: arm/arm64: Fix unintended stage 2 PMD mappings

There are two things we need to take care of when we create block
mappings in the stage 2 page tables:

  (1) The alignment within a PMD between the host address range and the
  guest IPA range must be the same, since otherwise we end up mapping
  pages with the wrong offset.

  (2) The head and tail of a memory slot may not cover a full block
  size, and we have to take care to not map those with block
  descriptors, since we could expose memory to the guest that the host
  did not intend to expose.

So far, we have been taking care of (1), but not (2), and our commentary
describing (1) was somewhat confusing.

This commit attempts to factor out the checks of both into a common
function, and if we don't pass the check, we won't attempt any PMD
mappings for neither hugetlbfs nor THP.

Note that we used to only check the alignment for THP, not for
hugetlbfs, but as far as I can tell the check needs to be applied to
both scenarios.

Cc: Ralph Palutke <ralph.palutke@fau.de>
Cc: Lukas Braun <koomi@moshbit.net>
Reported-by: Lukas Braun <koomi@moshbit.net>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoarm/arm64: KVM: vgic: Force VM halt when changing the active state of GICv3 PPIs...
Marc Zyngier [Tue, 18 Dec 2018 14:59:09 +0000 (14:59 +0000)]
arm/arm64: KVM: vgic: Force VM halt when changing the active state of GICv3 PPIs/SGIs

We currently only halt the guest when a vCPU messes with the active
state of an SPI. This is perfectly fine for GICv2, but isn't enough
for GICv3, where all vCPUs can access the state of any other vCPU.

Let's broaden the condition to include any GICv3 interrupt that
has an active state (i.e. all but LPIs).

Cc: stable@vger.kernel.org
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoarm64: KVM: Add trapped system register access tracepoint
Marc Zyngier [Tue, 4 Dec 2018 10:44:22 +0000 (10:44 +0000)]
arm64: KVM: Add trapped system register access tracepoint

We're pretty blind when it comes to system register tracing,
and rely on the ESR value displayed by kvm_handle_sys, which
isn't much.

Instead, let's add an actual name to the sysreg entries, so that
we can finally print it as we're about to perform the access
itself.

The new tracepoint is conveniently called kvm_sys_access.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm64: Make vcpu const in vcpu_read_sys_reg
Christoffer Dall [Thu, 29 Nov 2018 11:20:01 +0000 (12:20 +0100)]
KVM: arm64: Make vcpu const in vcpu_read_sys_reg

vcpu_read_sys_reg should not be modifying the VCPU structure.
Eventually, to handle EL2 sysregs for nested virtualization, we will
call vcpu_read_sys_reg from places that have a const vcpu pointer, which
will complain about the lack of the const modifier on the read path.

Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: arch_timer: Simplify kvm_timer_vcpu_terminate
Christoffer Dall [Tue, 11 Dec 2018 08:54:11 +0000 (09:54 +0100)]
KVM: arm/arm64: arch_timer: Simplify kvm_timer_vcpu_terminate

kvm_timer_vcpu_terminate can only be called in two scenarios:

 1. As part of cleanup during a failed VCPU create
 2. As part of freeing the whole VM (struct kvm refcount == 0)

In the first case, we cannot have programmed any timers or mapped any
IRQs, and therefore we do not have to cancel anything or unmap anything.

In the second case, the VCPU will have gone through kvm_timer_vcpu_put,
which will have canceled the emulated physical timer's hrtimer, and we
do not need to do that here as well.  We also do not care if the irq is
recorded as mapped or not in the VGIC data structure, because the whole
VM is going away.  That leaves us only with having to ensure that we
cancel the bg_timer if we were blocking the last time we called
kvm_timer_vcpu_put().

Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: Remove arch timer workqueue
Christoffer Dall [Tue, 27 Nov 2018 12:48:08 +0000 (13:48 +0100)]
KVM: arm/arm64: Remove arch timer workqueue

The use of a work queue in the hrtimer expire function for the bg_timer
is a leftover from the time when we would inject interrupts when the
bg_timer expired.

Since we are no longer doing that, we can instead call
kvm_vcpu_wake_up() directly from the hrtimer function and remove all
workqueue functionality from the arch timer code.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: Fixup the kvm_exit tracepoint
Christoffer Dall [Mon, 3 Dec 2018 20:31:24 +0000 (21:31 +0100)]
KVM: arm/arm64: Fixup the kvm_exit tracepoint

The kvm_exit tracepoint strangely always reported exits as being IRQs.
This seems to be because either the __print_symbolic or the tracepoint
macros use a variable named idx.

Take this chance to update the fields in the tracepoint to reflect the
concepts in the arm64 architecture that we pass to the tracepoint and
move the exception type table to the same location and header files as
the exits code.

We also clear out the exception code to 0 for IRQ exits (which
translates to UNKNOWN in text) to make it slightly less confusing to
parse the trace output.

Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: vgic: Consider priority and active state for pending irq
Christoffer Dall [Sat, 1 Dec 2018 21:21:47 +0000 (13:21 -0800)]
KVM: arm/arm64: vgic: Consider priority and active state for pending irq

When checking if there are any pending IRQs for the VM, consider the
active state and priority of the IRQs as well.

Otherwise we could be continuously scheduling a guest hypervisor without
it seeing an IRQ.

Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: vgic: Fix off-by-one bug in vgic_get_irq()
Gustavo A. R. Silva [Wed, 12 Dec 2018 20:11:23 +0000 (14:11 -0600)]
KVM: arm/arm64: vgic: Fix off-by-one bug in vgic_get_irq()

When using the nospec API, it should be taken into account that:

"...if the CPU speculates past the bounds check then
 * array_index_nospec() will clamp the index within the range of [0,
 * size)."

The above is part of the header for macro array_index_nospec() in
linux/nospec.h

Now, in this particular case, if intid evaluates to exactly VGIC_MAX_SPI
or to exactly VGIC_MAX_PRIVATE, the array_index_nospec() macro ends up
returning VGIC_MAX_SPI - 1 or VGIC_MAX_PRIVATE - 1 respectively, instead
of VGIC_MAX_SPI or VGIC_MAX_PRIVATE, which, based on the original logic:

/* SGIs and PPIs */
if (intid <= VGIC_MAX_PRIVATE)
  return &vcpu->arch.vgic_cpu.private_irqs[intid];

  /* SPIs */
if (intid <= VGIC_MAX_SPI)
  return &kvm->arch.vgic.spis[intid - VGIC_NR_PRIVATE_IRQS];

are valid values for intid.

Fix this by calling array_index_nospec() macro with VGIC_MAX_PRIVATE + 1
and VGIC_MAX_SPI + 1 as arguments for its parameter size.
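
A sketch of the resulting fix for the private IRQs path (simplified):

  /* SGIs and PPIs */
  if (intid <= VGIC_MAX_PRIVATE) {
          intid = array_index_nospec(intid, VGIC_MAX_PRIVATE + 1);
          return &vcpu->arch.vgic_cpu.private_irqs[intid];
  }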

Fixes: 41b87599c743 ("KVM: arm/arm64: vgic: fix possible spectre-v1 in vgic_get_irq()")
Cc: stable@vger.kernel.org
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
[dropped the SPI part which was fixed separately]
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: vgic: Cap SPIs to the VM-defined maximum
Marc Zyngier [Tue, 4 Dec 2018 17:11:19 +0000 (17:11 +0000)]
KVM: arm/arm64: vgic: Cap SPIs to the VM-defined maximum

SPIs should be checked against the VM's specific configuration, and
not the architectural maximum.

Cc: stable@vger.kernel.org
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm64: Clarify explanation of STAGE2_PGTABLE_LEVELS
Christoffer Dall [Tue, 6 Nov 2018 12:33:38 +0000 (13:33 +0100)]
KVM: arm64: Clarify explanation of STAGE2_PGTABLE_LEVELS

In attempting to re-construct the logic for our stage 2 page table
layout I found the reasoning in the comment explaining how we calculate
the number of levels used for stage 2 page tables a bit backwards.

This commit attempts to clarify the comment, to make it slightly easier
to read without having the Arm ARM open on the right page.

While we're at it, fixup a typo in a comment that was recently changed.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: vgic: Do not cond_resched_lock() with IRQs disabled
Julien Thierry [Mon, 26 Nov 2018 18:26:44 +0000 (18:26 +0000)]
KVM: arm/arm64: vgic: Do not cond_resched_lock() with IRQs disabled

To change the active state of an MMIO, halt is requested for all vcpus of
the affected guest before modifying the IRQ state. This is done by calling
cond_resched_lock() in vgic_mmio_change_active(). However interrupts are
disabled at this point and we cannot reschedule a vcpu.

We actually don't need any of this, as kvm_arm_halt_guest ensures that
all the other vcpus are out of the guest. Let's just drop that useless
code.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Suggested-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm64: Add support for creating PUD hugepages at stage 2
Punit Agrawal [Tue, 11 Dec 2018 17:10:41 +0000 (17:10 +0000)]
KVM: arm64: Add support for creating PUD hugepages at stage 2

KVM only supports PMD hugepages at stage 2. Now that the various page
handling routines are updated, extend the stage 2 fault handling to
map in PUD hugepages.

Addition of PUD hugepage support enables additional page sizes (e.g.,
1G with 4K granule) which can be useful on cores that support mapping
larger block sizes in the TLB entries.
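
As a rough sketch of the new fault-handling path (helper names taken from
this series; the exact code in user_mem_abort() may differ):

  if (vma_pagesize == PUD_SIZE) {
          pud_t new_pud = kvm_pfn_pud(pfn, mem_type);

          new_pud = kvm_pud_mkhuge(new_pud);
          if (writable)
                  new_pud = kvm_s2pud_mkwrite(new_pud);
          if (needs_exec)
                  new_pud = kvm_s2pud_mkexec(new_pud);

          ret = stage2_set_pud_huge(kvm, memcache, fault_ipa, &new_pud);
  }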

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
[ Replace BUG() => WARN_ON(1) for arm32 PUD helpers ]
Signed-off-by: Suzuki Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm64: Update age handlers to support PUD hugepages
Punit Agrawal [Tue, 11 Dec 2018 17:10:40 +0000 (17:10 +0000)]
KVM: arm64: Update age handlers to support PUD hugepages

In preparation for creating larger hugepages at Stage 2, add support
to the age handling notifiers for PUD hugepages when encountered.

Provide trivial helpers for arm32 to allow sharing code.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
[ Replaced BUG() => WARN_ON(1) for arm32 PUD helpers ]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm64: Support handling access faults for PUD hugepages
Punit Agrawal [Tue, 11 Dec 2018 17:10:39 +0000 (17:10 +0000)]
KVM: arm64: Support handling access faults for PUD hugepages

In preparation for creating larger hugepages at Stage 2, extend the
access fault handling at Stage 2 to support PUD hugepages when
encountered.

Provide trivial helpers for arm32 to allow sharing of code.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
[ Replaced BUG() => WARN_ON(1) in PUD helpers ]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm64: Support PUD hugepage in stage2_is_exec()
Punit Agrawal [Tue, 11 Dec 2018 17:10:38 +0000 (17:10 +0000)]
KVM: arm64: Support PUD hugepage in stage2_is_exec()

In preparation for creating PUD hugepages at stage 2, add support for
detecting execute permissions on PUD page table entries. Faults due to
lack of execute permissions on page table entries are used to perform
i-cache invalidation on first execute.

Provide trivial implementations of arm32 helpers to allow sharing of
code.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
[ Replaced BUG() => WARN_ON(1) in arm32 PUD helpers ]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm64: Support dirty page tracking for PUD hugepages
Punit Agrawal [Tue, 11 Dec 2018 17:10:37 +0000 (17:10 +0000)]
KVM: arm64: Support dirty page tracking for PUD hugepages

In preparation for creating PUD hugepages at stage 2, add support for
write protecting PUD hugepages when they are encountered. Write
protecting guest tables is used to track dirty pages when migrating
VMs.

Also, provide trivial implementations of required kvm_s2pud_* helpers
to allow sharing of code with arm32.
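
A minimal sketch of the arm64 side of the write-protect helper (assuming it
can reuse the existing PTE helper, as the PMD variant does):

  static inline void kvm_set_s2pud_readonly(pud_t *pudp)
  {
          /* Clear stage 2 write permission on a PUD block mapping so the
           * next guest write faults and lands in the dirty log. */
          kvm_set_s2pte_readonly((pte_t *)pudp);
  }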

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
[ Replaced BUG() => WARN_ON() in arm32 pud helpers ]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: Introduce helpers to manipulate page table entries
Punit Agrawal [Tue, 11 Dec 2018 17:10:36 +0000 (17:10 +0000)]
KVM: arm/arm64: Introduce helpers to manipulate page table entries

Introduce helpers to abstract architectural handling of the conversion
of pfn to page table entries and marking a PMD page table entry as a
block entry.

The helpers are introduced in preparation for supporting PUD hugepages
at stage 2 - which are supported on arm64 but do not exist on arm.
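
A sketch of the kind of helpers introduced (arm64 versions shown; the arm
side provides the equivalents it needs):

  static inline pte_t kvm_pfn_pte(kvm_pfn_t pfn, pgprot_t prot)
  {
          return pfn_pte(pfn, prot);
  }

  static inline pmd_t kvm_pfn_pmd(kvm_pfn_t pfn, pgprot_t prot)
  {
          return pfn_pmd(pfn, prot);
  }

  /* Mark a PMD entry as a block (huge) mapping. */
  static inline pmd_t kvm_pmd_mkhuge(pmd_t pmd)
  {
          return pmd_mkhuge(pmd);
  }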

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: Re-factor setting the Stage 2 entry to exec on fault
Punit Agrawal [Tue, 11 Dec 2018 17:10:35 +0000 (17:10 +0000)]
KVM: arm/arm64: Re-factor setting the Stage 2 entry to exec on fault

Stage 2 fault handler marks a page as executable if it is handling an
execution fault or if it was a permission fault in which case the
executable bit needs to be preserved.

The logic to decide if the page should be marked executable is
duplicated for PMD and PTE entries. To avoid creating another copy
when support for PUD hugepages is introduced, refactor the code to
share the checks needed to mark a page table entry as executable.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: Share common code in user_mem_abort()
Punit Agrawal [Tue, 11 Dec 2018 17:10:34 +0000 (17:10 +0000)]
KVM: arm/arm64: Share common code in user_mem_abort()

The code for operations such as marking the pfn as dirty, and
dcache/icache maintenance during stage 2 fault handling is duplicated
between normal pages and PMD hugepages.

Instead of creating another copy of the operations when we introduce
PUD hugepages, let's share them across the different pagesizes.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: vgic-v2: Set active_source to 0 when restoring state
Christoffer Dall [Tue, 11 Dec 2018 11:51:03 +0000 (12:51 +0100)]
KVM: arm/arm64: vgic-v2: Set active_source to 0 when restoring state

When restoring the active state from userspace, we don't know which CPU
was the source for the active state, and this is not architecturally
exposed in any of the register state.

Set the active_source to 0 in this case.  In the future, we can expand
on this and expose the source CPU as additional information to
userspace for GICv2 if anyone cares.

Cc: stable@vger.kernel.org
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: Log PSTATE for unhandled sysregs
Mark Rutland [Thu, 6 Dec 2018 12:31:44 +0000 (12:31 +0000)]
KVM: arm/arm64: Log PSTATE for unhandled sysregs

When KVM traps an unhandled sysreg/coproc access from a guest, it logs
the guest PC. To aid debugging, it would be helpful to know which
exception level the trap came from, along with other PSTATE/CPSR bits,
so let's log the PSTATE/CPSR too.
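
The change amounts to printing the CPSR/PSTATE value alongside the PC,
roughly (format string illustrative):

  kvm_pr_unimpl("Unsupported guest sys_reg access at: %lx [%08lx]\n",
                *vcpu_pc(vcpu), *vcpu_cpsr(vcpu));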

Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: Fix VMID alloc race by reverting to lock-less
Christoffer Dall [Tue, 11 Dec 2018 12:23:57 +0000 (13:23 +0100)]
KVM: arm/arm64: Fix VMID alloc race by reverting to lock-less

We recently addressed a VMID generation race by introducing a read/write
lock around accesses and updates to the vmid generation values.

However, kvm_arch_vcpu_ioctl_run() also calls need_new_vmid_gen() but
does so without taking the read lock.

As far as I can tell, this can lead to the same kind of race:

  VM 0, VCPU 0              VM 0, VCPU 1
  ------------              ------------
  update_vttbr (vmid 254)
                            update_vttbr (vmid 1) // roll over
                            read_lock(kvm_vmid_lock);
                            force_vm_exit()
  local_irq_disable
  need_new_vmid_gen == false //because vmid gen matches

  enter_guest (vmid 254)
                            kvm_arch.vttbr = <PGD>:<VMID 1>
                            read_unlock(kvm_vmid_lock);

                            enter_guest (vmid 1)

This results in running two VCPUs in the same VM with different VMIDs
and (even worse) allows other VCPUs from other VMs to allocate the
clashing VMID 254 from the new generation as long as VCPU 0 is not exiting.

Attempt to solve this by making sure vttbr is updated before another CPU
can observe the updated VMID generation.
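
Concretely, the idea is to publish the new vttbr before the per-VM
generation becomes visible, roughly (sketch, not the exact diff):

  kvm->arch.vttbr = kvm_phys_to_vttbr(pgd_phys) | vmid;

  /* Make sure the vttbr write is observable before the generation
   * update, so a vcpu that sees the current generation also uses the
   * matching vttbr. */
  smp_wmb();
  WRITE_ONCE(kvm->arch.vmid_gen, atomic64_read(&kvm_vmid_gen));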

Cc: stable@vger.kernel.org
Fixes: f0cf47d939d0 ("KVM: arm/arm64: Close VMID generation race")
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoarm64: KVM: Consistently advance singlestep when emulating instructions
Mark Rutland [Fri, 9 Nov 2018 15:07:11 +0000 (15:07 +0000)]
arm64: KVM: Consistently advance singlestep when emulating instructions

When we emulate a guest instruction, we don't advance the hardware
singlestep state machine, and thus the guest will receive a software
step exception after a next instruction which is not emulated by the
host.

We bodge around this in an ad-hoc fashion. Sometimes we explicitly check
whether userspace requested a single step, and fake a debug exception
from within the kernel. Other times, we advance the HW singlestep state and
rely on the HW to generate the exception for us. Thus, the observed step
behaviour differs for host and guest.

Let's make this simpler and consistent by always advancing the HW
singlestep state machine when we skip an instruction. Thus we can rely
on the hardware to generate the singlestep exception for us, and never
need to explicitly check for an active-pending step, nor do we need to
fake a debug exception from the guest.
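
A sketch of what advancing the state machine means when skipping an
instruction (where exactly this lives in the skip path is simplified here):

  /* Pretend the hardware executed the skipped instruction: move the
   * step state machine from active-not-pending to active-pending by
   * clearing PSTATE.SS, so the step exception is taken on the next
   * guest instruction. */
  *vcpu_cpsr(vcpu) &= ~DBG_SPSR_SS;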

Cc: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoarm64: KVM: Skip MMIO insn after emulation
Mark Rutland [Fri, 9 Nov 2018 15:07:10 +0000 (15:07 +0000)]
arm64: KVM: Skip MMIO insn after emulation

When we emulate an MMIO instruction, we advance the CPU state within
decode_hsr(), before emulating the instruction effects.

Having this logic in decode_hsr() is opaque, and advancing the state
before emulation is problematic. It gets in the way of applying
consistent single-step logic, and it prevents us from being able to fail
an MMIO instruction with a synchronous exception.

Clean this up by only advancing the CPU state *after* the effects of the
instruction are emulated.
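
A simplified sketch of the resulting ordering in the MMIO completion path
(details such as sign extension omitted):

  if (!run->mmio.is_write)
          vcpu_set_reg(vcpu, vcpu->arch.mmio_decode.rt,
                       kvm_mmio_read_buf(run->mmio.data, run->mmio.len));

  /* The MMIO instruction has now been fully emulated; only at this
   * point do we skip past it. */
  kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));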

Cc: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: s390: fix kmsg component kvm-s390
Michael Mueller [Mon, 3 Dec 2018 09:20:22 +0000 (10:20 +0100)]
KVM: s390: fix kmsg component kvm-s390

Relocate the #define statement for kvm-related kernel messages to before
the include of printk so that it takes effect.
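
The resulting ordering at the top of the file looks roughly like this
(illustrative; the real file includes more headers than shown):

  /* must come before any header that pulls in <linux/printk.h> */
  #define KMSG_COMPONENT "kvm-s390"
  #define pr_fmt(fmt) KMSG_COMPONENT ": " fmt

  #include <linux/kvm_host.h>
  /* ... remaining includes ... */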

Signed-off-by: Michael Mueller <mimu@linux.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
6 years agoKVM: s390: unregister debug feature on failing arch init
Michael Mueller [Fri, 30 Nov 2018 14:32:06 +0000 (15:32 +0100)]
KVM: s390: unregister debug feature on failing arch init

Make sure the debug feature and its allocated resources get
released upon unsuccessful architecture initialization.

A related indication of the issue will be reported as a kernel
message.
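
The error path then follows the usual pattern, roughly (the failing step
and its message are illustrative stand-ins):

  kvm_s390_dbf = debug_register("kvm-trace", 32, 1, 7 * sizeof(long));
  if (!kvm_s390_dbf)
          return -ENOMEM;

  rc = some_other_init_step();             /* illustrative stand-in */
  if (rc) {
          pr_err("initialization failed\n");  /* the related kmsg */
          debug_unregister(kvm_s390_dbf);  /* release the debug feature */
  }
  return rc;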

Signed-off-by: Michael Mueller <mimu@linux.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Pierre Morel <pmorel@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20181130143215.69496-2-mimu@linux.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
6 years agoKVM: PPC: Book3S HV: Allow passthrough of an emulated device to an L3 guest
Suraj Jitindar Singh [Fri, 14 Dec 2018 05:29:10 +0000 (16:29 +1100)]
KVM: PPC: Book3S HV: Allow passthrough of an emulated device to an L3 guest

Previously when a device was being emulated by an L1 guest for an L2
guest, that device couldn't then be passed through to an L3 guest. This
was because the L1 guest had no method for accessing L3 memory.

The hcall H_COPY_TOFROM_GUEST provides this access. Thus this setup for
passthrough can now be allowed.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
6 years agoKVM: PPC: Book3S: Introduce new hcall H_COPY_TOFROM_GUEST to access quadrants 1 & 2
Suraj Jitindar Singh [Fri, 14 Dec 2018 05:29:09 +0000 (16:29 +1100)]
KVM: PPC: Book3S: Introduce new hcall H_COPY_TOFROM_GUEST to access quadrants 1 & 2

A guest cannot access quadrants 1 or 2 as this would result in an
exception. Thus introduce the hcall H_COPY_TOFROM_GUEST to be used by a
guest when it wants to perform an access to quadrants 1 or 2, for
example when it wants to access memory for one of its nested guests.

Also provide an implementation for the kvm-hv module.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
6 years agoKVM: PPC: Book3S HV: Allow passthrough of an emulated device to an L2 guest
Suraj Jitindar Singh [Fri, 14 Dec 2018 05:29:08 +0000 (16:29 +1100)]
KVM: PPC: Book3S HV: Allow passthrough of an emulated device to an L2 guest

Allow for a device which is being emulated at L0 (the host) for an L1
guest to be passed through to a nested (L2) guest.

The existing kvmppc_hv_emulate_mmio function can be used here. The main
challenge is that for a load the result must be stored into the L2 gpr,
not an L1 gpr as would normally be the case after going out to qemu to
complete the operation. This presents a challenge as at this point the
L2 gpr state has been written back into L1 memory.

To work around this we store the address in L1 memory of the L2 gpr
where the result of the load is to be stored and use the new io_gpr
value KVM_MMIO_REG_NESTED_GPR to indicate that this is a nested load for
which completion must be done when returning back into the kernel. Then
in kvmppc_complete_mmio_load() the resultant value is written into L1
memory at the location of the indicated L2 gpr.

Note that we don't currently let an L1 guest emulate a device for an L2
guest which is then passed through to an L3 guest.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
6 years agoKVM: PPC: Update kvmppc_st and kvmppc_ld to use quadrants
Suraj Jitindar Singh [Fri, 14 Dec 2018 05:29:07 +0000 (16:29 +1100)]
KVM: PPC: Update kvmppc_st and kvmppc_ld to use quadrants

The functions kvmppc_st and kvmppc_ld are used to access guest memory
from the host using a guest effective address. They do so by translating
through the process table to obtain a guest real address and then using
kvm_read_guest or kvm_write_guest to make the access with the guest real
address.

This method of access however only works for L1 guests and will give the
incorrect results for a nested guest.

We can however use the store_to_eaddr and load_from_eaddr kvmppc_ops to
perform the access for a nested guest (and an L1 guest). So attempt this
method first and fall back to the old method if this fails and we aren't
running a nested guest.

At this stage there is no fallback method to perform the access for a
nested guest and this is left as a future improvement. For now we will
return to the nested guest and rely on the fact that a translation
should be faulted in before retrying the access.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
6 years agoKVM: PPC: Add load_from_eaddr and store_to_eaddr to the kvmppc_ops struct
Suraj Jitindar Singh [Fri, 14 Dec 2018 05:29:06 +0000 (16:29 +1100)]
KVM: PPC: Add load_from_eaddr and store_to_eaddr to the kvmppc_ops struct

The kvmppc_ops struct is used to store function pointers to kvm
implementation specific functions.

Introduce two new functions load_from_eaddr and store_to_eaddr to be
used to load from and store to a guest effective address respectively.

Also implement these for the kvm-hv module. If we are using the radix
mmu then we can call the functions to access quadrants 1 and 2.
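
The new hooks amount to two extra members of kvmppc_ops, roughly
(signatures as assumed here):

  struct kvmppc_ops {
          /* ... existing members ... */
          int (*load_from_eaddr)(struct kvm_vcpu *vcpu, ulong *eaddr,
                                 void *ptr, int size);
          int (*store_to_eaddr)(struct kvm_vcpu *vcpu, ulong *eaddr,
                                void *ptr, int size);
  };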

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
6 years agoKVM: PPC: Book3S HV: Implement functions to access quadrants 1 & 2
Suraj Jitindar Singh [Fri, 14 Dec 2018 05:29:05 +0000 (16:29 +1100)]
KVM: PPC: Book3S HV: Implement functions to access quadrants 1 & 2

The POWER9 radix mmu has the concept of quadrants. The quadrant number
is the two high bits of the effective address and determines the fully
qualified address to be used for the translation. The fully qualified
address consists of the effective lpid, the effective pid and the
effective address. This then gives 4 possible quadrants: 0, 1, 2, and 3.

When accessing these quadrants the fully qualified address is obtained
as follows:

Quadrant | Hypervisor        | Guest
------------------------------------------------
         | EA[0:1] = 0b00    | EA[0:1] = 0b00
    0    | effLPID = 0       | effLPID = LPIDR
         | effPID  = PIDR    | effPID  = PIDR
------------------------------------------------
         | EA[0:1] = 0b01    |
    1    | effLPID = LPIDR   | Invalid Access
         | effPID  = PIDR    |
------------------------------------------------
         | EA[0:1] = 0b10    |
    2    | effLPID = LPIDR   | Invalid Access
         | effPID  = 0       |
------------------------------------------------
         | EA[0:1] = 0b11    | EA[0:1] = 0b11
    3    | effLPID = 0       | effLPID = LPIDR
         | effPID  = 0       | effPID  = 0
------------------------------------------------

In the guest:
Quadrant 3 is normally used to address the operating system since this
uses effPID=0 and effLPID=LPIDR, meaning the PID register doesn't need to
be switched.
Quadrant 0 is normally used to address user space since the effLPID and
effPID are taken from the corresponding registers.

In the host:
Quadrants 0 and 3 are used as above; however, the effLPID is always 0 to
address the host.

Quadrants 1 and 2 can be used by the host to address guest memory using
a guest effective address. Since the effLPID comes from the LPID register,
the host loads the LPID of the guest it would like to access (and the
PID of the process) and can perform accesses to a guest effective
address.

This means quadrant 1 can be used to address the guest user space and
quadrant 2 can be used to address the guest operating system from the
hypervisor, using a guest effective address.

Access to the quadrants can cause a Hypervisor Data Storage Interrupt
(HDSI) due to being unable to perform partition scoped translation.
Previously this could only be generated from a guest and so the code
path expects us to take the KVM trampoline in the interrupt handler.
This is no longer the case so we modify the handler to call
bad_page_fault() to check if we were expecting this fault so we can
handle it gracefully and just return with an error code. In the hash mmu
case we still raise an unknown exception since quadrants aren't defined
for the hash mmu.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
6 years agoKVM: PPC: Book3S HV: Add function kvmhv_vcpu_is_radix()
Suraj Jitindar Singh [Fri, 14 Dec 2018 05:29:04 +0000 (16:29 +1100)]
KVM: PPC: Book3S HV: Add function kvmhv_vcpu_is_radix()

There exists a function kvm_is_radix() which is used to determine if a
kvm instance is using the radix mmu. However this only applies to the
first level (L1) guest. Add a function kvmhv_vcpu_is_radix() which can
be used to determine if the current execution context of the vcpu is
radix, accounting for if the vcpu is running a nested guest.

Currently all nested guests must be radix but this may change in the
future.
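
A sketch of such a helper (close to, but not necessarily identical to, the
actual patch):

  static inline bool kvmhv_vcpu_is_radix(struct kvm_vcpu *vcpu)
  {
          /* A nested (L2) guest is currently always radix; otherwise
           * fall back to the L1 guest's MMU mode. */
          if (vcpu->arch.nested)
                  return true;
          return kvm_is_radix(vcpu->kvm);
  }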

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
6 years agoKVM: PPC: Book3S: Only report KVM_CAP_SPAPR_TCE_VFIO on powernv machines
Suraj Jitindar Singh [Fri, 14 Dec 2018 05:29:03 +0000 (16:29 +1100)]
KVM: PPC: Book3S: Only report KVM_CAP_SPAPR_TCE_VFIO on powernv machines

The kvm capability KVM_CAP_SPAPR_TCE_VFIO is used to indicate the
availability of in-kernel TCE acceleration for VFIO. However, it is
currently only available on a powernv machine, not on a pseries machine.

Thus make this capability dependent on having the cpu feature
CPU_FTR_HVMODE.
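
In kvm_vm_ioctl_check_extension(), the capability check then becomes
roughly:

  case KVM_CAP_SPAPR_TCE_VFIO:
          /* in-kernel TCE acceleration requires hypervisor mode (powernv) */
          r = !!cpu_has_feature(CPU_FTR_HVMODE);
          break;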

[paulus@ozlabs.org - fixed compilation for Book E.]

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
6 years agoKVM: PPC: Book3S HV: Flush guest mappings when turning dirty tracking on/off
Paul Mackerras [Wed, 12 Dec 2018 04:17:17 +0000 (15:17 +1100)]
KVM: PPC: Book3S HV: Flush guest mappings when turning dirty tracking on/off

This adds code to flush the partition-scoped page tables for a radix
guest when dirty tracking is turned on or off for a memslot.  Only the
guest real addresses covered by the memslot are flushed.  The reason
for this is to get rid of any 2M PTEs in the partition-scoped page
tables that correspond to host transparent huge pages, so that page
dirtiness is tracked at a system page (4k or 64k) granularity rather
than a 2M granularity.  The page tables are also flushed when turning
dirty tracking off so that the memslot's address space can be
repopulated with THPs if possible.

To do this, we add a new function kvmppc_radix_flush_memslot().  Since
this does what's needed for kvmppc_core_flush_memslot_hv() on a radix
guest, we now make kvmppc_core_flush_memslot_hv() call the new
kvmppc_radix_flush_memslot() rather than calling kvm_unmap_radix()
for each page in the memslot.  This has the effect of fixing a bug in
that kvmppc_core_flush_memslot_hv() was previously calling
kvm_unmap_radix() without holding the kvm->mmu_lock spinlock, which
is required to be held.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
6 years agoKVM: PPC: Book3S HV: Cleanups - constify memslots, fix comments
Paul Mackerras [Wed, 12 Dec 2018 04:16:48 +0000 (15:16 +1100)]
KVM: PPC: Book3S HV: Cleanups - constify memslots, fix comments

This adds 'const' to the declarations for the struct kvm_memory_slot
pointer parameters of some functions, which will make it possible to
call those functions from kvmppc_core_commit_memory_region_hv()
in the next patch.

This also fixes some comments about locking.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
6 years agoKVM: PPC: Book3S HV: Map single pages when doing dirty page logging
Paul Mackerras [Wed, 12 Dec 2018 04:16:17 +0000 (15:16 +1100)]
KVM: PPC: Book3S HV: Map single pages when doing dirty page logging

For radix guests, this makes KVM map guest memory as individual pages
when dirty page logging is enabled for the memslot corresponding to the
guest real address.  Having a separate partition-scoped PTE for each
system page mapped to the guest means that we have a separate dirty
bit for each page, thus making the reported dirty bitmap more accurate.
Without this, if part of guest memory is backed by transparent huge
pages, the dirty status is reported at a 2MB granularity rather than
a 64kB (or 4kB) granularity for that part, causing userspace to have
to transmit more data when migrating the guest.
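
The key change in the radix fault path is roughly the following (sketch;
the level encoding is assumed here, with 0 meaning a system-page PTE):

  /* With dirty logging enabled on this memslot, refuse to install a
   * 2MB (or 1GB) mapping and fall back to a single system page. */
  if (memslot->flags & KVM_MEM_LOG_DIRTY_PAGES)
          level = 0;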

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
6 years agoKVM: PPC: Pass change type down to memslot commit function
Bharata B Rao [Wed, 12 Dec 2018 04:15:30 +0000 (15:15 +1100)]
KVM: PPC: Pass change type down to memslot commit function

Currently, kvm_arch_commit_memory_region() gets called with a
parameter indicating what type of change is being made to the memslot,
but it doesn't pass it down to the platform-specific memslot commit
functions.  This adds the `change' parameter to the lower-level
functions so that they can use it in future.
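
The plumbing is just an extra parameter on the commit hook, roughly:

  void kvmppc_core_commit_memory_region(struct kvm *kvm,
                          const struct kvm_userspace_memory_region *mem,
                          const struct kvm_memory_slot *old,
                          const struct kvm_memory_slot *new,
                          enum kvm_mr_change change);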

[paulus@ozlabs.org - fix book E also.]

Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
6 years agokvm: selftests: ucall: improve ucall placement in memory, fix unsigned comparison
Paolo Bonzini [Fri, 14 Dec 2018 11:29:43 +0000 (12:29 +0100)]
kvm: selftests: ucall: improve ucall placement in memory, fix unsigned comparison

Based on a patch by Andrew Jones.

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>