6 years ago  kvm: x86: move MSR_IA32_TSC handling to x86.c
Paolo Bonzini [Fri, 13 Apr 2018 09:38:35 +0000 (11:38 +0200)]
kvm: x86: move MSR_IA32_TSC handling to x86.c

This is not specific to Intel/AMD anymore.  The TSC offset is available
in vcpu->arch.tsc_offset.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years ago  X86/KVM: Properly update 'tsc_offset' to represent the running guest
KarimAllah Ahmed [Sat, 14 Apr 2018 03:10:52 +0000 (05:10 +0200)]
X86/KVM: Properly update 'tsc_offset' to represent the running guest

Update 'tsc_offset' on vmentry/vmexit of L2 guests to ensure that it always
captures the TSC_OFFSET of the running guest whether it is the L1 or L2
guest.
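
As an illustration only, here is a minimal standalone sketch of that bookkeeping; the struct and function names are made up for the example and are not the actual KVM symbols:

    #include <stdint.h>

    struct vcpu_sketch {
        uint64_t tsc_offset;     /* offset of the guest that is currently running */
        uint64_t l1_tsc_offset;  /* L1's offset, saved while L2 runs */
    };

    /* On nested vmentry, L2 runs with L1's offset plus the offset from vmcs12. */
    static void enter_l2(struct vcpu_sketch *v, uint64_t vmcs12_tsc_offset)
    {
        v->l1_tsc_offset = v->tsc_offset;
        v->tsc_offset = v->l1_tsc_offset + vmcs12_tsc_offset;
    }

    /* On nested vmexit, restore L1's offset so tsc_offset again matches the
     * running guest. */
    static void exit_to_l1(struct vcpu_sketch *v)
    {
        v->tsc_offset = v->l1_tsc_offset;
    }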

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Jim Mattson <jmattson@google.com>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
[AMD changes, fix update_ia32_tsc_adjust_msr. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years ago  kvm: selftests: add -std=gnu99 cflags
Peng Hao [Fri, 13 Apr 2018 00:36:30 +0000 (08:36 +0800)]
kvm: selftests: add -std=gnu99 cflags

lib/kvm_util.c: In function ‘kvm_memcmp_hva_gva’:
lib/kvm_util.c:332:2: error: ‘for’ loop initial declarations are only allowed in C99 mode

So add -std=gnu99 to CFLAGS

Signed-off-by: Peng Hao <peng.hao2@zte.com.cn>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years ago  x86: Add check for APIC access address for vmentry of L2 guests
Krish Sadhukhan [Wed, 11 Apr 2018 05:10:16 +0000 (01:10 -0400)]
x86: Add check for APIC access address for vmentry of L2 guests

According to the sub-section titled 'VM-Execution Control Fields' in the
section titled 'Basic VM-Entry Checks' in Intel SDM vol. 3C, the following
vmentry check must be enforced:

    If the 'virtualize APIC-accesses' VM-execution control is 1, the
    APIC-access address must satisfy the following checks:

- Bits 11:0 of the address must be 0.
- The address should not set any bits beyond the processor's
  physical-address width.

This patch adds the necessary check to conform to this rule. If the check
fails, we cause the L2 VMENTRY to fail, which is what the associated unit
test (following patch) expects.
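
A standalone sketch of the check described above (the helper name is illustrative, not the actual KVM function):

    #include <stdbool.h>
    #include <stdint.h>

    /* Valid iff bits 11:0 are zero and no bit beyond the physical-address
     * width (maxphyaddr, as reported by CPUID) is set. */
    static bool apic_access_addr_valid(uint64_t addr, unsigned int maxphyaddr)
    {
        if (addr & 0xfffULL)          /* must be 4 KiB aligned */
            return false;
        if (addr >> maxphyaddr)       /* bits beyond the physical-address width */
            return false;
        return true;
    }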

Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years ago  KVM: X86: fix incorrect reference of trace_kvm_pi_irte_update
hu huajun [Wed, 11 Apr 2018 07:16:40 +0000 (15:16 +0800)]
KVM: X86: fix incorrect reference of trace_kvm_pi_irte_update

In arch/x86/kvm/trace.h, this function is declared with host_irq as the
first argument and vcpu_id as the second, not the other way around.

Signed-off-by: hu huajun <huhuajun@linux.alibaba.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years ago  X86/KVM: Do not allow DISABLE_EXITS_MWAIT when LAPIC ARAT is not available
KarimAllah Ahmed [Wed, 11 Apr 2018 09:16:03 +0000 (11:16 +0200)]
X86/KVM: Do not allow DISABLE_EXITS_MWAIT when LAPIC ARAT is not available

If the processor does not have an "Always Running APIC Timer" (aka ARAT),
we should not give guests direct access to MWAIT. The LAPIC timer would
stop ticking in deep C-states, so any host deadlines would not wake up the
host kernel.

The host kernel intel_idle driver handles this by switching to broadcast
mode when ARAT is not available and MWAIT is issued with a deep C-state
that would stop the LAPIC timer. When MWAIT is passed through, we cannot
tell when MWAIT is issued.

So just disable this capability when LAPIC ARAT is not available. It is not
even clear that any CPUs with VMX support lack LAPIC ARAT.
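
A sketch of the resulting policy (the helper name is illustrative and the constant is restated here only for the example; the real definition lives in the KVM UAPI headers):

    #include <stdbool.h>

    #define KVM_X86_DISABLE_EXITS_MWAIT (1u << 0)   /* illustrative value */

    /* Only offer MWAIT pass-through when the LAPIC timer keeps ticking in
     * deep C-states (ARAT); otherwise host deadlines could be missed. */
    static unsigned int allowed_disable_exits(bool cpu_has_arat)
    {
        unsigned int caps = 0;

        if (cpu_has_arat)
            caps |= KVM_X86_DISABLE_EXITS_MWAIT;
        return caps;
    }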

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Reported-by: Wanpeng Li <kernellwp@gmail.com>
Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years ago  kvm: selftests: fix spelling mistake: "divisable" and "divisible"
Colin Ian King [Tue, 10 Apr 2018 12:38:56 +0000 (13:38 +0100)]
kvm: selftests: fix spelling mistake: "divisable" and "divisible"

Trivial fix to spelling mistakes in comment and message text

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years ago  X86/VMX: Disable VMX preemption timer if MWAIT is not intercepted
KarimAllah Ahmed [Tue, 10 Apr 2018 12:15:46 +0000 (14:15 +0200)]
X86/VMX: Disable VMX preemption timer if MWAIT is not intercepted

The VMX-preemption timer is used by KVM as a way to set deadlines for the
guest (i.e. timer emulation). That was safe until very recently, when the
KVM_X86_DISABLE_EXITS_MWAIT capability to disable intercepting MWAIT was
introduced. According to Intel SDM 25.5.1:

"""
The VMX-preemption timer operates in the C-states C0, C1, and C2; it also
operates in the shutdown and wait-for-SIPI states. If the timer counts down
to zero in any state other than the wait-for SIPI state, the logical
processor transitions to the C0 C-state and causes a VM exit; the timer
does not cause a VM exit if it counts down to zero in the wait-for-SIPI
state. The timer is not decremented in C-states deeper than C2.
"""

Now, once the guest issues MWAIT with a C-state deeper than C2, the
preemption timer will never wake it up again, since it has stopped
ticking! Usually this is compensated for by other activity in the system
that wakes the core from the deep C-state (and causes a VMExit), for
example when the host itself is ticking or receives interrupts.

So disable the VMX-preemption timer if MWAIT is exposed to the guest!
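
In shape, the policy is roughly the following sketch (names illustrative, not the literal KVM code):

    #include <stdbool.h>

    /* The VMX-preemption timer can only be relied on for guest timer
     * emulation if MWAIT is still intercepted; otherwise the guest may
     * enter a C-state deeper than C2, where the timer stops counting,
     * so fall back to the hrtimer-based deadline path instead. */
    static bool can_use_vmx_preemption_timer(bool cpu_has_preemption_timer,
                                             bool mwait_intercepted)
    {
        return cpu_has_preemption_timer && mwait_intercepted;
    }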

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: kvm@vger.kernel.org
Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
Fixes: 4d5422cea3b61f158d58924cbb43feada456ba5c
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years ago  kvm: x86: fix a prototype warning
Peng Hao [Fri, 6 Apr 2018 21:47:32 +0000 (05:47 +0800)]
kvm: x86: fix a prototype warning

Make the function static to avoid a

    warning: no previous prototype for ‘vmx_enable_tdp’

Signed-off-by: Peng Hao <peng.hao2@zte.com.cn>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years ago  kvm: selftests: add sync_regs_test
Paolo Bonzini [Wed, 28 Mar 2018 07:45:34 +0000 (09:45 +0200)]
kvm: selftests: add sync_regs_test

This includes the infrastructure to map the test into the guest and
run code from the test program inside a VM.

Signed-off-by: Ken Hofsass <hofsass@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years ago  kvm: selftests: add API testing infrastructure
Paolo Bonzini [Tue, 27 Mar 2018 09:49:19 +0000 (11:49 +0200)]
kvm: selftests: add API testing infrastructure

Testsuite contributed by Google and cleaned up by myself for
inclusion in Linux.

Signed-off-by: Ken Hofsass <hofsass@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years ago  kvm: x86: fix a compile warning
Peng Hao [Mon, 2 Apr 2018 01:15:32 +0000 (09:15 +0800)]
kvm: x86: fix a compile warning

fix a "warning: no previous prototype".

Cc: stable@vger.kernel.org
Signed-off-by: Peng Hao <peng.hao2@zte.com.cn>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years ago  KVM: X86: Add Force Emulation Prefix for "emulate the next instruction"
Wanpeng Li [Tue, 3 Apr 2018 23:28:49 +0000 (16:28 -0700)]
KVM: X86: Add Force Emulation Prefix for "emulate the next instruction"

There is no easy way to force KVM to run an instruction through the emulator
(by design, as that would expose the x86 emulator as a significant attack surface).
However, we do wish to expose the x86 emulator when we are testing it
(e.g. via kvm-unit-tests). Therefore, this patch adds a "force emulation prefix"
that raises #UD; KVM traps the #UD, and its #UD exit handler matches the
"force emulation prefix" and runs the instruction that follows the prefix
through the x86 emulator. To avoid exposing the x86 emulator by default,
this is guarded by a module parameter that is off by default.

A simple testcase here:

    #include <stdio.h>
    #include <string.h>

    #define HYPERVISOR_INFO 0x40000000

    #define CPUID(idx, eax, ebx, ecx, edx) \
        asm volatile (\
        "ud2a; .ascii \"kvm\"; cpuid" \
        :"=b" (*ebx), "=a" (*eax), "=c" (*ecx), "=d" (*edx) \
            :"0"(idx) );

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;
        char string[13];

        CPUID(HYPERVISOR_INFO, &eax, &ebx, &ecx, &edx);
        *(unsigned int *)(string + 0) = ebx;
        *(unsigned int *)(string + 4) = ecx;
        *(unsigned int *)(string + 8) = edx;

        string[12] = 0;
        if (strncmp(string, "KVMKVMKVM\0\0\0", 12) == 0)
            printf("kvm guest\n");
        else
            printf("bare hardware\n");
    }

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
Reviewed-by: Liran Alon <liran.alon@oracle.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
[Correctly handle usermode exits. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years ago  KVM: X86: Introduce handle_ud()
Wanpeng Li [Tue, 3 Apr 2018 23:28:48 +0000 (16:28 -0700)]
KVM: X86: Introduce handle_ud()

Introduce handle_ud() to handle invalid opcodes; this function will be
used by later patches.

Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Liran Alon <liran.alon@oracle.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years ago  KVM: vmx: unify adjacent #ifdefs
Paolo Bonzini [Wed, 4 Apr 2018 16:58:59 +0000 (18:58 +0200)]
KVM: vmx: unify adjacent #ifdefs

vmx_save_host_state has multiple ifdefs for CONFIG_X86_64 that have
no other code between them.  Simplify by reducing them to a single
conditional.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years ago  x86: kvm: hide the unused 'cpu' variable
Arnd Bergmann [Wed, 4 Apr 2018 10:44:14 +0000 (12:44 +0200)]
x86: kvm: hide the unused 'cpu' variable

The newly introduced local variable is only accessed in one place on
x86_64, and not at all on 32-bit:

arch/x86/kvm/vmx.c: In function 'vmx_save_host_state':
arch/x86/kvm/vmx.c:2175:6: error: unused variable 'cpu' [-Werror=unused-variable]

This puts it into another #ifdef.

Fixes: 35060ed6a1ff ("x86/kvm/vmx: avoid expensive rdmsr for MSR_GS_BASE")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years ago  KVM: VMX: remove bogus WARN_ON in handle_ept_misconfig
Sean Christopherson [Thu, 29 Mar 2018 21:48:31 +0000 (14:48 -0700)]
KVM: VMX: remove bogus WARN_ON in handle_ept_misconfig

Remove the WARN_ON in handle_ept_misconfig() as it is unnecessary
and causes false positives.  Return the unmodified result of
kvm_mmu_page_fault() instead of converting a system error code to
KVM_EXIT_UNKNOWN so that userspace sees the error code of the
actual failure, not a generic "we don't know what went wrong".

  * kvm_mmu_page_fault() will WARN if reserved bits are set in the
    SPTEs, i.e. it covers the case where an EPT misconfig occurred
    because of a KVM bug.

  * The WARN_ON will fire on any system error code that is hit while
    handling the fault, e.g. -ENOMEM from mmu_topup_memory_caches()
    while handling a legitimate MMIO EPT misconfig or -EFAULT from
    kvm_handle_bad_page() if the corresponding HVA is invalid.  In
    either case, userspace should receive the original error code
    and firing a warning is incorrect behavior as KVM is operating
    as designed.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years ago  Revert "KVM: X86: Fix SMRAM accessing even if VM is shutdown"
Sean Christopherson [Thu, 29 Mar 2018 21:48:30 +0000 (14:48 -0700)]
Revert "KVM: X86: Fix SMRAM accessing even if VM is shutdown"

The bug that led to commit 95e057e25892eaa48cad1e2d637b80d0f1a4fac5
was a benign warning (no adverse effects other than the warning
itself) that was detected by syzkaller.  Further inspection shows
that the WARN_ON in question, in handle_ept_misconfig(), is
unnecessary and flawed (this was also briefly discussed in the
original patch: https://patchwork.kernel.org/patch/10204649).

  * The WARN_ON is unnecessary as kvm_mmu_page_fault() will WARN
    if reserved bits are set in the SPTEs, i.e. it covers the case
    where an EPT misconfig occurred because of a KVM bug.

  * The WARN_ON is flawed because it will fire on any system error
    code that is hit while handling the fault, e.g. -ENOMEM can be
    returned by mmu_topup_memory_caches() while handling a legitimate
    MMIO EPT misconfig.

The original behavior of returning -EFAULT when userspace munmaps
an HVA without first removing the memslot is correct and desirable,
i.e. KVM is letting userspace know it has generated a bad address.
Returning RET_PF_EMULATE masks the WARN_ON in the EPT misconfig path,
but does not fix the underlying bug, i.e. the WARN_ON is bogus.

Furthermore, returning RET_PF_EMULATE has the unwanted side effect of
causing KVM to attempt to emulate an instruction on any page fault
with an invalid HVA translation, e.g. a not-present EPT violation
on a VM_PFNMAP VMA whose fault handler failed to insert a PFN.

  * There is no guarantee that the fault is directly related to the
    instruction, i.e. the fault could have been triggered by a side
    effect memory access in the guest, e.g. while vectoring a #DB or
    writing a tracing record.  This could cause KVM to effectively
    mask the fault if KVM doesn't model the behavior leading to the
    fault, i.e. emulation could succeed and resume the guest.

  * If emulation does fail, KVM will return EMULATION_FAILED instead
    of -EFAULT, which is a red herring as the user will either debug
    a bogus emulation attempt or scratch their head wondering why we
    were attempting emulation in the first place.

TL;DR: revert to returning -EFAULT and remove the bogus WARN_ON in
handle_ept_misconfig in a future patch.

This reverts commit 95e057e25892eaa48cad1e2d637b80d0f1a4fac5.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years ago  kvm: Add emulation for movups/movupd
Stefan Fritsch [Sun, 1 Apr 2018 15:54:44 +0000 (17:54 +0200)]
kvm: Add emulation for movups/movupd

This is very similar to the aligned versions movaps/movapd.

We have seen the corresponding emulation failures with OpenBSD as a guest
and with Windows 10 with Intel HD Graphics passthrough.

Signed-off-by: Christian Ehrhardt <christian_ehrhardt@genua.de>
Signed-off-by: Stefan Fritsch <sf@sfritsch.de>
Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years ago  KVM: VMX: raise internal error for exception during invalid protected mode state
Sean Christopherson [Fri, 23 Mar 2018 16:34:00 +0000 (09:34 -0700)]
KVM: VMX: raise internal error for exception during invalid protected mode state

Exit to userspace with KVM_INTERNAL_ERROR_EMULATION if we encounter
an exception in Protected Mode while emulating the guest due to invalid
guest state.  Unlike Big RM, KVM doesn't support emulating exceptions
in PM, i.e. PM exceptions are always injected via the VMCS.  Because
we will never do VMRESUME due to emulation_required, the exception is
never realized and we'll keep emulating the faulting instruction over
and over until we receive a signal.

Exit to userspace iff there is a pending exception, i.e. don't exit
simply on a requested event. The purpose of this check and exit is to
aid in debugging a guest that is in all likelihood already doomed.
Invalid guest state in PM is extremely limited in normal operation,
e.g. it generally only occurs for a few instructions early in BIOS,
and any exception at this time is all but guaranteed to be fatal.
Non-vectored interrupts, e.g. INIT, SIPI and SMI, can be cleanly
handled/emulated, while checking for vectored interrupts, e.g. INTR
and NMI, without hitting false positives would add a fair amount of
complexity for almost no benefit (getting hit by lightning seems
more likely than encountering this specific scenario).

Add a WARN_ON_ONCE to vmx_queue_exception() if we try to inject an
exception via the VMCS and emulation_required is true.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years ago  Merge tag 'kvm-ppc-next-4.17-1' of git://git.kernel.org/pub/scm/linux/kernel/git...
Radim Krčmář [Thu, 29 Mar 2018 18:20:13 +0000 (20:20 +0200)]
Merge tag 'kvm-ppc-next-4.17-1' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc

KVM PPC update for 4.17

- Improvements for the radix page fault handler for HV KVM on POWER9.

6 years ago  KVM: nVMX: Optimization: Dont set KVM_REQ_EVENT when VMExit with nested_run_pending
Liran Alon [Fri, 23 Mar 2018 00:01:34 +0000 (03:01 +0300)]
KVM: nVMX: Optimization: Dont set KVM_REQ_EVENT when VMExit with nested_run_pending

When a vCPU runs L2 and there is a pending event that requires an exit
from L2 to L1 and nested_run_pending=1, vcpu_enter_guest() will request
an immediate-exit from L2 (see req_immediate_exit).

Since the handling of req_immediate_exit now also makes sure to set
KVM_REQ_EVENT, there is no need to also set it in vmx_vcpu_run() when
nested_run_pending=1.

This optimizes cases where VMRESUME was executed by L1 to enter L2 and
there are no pending events that require an exit from L2 to L1.
Previously, this would have set KVM_REQ_EVENT unnecessarily.

Signed-off-by: Liran Alon <liran.alon@oracle.com>
Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years ago  KVM: nVMX: Require immediate-exit when event reinjected to L2 and L1 event pending
Liran Alon [Fri, 23 Mar 2018 00:01:33 +0000 (03:01 +0300)]
KVM: nVMX: Require immediate-exit when event reinjected to L2 and L1 event pending

If L2 VMExits to L0 during event delivery, VMCS02 is filled with
IDT-vectoring info, which vmx_complete_interrupts() makes sure to
reinject before the next resume of L2.

While handling the VMExit in L0, an IPI could be sent by another L1 vCPU
to the L1 vCPU which currently runs L2 and exited to L0.

When L0 reaches vcpu_enter_guest() and calls inject_pending_event(),
it will note that a previous event was re-injected to L2 (by
IDT-vectoring-info) and therefore won't check whether there are pending L1
events which require an exit from L2 to L1. Thus, L0 enters L2 without an
immediate VMExit even though there are pending L1 events!

This commit fixes the issue by making sure to check for pending L1
events even if a previous event was reinjected to L2, and by bailing out
of inject_pending_event() before evaluating a new pending event if an
event was already reinjected.
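
A simplified standalone sketch of the fixed decision (field names are illustrative, not the actual KVM state):

    #include <stdbool.h>

    struct nested_event_state {
        bool event_reinjected;   /* IDT-vectoring info carried over from the last exit */
        bool l1_event_pending;   /* e.g. an IPI sitting in L1's LAPIC IRR */
        bool running_l2;
    };

    /* Even when an event is being reinjected into L2, a pending L1 event
     * must still force an exit from L2 to L1. */
    static bool must_exit_to_l1(const struct nested_event_state *s)
    {
        return s->running_l2 && s->l1_event_pending;
    }

    /* When an event was reinjected, no brand new event is evaluated in the
     * same pass. */
    static bool may_evaluate_new_event(const struct nested_event_state *s)
    {
        return !s->event_reinjected;
    }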

The bug was observed by the following setup:
* L0 is a 64-CPU machine which runs KVM.
* L1 is a 16-CPU machine which runs KVM.
* L0 & L1 run with APICv disabled.
(Also reproduced with APICv enabled, but the info below is easier to
analyze with APICv disabled.)
* L1 runs a 16-CPU L2 Windows Server 2012 R2 guest.
During L2 boot, L1 hangs completely, and analyzing the hang reveals that
one L1 vCPU is holding KVM's mmu_lock and is waiting forever on an IPI
that it has sent to another L1 vCPU. All other L1 vCPUs are
currently attempting to grab mmu_lock. Therefore, all L1 vCPUs are stuck
forever (as L1 runs with kernel preemption disabled).

Observing /sys/kernel/debug/tracing/trace_pipe reveals the following
series of events:
(1) qemu-system-x86-19066 [030] kvm_nested_vmexit: rip:
0xfffff802c5dca82f reason: EPT_VIOLATION ext_inf1: 0x0000000000000182
ext_inf2: 0x00000000800000d2 ext_int: 0x00000000 ext_int_err: 0x00000000
(2) qemu-system-x86-19054 [028] kvm_apic_accept_irq: apicid f
vec 252 (Fixed|edge)
(3) qemu-system-x86-19066 [030] kvm_inj_virq: irq 210
(4) qemu-system-x86-19066 [030] kvm_entry: vcpu 15
(5) qemu-system-x86-19066 [030] kvm_exit: reason EPT_VIOLATION
rip 0xffffe00069202690 info 83 0
(6) qemu-system-x86-19066 [030] kvm_nested_vmexit: rip:
0xffffe00069202690 reason: EPT_VIOLATION ext_inf1: 0x0000000000000083
ext_inf2: 0x0000000000000000 ext_int: 0x00000000 ext_int_err: 0x00000000
(7) qemu-system-x86-19066 [030] kvm_nested_vmexit_inject: reason:
EPT_VIOLATION ext_inf1: 0x0000000000000083 ext_inf2: 0x0000000000000000
ext_int: 0x00000000 ext_int_err: 0x00000000
(8) qemu-system-x86-19066 [030] kvm_entry: vcpu 15

Which can be analyzed as follows:
(1) L2 VMExits to L0 on EPT_VIOLATION during delivery of vector 0xd2.
Therefore, vmx_complete_interrupts() will set KVM_REQ_EVENT and reinject
a pending interrupt of 0xd2.
(2) L1 sends an IPI of vector 0xfc (CALL_FUNCTION_VECTOR) to destination
vCPU 15. This sets the relevant bit in the LAPIC's IRR and sets KVM_REQ_EVENT.
(3) L0 reaches vcpu_enter_guest(), which calls inject_pending_event(), which
notes that interrupt 0xd2 was reinjected and therefore calls
vmx_inject_irq() and returns, without checking for pending L1 events!
Note that at this point, KVM_REQ_EVENT was cleared by vcpu_enter_guest()
before calling inject_pending_event().
(4) L0 resumes L2 without an immediate exit even though there is a pending
L1 event (the IPI pending in the LAPIC's IRR).

We have already reached the buggy scenario, but the events can be
analyzed further:
(5+6) L2 VMExit to L0 on EPT_VIOLATION.  This time not during
event-delivery.
(7) L0 decides to forward the VMExit to L1 for further handling.
(8) L0 resumes into L1. Note that because KVM_REQ_EVENT is cleared, the
LAPIC's IRR is not examined and therefore the IPI is still not delivered
into L1!

Signed-off-by: Liran Alon <liran.alon@oracle.com>
Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years ago  KVM: x86: Fix misleading comments on handling pending exceptions
Liran Alon [Fri, 23 Mar 2018 00:01:32 +0000 (03:01 +0300)]
KVM: x86: Fix misleading comments on handling pending exceptions

The reason that exception.pending should block re-injection of an
NMI/interrupt is not described correctly in the comment in the code.
Instead, it describes why a pending exception should be injected
before a pending NMI/interrupt.

Therefore, move the currently present comment to the code block that
evaluates a new pending event, since it explains why exception.pending
is evaluated first.
In addition, create a new comment describing that exception.pending
blocks re-injection of an NMI/interrupt because the exception was
queued while handling a vmexit that was due to NMI/interrupt delivery.

Signed-off-by: Liran Alon <liran.alon@oracle.com>
Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com>
Signed-off-by: Krish Sadhukhan <krish.sadhukhan@orcle.com>
[Used a comment from Sean J <sean.j.christopherson@intel.com>. - Radim]
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years ago  KVM: x86: Rename interrupt.pending to interrupt.injected
Liran Alon [Fri, 23 Mar 2018 00:01:31 +0000 (03:01 +0300)]
KVM: x86: Rename interrupt.pending to interrupt.injected

For exception and NMI events, KVM code uses the following
coding convention:
*) "pending" represents an event that should be injected into the guest at
some point but whose side effects have not yet occurred.
*) "injected" represents an event whose side effects have already
occurred.

However, interrupts don't conform to this coding convention.
All current code flows mark interrupt.pending when its side effects
have already taken place (for example, the bit moved from LAPIC IRR to
ISR). Therefore, it makes sense to just rename
interrupt.pending to interrupt.injected.
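
Purely as an illustration of the convention (not the actual KVM structures):

    #include <stdbool.h>

    struct event_state_sketch {
        bool pending;    /* should be delivered at some point;
                            side effects not yet applied */
        bool injected;   /* side effects already applied,
                            e.g. the IRR bit already moved to ISR */
    };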

This change follows the logic of previous commit 664f8e26b00c ("KVM: X86:
Fix loss of exception which has not yet been injected"), which changed
exceptions to follow this coding convention as well.

It is important to note that in the case of !lapic_in_kernel(vcpu),
interrupt.pending usage was and still is incorrect.
In this case, interrupt.pending can only be set using one of the
following ioctls: KVM_INTERRUPT, KVM_SET_VCPU_EVENTS and
KVM_SET_SREGS. Looking at how QEMU uses these ioctls, one can see that
QEMU uses them either to re-set an "interrupt.pending" state it has
received from KVM (via KVM_GET_VCPU_EVENTS interrupt.pending or
via KVM_GET_SREGS interrupt_bitmap) or to dispatch a new interrupt
from QEMU's emulated LAPIC, which resets the bit in IRR and sets the bit
in ISR before sending the ioctl to KVM. So it seems that "interrupt.pending"
in this case is indeed also supposed to represent "interrupt.injected".
However, kvm_cpu_has_interrupt() & kvm_cpu_has_injectable_intr()
are misusing the (now named) interrupt.injected in order to return whether
there is a pending interrupt.
This leads to nVMX/nSVM not being able to distinguish whether it should
exit from L2 to L1 on EXTERNAL_INTERRUPT because of a pending interrupt or
should re-inject an injected interrupt.
Therefore, add a FIXME at these functions for handling this issue.

This patch introduces no semantic change.

Signed-off-by: Liran Alon <liran.alon@oracle.com>
Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years ago  KVM: VMX: No need to clear pending NMI/interrupt on inject realmode interrupt
Liran Alon [Fri, 23 Mar 2018 00:01:30 +0000 (03:01 +0300)]
KVM: VMX: No need to clear pending NMI/interrupt on inject realmode interrupt

kvm_inject_realmode_interrupt() is called from one of the injection
functions which writes event-injection to VMCS: vmx_queue_exception(),
vmx_inject_irq() and vmx_inject_nmi().

All these functions are called just to cause an event injection into the
guest. They are not responsible for manipulating the event-pending
flag. The only purpose of kvm_inject_realmode_interrupt() should be
to emulate real-mode interrupt injection.

This was also incorrect when called from vmx_queue_exception().

Signed-off-by: Liran Alon <liran.alon@oracle.com>
Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years ago  x86/kvm: use Enlightened VMCS when running on Hyper-V
Vitaly Kuznetsov [Tue, 20 Mar 2018 14:02:11 +0000 (15:02 +0100)]
x86/kvm: use Enlightened VMCS when running on Hyper-V

Enlightened VMCS is just a structure in memory. The main benefit,
besides avoiding somewhat slower VMREAD/VMWRITE, is the clean field
mask: we tell the underlying hypervisor which fields were modified
since the last VMEXIT, so there's no need to inspect them all.
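
Conceptually, the clean field mask works like this small sketch (the field names, group layout and bit values are illustrative, not the TLFS definitions):

    #include <stdint.h>

    struct evmcs_sketch {
        uint64_t guest_rip;
        uint32_t clean_fields;           /* bit set => field group unchanged
                                            since the last VMEXIT */
    };

    #define CLEAN_GROUP_BASIC (1u << 0)  /* illustrative group covering guest_rip */

    static void write_guest_rip(struct evmcs_sketch *e, uint64_t rip)
    {
        e->guest_rip = rip;
        /* mark the group dirty so the underlying hypervisor reloads it */
        e->clean_fields &= ~CLEAN_GROUP_BASIC;
    }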

Tight CPUID loop test shows significant speedup:
Before: 18890 cycles
After: 8304 cycles

A static key is used to avoid a performance penalty for non-Hyper-V
deployments.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years ago  x86/hyper-v: detect nested features
Vitaly Kuznetsov [Tue, 20 Mar 2018 14:02:10 +0000 (15:02 +0100)]
x86/hyper-v: detect nested features

TLFS 5.0 says: "Support for an enlightened VMCS interface is reported with
CPUID leaf 0x40000004. If an enlightened VMCS interface is supported,
 additional nested enlightenments may be discovered by reading the CPUID
leaf 0x4000000A (see 2.4.11)."

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years ago  x86/hyper-v: define struct hv_enlightened_vmcs and clean field bits
Vitaly Kuznetsov [Tue, 20 Mar 2018 14:02:09 +0000 (15:02 +0100)]
x86/hyper-v: define struct hv_enlightened_vmcs and clean field bits

The definitions are according to the Hyper-V TLFS v5.0. KVM on Hyper-V will
use these.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years ago  x86/hyper-v: allocate and use Virtual Processor Assist Pages
Vitaly Kuznetsov [Tue, 20 Mar 2018 14:02:08 +0000 (15:02 +0100)]
x86/hyper-v: allocate and use Virtual Processor Assist Pages

Using Virtual Processor Assist Pages allows us to do optimized EOI
processing for the APIC, enable Enlightened VMCS support in KVM, and more.
struct hv_vp_assist_page is defined according to the Hyper-V TLFS v5.0b.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years ago  x86/kvm: rename HV_X64_MSR_APIC_ASSIST_PAGE to HV_X64_MSR_VP_ASSIST_PAGE
Ladi Prosek [Tue, 20 Mar 2018 14:02:07 +0000 (15:02 +0100)]
x86/kvm: rename HV_X64_MSR_APIC_ASSIST_PAGE to HV_X64_MSR_VP_ASSIST_PAGE

The assist page has been used only for the paravirtual EOI so far, hence
the "APIC" in the MSR name. Rename it to match the Hyper-V TLFS, where it's
called "Virtual VP Assist MSR".

Signed-off-by: Ladi Prosek <lprosek@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years ago  x86/hyper-v: move definitions from TLFS to hyperv-tlfs.h
Vitaly Kuznetsov [Tue, 20 Mar 2018 14:02:06 +0000 (15:02 +0100)]
x86/hyper-v: move definitions from TLFS to hyperv-tlfs.h

mshyperv.h now only contains functions/variables we define in the kernel;
all definitions from the TLFS should go to hyperv-tlfs.h.

'enum hv_cpuid_function' is removed as we already have this info in
hyperv-tlfs.h; the code in mshyperv.c is adjusted accordingly.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years ago  x86/hyper-v: move hyperv.h out of uapi
Vitaly Kuznetsov [Tue, 20 Mar 2018 14:02:05 +0000 (15:02 +0100)]
x86/hyper-v: move hyperv.h out of uapi

hyperv.h is not part of uapi; there are no (known) users outside of the kernel.
We are making changes to this file to match the current Hyper-V Hypervisor
Top-Level Functional Specification (TLFS, see:
https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/reference/tlfs)
and we don't want to maintain backwards compatibility.

Move the file, renaming it to hyperv-tlfs.h to avoid confusing it with
mshyperv.h. In the future, all definitions from the TLFS should go into it
and all kernel objects should go into mshyperv.h or include/linux/hyperv.h.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years ago  KVM: trivial documentation cleanups
Andrew Jones [Mon, 26 Mar 2018 12:38:02 +0000 (14:38 +0200)]
KVM: trivial documentation cleanups

Add missing entries to the index and ensure the entries are in
alphabetical order. Also amd-memory-encryption.rst is an .rst
not a .txt.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years ago  KVM: X86: Fix setup the virt_spin_lock_key before static key get initialized
Wanpeng Li [Sun, 25 Mar 2018 04:17:24 +0000 (21:17 -0700)]
KVM: X86: Fix setup the virt_spin_lock_key before static key get initialized

 static_key_disable_cpuslocked(): static key 'virt_spin_lock_key+0x0/0x20' used before call to jump_label_init()
 WARNING: CPU: 0 PID: 0 at kernel/jump_label.c:161 static_key_disable_cpuslocked+0x61/0x80
 RIP: 0010:static_key_disable_cpuslocked+0x61/0x80
 Call Trace:
  static_key_disable+0x16/0x20
  start_kernel+0x192/0x4b3
  secondary_startup_64+0xa5/0xb0

Qspinlock will be chosen when dedicated pCPUs are available; however, the
static virt_spin_lock_key is set in kvm_spinlock_init() before jump_label_init()
has been called, which will result in a WARN(). This patch fixes it by delaying
the virt_spin_lock_key setup to .smp_prepare_cpus().

Reported-by: Davidlohr Bueso <dbueso@suse.de>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Davidlohr Bueso <dbueso@suse.de>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Fixes: b2798ba0b876 ("KVM: X86: Choose qspinlock when dedicated physical CPUs are available")
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years ago  tools/kvm_stat: Remove unused function
Cole Robinson [Fri, 23 Mar 2018 22:07:18 +0000 (18:07 -0400)]
tools/kvm_stat: Remove unused function

Unused since it was added in commit 18e8f4100.

Signed-off-by: Cole Robinson <crobinso@redhat.com>
Reviewed-and-tested-by: Stefan Raspl <stefan.raspl@linux.vnet.ibm.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years ago  tools/kvm_stat: Don't use deprecated file()
Cole Robinson [Fri, 23 Mar 2018 22:07:17 +0000 (18:07 -0400)]
tools/kvm_stat: Don't use deprecated file()

$ python3 tools/kvm/kvm_stat/kvm_stat
Traceback (most recent call last):
  File "tools/kvm/kvm_stat/kvm_stat", line 1668, in <module>
    main()
  File "tools/kvm/kvm_stat/kvm_stat", line 1639, in main
    assign_globals()
  File "tools/kvm/kvm_stat/kvm_stat", line 1618, in assign_globals
    for line in file('/proc/mounts'):
NameError: name 'file' is not defined

open() is the python3 way, and works on python2.6+

Signed-off-by: Cole Robinson <crobinso@redhat.com>
Reviewed-and-tested-by: Stefan Raspl <stefan.raspl@linux.vnet.ibm.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years ago  tools/kvm_stat: Fix python3 syntax
Cole Robinson [Fri, 23 Mar 2018 22:07:16 +0000 (18:07 -0400)]
tools/kvm_stat: Fix python3 syntax

$ python3 tools/kvm/kvm_stat/kvm_stat
  File "tools/kvm/kvm_stat/kvm_stat", line 1137
    def sortkey((_k, v)):
                ^
SyntaxError: invalid syntax

Fix it in a way that's compatible with python2 and python3

Signed-off-by: Cole Robinson <crobinso@redhat.com>
Tested-by: Stefan Raspl <stefan.raspl@linux.vnet.ibm.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years ago  KVM: SVM: Implement pause loop exit logic in SVM
Babu Moger [Fri, 16 Mar 2018 20:37:26 +0000 (16:37 -0400)]
KVM: SVM: Implement pause loop exit logic in SVM

Bring the PLE (pause loop exit) logic to the AMD SVM driver.

While testing, we found this helps in situations where numerous
pauses are generated. Without these patches we could see continuous
VMEXITS due to pause interceptions. Tested on an AMD EPYC server with
boot parameter idle=poll on a VM with 32 vCPUs to simulate extensive
pause behaviour. Here are the VMEXIT counts over a 10-second interval:

                        Before patch            After patch
Pauses                  810199                  504
Total                   882184                  325415

Signed-off-by: Babu Moger <babu.moger@amd.com>
[Prevented the window from dropping below the initial value. - Radim]
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years ago  KVM: SVM: Add pause filter threshold
Babu Moger [Fri, 16 Mar 2018 20:37:25 +0000 (16:37 -0400)]
KVM: SVM: Add pause filter threshold

This patch adds support for the pause filter threshold. Support for this
feature is indicated by CPUID Fn8000_000A_EDX. See AMD APM Vol 2 Section
15.14.4 Pause Intercept Filtering for more details.

In this mode, a 16-bit pause filter threshold field is added in VMCB.
The threshold value is a cycle count that is used to reset the pause
counter.  As with simple pause filtering, VMRUN loads the pause count
value from VMCB into an internal counter. Then, on each pause instruction
the hardware checks the elapsed number of cycles since the most recent
pause instruction against the pause Filter Threshold. If the elapsed cycle
count is greater than the pause filter threshold, then the internal pause
count is reloaded from VMCB and execution continues. If the elapsed cycle
count is less than the pause filter threshold, then the internal pause
count is decremented. If the count value is less than zero and pause
intercept is enabled, a #VMEXIT is triggered. If advanced pause filtering
is supported and pause filter threshold field is set to zero, the filter
will operate in the simpler, count only mode.
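
As a conceptual sketch only, the hardware behaviour just described can be modelled like this (names and structure are illustrative; this is not KVM or hardware code):

    #include <stdbool.h>
    #include <stdint.h>

    struct ple_state {
        uint16_t pause_filter_count;      /* internal counter, loaded at VMRUN */
        uint16_t pause_filter_threshold;  /* cycle threshold from VMCB, 0 = simple mode */
        uint16_t vmcb_count;              /* count value programmed in the VMCB */
    };

    /* Returns true if this PAUSE triggers a #VMEXIT (pause intercept enabled). */
    static bool on_pause(struct ple_state *s, uint64_t cycles_since_last_pause)
    {
        if (s->pause_filter_threshold &&
            cycles_since_last_pause > s->pause_filter_threshold) {
            s->pause_filter_count = s->vmcb_count;  /* pauses far apart: reload */
            return false;
        }
        if (s->pause_filter_count == 0)             /* count would drop below zero */
            return true;
        s->pause_filter_count--;                    /* close together: keep counting */
        return false;
    }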

Signed-off-by: Babu Moger <babu.moger@amd.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years ago  KVM: VMX: Bring the common code to header file
Babu Moger [Fri, 16 Mar 2018 20:37:24 +0000 (16:37 -0400)]
KVM: VMX: Bring the common code to header file

This patch brings some of the code from vmx to the x86.h header file. Now we
can share this code between vmx and svm. A couple of functions were modified
to make them common.

Signed-off-by: Babu Moger <babu.moger@amd.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years ago  KVM: VMX: Remove ple_window_actual_max
Babu Moger [Fri, 16 Mar 2018 20:37:23 +0000 (16:37 -0400)]
KVM: VMX: Remove ple_window_actual_max

Get rid of ple_window_actual_max, because its benefits are really
minuscule and the logic is complicated.

The overflows (and underflow) are controlled in __ple_window_grow
and _ple_window_shrink respectively.

Suggested-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Babu Moger <babu.moger@amd.com>
[Fixed potential wraparound and change the max to UINT_MAX. - Radim]
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years ago  KVM: VMX: Fix the module parameters for vmx
Babu Moger [Fri, 16 Mar 2018 20:37:22 +0000 (16:37 -0400)]
KVM: VMX: Fix the module parameters for vmx

The vmx module parameters are supposed to be unsigned variants.

Also fixed the checkpatch errors, like the one below:

WARNING: Symbolic permissions 'S_IRUGO' are not preferred. Consider using octal permissions '0444'.
+module_param(ple_gap, uint, S_IRUGO);

Signed-off-by: Babu Moger <babu.moger@amd.com>
[Expanded uint to unsigned int in code. - Radim]
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years ago  KVM: x86: Fix perf timer mode IP reporting
Andi Kleen [Wed, 26 Jul 2017 00:20:32 +0000 (17:20 -0700)]
KVM: x86: Fix perf timer mode IP reporting

KVM and perf have a special backdoor mechanism to report the IP for interrupts
re-executed after vm exit. This works for the NMIs that perf normally uses.

However when perf is in timer mode it doesn't work because the timer interrupt
doesn't get this special treatment. This is common when KVM is running
nested in another hypervisor which may not implement the PMU, so only
timer mode is available.

Call the functions to set up the backdoor IP also for non NMI interrupts.

I renamed the functions that set up the backdoor IP reporting to be more
appropriate for their new use.  The SVM change is only compile tested.

v2: Moved the functions inline.
For the normal interrupt case the before/after functions are now
called from x86.c, not arch specific code.
For the NMI case we still need to call it in the architecture
specific code, because it's already needed in the low level *_run
functions.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
[Removed unnecessary calls from arch handle_external_intr. - Radim]
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
6 years ago  Merge tag 'kvm-arm-for-v4.17' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm...
Radim Krčmář [Wed, 28 Mar 2018 14:09:09 +0000 (16:09 +0200)]
Merge tag 'kvm-arm-for-v4.17' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm

KVM/ARM updates for v4.17

- VHE optimizations
- EL2 address space randomization
- Variant 3a mitigation for Cortex-A57 and A72
- The usual vgic fixes
- Various minor tidying-up

6 years ago  arm64: Add temporary ERRATA_MIDR_ALL_VERSIONS compatibility macro
Marc Zyngier [Wed, 28 Mar 2018 11:46:07 +0000 (12:46 +0100)]
arm64: Add temporary ERRATA_MIDR_ALL_VERSIONS compatibility macro

MIDR_ALL_VERSIONS is changing, and won't have the same meaning
in 4.17, and the right thing to use will be ERRATA_MIDR_ALL_VERSIONS.

In order to cope with the merge window, let's add a compatibility
macro that will allow a relatively smooth transition, and that
can be removed post 4.17-rc1.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years ago  Revert "arm64: KVM: Use SMCCC_ARCH_WORKAROUND_1 for Falkor BP hardening"
Marc Zyngier [Wed, 28 Mar 2018 10:59:13 +0000 (11:59 +0100)]
Revert "arm64: KVM: Use SMCCC_ARCH_WORKAROUND_1 for Falkor BP hardening"

Creates far too many conflicts with arm64/for-next/core, to be
resent post -rc1.

This reverts commit f9f5dc19509bbef6f5e675346f1a7d7b846bdb12.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years ago  KVM: PPC: Book3S HV: Use __gfn_to_pfn_memslot() in page fault handler
Paul Mackerras [Thu, 1 Mar 2018 04:14:02 +0000 (15:14 +1100)]
KVM: PPC: Book3S HV: Use __gfn_to_pfn_memslot() in page fault handler

This changes the hypervisor page fault handler for radix guests to use
the generic KVM __gfn_to_pfn_memslot() function instead of using
get_user_pages_fast() and then handling the case of VM_PFNMAP vmas
specially.  The old code missed the case of VM_IO vmas; with this
change, VM_IO vmas will now be handled correctly by code within
__gfn_to_pfn_memslot.

Currently, __gfn_to_pfn_memslot calls hva_to_pfn, which only uses
__get_user_pages_fast for the initial lookup in the cases where
either atomic or async is set.  Since we are not setting either
atomic or async, we do our own __get_user_pages_fast first, for now.

This also adds code to check for the KVM_MEM_READONLY flag on the
memslot.  If it is set and this is a write access, we synthesize a
data storage interrupt for the guest.
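
A small sketch of the read-only memslot handling just described (the constant value mirrors the KVM UAPI but is restated here only for illustration, and the helper name is made up):

    #include <stdbool.h>

    #define KVM_MEM_READONLY (1u << 1)   /* illustrative; see include/uapi/linux/kvm.h */

    /* A write fault against a read-only memslot is not mapped; instead a
     * data storage interrupt is synthesized for the guest. */
    static bool should_synthesize_dsi(unsigned int memslot_flags, bool writing)
    {
        return writing && (memslot_flags & KVM_MEM_READONLY);
    }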

In the case where the page is not normal RAM (i.e. page == NULL in
kvmppc_book3s_radix_page_fault()), we read the PTE from the Linux page
tables because we need the mapping attribute bits as well as the PFN.
(The mapping attribute bits indicate whether accesses have to be
non-cacheable and/or guarded.)

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
6 years ago  KVM: arm/arm64: vgic-its: Fix potential overrun in vgic_copy_lpi_list
Marc Zyngier [Fri, 23 Mar 2018 14:57:09 +0000 (14:57 +0000)]
KVM: arm/arm64: vgic-its: Fix potential overrun in vgic_copy_lpi_list

vgic_copy_lpi_list() parses the LPI list and picks LPIs targeting
a given vcpu. We allocate the array containing the intids before taking
the lpi_list_lock, which means we can have an array size that is not
equal to the number of LPIs.

This is particularly obvious when looking at the path coming from
vgic_enable_lpis, which is not a command, and thus can run in parallel
with commands:

vcpu 0:                                        vcpu 1:
vgic_enable_lpis
  its_sync_lpi_pending_table
    vgic_copy_lpi_list
      intids = kmalloc_array(irq_count)
                                               MAPI(lpi targeting vcpu 0)
      list_for_each_entry(lpi_list_head)
        intids[i++] = irq->intid;

At that stage, we will happily overrun the intids array. Boo. An easy
fix is to break once the array is full. The MAPI command will update
the config anyway, and we won't miss a thing. We also make sure that
lpi_list_count is read exactly once, so that further updates of that
value will not affect the array bound check.
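
In shape, the bounded copy looks like this sketch (names are illustrative and the real code walks a linked list under lpi_list_lock, not an array):

    /* irq_count was sampled once, before the lock was taken, so the list
     * may have grown in the meantime; stop filling once the array is full. */
    static int copy_lpi_list_sketch(int *intids, int irq_count,
                                    const int *lpi_intid, int lpi_list_count)
    {
        int i = 0, j;

        for (j = 0; j < lpi_list_count; j++) {
            if (i == irq_count)   /* array full: break instead of overrunning */
                break;
            intids[i++] = lpi_intid[j];
        }
        return i;
    }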

Cc: stable@vger.kernel.org
Fixes: ccb1d791ab9e ("KVM: arm64: vgic-its: Fix pending table sync")
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years ago  KVM: arm/arm64: vgic: Disallow Active+Pending for level interrupts
Marc Zyngier [Fri, 9 Mar 2018 14:59:40 +0000 (14:59 +0000)]
KVM: arm/arm64: vgic: Disallow Active+Pending for level interrupts

It was recently reported that VFIO mediated devices, and anything
that VFIO exposes as level interrupts, do not strictly follow the
expected logic of such interrupts, as VFIO only lowers the input
line when the guest has EOId the interrupt at the GIC level, rather
than when it Acked the interrupt at the device level.

The GIC's Active+Pending state is fundamentally incompatible with
this behaviour, as it prevents KVM from observing the EOI, and in
turn results in VFIO never dropping the line. This results in an
interrupt storm in the guest, which it really never expected.

As we cannot really change VFIO to follow the strict rules of level
signalling, let's forbid the A+P state altogether, as it is in the
end only an optimization. It ensures that we will transition via
an invalid state, which we can use to notify VFIO of the EOI.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Shunyong Yang <shunyong.yang@hxt-semitech.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years ago  kvm: x86: hyperv: delete dead code in kvm_hv_hypercall()
Dan Carpenter [Sat, 17 Mar 2018 11:48:27 +0000 (14:48 +0300)]
kvm: x86: hyperv: delete dead code in kvm_hv_hypercall()

"rep_done" is always zero so the "(((u64)rep_done & 0xfff) << 32)"
expression is just zero.  We can remove the "res" temporary variable as
well and just use "ret" directly.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years ago  KVM: SVM: add struct kvm_svm to hold SVM specific KVM vars
Sean Christopherson [Tue, 20 Mar 2018 19:17:21 +0000 (12:17 -0700)]
KVM: SVM: add struct kvm_svm to hold SVM specific KVM vars

Add struct kvm_svm, which is analogous to struct vcpu_svm, along with
a helper to_kvm_svm() to retrieve kvm_svm from a struct kvm *.  Move
the SVM specific variables and struct definitions out of kvm_arch
and into kvm_svm.

Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years ago  KVM: VMX: add struct kvm_vmx to hold VMX specific KVM vars
Sean Christopherson [Tue, 20 Mar 2018 19:17:20 +0000 (12:17 -0700)]
KVM: VMX: add struct kvm_vmx to hold VMX specific KVM vars

Add struct kvm_vmx, which wraps struct kvm, and a helper to_kvm_vmx()
that retrieves 'struct kvm_vmx *' from 'struct kvm *'.  Move the VMX
specific variables out of kvm_arch and into kvm_vmx.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years ago  KVM: x86: move setting of ept_identity_map_addr to vmx.c
Sean Christopherson [Tue, 20 Mar 2018 19:17:19 +0000 (12:17 -0700)]
KVM: x86: move setting of ept_identity_map_addr to vmx.c

Add kvm_x86_ops->set_identity_map_addr and set ept_identity_map_addr
in VMX specific code so that ept_identity_map_addr can be moved out
of 'struct kvm_arch' in a future patch.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years ago  KVM: x86: define SVM/VMX specific kvm_arch_[alloc|free]_vm
Sean Christopherson [Tue, 20 Mar 2018 19:17:18 +0000 (12:17 -0700)]
KVM: x86: define SVM/VMX specific kvm_arch_[alloc|free]_vm

Define kvm_arch_[alloc|free]_vm in x86 as pass through functions
to new kvm_x86_ops vm_alloc and vm_free, and move the current
allocation logic as-is to SVM and VMX.  Vendor specific alloc/free
functions set the stage for SVM/VMX wrappers of 'struct kvm',
which will allow us to move the growing number of SVM/VMX specific
member variables out of 'struct kvm_arch'.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years ago  KVM: nVMX: fix vmentry failure code when L2 state would require emulation
Paolo Bonzini [Wed, 21 Mar 2018 13:20:18 +0000 (14:20 +0100)]
KVM: nVMX: fix vmentry failure code when L2 state would require emulation

Commit 2bb8cafea80b ("KVM: vVMX: signal failure for nested VMEntry if
emulation_required", 2018-03-12) introduces a new error path which does
not set *entry_failure_code.  Fix that to avoid a leak of L0 stack to L1.

Reported-by: Radim Krčmář <rkrcmar@redhat.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years ago  KVM: nVMX: Do not load EOI-exitmap while running L2
Liran Alon [Wed, 21 Mar 2018 00:50:31 +0000 (02:50 +0200)]
KVM: nVMX: Do not load EOI-exitmap while running L2

When the L1 IOAPIC redirection table is written, a KVM_REQ_SCAN_IOAPIC
request is set on all vCPUs. This is done so that
all vCPUs will recalculate their IOAPIC handled vectors and load
them into their EOI-exitmap.

However, it could be that one of the vCPUs is currently running
L2. In this case, load_eoi_exitmap() will be called, which would
write to vmcs02->eoi_exit_bitmap. This is wrong, because
vmcs02->eoi_exit_bitmap should always be equal to
vmcs12->eoi_exit_bitmap. Furthermore, at this point
KVM_REQ_SCAN_IOAPIC was already consumed and therefore we will
never update vmcs01->eoi_exit_bitmap. This could lead to remote_irr
of some IOAPIC level-triggered entry remaining set forever.

Fix this issue by delaying the load of EOI-exitmap to when vCPU
is running L1.

One may wonder why not just delay the entire KVM_REQ_SCAN_IOAPIC
processing to when the vCPU is running L1. This is done in order to handle
correctly the case where the LAPIC & IO-APIC of L1 are passed through into
L2. In this case, vmcs12->virtual_interrupt_delivery should be 0. In the
current nVMX implementation, that results in
vmcs02->virtual_interrupt_delivery also being 0. Thus,
vmcs02->eoi_exit_bitmap is not used. Therefore, every L2 EOI causes
a #VMExit into L0 (either on MSR_WRITE to an x2APIC MSR or
APIC_ACCESS/APIC_WRITE/EPT_MISCONFIG to the APIC MMIO page).
In order for such an L2 EOI to be broadcast, if needed, from the LAPIC
to the IO-APIC, vcpu->arch.ioapic_handled_vectors must be updated
while L2 is running. Therefore, the patch makes sure to delay only the
loading of the EOI-exitmap, but not the update of
vcpu->arch.ioapic_handled_vectors.
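
A minimal sketch of the deferral (field and helper names are illustrative, not the actual KVM code):

    #include <stdbool.h>
    #include <stdint.h>

    struct vcpu_eoi_sketch {
        bool in_guest_mode;                 /* currently running L2 */
        bool load_eoi_exitmap_pending;      /* deferred until back in L1 */
        uint64_t eoi_exit_bitmap[4];
    };

    /* Called from the KVM_REQ_SCAN_IOAPIC path: only touch vmcs01's bitmap
     * when running L1; otherwise remember to load it after the nested vmexit. */
    static void vcpu_load_eoi_exitmap(struct vcpu_eoi_sketch *v,
                                      const uint64_t bitmap[4])
    {
        int i;

        if (v->in_guest_mode) {
            v->load_eoi_exitmap_pending = true;   /* defer: vmcs02 must mirror vmcs12 */
            return;
        }
        for (i = 0; i < 4; i++)
            v->eoi_exit_bitmap[i] = bitmap[i];    /* load into vmcs01 */
    }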

Reviewed-by: Arbel Moshe <arbel.moshe@oracle.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Signed-off-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years ago  arm64: KVM: Use SMCCC_ARCH_WORKAROUND_1 for Falkor BP hardening
Shanker Donthineni [Mon, 5 Mar 2018 17:06:43 +0000 (11:06 -0600)]
arm64: KVM: Use SMCCC_ARCH_WORKAROUND_1 for Falkor BP hardening

The function SMCCC_ARCH_WORKAROUND_1 was introduced as part of SMC
V1.1 Calling Convention to mitigate CVE-2017-5715. This patch uses
the standard call SMCCC_ARCH_WORKAROUND_1 for Falkor chips instead
of Silicon provider service ID 0xC2001700.

Cc: <stable@vger.kernel.org> # 4.14+
Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years ago  KVM: arm: Reserve bit in KVM_REG_ARM encoding for secure/nonsecure
Peter Maydell [Tue, 6 Mar 2018 19:47:41 +0000 (19:47 +0000)]
KVM: arm: Reserve bit in KVM_REG_ARM encoding for secure/nonsecure

We have a KVM_REG_ARM encoding that we use to expose KVM guest registers
to userspace. Define that bit 28 in this encoding indicates secure vs
nonsecure, so we can distinguish the secure and nonsecure banked versions
of a banked AArch32 register.

For KVM currently, all guest registers are nonsecure, but defining
the bit is useful for userspace. In particular, QEMU uses this
encoding as part of its on-the-wire migration format, and needs to be
able to describe secure-bank registers when it is migrating (fully
emulated) EL3-enabled CPUs.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years ago  Merge tag 'kvm-arm-fixes-for-v4.16-2' into HEAD
Marc Zyngier [Mon, 19 Mar 2018 17:43:01 +0000 (17:43 +0000)]
Merge tag 'kvm-arm-fixes-for-v4.16-2' into HEAD

Resolve conflicts with current mainline

6 years ago  arm64: Enable ARM64_HARDEN_EL2_VECTORS on Cortex-A57 and A72
Marc Zyngier [Thu, 15 Feb 2018 11:49:20 +0000 (11:49 +0000)]
arm64: Enable ARM64_HARDEN_EL2_VECTORS on Cortex-A57 and A72

Cortex-A57 and A72 are vulnerable to the so-called "variant 3a" of
Meltdown, where an attacker can speculatively obtain the value
of a privileged system register.

By enabling ARM64_HARDEN_EL2_VECTORS on these CPUs, obtaining
VBAR_EL2 is not disclosing the hypervisor mappings anymore.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years ago  arm64: KVM: Allow mapping of vectors outside of the RAM region
Marc Zyngier [Thu, 15 Feb 2018 11:47:14 +0000 (11:47 +0000)]
arm64: KVM: Allow mapping of vectors outside of the RAM region

We're now ready to map our vectors in weird and wonderful locations.
On enabling ARM64_HARDEN_EL2_VECTORS, a vector slot gets allocated
if this hasn't been already done via ARM64_HARDEN_BRANCH_PREDICTOR
and gets mapped outside of the normal RAM region, next to the
idmap.

That way, being able to obtain VBAR_EL2 doesn't reveal the mapping
of the rest of the hypervisor code.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years ago  arm64: Make BP hardening slot counter available
Marc Zyngier [Tue, 13 Mar 2018 12:40:39 +0000 (12:40 +0000)]
arm64: Make BP hardening slot counter available

We're about to need to allocate hardening slots from other parts
of the kernel (in order to support ARM64_HARDEN_EL2_VECTORS).

Turn the counter into an atomic_t and make it available to the
rest of the kernel. Also add BP_HARDEN_EL2_SLOTS as the number of
slots instead of the hardcoded 4...

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years ago  arm/arm64: KVM: Introduce EL2-specific executable mappings
Marc Zyngier [Tue, 13 Feb 2018 11:00:29 +0000 (11:00 +0000)]
arm/arm64: KVM: Introduce EL2-specific executable mappings

Until now, all EL2 executable mappings were derived from their
EL1 VA. Since we want to decouple the vectors mapping from
the rest of the hypervisor, we need to be able to map some
text somewhere else.

The "idmap" region (for lack of a better name) is ideally suited
for this, as we have a huge range that hardly has anything in it.

Let's extend the IO allocator to also deal with executable mappings,
thus providing the required feature.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years ago  arm64: KVM: Allow far branches from vector slots to the main vectors
Marc Zyngier [Tue, 27 Feb 2018 17:38:08 +0000 (17:38 +0000)]
arm64: KVM: Allow far branches from vector slots to the main vectors

So far, the branch from the vector slots to the main vectors can at
most be 4GB from the main vectors (the reach of ADRP), and this
distance is known at compile time. If we were to remap the slots
to an unrelated VA, things would break badly.

A way to achieve VA independence would be to load the absolute
address of the vectors (__kvm_hyp_vector), either using a constant
pool or a series of movs, followed by an indirect branch.

This patch implements the latter solution, using another instance
of a patching callback. Note that since we have to save a register
pair on the stack, we branch to the *second* instruction in the
vectors in order to compensate for it. This also results in having
to adjust this balance in the invalid vector entry point.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years ago  arm64: KVM: Reserve 4 additional instructions in the BPI template
Marc Zyngier [Tue, 13 Mar 2018 12:24:02 +0000 (12:24 +0000)]
arm64: KVM: Reserve 4 additional instructions in the BPI template

So far, we only reserve a single instruction in the BPI template in
order to branch to the vectors. As we're going to stuff a few more
instructions there, let's reserve a total of 5 instructions, which
we're going to patch later on as required.

We also introduce a small refactor of the vectors themselves, so that
we stop carrying the target branch around.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years ago  arm64: KVM: Move BP hardening vectors into .hyp.text section
Marc Zyngier [Wed, 14 Mar 2018 13:28:50 +0000 (13:28 +0000)]
arm64: KVM: Move BP hardening vectors into .hyp.text section

There is no reason why the BP hardening vectors shouldn't be part
of the HYP text at compile time, rather than being mapped at runtime.

Also introduce a new config symbol that controls the compilation
of bpi.S.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoarm64: KVM: Move stashing of x0/x1 into the vector code itself
Marc Zyngier [Mon, 12 Feb 2018 17:53:00 +0000 (17:53 +0000)]
arm64: KVM: Move stashing of x0/x1 into the vector code itself

All our useful entry points into the hypervisor start by saving x0
and x1 on the stack. Let's move those saves into the vectors
by introducing macros that annotate whether a vector is valid or
not, thus indicating whether we want to stash registers or not.

The only drawback is that we now also stash registers for el2_error,
but this should never happen, and we pop them back right at the
start of the handling sequence.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoarm64: KVM: Move vector offsetting from hyp-init.S to kvm_get_hyp_vector
Marc Zyngier [Mon, 12 Feb 2018 16:50:19 +0000 (16:50 +0000)]
arm64: KVM: Move vector offsetting from hyp-init.S to kvm_get_hyp_vector

We currently provide the hyp-init code with a kernel VA, and expect
it to turn it into a HYP VA by itself. As we're about to provide
the hypervisor with mappings that are not necessarily in the memory
range, let's move the kern_hyp_va macro to kvm_get_hyp_vector.

No functional change.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoarm64: Update the KVM memory map documentation
Marc Zyngier [Tue, 5 Dec 2017 18:45:48 +0000 (18:45 +0000)]
arm64: Update the KVM memory map documentation

Update the documentation to reflect the new tricks we play on the
EL2 mappings...

Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoarm64: KVM: Introduce EL2 VA randomisation
Marc Zyngier [Sun, 3 Dec 2017 18:22:49 +0000 (18:22 +0000)]
arm64: KVM: Introduce EL2 VA randomisation

The main idea behind randomising the EL2 VA is that we usually have
a few spare bits between the most significant bit of the VA mask
and the most significant bit of the linear mapping.

Those bits could be a bunch of zeroes, and could be useful
to move things around a bit. Of course, the more memory you have,
the less randomisation you get...

Alternatively, these bits could be the result of KASLR, in which
case they are already random. But it would be nice to have a
*different* randomization, just to make the job of a potential
attacker a bit more difficult.

Inserting these random bits is a bit involved. We don't have a spare
register (short of rewriting all the kern_hyp_va call sites), and
the immediate we want to insert is too random to be used with the
ORR instruction. The best option I could come up with is the following
sequence:

and x0, x0, #va_mask
ror x0, x0, #first_random_bit
add x0, x0, #(random & 0xfff)
add x0, x0, #(random >> 12), lsl #12
ror x0, x0, #(63 - first_random_bit)

making it a fairly long sequence, but one that a decent CPU should
be able to execute without breaking a sweat. It is of course NOPed
out on VHE. The last 4 instructions can also be turned into NOPs
if it appears that there are no free bits to use.
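
For reference, here is a rough C rendering of what the patched
sequence computes (variable names illustrative, assuming a non-zero
first_random_bit):

	u64 va_mask;	/* mask of the "stable" VA bits     */
	u64 tag_lsb;	/* position of the first random bit */
	u64 tag_val;	/* up to 24 bits of randomness      */

	static inline u64 __hyp_va(u64 va)
	{
		u64 v = va & va_mask;			/* and x0, x0, #va_mask */

		v  = ror64(v, tag_lsb);			/* ror                  */
		v += tag_val & GENMASK_ULL(11, 0);	/* add (low 12 bits)    */
		v += tag_val & GENMASK_ULL(23, 12);	/* add ..., lsl #12     */
		return ror64(v, 64 - tag_lsb);		/* undo the rotation    */
	}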

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoarm64: KVM: Dynamically compute the HYP VA mask
Marc Zyngier [Fri, 8 Dec 2017 14:18:27 +0000 (14:18 +0000)]
arm64: KVM: Dynamically compute the HYP VA mask

As we're moving towards a much more dynamic way to compute our
HYP VA, let's express the mask in a slightly different way.

Instead of comparing the idmap position to the "low" VA mask,
we directly compute the mask by taking into account the idmap's
(VA_BITS - 1) bit.
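
A sketch of that computation (variable names assumed):

	phys_addr_t idmap_addr = __pa_symbol(__hyp_idmap_text_start);
	u64 hyp_va_msb;

	/* HYP sits on the other side of the idmap's (VA_BITS - 1) bit */
	hyp_va_msb  = idmap_addr & BIT(VA_BITS - 1);
	hyp_va_msb ^= BIT(VA_BITS - 1);

	va_mask  = GENMASK_ULL(VA_BITS - 2, 0);
	va_mask |= hyp_va_msb;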

No functional change.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoarm64: insn: Allow ADD/SUB (immediate) with LSL #12
Marc Zyngier [Sun, 3 Dec 2017 17:50:00 +0000 (17:50 +0000)]
arm64: insn: Allow ADD/SUB (immediate) with LSL #12

The encoder for ADD/SUB (immediate) can only cope with 12bit
immediates, while there is an encoding for a 12bit immediate shifted
by 12 bits to the left.

Let's fix this small oversight by allowing the LSL_12 bit to be set.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoarm64: insn: Add encoder for the EXTR instruction
Marc Zyngier [Sun, 3 Dec 2017 17:47:03 +0000 (17:47 +0000)]
arm64: insn: Add encoder for the EXTR instruction

Add an encoder for the EXTR instruction, which also implements the ROR
variant (where Rn == Rm).

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: Move HYP IO VAs to the "idmap" range
Marc Zyngier [Mon, 4 Dec 2017 17:04:38 +0000 (17:04 +0000)]
KVM: arm/arm64: Move HYP IO VAs to the "idmap" range

We so far mapped our HYP IO (which is essentially the GICv2 control
registers) using the same method as for memory. It recently appeared
that this is a bit unsafe:

We compute the HYP VA using the kern_hyp_va helper, but that helper
is only designed to deal with kernel VAs coming from the linear map,
and not from the vmalloc region... This could in turn cause some bad
aliasing between the two, amplified by the upcoming VA randomisation.

A solution is to come up with our very own basic VA allocator for
MMIO. Since half of the HYP address space only contains a single
page (the idmap), we have plenty to borrow from. Let's use the idmap
as a base, and allocate downwards from it. GICv2 now lives on the
other side of the great VA barrier.
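
A stripped-down sketch of such an allocator (names illustrative,
locking and overflow checks elided):

	static unsigned long io_map_base;	/* starts at the idmap VA */

	/* Hand out 'size' bytes of private HYP VA, growing downwards. */
	static unsigned long hyp_alloc_private_va(size_t size)
	{
		io_map_base -= PAGE_ALIGN(size);
		return io_map_base;
	}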

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm64: Fix HYP idmap unmap when using 52bit PA
Marc Zyngier [Wed, 14 Mar 2018 15:17:33 +0000 (15:17 +0000)]
KVM: arm64: Fix HYP idmap unmap when using 52bit PA

Unmapping the idmap range using 52bit PA is quite broken, as we
don't take into account the right number of PGD entries, and rely
on PTRS_PER_PGD. The result is that pgd_index() truncates the
address, and we end up in the weeds.

Let's introduce a new unmap_hyp_idmap_range() that knows about this,
together with a kvm_pgd_index() helper, which hides a bit of the
complexity of the issue.

Fixes: 98732d1b189b ("KVM: arm/arm64: fix HYP ID map extension to 52 bits")
Reported-by: James Morse <james.morse@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: Fix idmap size and alignment
Marc Zyngier [Mon, 12 Mar 2018 14:25:10 +0000 (14:25 +0000)]
KVM: arm/arm64: Fix idmap size and alignment

Although the idmap section of KVM can only be at most 4kB and
must be aligned on a 4kB boundary, the rest of the code expects
it to be page aligned. Things get messy when tearing down the
HYP page tables when PAGE_SIZE is 64K, and the idmap section isn't
64K aligned.

Let's fix this by computing aligned boundaries that the HYP code
will use.
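
In practice this amounts to something like the following (helper names
assumed):

	hyp_idmap_start = __pa_symbol(__hyp_idmap_text_start);
	hyp_idmap_start = ALIGN_DOWN(hyp_idmap_start, PAGE_SIZE);

	hyp_idmap_end = __pa_symbol(__hyp_idmap_text_end);
	hyp_idmap_end = ALIGN(hyp_idmap_end, PAGE_SIZE);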

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: James Morse <james.morse@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: Keep GICv2 HYP VAs in kvm_vgic_global_state
Marc Zyngier [Mon, 4 Dec 2017 16:43:23 +0000 (16:43 +0000)]
KVM: arm/arm64: Keep GICv2 HYP VAs in kvm_vgic_global_state

As we're about to change the way we map devices at HYP, we need
to move away from kern_hyp_va on an IO address.

One way of achieving this is to store the VAs in kvm_vgic_global_state,
and use that directly from the HYP code. This requires a small change
to create_hyp_io_mappings so that it can also return a HYP VA.

We take this opportunity to nuke the vctrl_base field in the emulated
distributor, as it is not used anymore.
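
The small change to create_hyp_io_mappings gives it a shape roughly
like this (exact prototype assumed):

	/*
	 * ioremap the device at phys_addr, map it into HYP, and return
	 * both the kernel VA (*kaddr) and the HYP VA (*haddr).
	 */
	int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
				   void __iomem **kaddr,
				   void __iomem **haddr);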

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: Move ioremap calls to create_hyp_io_mappings
Marc Zyngier [Mon, 4 Dec 2017 16:26:09 +0000 (16:26 +0000)]
KVM: arm/arm64: Move ioremap calls to create_hyp_io_mappings

Both HYP io mappings call ioremap, followed by create_hyp_io_mappings.
Let's move the ioremap call into create_hyp_io_mappings itself, which
simplifies the code a bit and allows for further refactoring.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: Demote HYP VA range display to being a debug feature
Marc Zyngier [Sun, 3 Dec 2017 20:04:51 +0000 (20:04 +0000)]
KVM: arm/arm64: Demote HYP VA range display to being a debug feature

Displaying the HYP VA information is slightly counterproductive when
using VA randomization. Turn it into a debug feature only, and adjust
the last displayed value to reflect the top of RAM instead of ~0.

Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: Do not use kern_hyp_va() with kvm_vgic_global_state
Marc Zyngier [Sun, 3 Dec 2017 19:28:56 +0000 (19:28 +0000)]
KVM: arm/arm64: Do not use kern_hyp_va() with kvm_vgic_global_state

kvm_vgic_global_state is part of the read-only section, and is
usually accessed using a PC-relative address generation (adrp + add).

It is thus useless to use kern_hyp_va() on it, and actively problematic
if kern_hyp_va() becomes non-idempotent. On the other hand, there is
no way that the compiler is going to guarantee that such access is
always PC relative.

So let's bite the bullet and provide our own accessor.
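
Such an accessor can be a short macro that forces the address to be
computed PC-relatively (a sketch; the macro name is an assumption):

	#define hyp_symbol_addr(s)					\
		({							\
			typeof(s) *addr;				\
			asm("adrp	%0, %1\n"			\
			    "add	%0, %0, :lo12:%1\n"		\
			    : "=r" (addr) : "S" (&s));			\
			addr;						\
		})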

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoarm64: cpufeatures: Drop the ARM64_HYP_OFFSET_LOW feature flag
Marc Zyngier [Sun, 3 Dec 2017 17:42:37 +0000 (17:42 +0000)]
arm64: cpufeatures: Drop the ARM64_HYP_OFFSET_LOW feature flag

Now that we can dynamically compute the kernel/hyp VA mask, there
is no need for a feature flag to trigger the alternative patching.
Let's drop the flag and everything that depends on it.

Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoarm64: KVM: Dynamically patch the kernel/hyp VA mask
Marc Zyngier [Sun, 3 Dec 2017 17:36:55 +0000 (17:36 +0000)]
arm64: KVM: Dynamically patch the kernel/hyp VA mask

So far, we're using a complicated sequence of alternatives to
patch the kernel/hyp VA mask on non-VHE, and NOP out the
masking altogether when on VHE.

The newly introduced dynamic patching gives us the opportunity
to simplify that code by patching a single instruction with
the correct mask (instead of the mind bending cumulative masking
we have at the moment) or even a single NOP on VHE. This also
adds some initial code that will allow the patching callback
to switch to more complex patching.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoarm64: insn: Add encoder for bitwise operations using literals
Marc Zyngier [Sun, 3 Dec 2017 17:09:08 +0000 (17:09 +0000)]
arm64: insn: Add encoder for bitwise operations using literals

We lack a way to encode operations such as AND, ORR, EOR that take
an immediate value. Doing so is quite involved, and is all about
reverse engineering the decoding algorithm described in the
pseudocode function DecodeBitMasks().

This has been tested by feeding it all the possible literal values
and comparing the output with that of GAS.

Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoarm64: insn: Add N immediate encoding
Marc Zyngier [Sun, 3 Dec 2017 17:01:39 +0000 (17:01 +0000)]
arm64: insn: Add N immediate encoding

We're missing a way to generate the encoding of the N immediate,
which is a single bit used in a number of instructions that take an
immediate.

Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoarm64: alternatives: Add dynamic patching feature
Marc Zyngier [Sun, 3 Dec 2017 12:02:14 +0000 (12:02 +0000)]
arm64: alternatives: Add dynamic patching feature

We've so far relied on a patching infrastructure that only gave us
a single alternative, without any way to provide a range of potential
replacement instructions. For a single feature, this is an all or
nothing thing.

It would be interesting to have a more finely grained way of patching
the kernel though, where we could dynamically tune the code that gets
injected.

In order to achieve this, let's introduce a new form of dynamic patching,
associating a callback with a patching site. This callback gets source and
target locations of the patching request, as well as the number of
instructions to be patched.

Dynamic patching is declared with the new ALTERNATIVE_CB and alternative_cb
directives:

asm volatile(ALTERNATIVE_CB("mov %0, #0\n", callback)
     : "r" (v));
or
alternative_cb callback
mov x0, #0
alternative_cb_end

where callback is the C function computing the alternative.
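
The callback is expected to have a prototype along these lines
(parameter names illustrative):

	void callback(struct alt_instr *alt, __le32 *origptr,
		      __le32 *updptr, int nr_inst);

It can then rewrite the nr_inst instructions at updptr any way it sees
fit, based on the original instructions at origptr.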

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: Avoid VGICv3 save/restore on VHE with no IRQs
Christoffer Dall [Thu, 5 Oct 2017 15:19:19 +0000 (17:19 +0200)]
KVM: arm/arm64: Avoid VGICv3 save/restore on VHE with no IRQs

We can finally get completely rid of any calls to the VGICv3
save/restore functions when the AP lists are empty on VHE systems.  This
requires carefully factoring out trap configuration from saving and
restoring state, and carefully choosing what to do on the VHE and
non-VHE path.

One of the challenges is that we cannot save/restore the VMCR lazily:
when emulating a GICv2-on-GICv3, we can only write the VMCR while
ICC_SRE_EL1.SRE is cleared, since otherwise all Group-0 interrupts end
up being delivered as FIQ.

To solve this problem, and still provide fast performance in the fast
path of exiting a VM when no interrupts are pending (which also
optimizes the latency for actually delivering virtual interrupts coming
from physical interrupts), we orchestrate a dance of only doing the
activate/deactivate traps in vgic load/put for VHE systems (which can
have ICC_SRE_EL1.SRE cleared when running in the host), and doing the
configuration on every round-trip on non-VHE systems.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: Move VGIC APR save/restore to vgic put/load
Christoffer Dall [Wed, 4 Oct 2017 22:18:07 +0000 (00:18 +0200)]
KVM: arm/arm64: Move VGIC APR save/restore to vgic put/load

The APRs can only have bits set when the guest acknowledges an interrupt
in the LR and can only have a bit cleared when the guest EOIs an
interrupt in the LR.  Therefore, if we have no LRs with any
pending/active interrupts, the APR cannot change value and there is no
need to clear it on every exit from the VM (hint: it will have already
been cleared when we exited the guest the last time with the LRs all
EOIed).

The only case we need to take care of is when we migrate the VCPU away
from a CPU or migrate a new VCPU onto a CPU, or when we return to
userspace to capture the state of the VCPU for migration.  To make sure
this works, factor out the APR save/restore functionality into separate
functions called from the VCPU (and by extension VGIC) put/load hooks.
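
A sketch of the resulting split (function names assumed):

	void vgic_v3_load(struct kvm_vcpu *vcpu)
	{
		/* Bring the APRs back when the vcpu is scheduled in... */
		__vgic_v3_restore_aprs(vcpu);
	}

	void vgic_v3_put(struct kvm_vcpu *vcpu)
	{
		/* ...and only capture them when it is scheduled out. */
		__vgic_v3_save_aprs(vcpu);
	}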

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: Handle VGICv3 save/restore from the main VGIC code on VHE
Christoffer Dall [Wed, 4 Oct 2017 21:42:32 +0000 (23:42 +0200)]
KVM: arm/arm64: Handle VGICv3 save/restore from the main VGIC code on VHE

Just like we can program the GICv2 hypervisor control interface directly
from the core vgic code, we can do the same for the GICv3 hypervisor
control interface on VHE systems.

We do this by simply calling the save/restore functions when we have VHE
and we can then get rid of the save/restore function calls from the VHE
world switch function.

One caveat is that we now write GICv3 system register state before the
potential early exit path in the run loop, and because we sync back
state in the early exit path, we have to ensure that we read a
consistent GIC state from the sync path, even though we have never
actually run the guest with the newly written GIC state.  We solve this
by inserting an ISB in the early exit path.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: Move arm64-only vgic-v2-sr.c file to arm64
Christoffer Dall [Wed, 4 Oct 2017 21:25:24 +0000 (23:25 +0200)]
KVM: arm/arm64: Move arm64-only vgic-v2-sr.c file to arm64

The vgic-v2-sr.c file now only contains the logic to replay unaligned
accesses to the virtual CPU interface on 16K and 64K page systems, which
is only relevant on 64-bit platforms.  Therefore move this file to the
arm64 KVM tree, remove the compile directive from the 32-bit side
makefile, and remove the ifdef in the C file.

Since this file also no longer saves/restores anything, rename the file
to vgic-v2-cpuif-proxy.c to more accurately describe the logic in this
file.

Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: Handle VGICv2 save/restore from the main VGIC code
Christoffer Dall [Thu, 22 Dec 2016 19:39:10 +0000 (20:39 +0100)]
KVM: arm/arm64: Handle VGICv2 save/restore from the main VGIC code

We can program the GICv2 hypervisor control interface logic directly
from the core vgic code, and instead do the save/restore directly from
the flush/sync functions, which can lead to a number of future
optimizations.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: Get rid of vgic_elrsr
Christoffer Dall [Wed, 4 Oct 2017 22:02:41 +0000 (00:02 +0200)]
KVM: arm/arm64: Get rid of vgic_elrsr

There is really no need to store the vgic_elrsr on the VGIC data
structures as the only need we have for the elrsr is to figure out if an
LR is inactive when we save the VGIC state upon returning from the
guest.  We might as well store this in a temporary local variable.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm64: Cleanup __activate_traps and __deactivate_traps for VHE and non-VHE
Christoffer Dall [Tue, 3 Oct 2017 15:06:15 +0000 (17:06 +0200)]
KVM: arm64: Cleanup __activate_traps and __deactivate_traps for VHE and non-VHE

To make the code more readable and to avoid the overhead of a function
call, let's get rid of a pair of the alternative function selectors and
explicitly call the VHE and non-VHE functions using the has_vhe() static
key based selector instead, telling the compiler to try to inline the
static function if it can.
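
Concretely, the alternative-based selection becomes something like
(function names illustrative):

	if (has_vhe())
		activate_traps_vhe(vcpu);
	else
		__activate_traps_nvhe(vcpu);

with both callees marked static and eligible for inlining.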

Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm64: Configure c15, PMU, and debug register traps on cpu load/put for VHE
Christoffer Dall [Fri, 4 Aug 2017 11:47:18 +0000 (13:47 +0200)]
KVM: arm64: Configure c15, PMU, and debug register traps on cpu load/put for VHE

We do not have to change the c15 trap setting on each switch to/from the
guest on VHE systems, because this setting only affects guest EL1/EL0
(and therefore not the VHE host).

The PMU and debug trap configuration can also be done on vcpu load/put
instead, because they don't affect how the VHE host kernel can access the
debug registers while executing KVM kernel code.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm64: Directly call VHE and non-VHE FPSIMD enabled functions
Christoffer Dall [Fri, 4 Aug 2017 07:39:51 +0000 (09:39 +0200)]
KVM: arm64: Directly call VHE and non-VHE FPSIMD enabled functions

There is no longer a need for an alternative to choose the right
function to tell us whether or not FPSIMD was enabled for the VM,
because we can simply call the appropriate functions directly from within
the _vhe and _nvhe run functions.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm64: Move common VHE/non-VHE trap config into separate functions
Christoffer Dall [Fri, 4 Aug 2017 06:50:25 +0000 (08:50 +0200)]
KVM: arm64: Move common VHE/non-VHE trap config into separate functions

As we are about to be more lazy with some of the trap configuration
register read/writes for VHE systems, move the logic that is currently
shared between VHE and non-VHE into a separate function which can be
called from either the world-switch path or from vcpu_load/vcpu_put.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm64: Defer saving/restoring 32-bit sysregs to vcpu load/put
Christoffer Dall [Wed, 27 Dec 2017 21:12:12 +0000 (22:12 +0100)]
KVM: arm64: Defer saving/restoring 32-bit sysregs to vcpu load/put

When running a 32-bit VM (EL1 in AArch32), the AArch32 system registers
can be deferred to vcpu load/put on VHE systems because neither
the host kernel nor host userspace uses these registers.

Note that we can't save DBGVCR32_EL2 conditionally based on the state of
the debug dirty flag on VHE after this change, because during
vcpu_load() we haven't calculated a valid debug flag yet, and when we've
restored the register during vcpu_load() we also have to save it during
vcpu_put().  This means that we'll always restore/save the register for
VHE on load/put, but luckily vcpu load/put are called rarely, so saving
an extra register unconditionally shouldn't significantly hurt
performance.

We also cannot defer saving FPEXC32_EL2, because this register only
holds a guest-valid value for 32-bit guests during the exit path, when
the guest has used FPSIMD registers and the early assembly handler
taking the EL2 fault has restored it. We therefore have to check
whether FPSIMD is enabled for the guest on the exit path and save the
register there, for both VHE and non-VHE guests.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm64: Prepare to handle deferred save/restore of 32-bit registers
Christoffer Dall [Wed, 27 Dec 2017 20:59:09 +0000 (21:59 +0100)]
KVM: arm64: Prepare to handle deferred save/restore of 32-bit registers

32-bit registers are not used by a 64-bit host kernel and can be
deferred, but we need to rework the accesses to these registers to access
the latest values depending on whether or not guest system registers are
loaded on the CPU or only reside in memory.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm64: Defer saving/restoring 64-bit sysregs to vcpu load/put on VHE
Christoffer Dall [Tue, 15 Mar 2016 18:43:45 +0000 (19:43 +0100)]
KVM: arm64: Defer saving/restoring 64-bit sysregs to vcpu load/put on VHE

Some system registers do not affect the host kernel's execution and can
therefore be loaded when we are about to run a VCPU and we don't have to
restore the host state to the hardware before the time when we are
actually about to return to userspace or schedule out the VCPU thread.

The EL1 system registers and the userspace state registers only
affecting EL0 execution do not need to be saved and restored on every
switch between the VM and the host, because they don't affect the host
kernel's execution.

We mark all registers which are now deferred as such in the
vcpu_{read,write}_sys_reg accessors in sys_regs.c to ensure the most
up-to-date copy is always accessed.
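
The accessors then take roughly this shape (a simplified sketch; the
helper for reading live sysregs is an assumed name):

	u64 vcpu_read_sys_reg(struct kvm_vcpu *vcpu, int reg)
	{
		u64 val;

		/* Guest sysregs live on the CPU? Read the hardware copy. */
		if (vcpu->arch.sysregs_loaded_on_cpu &&
		    __vcpu_read_sys_reg_from_cpu(reg, &val))
			return val;

		/* Otherwise fall back to the in-memory copy. */
		return __vcpu_sys_reg(vcpu, reg);
	}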

Note MPIDR_EL1 (controlled via VMPIDR_EL2) is accessed from other vcpu
threads, for example via the GIC emulation, and therefore must be
declared as immediate, which is fine as the guest cannot modify this
value.

The 32-bit sysregs can also be deferred but we do this in a separate
patch as it requires a bit more infrastructure.

Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm64: Prepare to handle deferred save/restore of ELR_EL1
Christoffer Dall [Wed, 27 Dec 2017 19:51:04 +0000 (20:51 +0100)]
KVM: arm64: Prepare to handle deferred save/restore of ELR_EL1

ELR_EL1 is not used by a VHE host kernel and can be deferred, but we
need to rework the accesses to this register to access the latest value
depending on whether or not guest system registers are loaded on the CPU
or only reside in memory.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>