KVM: VMX: Remove ple_window_actual_max
Babu Moger [Fri, 16 Mar 2018 20:37:23 +0000 (16:37 -0400)]
KVM: VMX: Remove ple_window_actual_max

Get rid of ple_window_actual_max, because its benefits are really
minuscule and the logic is complicated.

The overflows (and underflow) are controlled in __ple_window_grow
and _ple_window_shrink, respectively.

Suggested-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Babu Moger <babu.moger@amd.com>
[Fixed potential wraparound and change the max to UINT_MAX. - Radim]
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
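
As a rough illustration of the clamping idea, a hedged sketch in plain C
(not the kernel's code; the helper name and shape are invented):

#include <limits.h>
#include <stdint.h>

/* Widen to 64 bits so the multiply cannot wrap, then clamp to
 * UINT_MAX instead of tracking a separate ple_window_actual_max. */
static unsigned int grow_ple_window(unsigned int val, unsigned int grow)
{
        uint64_t ret = (uint64_t)val * grow;

        return ret > UINT_MAX ? UINT_MAX : (unsigned int)ret;
}
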
KVM: VMX: Fix the module parameters for vmx
Babu Moger [Fri, 16 Mar 2018 20:37:22 +0000 (16:37 -0400)]
KVM: VMX: Fix the module parameters for vmx

The vmx module parameters are supposed to be unsigned variants.

Also fix checkpatch warnings like the one below.

WARNING: Symbolic permissions 'S_IRUGO' are not preferred. Consider using octal permissions '0444'.
+module_param(ple_gap, uint, S_IRUGO);

Signed-off-by: Babu Moger <babu.moger@amd.com>
[Expanded uint to unsigned int in code. - Radim]
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
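
For illustration, the resulting parameter style might look like the sketch
below (the default value shown is an assumption, not taken from the patch):

#include <linux/moduleparam.h>

/* unsigned int parameter, octal 0444 instead of S_IRUGO */
static unsigned int ple_gap = 128;
module_param(ple_gap, uint, 0444);
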
KVM: x86: Fix perf timer mode IP reporting
Andi Kleen [Wed, 26 Jul 2017 00:20:32 +0000 (17:20 -0700)]
KVM: x86: Fix perf timer mode IP reporting

KVM and perf have a special backdoor mechanism to report the IP for interrupts
re-executed after vm exit. This works for the NMIs that perf normally uses.

However when perf is in timer mode it doesn't work because the timer interrupt
doesn't get this special treatment. This is common when KVM is running
nested in another hypervisor which may not implement the PMU, so only
timer mode is available.

Call the functions to set up the backdoor IP also for non-NMI interrupts.

I renamed the functions to set up the backdoor IP reporting to be more
appropriate for their new use.  The SVM change is only compile tested.

v2: Moved the functions inline.
For the normal interrupt case the before/after functions are now
called from x86.c, not arch specific code.
For the NMI case we still need to call it in the architecture
specific code, because it's already needed in the low level *_run
functions.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
[Removed unnecessary calls from arch handle_external_intr. - Radim]
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
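
A hedged sketch of the idea on the external-interrupt path (the helper
names follow the renaming described above; the exact call site is an
assumption):

/* Bracket the re-executed interrupt so perf can attribute the IP to
 * the guest, for non-NMI (e.g. timer) interrupts as well as NMIs. */
kvm_before_interrupt(vcpu);
kvm_x86_ops->handle_external_intr(vcpu);
kvm_after_interrupt(vcpu);
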
Merge tag 'kvm-arm-for-v4.17' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm
Radim Krčmář [Wed, 28 Mar 2018 14:09:09 +0000 (16:09 +0200)]
Merge tag 'kvm-arm-for-v4.17' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm

KVM/ARM updates for v4.17

- VHE optimizations
- EL2 address space randomization
- Variant 3a mitigation for Cortex-A57 and A72
- The usual vgic fixes
- Various minor tidying-up

arm64: Add temporary ERRATA_MIDR_ALL_VERSIONS compatibility macro
Marc Zyngier [Wed, 28 Mar 2018 11:46:07 +0000 (12:46 +0100)]
arm64: Add temporary ERRATA_MIDR_ALL_VERSIONS compatibility macro

MIDR_ALL_VERSIONS is changing, and won't have the same meaning
in 4.17, and the right thing to use will be ERRATA_MIDR_ALL_VERSIONS.

In order to cope with the merge window, let's add a compatibility
macro that will allow a relatively smooth transition, and that
can be removed post 4.17-rc1.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Revert "arm64: KVM: Use SMCCC_ARCH_WORKAROUND_1 for Falkor BP hardening"
Marc Zyngier [Wed, 28 Mar 2018 10:59:13 +0000 (11:59 +0100)]
Revert "arm64: KVM: Use SMCCC_ARCH_WORKAROUND_1 for Falkor BP hardening"

Creates far too many conflicts with arm64/for-next/core, to be
resent post -rc1.

This reverts commit f9f5dc19509bbef6f5e675346f1a7d7b846bdb12.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm/arm64: vgic-its: Fix potential overrun in vgic_copy_lpi_list
Marc Zyngier [Fri, 23 Mar 2018 14:57:09 +0000 (14:57 +0000)]
KVM: arm/arm64: vgic-its: Fix potential overrun in vgic_copy_lpi_list

vgic_copy_lpi_list() parses the LPI list and picks LPIs targeting
a given vcpu. We allocate the array containing the intids before taking
the lpi_list_lock, which means we can have an array size that is not
equal to the number of LPIs.

This is particularly obvious when looking at the path coming from
vgic_enable_lpis, which is not a command, and thus can run in parallel
with commands:

vcpu 0:                                        vcpu 1:
vgic_enable_lpis
  its_sync_lpi_pending_table
    vgic_copy_lpi_list
      intids = kmalloc_array(irq_count)
                                               MAPI(lpi targeting vcpu 0)
      list_for_each_entry(lpi_list_head)
        intids[i++] = irq->intid;

At that stage, we will happily overrun the intids array. Boo. An easy
fix is to break once the array is full. The MAPI command will update
the config anyway, and we won't miss a thing. We also make sure that
lpi_list_count is read exactly once, so that further updates of that
value will not affect the array bound check.

Cc: stable@vger.kernel.org
Fixes: ccb1d791ab9e ("KVM: arm64: vgic-its: Fix pending table sync")
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
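
A hedged sketch of the fixed copy loop (simplified; locking and the
per-vcpu filtering are omitted, and the function name is invented):

static int copy_lpi_list_sketch(struct vgic_dist *dist, u32 **out)
{
        struct vgic_irq *irq;
        u32 *intids;
        int i = 0, irq_count = READ_ONCE(dist->lpi_list_count);

        intids = kmalloc_array(irq_count, sizeof(*intids), GFP_KERNEL);
        if (!intids)
                return -ENOMEM;

        list_for_each_entry(irq, &dist->lpi_list_head, lpi_list) {
                if (i == irq_count)
                        break;  /* list grew concurrently; MAPI covers the new LPI */
                intids[i++] = irq->intid;
        }

        *out = intids;
        return i;
}
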
KVM: arm/arm64: vgic: Disallow Active+Pending for level interrupts
Marc Zyngier [Fri, 9 Mar 2018 14:59:40 +0000 (14:59 +0000)]
KVM: arm/arm64: vgic: Disallow Active+Pending for level interrupts

It was recently reported that VFIO mediated devices, and anything
that VFIO exposes as level interrupts, do not strictly follow the
expected logic of such interrupts, as VFIO only lowers the input
line when the guest has EOId the interrupt at the GIC level, rather
than when it Acked the interrupt at the device level.

The GIC's Active+Pending state is fundamentally incompatible with
this behaviour, as it prevents KVM from observing the EOI, and in
turn results in VFIO never dropping the line. This results in an
interrupt storm in the guest, which it really never expected.

As we cannot really change VFIO to follow the strict rules of level
signalling, let's forbid the A+P state altogether, as it is in the
end only an optimization. It ensures that we will transition via
an invalid state, which we can use to notify VFIO of the EOI.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Shunyong Yang <shunyong.yang@hxt-semitech.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
kvm: x86: hyperv: delete dead code in kvm_hv_hypercall()
Dan Carpenter [Sat, 17 Mar 2018 11:48:27 +0000 (14:48 +0300)]
kvm: x86: hyperv: delete dead code in kvm_hv_hypercall()

"rep_done" is always zero so the "(((u64)rep_done & 0xfff) << 32)"
expression is just zero.  We can remove the "res" temporary variable as
well and just use "ret" directly.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: SVM: add struct kvm_svm to hold SVM specific KVM vars
Sean Christopherson [Tue, 20 Mar 2018 19:17:21 +0000 (12:17 -0700)]
KVM: SVM: add struct kvm_svm to hold SVM specific KVM vars

Add struct kvm_svm, which is analogous to struct vcpu_svm, along with
a helper to_kvm_svm() to retrieve kvm_svm from a struct kvm *.  Move
the SVM specific variables and struct definitions out of kvm_arch
and into kvm_svm.

Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
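
The wrapper pattern, sketched (only the embedded struct kvm is certain;
the comment stands in for the moved members):

struct kvm_svm {
        struct kvm kvm;
        /* SVM-specific VM-wide state moves here */
};

static inline struct kvm_svm *to_kvm_svm(struct kvm *kvm)
{
        return container_of(kvm, struct kvm_svm, kvm);
}
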
KVM: VMX: add struct kvm_vmx to hold VMX specific KVM vars
Sean Christopherson [Tue, 20 Mar 2018 19:17:20 +0000 (12:17 -0700)]
KVM: VMX: add struct kvm_vmx to hold VMX specific KVM vars

Add struct kvm_vmx, which wraps struct kvm, and a helper to_kvm_vmx()
that retrieves 'struct kvm_vmx *' from 'struct kvm *'.  Move the VMX
specific variables out of kvm_arch and into kvm_vmx.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: x86: move setting of ept_identity_map_addr to vmx.c
Sean Christopherson [Tue, 20 Mar 2018 19:17:19 +0000 (12:17 -0700)]
KVM: x86: move setting of ept_identity_map_addr to vmx.c

Add kvm_x86_ops->set_identity_map_addr and set ept_identity_map_addr
in VMX specific code so that ept_identity_map_addr can be moved out
of 'struct kvm_arch' in a future patch.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: x86: define SVM/VMX specific kvm_arch_[alloc|free]_vm
Sean Christopherson [Tue, 20 Mar 2018 19:17:18 +0000 (12:17 -0700)]
KVM: x86: define SVM/VMX specific kvm_arch_[alloc|free]_vm

Define kvm_arch_[alloc|free]_vm in x86 as pass through functions
to new kvm_x86_ops vm_alloc and vm_free, and move the current
allocation logic as-is to SVM and VMX.  Vendor specific alloc/free
functions set the stage for SVM/VMX wrappers of 'struct kvm',
which will allow us to move the growing number of SVM/VMX specific
member variables out of 'struct kvm_arch'.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
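
A sketch of the pass-through shape described above, assuming the
vm_alloc/vm_free hook names from the message:

static inline struct kvm *kvm_arch_alloc_vm(void)
{
        return kvm_x86_ops->vm_alloc();
}

static inline void kvm_arch_free_vm(struct kvm *kvm)
{
        kvm_x86_ops->vm_free(kvm);
}
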
KVM: nVMX: fix vmentry failure code when L2 state would require emulation
Paolo Bonzini [Wed, 21 Mar 2018 13:20:18 +0000 (14:20 +0100)]
KVM: nVMX: fix vmentry failure code when L2 state would require emulation

Commit 2bb8cafea80b ("KVM: vVMX: signal failure for nested VMEntry if
emulation_required", 2018-03-12) introduces a new error path which does
not set *entry_failure_code.  Fix that to avoid a leak of L0 stack to L1.

Reported-by: Radim Krčmář <rkrcmar@redhat.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: nVMX: Do not load EOI-exitmap while running L2
Liran Alon [Wed, 21 Mar 2018 00:50:31 +0000 (02:50 +0200)]
KVM: nVMX: Do not load EOI-exitmap while running L2

When L1 IOAPIC redirection-table is written, a request of
KVM_REQ_SCAN_IOAPIC is set on all vCPUs. This is done such that
all vCPUs will now recalc their IOAPIC handled vectors and load
it to their EOI-exitmap.

However, it could be that one of the vCPUs is currently running
L2. In this case, load_eoi_exitmap() will be called which would
write to vmcs02->eoi_exit_bitmap, which is wrong because
vmcs02->eoi_exit_bitmap should always be equal to
vmcs12->eoi_exit_bitmap. Furthermore, at this point
KVM_REQ_SCAN_IOAPIC was already consumed and therefore we will
never update vmcs01->eoi_exit_bitmap. This could lead to remote_irr
of some IOAPIC level-triggered entry to remain set forever.

Fix this issue by delaying the load of EOI-exitmap to when vCPU
is running L1.

One may wonder why we do not just delay the entire KVM_REQ_SCAN_IOAPIC
processing to when the vCPU is running L1. This is done in order to
correctly handle the case where the LAPIC & IO-APIC of L1 are passed
through into L2. In this case, vmcs12->virtual_interrupt_delivery
should be 0. In the current nVMX implementation, that results in
vmcs02->virtual_interrupt_delivery also being 0. Thus,
vmcs02->eoi_exit_bitmap is not used. Therefore, every L2 EOI causes
a #VMExit into L0 (either on an MSR_WRITE to an x2APIC MSR or an
APIC_ACCESS/APIC_WRITE/EPT_MISCONFIG to the APIC MMIO page).
In order for such an L2 EOI to be broadcast, if needed, from the LAPIC
to the IO-APIC, vcpu->arch.ioapic_handled_vectors must be updated
while L2 is running. Therefore, the patch makes sure to delay only the
loading of the EOI-exitmap but not the update of
vcpu->arch.ioapic_handled_vectors.

Reviewed-by: Arbel Moshe <arbel.moshe@oracle.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Signed-off-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
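
A hedged sketch of the delayed load (the request name and exact flow are
assumptions drawn from the description above, not verified against the
patch):

static void vcpu_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
{
        if (!is_guest_mode(vcpu)) {
                /* running L1: safe to program vmcs01's bitmap */
                kvm_x86_ops->load_eoi_exitmap(vcpu, eoi_exit_bitmap);
                return;
        }
        /* running L2: reload once the vCPU is back in L1 */
        kvm_make_request(KVM_REQ_LOAD_EOI_EXITMAP, vcpu);
}
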
arm64: KVM: Use SMCCC_ARCH_WORKAROUND_1 for Falkor BP hardening
Shanker Donthineni [Mon, 5 Mar 2018 17:06:43 +0000 (11:06 -0600)]
arm64: KVM: Use SMCCC_ARCH_WORKAROUND_1 for Falkor BP hardening

The function SMCCC_ARCH_WORKAROUND_1 was introduced as part of the SMC
V1.1 Calling Convention to mitigate CVE-2017-5715. This patch uses
the standard call SMCCC_ARCH_WORKAROUND_1 for Falkor chips instead
of Silicon provider service ID 0xC2001700.

Cc: <stable@vger.kernel.org> # 4.14+
Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm: Reserve bit in KVM_REG_ARM encoding for secure/nonsecure
Peter Maydell [Tue, 6 Mar 2018 19:47:41 +0000 (19:47 +0000)]
KVM: arm: Reserve bit in KVM_REG_ARM encoding for secure/nonsecure

We have a KVM_REG_ARM encoding that we use to expose KVM guest registers
to userspace. Define that bit 28 in this encoding indicates secure vs
nonsecure, so we can distinguish the secure and nonsecure banked versions
of a banked AArch32 register.

For KVM currently, all guest registers are nonsecure, but defining
the bit is useful for userspace. In particular, QEMU uses this
encoding as part of its on-the-wire migration format, and needs to be
able to describe secure-bank registers when it is migrating (fully
emulated) EL3-enabled CPUs.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
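
Sketched as a uapi define (the mask value follows directly from "bit 28";
the name is illustrative):

#define KVM_REG_ARM_SECURE_MASK  0x0000000010000000  /* bit 28: secure bank */
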
Merge tag 'kvm-arm-fixes-for-v4.16-2' into HEAD
Marc Zyngier [Mon, 19 Mar 2018 17:43:01 +0000 (17:43 +0000)]
Merge tag 'kvm-arm-fixes-for-v4.16-2' into HEAD

Resolve conflicts with current mainline

arm64: Enable ARM64_HARDEN_EL2_VECTORS on Cortex-A57 and A72
Marc Zyngier [Thu, 15 Feb 2018 11:49:20 +0000 (11:49 +0000)]
arm64: Enable ARM64_HARDEN_EL2_VECTORS on Cortex-A57 and A72

Cortex-A57 and A72 are vulnerable to the so-called "variant 3a" of
Meltdown, where an attacker can speculatively obtain the value
of a privileged system register.

With ARM64_HARDEN_EL2_VECTORS enabled on these CPUs, obtaining
VBAR_EL2 no longer discloses the hypervisor mappings.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
arm64: KVM: Allow mapping of vectors outside of the RAM region
Marc Zyngier [Thu, 15 Feb 2018 11:47:14 +0000 (11:47 +0000)]
arm64: KVM: Allow mapping of vectors outside of the RAM region

We're now ready to map our vectors in weird and wonderful locations.
On enabling ARM64_HARDEN_EL2_VECTORS, a vector slot gets allocated
if this hasn't been already done via ARM64_HARDEN_BRANCH_PREDICTOR
and gets mapped outside of the normal RAM region, next to the
idmap.

That way, being able to obtain VBAR_EL2 doesn't reveal the mapping
of the rest of the hypervisor code.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
arm64: Make BP hardening slot counter available
Marc Zyngier [Tue, 13 Mar 2018 12:40:39 +0000 (12:40 +0000)]
arm64: Make BP hardening slot counter available

We're about to need to allocate hardening slots from other parts
of the kernel (in order to support ARM64_HARDEN_EL2_VECTORS).

Turn the counter into an atomic_t and make it available to the
rest of the kernel. Also add BP_HARDEN_EL2_SLOTS as the number of
slots instead of the hardcoded 4...

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
arm/arm64: KVM: Introduce EL2-specific executable mappings
Marc Zyngier [Tue, 13 Feb 2018 11:00:29 +0000 (11:00 +0000)]
arm/arm64: KVM: Introduce EL2-specific executable mappings

Until now, all EL2 executable mappings were derived from their
EL1 VA. Since we want to decouple the vectors mapping from
the rest of the hypervisor, we need to be able to map some
text somewhere else.

The "idmap" region (for lack of a better name) is ideally suited
for this, as we have a huge range that hardly has anything in it.

Let's extend the IO allocator to also deal with executable mappings,
thus providing the required feature.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
arm64: KVM: Allow far branches from vector slots to the main vectors
Marc Zyngier [Tue, 27 Feb 2018 17:38:08 +0000 (17:38 +0000)]
arm64: KVM: Allow far branches from vector slots to the main vectors

So far, the branch from the vector slots to the main vectors can at
most cover 4GB (the reach of ADRP), and this
distance is known at compile time. If we were to remap the slots
to an unrelated VA, things would break badly.

A way to achieve VA independence would be to load the absolute
address of the vectors (__kvm_hyp_vector), either using a constant
pool or a series of movs, followed by an indirect branch.

This patch implements the latter solution, using another instance
of a patching callback. Note that since we have to save a register
pair on the stack, we branch to the *second* instruction in the
vectors in order to compensate for it. This also results in having
to adjust this balance in the invalid vector entry point.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
arm64: KVM: Reserve 4 additional instructions in the BPI template
Marc Zyngier [Tue, 13 Mar 2018 12:24:02 +0000 (12:24 +0000)]
arm64: KVM: Reserve 4 additional instructions in the BPI template

So far, we only reserve a single instruction in the BPI template in
order to branch to the vectors. As we're going to stuff a few more
instructions there, let's reserve a total of 5 instructions, which
we're going to patch later on as required.

We also introduce a small refactor of the vectors themselves, so that
we stop carrying the target branch around.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
arm64: KVM: Move BP hardening vectors into .hyp.text section
Marc Zyngier [Wed, 14 Mar 2018 13:28:50 +0000 (13:28 +0000)]
arm64: KVM: Move BP hardening vectors into .hyp.text section

There is no reason why the BP hardening vectors shouldn't be part
of the HYP text at compile time, rather than being mapped at runtime.

Also introduce a new config symbol that controls the compilation
of bpi.S.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
arm64: KVM: Move stashing of x0/x1 into the vector code itself
Marc Zyngier [Mon, 12 Feb 2018 17:53:00 +0000 (17:53 +0000)]
arm64: KVM: Move stashing of x0/x1 into the vector code itself

All our useful entry points into the hypervisor start by
saving x0 and x1 on the stack. Let's move those into the vectors
by introducing macros that annotate whether a vector is valid or
not, thus indicating whether we want to stash registers or not.

The only drawback is that we now also stash registers for el2_error,
but this should never happen, and we pop them back right at the
start of the handling sequence.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
arm64: KVM: Move vector offsetting from hyp-init.S to kvm_get_hyp_vector
Marc Zyngier [Mon, 12 Feb 2018 16:50:19 +0000 (16:50 +0000)]
arm64: KVM: Move vector offsetting from hyp-init.S to kvm_get_hyp_vector

We currently provide the hyp-init code with a kernel VA, and expect
it to turn it into a HYP va by itself. As we're about to provide
the hypervisor with mappings that are not necessarily in the memory
range, let's move the kern_hyp_va macro to kvm_get_hyp_vector.

No functional change.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
arm64: Update the KVM memory map documentation
Marc Zyngier [Tue, 5 Dec 2017 18:45:48 +0000 (18:45 +0000)]
arm64: Update the KVM memory map documentation

Update the documentation to reflect the new tricks we play on the
EL2 mappings...

Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
arm64: KVM: Introduce EL2 VA randomisation
Marc Zyngier [Sun, 3 Dec 2017 18:22:49 +0000 (18:22 +0000)]
arm64: KVM: Introduce EL2 VA randomisation

The main idea behind randomising the EL2 VA is that we usually have
a few spare bits between the most significant bit of the VA mask
and the most significant bit of the linear mapping.

Those bits could be a bunch of zeroes, and could be useful
to move things around a bit. Of course, the more memory you have,
the less randomisation you get...

Alternatively, these bits could be the result of KASLR, in which
case they are already random. But it would be nice to have a
*different* randomization, just to make the job of a potential
attacker a bit more difficult.

Inserting these random bits is a bit involved. We don't have a spare
register (short of rewriting all the kern_hyp_va call sites), and
the immediate we want to insert is too random to be used with the
ORR instruction. The best option I could come up with is the following
sequence:

and x0, x0, #va_mask
ror x0, x0, #first_random_bit
add x0, x0, #(random & 0xfff)
add x0, x0, #(random >> 12), lsl #12
ror x0, x0, #(63 - first_random_bit)

making it a fairly long sequence, but one that a decent CPU should
be able to execute without breaking a sweat. It is of course NOPed
out on VHE. The last 4 instructions can also be turned into NOPs
if it appears that there are no free bits to use.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
arm64: KVM: Dynamically compute the HYP VA mask
Marc Zyngier [Fri, 8 Dec 2017 14:18:27 +0000 (14:18 +0000)]
arm64: KVM: Dynamically compute the HYP VA mask

As we're moving towards a much more dynamic way to compute our
HYP VA, let's express the mask in a slightly different way.

Instead of comparing the idmap position to the "low" VA mask,
we directly compute the mask by taking into account the idmap's
(VA_BIT-1) bit.

No functional change.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
arm64: insn: Allow ADD/SUB (immediate) with LSL #12
Marc Zyngier [Sun, 3 Dec 2017 17:50:00 +0000 (17:50 +0000)]
arm64: insn: Allow ADD/SUB (immediate) with LSL #12

The encoder for ADD/SUB (immediate) can only cope with 12bit
immediates, while there is an encoding for a 12bit immediate shifted
by 12 bits to the left.

Let's fix this small oversight by allowing the LSL_12 bit to be set.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
arm64: insn: Add encoder for the EXTR instruction
Marc Zyngier [Sun, 3 Dec 2017 17:47:03 +0000 (17:47 +0000)]
arm64: insn: Add encoder for the EXTR instruction

Add an encoder for the EXTR instruction, which also implements the ROR
variant (where Rn == Rm).

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm/arm64: Move HYP IO VAs to the "idmap" range
Marc Zyngier [Mon, 4 Dec 2017 17:04:38 +0000 (17:04 +0000)]
KVM: arm/arm64: Move HYP IO VAs to the "idmap" range

We so far mapped our HYP IO (which is essentially the GICv2 control
registers) using the same method as for memory. It recently appeared
that this is a bit unsafe:

We compute the HYP VA using the kern_hyp_va helper, but that helper
is only designed to deal with kernel VAs coming from the linear map,
and not from the vmalloc region... This could in turn cause some bad
aliasing between the two, amplified by the upcoming VA randomisation.

A solution is to come up with our very own basic VA allocator for
MMIO. Since half of the HYP address space only contains a single
page (the idmap), we have plenty to borrow from. Let's use the idmap
as a base, and allocate downwards from it. GICv2 now lives on the
other side of the great VA barrier.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm64: Fix HYP idmap unmap when using 52bit PA
Marc Zyngier [Wed, 14 Mar 2018 15:17:33 +0000 (15:17 +0000)]
KVM: arm64: Fix HYP idmap unmap when using 52bit PA

Unmapping the idmap range using 52bit PA is quite broken, as we
don't take into account the right number of PGD entries, and rely
on PTRS_PER_PGD. The result is that pgd_index() truncates the
address, and we end up in the weeds.

Let's introduce a new unmap_hyp_idmap_range() that knows about this,
together with a kvm_pgd_index() helper, which hides a bit of the
complexity of the issue.

Fixes: 98732d1b189b ("KVM: arm/arm64: fix HYP ID map extension to 52 bits")
Reported-by: James Morse <james.morse@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm/arm64: Fix idmap size and alignment
Marc Zyngier [Mon, 12 Mar 2018 14:25:10 +0000 (14:25 +0000)]
KVM: arm/arm64: Fix idmap size and alignment

Although the idmap section of KVM can only be at most 4kB and
must be aligned on a 4kB boundary, the rest of the code expects
it to be page aligned. Things get messy when tearing down the
HYP page tables when PAGE_SIZE is 64K, and the idmap section isn't
64K aligned.

Let's fix this by computing aligned boundaries that the HYP code
will use.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: James Morse <james.morse@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm/arm64: Keep GICv2 HYP VAs in kvm_vgic_global_state
Marc Zyngier [Mon, 4 Dec 2017 16:43:23 +0000 (16:43 +0000)]
KVM: arm/arm64: Keep GICv2 HYP VAs in kvm_vgic_global_state

As we're about to change the way we map devices at HYP, we need
to move away from kern_hyp_va on an IO address.

One way of achieving this is to store the VAs in kvm_vgic_global_state,
and use that directly from the HYP code. This requires a small change
to create_hyp_io_mappings so that it can also return a HYP VA.

We take this opportunity to nuke the vctrl_base field in the emulated
distributor, as it is not used anymore.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm/arm64: Move ioremap calls to create_hyp_io_mappings
Marc Zyngier [Mon, 4 Dec 2017 16:26:09 +0000 (16:26 +0000)]
KVM: arm/arm64: Move ioremap calls to create_hyp_io_mappings

Both HYP io mappings call ioremap, followed by create_hyp_io_mappings.
Let's move the ioremap call into create_hyp_io_mappings itself, which
simplifies the code a bit and allows for further refactoring.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm/arm64: Demote HYP VA range display to being a debug feature
Marc Zyngier [Sun, 3 Dec 2017 20:04:51 +0000 (20:04 +0000)]
KVM: arm/arm64: Demote HYP VA range display to being a debug feature

Displaying the HYP VA information is slightly counterproductive when
using VA randomization. Turn it into a debug feature only, and adjust
the last displayed value to reflect the top of RAM instead of ~0.

Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm/arm64: Do not use kern_hyp_va() with kvm_vgic_global_state
Marc Zyngier [Sun, 3 Dec 2017 19:28:56 +0000 (19:28 +0000)]
KVM: arm/arm64: Do not use kern_hyp_va() with kvm_vgic_global_state

kvm_vgic_global_state is part of the read-only section, and is
usually accessed using a PC-relative address generation (adrp + add).

It is thus useless to use kern_hyp_va() on it, and actively problematic
if kern_hyp_va() becomes non-idempotent. On the other hand, there is
no way that the compiler is going to guarantee that such access is
always PC relative.

So let's bite the bullet and provide our own accessor.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
arm64: cpufeatures: Drop the ARM64_HYP_OFFSET_LOW feature flag
Marc Zyngier [Sun, 3 Dec 2017 17:42:37 +0000 (17:42 +0000)]
arm64: cpufeatures: Drop the ARM64_HYP_OFFSET_LOW feature flag

Now that we can dynamically compute the kernel/hyp VA mask, there
is no need for a feature flag to trigger the alternative patching.
Let's drop the flag and everything that depends on it.

Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
arm64: KVM: Dynamically patch the kernel/hyp VA mask
Marc Zyngier [Sun, 3 Dec 2017 17:36:55 +0000 (17:36 +0000)]
arm64: KVM: Dynamically patch the kernel/hyp VA mask

So far, we're using a complicated sequence of alternatives to
patch the kernel/hyp VA mask on non-VHE, and NOP out the
masking altogether when on VHE.

The newly introduced dynamic patching gives us the opportunity
to simplify that code by patching a single instruction with
the correct mask (instead of the mind bending cumulative masking
we have at the moment) or even a single NOP on VHE. This also
adds some initial code that will allow the patching callback
to switch to a more complex patching.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
arm64: insn: Add encoder for bitwise operations using literals
Marc Zyngier [Sun, 3 Dec 2017 17:09:08 +0000 (17:09 +0000)]
arm64: insn: Add encoder for bitwise operations using literals

We lack a way to encode operations such as AND, ORR, EOR that take
an immediate value. Doing so is quite involved, and is all about
reverse engineering the decoding algorithm described in the
pseudocode function DecodeBitMasks().

This has been tested by feeding it all the possible literal values
and comparing the output with that of GAS.

Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
arm64: insn: Add N immediate encoding
Marc Zyngier [Sun, 3 Dec 2017 17:01:39 +0000 (17:01 +0000)]
arm64: insn: Add N immediate encoding

We're missing a way to generate the encoding of the N immediate,
which is only a single bit used in a number of instructions that take
an immediate.

Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
arm64: alternatives: Add dynamic patching feature
Marc Zyngier [Sun, 3 Dec 2017 12:02:14 +0000 (12:02 +0000)]
arm64: alternatives: Add dynamic patching feature

We've so far relied on a patching infrastructure that only gave us
a single alternative, without any way to provide a range of potential
replacement instructions. For a single feature, this is an all or
nothing thing.

It would be interesting to have a more finely grained way of patching
the kernel though, where we could dynamically tune the code that gets
injected.

In order to achieve this, let's introduce a new form of dynamic patching,
associating a callback to a patching site. This callback gets source and
target locations of the patching request, as well as the number of
instructions to be patched.

Dynamic patching is declared with the new ALTERNATIVE_CB and alternative_cb
directives:

asm volatile(ALTERNATIVE_CB("mov %0, #0\n", callback)
     : "r" (v));
or
alternative_cb callback
mov x0, #0
alternative_cb_end

where callback is the C function computing the alternative.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
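
For illustration, the callback could plausibly have the following shape
(a sketch matching the parameters named above; the typedef name is an
assumption):

/* Receives the source and target locations of the patching request,
 * plus the number of instructions to be patched. */
typedef void (*alternative_cb_t)(struct alt_instr *alt,
                                 __le32 *origptr, __le32 *updptr,
                                 int nr_inst);
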
KVM: arm/arm64: Avoid VGICv3 save/restore on VHE with no IRQs
Christoffer Dall [Thu, 5 Oct 2017 15:19:19 +0000 (17:19 +0200)]
KVM: arm/arm64: Avoid VGICv3 save/restore on VHE with no IRQs

We can finally get completely rid of any calls to the VGICv3
save/restore functions when the AP lists are empty on VHE systems.  This
requires carefully factoring out trap configuration from saving and
restoring state, and carefully choosing what to do on the VHE and
non-VHE path.

One of the challenges is that we cannot save/restore the VMCR lazily
because we can only write the VMCR when ICC_SRE_EL1.SRE is cleared when
emulating a GICv2-on-GICv3, since otherwise all Group-0 interrupts end
up being delivered as FIQ.

To solve this problem, and still provide fast performance in the fast
path of exiting a VM when no interrupts are pending (which also
optimized the latency for actually delivering virtual interrupts coming
from physical interrupts), we orchestrate a dance of only doing the
activate/deactivate traps in vgic load/put for VHE systems (which can
have ICC_SRE_EL1.SRE cleared when running in the host), and doing the
configuration on every round-trip on non-VHE systems.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm/arm64: Move VGIC APR save/restore to vgic put/load
Christoffer Dall [Wed, 4 Oct 2017 22:18:07 +0000 (00:18 +0200)]
KVM: arm/arm64: Move VGIC APR save/restore to vgic put/load

The APRs can only have bits set when the guest acknowledges an interrupt
in the LR and can only have a bit cleared when the guest EOIs an
interrupt in the LR.  Therefore, if we have no LRs with any
pending/active interrupts, the APR cannot change value and there is no
need to clear it on every exit from the VM (hint: it will have already
been cleared when we exited the guest the last time with the LRs all
EOIed).

The only case we need to take care of is when we migrate the VCPU away
from a CPU or migrate a new VCPU onto a CPU, or when we return to
userspace to capture the state of the VCPU for migration.  To make sure
this works, factor out the APR save/restore functionality into separate
functions called from the VCPU (and by extension VGIC) put/load hooks.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm/arm64: Handle VGICv3 save/restore from the main VGIC code on VHE
Christoffer Dall [Wed, 4 Oct 2017 21:42:32 +0000 (23:42 +0200)]
KVM: arm/arm64: Handle VGICv3 save/restore from the main VGIC code on VHE

Just like we can program the GICv2 hypervisor control interface directly
from the core vgic code, we can do the same for the GICv3 hypervisor
control interface on VHE systems.

We do this by simply calling the save/restore functions when we have VHE
and we can then get rid of the save/restore function calls from the VHE
world switch function.

One caveat is that we now write GICv3 system register state before the
potential early exit path in the run loop, and because we sync back
state in the early exit path, we have to ensure that we read a
consistent GIC state from the sync path, even though we have never
actually run the guest with the newly written GIC state.  We solve this
by inserting an ISB in the early exit path.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm/arm64: Move arm64-only vgic-v2-sr.c file to arm64
Christoffer Dall [Wed, 4 Oct 2017 21:25:24 +0000 (23:25 +0200)]
KVM: arm/arm64: Move arm64-only vgic-v2-sr.c file to arm64

The vgic-v2-sr.c file now only contains the logic to replay unaligned
accesses to the virtual CPU interface on 16K and 64K page systems, which
is only relevant on 64-bit platforms.  Therefore move this file to the
arm64 KVM tree, remove the compile directive from the 32-bit side
makefile, and remove the ifdef in the C file.

Since this file also no longer saves/restores anything, rename the file
to vgic-v2-cpuif-proxy.c to more accurately describe the logic in this
file.

Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm/arm64: Handle VGICv2 save/restore from the main VGIC code
Christoffer Dall [Thu, 22 Dec 2016 19:39:10 +0000 (20:39 +0100)]
KVM: arm/arm64: Handle VGICv2 save/restore from the main VGIC code

We can program the GICv2 hypervisor control interface logic directly
from the core vgic code and can instead do the save/restore directly
from the flush/sync functions, which can lead to a number of future
optimizations.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm/arm64: Get rid of vgic_elrsr
Christoffer Dall [Wed, 4 Oct 2017 22:02:41 +0000 (00:02 +0200)]
KVM: arm/arm64: Get rid of vgic_elrsr

There is really no need to store the vgic_elrsr on the VGIC data
structures as the only need we have for the elrsr is to figure out if an
LR is inactive when we save the VGIC state upon returning from the
guest.  We might as well store this in a temporary local variable.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm64: Cleanup __activate_traps and __deactivate_traps for VHE and non-VHE
Christoffer Dall [Tue, 3 Oct 2017 15:06:15 +0000 (17:06 +0200)]
KVM: arm64: Cleanup __activate_traps and __deactivate_traps for VHE and non-VHE

To make the code more readable and to avoid the overhead of a function
call, let's get rid of a pair of the alternative function selectors and
explicitly call the VHE and non-VHE functions using the has_vhe() static
key based selector instead, telling the compiler to try to inline the
static function if it can.

Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
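
A hedged sketch of the selector pattern (function names illustrative):

static void activate_traps(struct kvm_vcpu *vcpu)
{
        if (has_vhe())
                activate_traps_vhe(vcpu);       /* static key: branch patched out */
        else
                __activate_traps_nvhe(vcpu);    /* compiler may inline either one */
}
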
KVM: arm64: Configure c15, PMU, and debug register traps on cpu load/put for VHE
Christoffer Dall [Fri, 4 Aug 2017 11:47:18 +0000 (13:47 +0200)]
KVM: arm64: Configure c15, PMU, and debug register traps on cpu load/put for VHE

We do not have to change the c15 trap setting on each switch to/from the
guest on VHE systems, because this setting only affects guest EL1/EL0
(and therefore not the VHE host).

The PMU and debug trap configuration can also be done on vcpu load/put
instead, because they don't affect how the VHE host kernel can access the
debug registers while executing KVM kernel code.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm64: Directly call VHE and non-VHE FPSIMD enabled functions
Christoffer Dall [Fri, 4 Aug 2017 07:39:51 +0000 (09:39 +0200)]
KVM: arm64: Directly call VHE and non-VHE FPSIMD enabled functions

There is no longer a need for an alternative to choose the right
function to tell us whether or not FPSIMD was enabled for the VM,
because we can simply call the appropriate functions directly from within
the _vhe and _nvhe run functions.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm64: Move common VHE/non-VHE trap config in separate functions
Christoffer Dall [Fri, 4 Aug 2017 06:50:25 +0000 (08:50 +0200)]
KVM: arm64: Move common VHE/non-VHE trap config in separate functions

As we are about to be more lazy with some of the trap configuration
register read/writes for VHE systems, move the logic that is currently
shared between VHE and non-VHE into a separate function which can be
called from either the world-switch path or from vcpu_load/vcpu_put.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm64: Defer saving/restoring 32-bit sysregs to vcpu load/put
Christoffer Dall [Wed, 27 Dec 2017 21:12:12 +0000 (22:12 +0100)]
KVM: arm64: Defer saving/restoring 32-bit sysregs to vcpu load/put

When running a 32-bit VM (EL1 in AArch32), the AArch32 system registers
can be deferred to vcpu load/put on VHE systems because neither
the host kernel nor host userspace uses these registers.

Note that we can't save DBGVCR32_EL2 conditionally based on the state of
the debug dirty flag on VHE after this change, because during
vcpu_load() we haven't calculated a valid debug flag yet, and when we've
restored the register during vcpu_load() we also have to save it during
vcpu_put().  This means that we'll always restore/save the register for
VHE on load/put, but luckily vcpu load/put are called rarely, so saving
an extra register unconditionally shouldn't significantly hurt
performance.

We also cannot defer saving FPEXC32_32, because this register only holds
a guest-valid value for 32-bit guests during the exit path, when the
guest has used FPSIMD registers and the early assembly handler restored
the register upon taking the EL2 fault. Therefore we have to check
whether fpsimd is enabled for the guest in the exit path and save the
register then, for both VHE and non-VHE guests.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm64: Prepare to handle deferred save/restore of 32-bit registers
Christoffer Dall [Wed, 27 Dec 2017 20:59:09 +0000 (21:59 +0100)]
KVM: arm64: Prepare to handle deferred save/restore of 32-bit registers

32-bit registers are not used by a 64-bit host kernel and can be
deferred, but we need to rework the accesses to these registers to access
the latest values depending on whether or not guest system registers are
loaded on the CPU or only reside in memory.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm64: Defer saving/restoring 64-bit sysregs to vcpu load/put on VHE
Christoffer Dall [Tue, 15 Mar 2016 18:43:45 +0000 (19:43 +0100)]
KVM: arm64: Defer saving/restoring 64-bit sysregs to vcpu load/put on VHE

Some system registers do not affect the host kernel's execution and can
therefore be loaded when we are about to run a VCPU and we don't have to
restore the host state to the hardware before the time when we are
actually about to return to userspace or schedule out the VCPU thread.

The EL1 system registers and the userspace state registers only
affecting EL0 execution do not need to be saved and restored on every
switch between the VM and the host, because they don't affect the host
kernel's execution.

We mark all registers which are now deferred as such in the
vcpu_{read,write}_sys_reg accessors in sys-regs.c to ensure the most
up-to-date copy is always accessed.

Note MPIDR_EL1 (controlled via VMPIDR_EL2) is accessed from other vcpu
threads, for example via the GIC emulation, and therefore must be
declared as immediate, which is fine as the guest cannot modify this
value.

The 32-bit sysregs can also be deferred but we do this in a separate
patch as it requires a bit more infrastructure.

Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm64: Prepare to handle deferred save/restore of ELR_EL1
Christoffer Dall [Wed, 27 Dec 2017 19:51:04 +0000 (20:51 +0100)]
KVM: arm64: Prepare to handle deferred save/restore of ELR_EL1

ELR_EL1 is not used by a VHE host kernel and can be deferred, but we
need to rework the accesses to this register to access the latest value
depending on whether or not guest system registers are loaded on the CPU
or only reside in memory.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm/arm64: Prepare to handle deferred save/restore of SPSR_EL1
Christoffer Dall [Wed, 27 Dec 2017 19:01:52 +0000 (20:01 +0100)]
KVM: arm/arm64: Prepare to handle deferred save/restore of SPSR_EL1

SPSR_EL1 is not used by a VHE host kernel and can be deferred, but we
need to rework the accesses to this register to access the latest value
depending on whether or not guest system registers are loaded on the CPU
or only reside in memory.

The handling of accessing the various banked SPSRs for 32-bit VMs is a
bit clunky, but this will be improved in following patches which will
first prepare and subsequently implement deferred save/restore of the
32-bit registers, including the 32-bit SPSRs.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm64: Introduce framework for accessing deferred sysregs
Christoffer Dall [Sat, 23 Dec 2017 20:53:48 +0000 (21:53 +0100)]
KVM: arm64: Introduce framework for accessing deferred sysregs

We are about to defer saving and restoring some groups of system
registers to vcpu_put and vcpu_load on supported systems.  This means
that we need some infrastructure to access system registers which
supports either accessing the memory backing of the register or directly
accessing the system registers, depending on the state of the system
when we access the register.

We do this by defining read/write accessor functions, which can handle
both "immediate" and "deferrable" system registers.  Immediate registers
are always saved/restored in the world-switch path, but deferrable
registers are only saved/restored in vcpu_put/vcpu_load when supported
and sysregs_loaded_on_cpu will be set in that case.

Note that we don't use the deferred mechanism yet in this patch, but only
introduce infrastructure.  This is to improve convenience of review in
the subsequent patches where it is clear which registers become
deferred.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
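
A hedged sketch of the accessor shape (read_from_hw() is a hypothetical
stand-in for the per-register hardware access):

u64 vcpu_read_sys_reg(struct kvm_vcpu *vcpu, int reg)
{
        /* Deferrable register currently live on the CPU: read hardware. */
        if (vcpu->arch.sysregs_loaded_on_cpu)
                return read_from_hw(reg);       /* hypothetical helper */

        /* Otherwise the memory backing holds the most up-to-date value. */
        return __vcpu_sys_reg(vcpu, reg);
}
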
KVM: arm64: Rewrite system register accessors to read/write functions
Christoffer Dall [Wed, 16 Mar 2016 14:38:53 +0000 (15:38 +0100)]
KVM: arm64: Rewrite system register accessors to read/write functions

Currently we access the system registers array via the vcpu_sys_reg()
macro.  However, we are about to change the behavior to sometimes
modify the register file directly, so let's change this to two
primitives:

 * Accessor macros vcpu_write_sys_reg() and vcpu_read_sys_reg()
 * Direct array access macro __vcpu_sys_reg()

The accessor macros should be used in places where the code needs to
access the currently loaded VCPU's state as observed by the guest.  For
example, when trapping on cache related registers, a write to a system
register should go directly to the VCPU version of the register.

The direct array access macro can be used in places where the VCPU is
known to never be running (for example userspace access) or for
registers which are never context switched (for example all the PMU
system registers).

This rewrites all users of vcpu_sys_regs to one of the macros described
above.

No functional change.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
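
Usage, sketched (the register choices are illustrative):

/* Guest-observed state of the loaded VCPU: use the accessors. */
u64 val = vcpu_read_sys_reg(vcpu, SCTLR_EL1);
vcpu_write_sys_reg(vcpu, val, SCTLR_EL1);

/* VCPU known not to be running (e.g. userspace access): direct array. */
__vcpu_sys_reg(vcpu, PMCR_EL0) = 0;
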
KVM: arm64: Change 32-bit handling of VM system registers
Christoffer Dall [Wed, 11 Oct 2017 13:20:41 +0000 (15:20 +0200)]
KVM: arm64: Change 32-bit handling of VM system registers

We currently handle 32-bit accesses to trapped VM system registers using
the 32-bit index into the coproc array on the vcpu structure, which is a
union of the coproc array and the sysreg array.

Since all the 32-bit coproc indices are created to correspond to the
architectural mapping between 64-bit system registers and 32-bit
coprocessor registers, and because the AArch64 system registers are
double the size of the AArch32 coprocessor registers, we can always find
the system register entry that we must update by dividing the 32-bit
coproc index by 2.

This is going to make our lives much easier when we have to start
accessing system registers that use deferred save/restore and might
have to be read directly from the physical CPU.

Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
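
The mapping, sketched (the helper name is hypothetical):

/* Two 32-bit coproc slots overlay one 64-bit sysreg, so the 64-bit
 * index is simply the 32-bit coproc index divided by two. */
static inline int coproc_to_sysreg_index(int coproc_idx)
{
        return coproc_idx / 2;
}
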
KVM: arm64: Don't save the host ELR_EL2 and SPSR_EL2 on VHE systems
Christoffer Dall [Tue, 10 Oct 2017 20:54:57 +0000 (22:54 +0200)]
KVM: arm64: Don't save the host ELR_EL2 and SPSR_EL2 on VHE systems

On non-VHE systems we need to save the ELR_EL2 and SPSR_EL2 so that we can
return to the host in EL1 in the same state and location where we issued a
hypercall to EL2, but on VHE ELR_EL2 and SPSR_EL2 are not useful because we
never enter a guest as a result of an exception entry that would be directly
handled by KVM. The kernel entry code already saves ELR_EL1/SPSR_EL1 on
exception entry, which is enough.  Therefore, factor out these registers into
separate save/restore functions, making it easy to exclude them from the VHE
world-switch path later on.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm64: Unify non-VHE host/guest sysreg save and restore functions
Christoffer Dall [Tue, 10 Oct 2017 20:40:13 +0000 (22:40 +0200)]
KVM: arm64: Unify non-VHE host/guest sysreg save and restore functions

There is no need to have multiple identical functions with different
names for saving host and guest state.  When saving and restoring state
for the host and guest, the state is the same for both contexts, and
that's why we have the kvm_cpu_context structure.  Delete one
version and rename the other into simply save/restore.

Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm/arm64: Remove leftover comment from kvm_vcpu_run_vhe
Christoffer Dall [Sun, 3 Dec 2017 19:38:52 +0000 (20:38 +0100)]
KVM: arm/arm64: Remove leftover comment from kvm_vcpu_run_vhe

The comment only applied to SPE on non-VHE systems, so we simply remove
it.

Suggested-by: Andrew Jones <drjones@redhat.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm64: Introduce separate VHE/non-VHE sysreg save/restore functions
Christoffer Dall [Tue, 10 Oct 2017 20:19:31 +0000 (22:19 +0200)]
KVM: arm64: Introduce separate VHE/non-VHE sysreg save/restore functions

We are about to handle system registers quite differently between VHE
and non-VHE systems.  In preparation for that, we need to split some of
the handling functions between VHE and non-VHE functionality.

For now, we simply copy the non-VHE functions, but we do change the use
of static keys for VHE and non-VHE functionality now that we have
separate functions.

Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm64: Rewrite sysreg alternatives to static keys
Christoffer Dall [Tue, 10 Oct 2017 20:01:06 +0000 (22:01 +0200)]
KVM: arm64: Rewrite sysreg alternatives to static keys

As we are about to move calls around in the sysreg save/restore logic,
let's first rewrite the alternative function callers, because it is
going to make the next patches much easier to read.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm64: Move userspace system registers into separate function
Christoffer Dall [Tue, 15 Mar 2016 20:41:55 +0000 (21:41 +0100)]
KVM: arm64: Move userspace system registers into separate function

There's a semantic difference between the EL1 registers that control
operation of a kernel running in EL1 and EL1 registers that only control
userspace execution in EL0.  Since we can defer saving/restoring the
latter, move them into their own function.

The ARMv8 ARM (ARM DDI 0487C.a) Section D10.2.1 recommends that
ACTLR_EL1 has no effect on the processor when running the VHE host, and
we can therefore move this register into the EL1 state which is only
saved/restored on vcpu_put/load for a VHE host.

We also take this chance to rename the function saving/restoring the
remaining system register to make it clear this function deals with
the EL1 system registers.

Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm64: Remove noop calls to timer save/restore from VHE switch
Christoffer Dall [Sat, 5 Aug 2017 20:51:35 +0000 (22:51 +0200)]
KVM: arm64: Remove noop calls to timer save/restore from VHE switch

The VHE switch function calls __timer_enable_traps and
__timer_disable_traps which don't do anything on VHE systems.
Therefore, simply remove these calls from the VHE switch function and
make the functions non-conditional as they are now only called from the
non-VHE switch path.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm64: Don't deactivate VM on VHE systems
Christoffer Dall [Tue, 10 Oct 2017 11:25:21 +0000 (13:25 +0200)]
KVM: arm64: Don't deactivate VM on VHE systems

There is no need to reset the VTTBR to zero when exiting the guest on
VHE systems.  VHE systems don't use stage 2 translations for the EL2&0
translation regime used by the host.

Reviewed-by: Andrew Jones <drjones@redhat.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm64: Remove kern_hyp_va() use in VHE switch function
Christoffer Dall [Thu, 22 Dec 2016 23:20:38 +0000 (00:20 +0100)]
KVM: arm64: Remove kern_hyp_va() use in VHE switch function

VHE kernels run completely in EL2 and therefore don't have a notion of
kernel and hyp addresses, they are all just kernel addresses.  Therefore
don't call kern_hyp_va() in the VHE switch function.

Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
KVM: arm64: Introduce VHE-specific kvm_vcpu_run
Christoffer Dall [Tue, 3 Oct 2017 12:02:12 +0000 (14:02 +0200)]
KVM: arm64: Introduce VHE-specific kvm_vcpu_run

So far this is mostly (see below) a copy of the legacy non-VHE switch
function, but we will start reworking these functions in separate
directions to work on VHE and non-VHE in the most optimal way in later
patches.

The only difference after this patch between the VHE and non-VHE run
functions is that we omit the branch-predictor variant-2 hardening for
QC Falkor CPUs, because this workaround is specific to a series of
non-VHE ARMv8.0 CPUs.
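
A hedged sketch of how a caller might select between the two run
functions (the exact call site is simplified from the upstream code):

  if (has_vhe())
          ret = kvm_vcpu_run_vhe(vcpu);
  else
          ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);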

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm64: Factor out fault info population and gic workarounds
Christoffer Dall [Tue, 3 Oct 2017 11:16:04 +0000 (13:16 +0200)]
KVM: arm64: Factor out fault info population and gic workarounds

The current world-switch function has functionality to detect a number
of cases where we need to fix up some part of the exit condition and
possibly run the guest again, before having restored the host state.

This includes populating missing fault info, emulating GICv2 CPU
interface accesses when mapped at unaligned addresses, and emulating
the GICv3 CPU interface on systems that need it.

As we are about to have an alternative switch function for VHE systems,
but VHE systems still need the same early fixup logic, factor out this
logic into a separate function that can be shared by both switch
functions.

No functional change.
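
A simplified sketch of the shared helper, assuming the upstream
fixup_guest_exit() contract (returning true means "fixed up, re-enter
the guest"; the GIC fixups are elided):

  static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
  {
          if (ARM_EXCEPTION_CODE(*exit_code) == ARM_EXCEPTION_TRAP &&
              !__populate_fault_info(vcpu))
                  return true;    /* fault info fixed up, run guest again */

          /* GICv2-on-GICv3 and GICv3 emulation fixups elided */

          return false;           /* a real exit, unwind to the host */
  }

Both switch functions can then share the same loop:

  do {
          exit_code = __guest_enter(vcpu, host_ctxt);
  } while (fixup_guest_exit(vcpu, &exit_code));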

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm64: Improve debug register save/restore flow
Christoffer Dall [Tue, 10 Oct 2017 18:10:08 +0000 (20:10 +0200)]
KVM: arm64: Improve debug register save/restore flow

Instead of having multiple calls from the world switch path to the debug
logic, each figuring out if the dirty bit is set and if we should
save/restore the debug registers, let's just provide two hooks to the
debug save/restore functionality: one for switching to the guest
context, and one for switching to the host context.  This way we only
have to evaluate the dirty flag once on each path, and we give the
compiler more room to inline some of this functionality.
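
A sketch of the two hooks, assuming the upstream names (bodies
abbreviated):

  void __hyp_text __debug_switch_to_guest(struct kvm_vcpu *vcpu)
  {
          if (!(vcpu->arch.debug_flags & KVM_ARM64_DEBUG_DIRTY))
                  return;

          /* save host debug state, restore the guest's */
  }

  void __hyp_text __debug_switch_to_host(struct kvm_vcpu *vcpu)
  {
          if (!(vcpu->arch.debug_flags & KVM_ARM64_DEBUG_DIRTY))
                  return;

          /* save guest debug state, restore the host's, clear the flag */
  }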

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm64: Slightly improve debug save/restore functions
Christoffer Dall [Tue, 10 Oct 2017 17:55:56 +0000 (19:55 +0200)]
KVM: arm64: Slightly improve debug save/restore functions

The debug save/restore functions can be improved by using the has_vhe()
static key instead of the instruction alternative.  Using the static key
uses the same paradigm as we're going to use elsewhere, it makes the
code more readable, and it generates slightly better code (no
stack setups and function calls unless necessary).

We also use a static key on the restore path, because it will be
marginally faster than loading a value from memory.

Finally, we don't have to conditionally clear the debug dirty flag if
it's set, we can just clear it.
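
The pattern, as a minimal sketch (the SPE save is used as the example
and its non-VHE body is elided; treat the exact function name as
simplified):

  static void __hyp_text __debug_save_spe(u64 *pmscr_el1)
  {
          if (has_vhe())
                  return; /* static key: resolved at patch time, no branch */

          /* ... non-VHE SPE context save ... */
  }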

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm64: Move debug dirty flag calculation out of world switch
Christoffer Dall [Tue, 10 Oct 2017 17:31:33 +0000 (19:31 +0200)]
KVM: arm64: Move debug dirty flag calculation out of world switch

There is no need to figure out inside the world-switch whether we
should save/restore the debug registers or not; we might as well do
that in the higher-level debug setup code, making it easier to
optimize down the line.

Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: Introduce vcpu_el1_is_32bit
Christoffer Dall [Wed, 13 Dec 2017 21:56:48 +0000 (22:56 +0100)]
KVM: arm/arm64: Introduce vcpu_el1_is_32bit

We have numerous checks around the code that test whether HCR_EL2 has
the RW bit set in order to figure out if we're running an AArch64 or
AArch32 VM.  In some cases, directly checking the RW bit (given its
unintuitive name) is confusing, and that's not going to improve as we
move logic around for the following patches that optimize KVM on
AArch64 hosts with VHE.

Therefore, introduce a helper, vcpu_el1_is_32bit, and replace existing
direct checks of HCR_EL2.RW with the helper.
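
The arm64 helper itself is a one-liner (sketch; the 32-bit arm variant
simply returns true):

  static inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
  {
          return !(vcpu->arch.hcr_el2 & HCR_RW);
  }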

Reviewed-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: Add kvm_vcpu_load_sysregs and kvm_vcpu_put_sysregs
Christoffer Dall [Tue, 10 Oct 2017 08:21:18 +0000 (10:21 +0200)]
KVM: arm/arm64: Add kvm_vcpu_load_sysregs and kvm_vcpu_put_sysregs

As we are about to move a bunch of save/restore logic for VHE kernels to
the load and put functions, we need some infrastructure to do this.
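
The infrastructure is just a pair of hooks called from the vcpu
load/put paths; a sketch (the bodies are intentionally empty here and
get filled in by later patches):

  void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu) { }
  void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu) { }

  /* invoked from kvm_arch_vcpu_load() / kvm_arch_vcpu_put() */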

Reviewed-by: Andrew Jones <drjones@redhat.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: Get rid of vcpu->arch.irq_lines
Christoffer Dall [Thu, 3 Aug 2017 10:09:05 +0000 (12:09 +0200)]
KVM: arm/arm64: Get rid of vcpu->arch.irq_lines

We currently have a separate read-modify-write of the HCR_EL2 on entry
to the guest for the sole purpose of setting the VF and VI bits, if set.
Since this is rarely the case (only when using the userspace IRQ chip
and interrupts are in flight), let's get rid of this operation and
instead modify the bits in vcpu->arch.hcr[_el2] directly when needed.
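
An illustrative sketch of the new scheme, assuming a vcpu_hcr()
accessor for the arm/arm64 hcr field (the wrapper function below is
hypothetical, for brevity):

  static void vcpu_set_irq_line(struct kvm_vcpu *vcpu, u64 bit, bool raise)
  {
          /* bit is HCR_VI or HCR_VF; hypothetical helper for illustration */
          if (raise)
                  *vcpu_hcr(vcpu) |= bit;
          else
                  *vcpu_hcr(vcpu) &= ~bit;
  }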

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm64: Move HCR_INT_OVERRIDE to default HCR_EL2 guest flag
Shih-Wei Li [Thu, 3 Aug 2017 09:45:21 +0000 (11:45 +0200)]
KVM: arm64: Move HCR_INT_OVERRIDE to default HCR_EL2 guest flag

We always set the IMO and FMO bits in the HCR_EL2 when running the
guest, regardless of whether we use the vgic or not.  By moving these
flags to
HCR_GUEST_FLAGS we can avoid one of the extra save/restore operations of
HCR_EL2 in the world switch code, and we can also soon get rid of the
other one.

This is safe, because even though the IMO and FMO bits control both
taking the interrupts to EL2 and remapping ICC_*_EL1 to ICV_*_EL1 when
executed at EL1, as long as we ensure that these bits are clear when
running the EL1 host, we're OK, because we reset the HCR_EL2 to only
have the HCR_RW bit set when returning to EL1 on non-VHE systems.
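
A sketch of the flag move (the full upstream HCR_GUEST_FLAGS list is
longer; abbreviated here):

  #define HCR_GUEST_FLAGS (HCR_TSC | HCR_TSW | HCR_TWE | HCR_TWI | \
                           HCR_VM | HCR_RW | HCR_FMO | HCR_IMO /* ... */)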

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Shih-Wei Li <shihwei@cs.columbia.edu>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm64: Rework hyp_panic for VHE and non-VHE
Christoffer Dall [Mon, 9 Oct 2017 19:43:50 +0000 (21:43 +0200)]
KVM: arm64: Rework hyp_panic for VHE and non-VHE

VHE actually doesn't rely on clearing the VTTBR when returning to the
host kernel, and that is the current key mechanism of hyp_panic to
figure out how to attempt to return to a state good enough to print a
panic statement.

Therefore, we split the hyp_panic function into two functions, a VHE
and a non-VHE version, keeping the non-VHE version intact but changing
the VHE behavior.

The vttbr_el2 check on VHE doesn't really make that much sense, because
the only situation where we can get here on VHE is when the hypervisor
assembly code actually called into hyp_panic, which only happens when
VBAR_EL2 has been set to the KVM exception vectors.  On VHE, we can
always safely disable the traps and restore the host registers at this
point, so we simply do that unconditionally and call into the panic
function directly.
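
A simplified sketch of the VHE-only panic path (close in shape to the
upstream __hyp_call_panic_vhe(); the register dump is abbreviated and
the format string here is illustrative):

  static void __hyp_call_panic_vhe(u64 spsr, u64 elr, u64 par,
                                   struct kvm_cpu_context *host_ctxt)
  {
          struct kvm_vcpu *vcpu = host_ctxt->__hyp_running_vcpu;

          /* Always safe on VHE, no VTTBR check needed */
          __deactivate_traps(vcpu);
          __sysreg_restore_host_state(host_ctxt);

          panic("HYP panic: PS:%08llx PC:%016llx PAR:%016llx\n",
                spsr, elr, par);
  }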

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm64: Avoid storing the vcpu pointer on the stack
Christoffer Dall [Sun, 8 Oct 2017 15:01:56 +0000 (17:01 +0200)]
KVM: arm64: Avoid storing the vcpu pointer on the stack

We already have the percpu area for the host cpu state, which points to
the VCPU, so there's no need to store the VCPU pointer on the stack on
every context switch.  We can be a little more clever and just use
tpidr_el2 for the percpu offset and load the VCPU pointer from the host
context.

This has the benefit of being able to retrieve the host context even
when our stack is corrupted, and it has a potential performance benefit
because we trade a store plus a load for an mrs and a load on a round
trip to the guest.

This does require us to calculate the percpu offset without including
the offset from the kernel mapping of the percpu array to the linear
mapping of the array (which is what we store in tpidr_el1), because a
PC-relative generated address in EL2 is already giving us the hyp alias
of the linear mapping of a kernel address.  We do this in
__cpu_init_hyp_mode() by using kvm_ksym_ref().

The code that accesses ESR_EL2 was previously using an alternative to
use the _EL1 accessor on VHE systems, but this was actually unnecessary
as the _EL1 accessor aliases the ESR_EL2 register on VHE, and the _EL2
accessor does the same thing on both systems.
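
A sketch of the offset computation in __cpu_init_hyp_mode() (close to
the upstream hunk; kvm_host_cpu_state is the percpu host context
array):

  u64 tpidr_el2;

  /*
   * Calculate the raw per-cpu offset without a translation from the
   * kernel mapping of the percpu array to its linear/hyp alias.
   */
  tpidr_el2 = ((u64)this_cpu_ptr(&kvm_host_cpu_state) -
               (u64)kvm_ksym_ref(kvm_host_cpu_state));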

Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: Move vcpu_load call after kvm_vcpu_first_run_init
Christoffer Dall [Wed, 29 Nov 2017 15:37:53 +0000 (16:37 +0100)]
KVM: arm/arm64: Move vcpu_load call after kvm_vcpu_first_run_init

Moving the call to vcpu_load() in kvm_arch_vcpu_ioctl_run() to after
we've called kvm_vcpu_first_run_init() simplifies some of the vgic
code, and there is also no need to do vcpu_load() for things such as
handling the immediate_exit flag.

Reviewed-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agoKVM: arm/arm64: Avoid vcpu_load for other vcpu ioctls than KVM_RUN
Christoffer Dall [Sat, 25 Nov 2017 19:25:00 +0000 (20:25 +0100)]
KVM: arm/arm64: Avoid vcpu_load for other vcpu ioctls than KVM_RUN

Calling vcpu_load() registers preempt notifiers for this vcpu and calls
kvm_arch_vcpu_load().  The latter will soon be doing a lot of heavy
lifting on arm/arm64 and will try to do things such as enabling the
virtual timer and setting us up to handle interrupts from the timer
hardware.

Loading state onto hardware registers and enabling hardware to signal
interrupts can be problematic when we're not actually about to run the
VCPU, because it makes it difficult to establish the right context when
handling interrupts from the timer, and it makes the register access
code difficult to reason about.

Luckily, now that we call vcpu_load in each ioctl implementation, we
can simply remove the call from the non-KVM_RUN vcpu ioctls, and our
kvm_arch_vcpu_load() is only used for loading vcpu content onto the
physical CPU when we're actually going to run the vcpu.

Reviewed-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
6 years agox86/kvm/vmx: avoid expensive rdmsr for MSR_GS_BASE
Vitaly Kuznetsov [Tue, 13 Mar 2018 17:48:05 +0000 (18:48 +0100)]
x86/kvm/vmx: avoid expensive rdmsr for MSR_GS_BASE

vmx_save_host_state() is only called from kvm_arch_vcpu_ioctl_run(), so
the context is pretty well defined; as we're past 'swapgs', MSR_GS_BASE
should contain the kernel's GS base, which points to irq_stack_union.

Add a new kernelmode_gs_base() API; irq_stack_union needs to be
exported, as KVM can be built as a module.
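
A sketch of the accessor (the final upstream name may differ
slightly):

  static inline unsigned long cpu_kernelmode_gs_base(int cpu)
  {
          return (unsigned long)per_cpu(irq_stack_union.gs_base, cpu);
  }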

Acked-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agox86/kvm/vmx: read MSR_{FS,KERNEL_GS}_BASE from current->thread
Vitaly Kuznetsov [Tue, 13 Mar 2018 17:48:04 +0000 (18:48 +0100)]
x86/kvm/vmx: read MSR_{FS,KERNEL_GS}_BASE from current->thread

vmx_save_host_state() is only called from kvm_arch_vcpu_ioctl_run(), so
the context is pretty well defined.  Read MSR_{FS,KERNEL_GS}_BASE from
current->thread after calling save_fsgs(), which now takes care of the
X86_BUG_NULL_SEG case and will do RD[FG,GS]BASE when the FSGSBASE
extensions are exposed to userspace (currently they are not).
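
A simplified sketch of the resulting host-state save (field names
abbreviated; save_fsgs_for_kvm() is assumed here as the exported
wrapper this series relies on):

  #ifdef CONFIG_X86_64
          save_fsgs_for_kvm();    /* syncs live FS/GS into current->thread */
          fs_base = current->thread.fsbase;
          vmx->msr_host_kernel_gs_base = current->thread.gsbase;
  #endif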

Acked-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM: X86: Provide a capability to disable PAUSE intercepts
Wanpeng Li [Mon, 12 Mar 2018 11:53:04 +0000 (04:53 -0700)]
KVM: X86: Provide a capability to disable PAUSE intercepts

Allow disabling pause loop exit/pause filtering on a per-VM basis.

If some VMs have dedicated host CPUs, they won't be negatively affected
due to needlessly intercepted PAUSE instructions.

Thanks to Jan H. Schönherr for the initial patch.
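
For context, a VMM opts in per VM with KVM_ENABLE_CAP on the VM fd; a
hedged userspace sketch using the uapi names from this series (vm_fd
is assumed to be an open VM file descriptor):

  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  struct kvm_enable_cap cap = {
          .cap  = KVM_CAP_X86_DISABLE_EXITS,
          .args = { KVM_X86_DISABLE_EXITS_PAUSE },
  };

  if (ioctl(vm_fd, KVM_ENABLE_CAP, &cap) < 0)
          perror("KVM_ENABLE_CAP");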

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Jan H. Schönherr <jschoenh@amazon.de>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM: X86: Provide a capability to disable HLT intercepts
Wanpeng Li [Mon, 12 Mar 2018 11:53:03 +0000 (04:53 -0700)]
KVM: X86: Provide a capability to disable HLT intercepts

If host CPUs are dedicated to a VM, we can avoid VM exits on HLT.
This patch adds the per-VM capability to disable them.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Jan H. Schönherr <jschoenh@amazon.de>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM: X86: Provide a capability to disable MWAIT intercepts
Wanpeng Li [Mon, 12 Mar 2018 11:53:02 +0000 (04:53 -0700)]
KVM: X86: Provide a capability to disable MWAIT intercepts

Allowing a guest to execute MWAIT without interception enables a guest
to put a (physical) CPU into a power saving state, which may take
longer to return from than the host desires.

Don't give a guest that power over a host by default.  (Especially
since nothing prevents a guest from using MWAIT even when it is not
advertised via CPUID.)
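
A sketch of the corresponding VMX setup change, assuming the per-VM
kvm_mwait_in_guest() helper shape used by this series (simplified):

  if (kvm_mwait_in_guest(vmx->vcpu.kvm))
          exec_control &= ~(CPU_BASED_MWAIT_EXITING |
                            CPU_BASED_MONITOR_EXITING);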

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Jan H. Schönherr <jschoenh@amazon.de>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoMerge tag 'kvm-s390-next-4.17-1' of git://git.kernel.org/pub/scm/linux/kernel/git...
Paolo Bonzini [Fri, 16 Mar 2018 21:03:18 +0000 (22:03 +0100)]
Merge tag 'kvm-s390-next-4.17-1' of git://git./linux/kernel/git/kvms390/linux into HEAD

KVM: s390: fixes and features

- more kvm stat counters
- virtio-gpu plumbing.  The three non-KVM/s390 patches have Acks from
  Bartlomiej Zolnierkiewicz, Heiko Carstens and Greg Kroah-Hartman,
  but they all belong together to make virtio-gpu work as a tty, so
  I carried them in the KVM/s390 tree.
- document some KVM_CAPs
- cpu-model only facilities
- cleanups

6 years agoKVM: x86: Add support for VMware backdoor Pseudo-PMCs
Arbel Moshe [Mon, 12 Mar 2018 11:12:53 +0000 (13:12 +0200)]
KVM: x86: Add support for VMware backdoor Pseudo-PMCs

VMware exposes the following Pseudo PMCs:
0x10000: Physical host TSC
0x10001: Elapsed real time in ns
0x10002: Elapsed apparent time in ns

For more info refer to:
https://www.vmware.com/files/pdf/techpaper/Timekeeping-In-VirtualMachines.pdf

VMware allows access to these Pseudo-PMCs even when read via RDPMC
in Ring3 and with CR4.PCE=0.  Therefore, this commit modifies the x86
emulator to allow access to these PMCs in this situation.  In
addition, emulation of these PMCs was added to kvm_pmu_rdpmc().
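
A sketch of the dispatch, following the shape of the kvm_pmu_rdpmc()
change (index values from this message; the timekeeping calls are
simplified):

  #define VMWARE_BACKDOOR_PMC_HOST_TSC      0x10000
  #define VMWARE_BACKDOOR_PMC_REAL_TIME     0x10001
  #define VMWARE_BACKDOOR_PMC_APPARENT_TIME 0x10002

  static int kvm_pmu_rdpmc_vmware(struct kvm_vcpu *vcpu,
                                  unsigned int idx, u64 *data)
  {
          switch (idx) {
          case VMWARE_BACKDOOR_PMC_HOST_TSC:
                  *data = rdtsc();
                  return 0;
          case VMWARE_BACKDOOR_PMC_REAL_TIME:
                  *data = ktime_get_boot_ns();
                  return 0;
          case VMWARE_BACKDOOR_PMC_APPARENT_TIME:
                  *data = ktime_get_boot_ns() +
                          vcpu->kvm->arch.kvmclock_offset;
                  return 0;
          }
          return 1;       /* not a VMware pseudo-PMC */
  }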

Signed-off-by: Arbel Moshe <arbel.moshe@oracle.com>
Signed-off-by: Liran Alon <liran.alon@oracle.com>
Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM: x86: SVM: Intercept #GP to support access to VMware backdoor ports
Liran Alon [Mon, 12 Mar 2018 11:12:52 +0000 (13:12 +0200)]
KVM: x86: SVM: Intercept #GP to support access to VMware backdoor ports

If the KVM enable_vmware_backdoor module parameter is set, this commit
changes SVM to intercept #GP instead of having it delivered directly
from the CPU to the guest.

It is done to support access to VMware Backdoor I/O ports
even if TSS I/O permission denies it.
In that case:
1. A #GP will be raised and intercepted.
2. #GP intercept handler will simulate I/O port access instruction.
3. I/O port access instruction simulation will allow access to VMware
backdoor ports specifically even if TSS I/O permission bitmap denies it.

Note that the above change introduces a slight performance hit, as #GPs
are no longer delivered directly from the CPU to the guest but instead
cause a #VMExit and instruction emulation.  However, this behavior is
introduced only when the enable_vmware_backdoor KVM module parameter
is set.
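
A simplified sketch of the new SVM handler (error-path handling
abbreviated):

  static int gp_interception(struct vcpu_svm *svm)
  {
          struct kvm_vcpu *vcpu = &svm->vcpu;
          u32 error_code = svm->vmcb->control.exit_info_1;
          int er;

          WARN_ON_ONCE(!enable_vmware_backdoor);

          er = emulate_instruction(vcpu,
                                   EMULTYPE_VMWARE | EMULTYPE_NO_UD_ON_FAIL);
          if (er != EMULATE_DONE)
                  kvm_queue_exception_e(vcpu, GP_VECTOR, error_code);
          return 1;
  }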

Signed-off-by: Liran Alon <liran.alon@oracle.com>
Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM: x86: VMX: Intercept #GP to support access to VMware backdoor ports
Liran Alon [Mon, 12 Mar 2018 11:12:51 +0000 (13:12 +0200)]
KVM: x86: VMX: Intercept #GP to support access to VMware backdoor ports

If the KVM enable_vmware_backdoor module parameter is set, this commit
changes VMX to intercept #GP instead of having it delivered directly
from the CPU to the guest.

It is done to support access to VMware backdoor I/O ports
even if TSS I/O permission denies it.
In that case:
1. A #GP will be raised and intercepted.
2. #GP intercept handler will simulate I/O port access instruction.
3. I/O port access instruction simulation will allow access to VMware
backdoor ports specifically even if TSS I/O permission bitmap denies it.

Note that the above change introduces a slight performance hit, as #GPs
are no longer delivered directly from the CPU to the guest but instead
cause a #VMExit and instruction emulation.  However, this behavior is
introduced only when the enable_vmware_backdoor KVM module parameter
is set.

Signed-off-by: Liran Alon <liran.alon@oracle.com>
Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM: x86: Emulate only IN/OUT instructions when accessing VMware backdoor
Liran Alon [Mon, 12 Mar 2018 11:12:50 +0000 (13:12 +0200)]
KVM: x86: Emulate only IN/OUT instructions when accessing VMware backdoor

Access to VMware backdoor ports is done by one of the IN/OUT/INS/OUTS
instructions.  These ports must be allowed access even if the TSS I/O
permission bitmap doesn't allow it.

To handle this, VMX/SVM will be changed in future commits to intercept
the #GP raised by such an access and handle it by calling the x86
emulator to emulate the instruction.  If it is one of these
instructions, the x86 emulator already handles it correctly (since
commit "KVM: x86: Always allow access to VMware backdoor I/O ports")
by not checking these ports against the TSS I/O permission bitmap.

One may wonder why checking for specific instructions is necessary,
as we could just forward all #GPs to the x86 emulator.  There are
multiple reasons for not doing so:

1. We don't want the x86 emulator to be easily reachable by the guest
just by executing an instruction that raises #GP, as that exposes the
x86 emulator as a bigger attack surface.

2. The x86 emulator is incomplete, and therefore certain instructions
that can cause #GP cannot be emulated.  One example is "INT x"
(opcode 0xcd), which reaches emulate_int(), which can only emulate
the instruction if the vCPU is in real mode.
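
A sketch of the opcode filter, following the upstream
is_vmware_backdoor_opcode() idea (two-byte opcodes such as RDPMC are
omitted here):

  static bool is_vmware_backdoor_opcode(struct x86_emulate_ctxt *ctxt)
  {
          switch (ctxt->b) {
          case 0xe4: case 0xe5: case 0xec: case 0xed:     /* IN   */
          case 0xe6: case 0xe7: case 0xee: case 0xef:     /* OUT  */
          case 0x6c: case 0x6d:                           /* INS  */
          case 0x6e: case 0x6f:                           /* OUTS */
                  return true;
          }
          return false;
  }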

Signed-off-by: Liran Alon <liran.alon@oracle.com>
Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM: x86: Add emulation_type to not raise #UD on emulation failure
Liran Alon [Mon, 12 Mar 2018 11:12:49 +0000 (13:12 +0200)]
KVM: x86: Add emulation_type to not raise #UD on emulation failure

The next commits are going to introduce support for accessing VMware
backdoor ports even though the guest's TSS I/O permission bitmap
doesn't allow access.  This mimics VMware hypervisor behavior.

In order to support this, next commits will change VMX/SVM to
intercept #GP which was raised by such access and handle it by calling
the x86 emulator to emulate instruction. Since commit "KVM: x86:
Always allow access to VMware backdoor I/O ports", the x86 emulator
handles access to these I/O ports by not checking these ports against
the TSS I/O permission bitmap.

However, there could be cases where the CPU raises a #GP on an
instruction that fails to be disassembled by the x86 emulator (because
of an incomplete implementation, for example).

In those cases, we would like the #GP intercept to just forward the
#GP as-is to the guest, as if there were no intercept to begin with.
However, the current emulator code always queues a #UD exception when
the emulator fails (including on disassembly failures), which is not
what is wanted in this flow.

This commit addresses this issue by adding a new emulation_type flag
that allows the #GP intercept handler to specify that it wishes to be
aware when instruction emulation fails and doesn't want a #UD
exception to be queued.
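
A sketch of the flag and its use on the failure path (the bit
position and surrounding code are simplified):

  #define EMULTYPE_NO_UD_ON_FAIL  (1 << 8)

  /* in handle_emulation_failure() (sketch): */
  if (emulation_type & EMULTYPE_NO_UD_ON_FAIL)
          return EMULATE_FAIL;    /* caller decides, e.g. re-inject #GP */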

Signed-off-by: Liran Alon <liran.alon@oracle.com>
Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com>
Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM: x86: Always allow access to VMware backdoor I/O ports
Liran Alon [Mon, 12 Mar 2018 11:12:48 +0000 (13:12 +0200)]
KVM: x86: Always allow access to VMware backdoor I/O ports

VMware allows access to these ports even if denied by the TSS I/O
permission bitmap.  Mimic this behavior.
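
The VMware backdoor lives on I/O ports 0x5658 (the classic backdoor
port) and 0x5659 (the high-bandwidth variant); a sketch of the
emulator-side check (names simplified):

  static bool is_vmware_backdoor_port(u16 port)
  {
          return port == 0x5658 || port == 0x5659;
  }

  /* in the emulator's I/O permission check (sketch): */
  if (enable_vmware_backdoor && is_vmware_backdoor_port(port))
          return true;    /* bypass the TSS I/O permission bitmap */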

Signed-off-by: Liran Alon <liran.alon@oracle.com>
Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com>
Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM: x86: Add module parameter for supporting VMware backdoor
Liran Alon [Mon, 12 Mar 2018 11:12:47 +0000 (13:12 +0200)]
KVM: x86: Add module parameter for supporting VMware backdoor

Supporting access to the VMware backdoor requires KVM to intercept #GP
exceptions from the guest, which introduces a slight performance hit.
Therefore, control this support with a module parameter.

Note that the module parameter is exported, as it needs to be consumed
by kvm_intel and kvm_amd to determine whether they should intercept
#GP or not.
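
The parameter itself, essentially as added to x86.c:

  bool __read_mostly enable_vmware_backdoor = false;
  module_param(enable_vmware_backdoor, bool, S_IRUGO);
  EXPORT_SYMBOL_GPL(enable_vmware_backdoor);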

This commit doesn't change semantics.
It is done as a preparation for future commits.

Signed-off-by: Liran Alon <liran.alon@oracle.com>
Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com>
Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM: x86: add kvm_fast_pio() to consolidate fast PIO code
Sean Christopherson [Thu, 8 Mar 2018 16:57:27 +0000 (08:57 -0800)]
KVM: x86: add kvm_fast_pio() to consolidate fast PIO code

Add kvm_fast_pio() to consolidate duplicate code in VMX and SVM.
Unexport kvm_fast_pio_in() and kvm_fast_pio_out().
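
A sketch of the consolidated helper (close to the upstream shape; the
interaction between the skip and the PIO return value is simplified
and may not match upstream ordering exactly):

  int kvm_fast_pio(struct kvm_vcpu *vcpu, int size,
                   unsigned short port, int in)
  {
          int ret;

          ret = in ? kvm_fast_pio_in(vcpu, size, port)
                   : kvm_fast_pio_out(vcpu, size, port);
          return ret && kvm_skip_emulated_instruction(vcpu);
  }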

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM: VMX: use kvm_fast_pio_in for handling IN I/O
Sean Christopherson [Thu, 8 Mar 2018 16:57:26 +0000 (08:57 -0800)]
KVM: VMX: use kvm_fast_pio_in for handling IN I/O

Fast emulation of processor I/O for IN was disabled on x86 (both VMX
and SVM) some years ago due to a buggy implementation.  The addition
of kvm_fast_pio_in(), used by SVM, re-introduced (functional!) fast
emulation of IN.  Piggyback SVM's work and use kvm_fast_pio_in() on
VMX instead of performing full emulation of IN.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6 years agoKVM: vVMX: signal failure for nested VMEntry if emulation_required
Sean Christopherson [Mon, 12 Mar 2018 17:56:13 +0000 (10:56 -0700)]
KVM: vVMX: signal failure for nested VMEntry if emulation_required

Fail a nested VMEntry with EXIT_REASON_INVALID_STATE if L2 guest state
is invalid, i.e. vmcs12 contained invalid guest state, and unrestricted
guest is disabled in L0 (and by extension disabled in L1).

WARN_ON_ONCE in handle_invalid_guest_state() if we're attempting to
emulate L2, i.e. nested_run_pending is true, to aid debugging in the
(hopefully unlikely) scenario that we somehow skip the nested VMEntry
consistency check, e.g. due to an L0 bug.

Note: KVM relies on hardware to detect the scenario where unrestricted
guest is enabled in L0 but disabled in L1 and vmcs12 contains invalid
guest state, i.e. checking emulation_required in prepare_vmcs02 is
required only to handle the case where unrestricted guest is disabled
in L0, since L0 never actually attempts VMLAUNCH/VMRESUME with vmcs02.
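
A sketch of the two changes (locations per the message above;
simplified):

  /* in prepare_vmcs02(), after loading vmcs12 guest state: */
  if (vmx->emulation_required) {
          *entry_failure_code = ENTRY_FAIL_DEFAULT;
          return 1;       /* nested VMEntry fails with invalid state */
  }

  /* in handle_invalid_guest_state(): */
  WARN_ON_ONCE(vmx->nested.nested_run_pending);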

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>