openwrt/staging/blogic.git
10 years ago KVM: PPC: Book3S HV: Improve H_CONFER implementation
Sam Bobroff [Wed, 3 Dec 2014 02:30:40 +0000 (13:30 +1100)]
KVM: PPC: Book3S HV: Improve H_CONFER implementation

Currently the H_CONFER hcall is implemented in kernel virtual mode,
meaning that whenever a guest thread does an H_CONFER, all the threads
in that virtual core have to exit the guest.  This is bad for
performance because it interrupts the other threads even if they
are doing useful work.

The H_CONFER hcall is called by a guest VCPU when it is spinning on a
spinlock and it detects that the spinlock is held by a guest VCPU that
is currently not running on a physical CPU.  The idea is to give this
VCPU's time slice to the holder VCPU so that it can make progress
towards releasing the lock.

To avoid having the other threads exit the guest unnecessarily,
we add a real-mode implementation of H_CONFER that checks whether
the other threads are doing anything.  If all the other threads
are idle (i.e. in H_CEDE) or trying to confer (i.e. in H_CONFER),
it returns H_TOO_HARD which causes a guest exit and allows the
H_CONFER to be handled in virtual mode.

Otherwise it spins for a short time (up to 10 microseconds) to give
other threads the chance to observe that this thread is trying to
confer.  The spin loop also terminates when any thread exits the guest
or when all other threads are idle or trying to confer.  If the
timeout is reached, the H_CONFER returns H_SUCCESS.  In this case the
guest VCPU will recheck the spinlock word and most likely call
H_CONFER again.
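A condensed sketch of the real-mode logic (the thread-state helpers and
usecs_to_tb() are illustrative stand-ins, not the kernel's actual symbols;
the hcall return codes follow the PAPR convention):

  extern unsigned long get_tb(void);                /* timebase read */
  extern unsigned long usecs_to_tb(unsigned long us);
  extern int threads_idle_or_conferring(void);      /* assumed helpers */
  extern int any_thread_exited_guest(void);

  #define H_SUCCESS   0
  #define H_TOO_HARD  9999

  long rm_h_confer_sketch(void)
  {
      unsigned long timeout = get_tb() + usecs_to_tb(10);

      while (get_tb() < timeout) {
          /* Everyone else idle or conferring: exit the guest and
           * handle H_CONFER in virtual mode instead. */
          if (threads_idle_or_conferring())
              return H_TOO_HARD;
          /* Some thread exited the guest: stop spinning. */
          if (any_thread_exited_guest())
              break;
      }
      /* Timed out (or a thread exited): report success; the guest
       * will recheck the lock word and may confer again. */
      return H_SUCCESS;
  }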

This also improves the implementation of the H_CONFER virtual mode
handler.  If the VCPU is part of a virtual core (vcore) which is
runnable, there will be a 'runner' VCPU which has taken responsibility
for running the vcore.  In this case we yield to the runner VCPU
rather than the target VCPU.

We also introduce a check on the target VCPU's yield count: if it
differs from the yield count passed to H_CONFER, the target VCPU
has run since H_CONFER was called and may have already released
the lock.  This check is required by PAPR.

Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
10 years ago KVM: PPC: Book3S HV: Fix endianness of instruction obtained from HEIR register
Paul Mackerras [Wed, 3 Dec 2014 02:30:39 +0000 (13:30 +1100)]
KVM: PPC: Book3S HV: Fix endianness of instruction obtained from HEIR register

There are two ways in which a guest instruction can be obtained from
the guest in the guest exit code in book3s_hv_rmhandlers.S.  If the
exit was caused by a Hypervisor Emulation interrupt (i.e. an illegal
instruction), the offending instruction is in the HEIR register
(Hypervisor Emulation Instruction Register).  If the exit was caused
by a load or store to an emulated MMIO device, we load the instruction
from the guest by turning data relocation on and loading the instruction
with an lwz instruction.

Unfortunately, in the case where the guest has opposite endianness to
the host, these two methods give results of different endianness, but
both get put into vcpu->arch.last_inst.  The HEIR value has been loaded
using guest endianness, whereas the lwz will load the instruction using
host endianness.  The rest of the code that uses vcpu->arch.last_inst
assumes it was loaded using host endianness.

To fix this, we define a new vcpu field to store the HEIR value.  Then,
in kvmppc_handle_exit_hv(), we transfer the value from this new field to
vcpu->arch.last_inst, doing a byte-swap if the guest and host endianness
differ.
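A minimal sketch of that transfer (struct and field names here are
illustrative, not the kernel's):

  #include <stdint.h>

  struct vcpu_arch_sketch {
      uint32_t heir;          /* loaded with guest endianness */
      uint32_t last_inst;     /* consumers expect host endianness */
      int guest_bigendian;
  };

  static uint32_t swab32_sketch(uint32_t x)
  {
      return (x >> 24) | ((x >> 8) & 0x0000ff00) |
             ((x << 8) & 0x00ff0000) | (x << 24);
  }

  static void fixup_last_inst(struct vcpu_arch_sketch *a, int host_bigendian)
  {
      uint32_t inst = a->heir;

      if (a->guest_bigendian != host_bigendian)
          inst = swab32_sketch(inst);     /* convert to host endianness */
      a->last_inst = inst;
  }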

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
10 years ago KVM: PPC: Book3S HV: Remove code for PPC970 processors
Paul Mackerras [Wed, 3 Dec 2014 02:30:38 +0000 (13:30 +1100)]
KVM: PPC: Book3S HV: Remove code for PPC970 processors

This removes the code that was added to enable HV KVM to work
on PPC970 processors.  The PPC970 is an old CPU that doesn't
support virtualizing guest memory.  Removing PPC970 support also
lets us remove the code for allocating and managing contiguous
real-mode areas, the code for the !kvm->arch.using_mmu_notifiers
case, the code for pinning pages of guest memory when first
accessed and keeping track of which pages have been pinned, and
the code for handling H_ENTER hypercalls in virtual mode.

Book3S HV KVM is now supported only on POWER7 and POWER8 processors.
The KVM_CAP_PPC_RMA capability now always returns 0.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
10 years ago KVM: PPC: Book3S HV: Tracepoints for KVM HV guest interactions
Suresh E. Warrier [Thu, 4 Dec 2014 00:48:10 +0000 (18:48 -0600)]
KVM: PPC: Book3S HV: Tracepoints for KVM HV guest interactions

This patch adds trace points in the guest entry and exit code and also
for exceptions handled by the host in kernel mode - hypercalls and page
faults. The new events are added to /sys/kernel/debug/tracing/events
under a new subsystem called kvm_hv.

Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Suresh Warrier <warrier@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
10 years ago KVM: PPC: Book3S HV: Simplify locking around stolen time calculations
Paul Mackerras [Thu, 4 Dec 2014 05:43:28 +0000 (16:43 +1100)]
KVM: PPC: Book3S HV: Simplify locking around stolen time calculations

Currently the calculation of stolen time for PPC Book3S HV guests
uses fields in both the vcpu struct and the kvmppc_vcore struct.  The
fields in the kvmppc_vcore struct are protected by the
vcpu->arch.tbacct_lock of the vcpu that has taken responsibility for
running the virtual core.  This works correctly but confuses lockdep,
because it sees that the code takes the tbacct_lock for a vcpu in
kvmppc_remove_runnable() and then takes another vcpu's tbacct_lock in
vcore_stolen_time(), and it thinks there is a possibility of deadlock,
causing it to print reports like this:

=============================================
[ INFO: possible recursive locking detected ]
3.18.0-rc7-kvm-00016-g8db4bc6 #89 Not tainted
---------------------------------------------
qemu-system-ppc/6188 is trying to acquire lock:
 (&(&vcpu->arch.tbacct_lock)->rlock){......}, at: [<d00000000ecb1fe8>] .vcore_stolen_time+0x48/0xd0 [kvm_hv]

but task is already holding lock:
 (&(&vcpu->arch.tbacct_lock)->rlock){......}, at: [<d00000000ecb25a0>] .kvmppc_remove_runnable.part.3+0x30/0xd0 [kvm_hv]

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&(&vcpu->arch.tbacct_lock)->rlock);
  lock(&(&vcpu->arch.tbacct_lock)->rlock);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

3 locks held by qemu-system-ppc/6188:
 #0:  (&vcpu->mutex){+.+.+.}, at: [<d00000000eb93f98>] .vcpu_load+0x28/0xe0 [kvm]
 #1:  (&(&vcore->lock)->rlock){+.+...}, at: [<d00000000ecb41b0>] .kvmppc_vcpu_run_hv+0x530/0x1530 [kvm_hv]
 #2:  (&(&vcpu->arch.tbacct_lock)->rlock){......}, at: [<d00000000ecb25a0>] .kvmppc_remove_runnable.part.3+0x30/0xd0 [kvm_hv]

stack backtrace:
CPU: 40 PID: 6188 Comm: qemu-system-ppc Not tainted 3.18.0-rc7-kvm-00016-g8db4bc6 #89
Call Trace:
[c000000b2754f3f0] [c000000000b31b6c] .dump_stack+0x88/0xb4 (unreliable)
[c000000b2754f470] [c0000000000faeb8] .__lock_acquire+0x1878/0x2190
[c000000b2754f600] [c0000000000fbf0c] .lock_acquire+0xcc/0x1a0
[c000000b2754f6d0] [c000000000b2954c] ._raw_spin_lock_irq+0x4c/0x70
[c000000b2754f760] [d00000000ecb1fe8] .vcore_stolen_time+0x48/0xd0 [kvm_hv]
[c000000b2754f7f0] [d00000000ecb25b4] .kvmppc_remove_runnable.part.3+0x44/0xd0 [kvm_hv]
[c000000b2754f880] [d00000000ecb43ec] .kvmppc_vcpu_run_hv+0x76c/0x1530 [kvm_hv]
[c000000b2754f9f0] [d00000000eb9f46c] .kvmppc_vcpu_run+0x2c/0x40 [kvm]
[c000000b2754fa60] [d00000000eb9c9a4] .kvm_arch_vcpu_ioctl_run+0x54/0x160 [kvm]
[c000000b2754faf0] [d00000000eb94538] .kvm_vcpu_ioctl+0x498/0x760 [kvm]
[c000000b2754fcb0] [c000000000267eb4] .do_vfs_ioctl+0x444/0x770
[c000000b2754fd90] [c0000000002682a4] .SyS_ioctl+0xc4/0xe0
[c000000b2754fe30] [c0000000000092e4] syscall_exit+0x0/0x98

In order to make the locking easier to analyse, we change the code to
use a spinlock in the kvmppc_vcore struct to protect the stolen_tb and
preempt_tb fields.  This lock needs to be an irq-safe lock since it is
used in the kvmppc_core_vcpu_load_hv() and kvmppc_core_vcpu_put_hv()
functions, which are called with the scheduler rq lock held, which is
an irq-safe lock.
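In sketch form, with illustrative field names:

  #include <linux/spinlock.h>

  struct vcore_sketch {
      spinlock_t stoltb_lock;     /* protects the two fields below */
      u64 stolen_tb;              /* total stolen timebase ticks */
      u64 preempt_tb;             /* when preemption began, or 0 */
  };

  static void vcore_account_stolen(struct vcore_sketch *vc, u64 now)
  {
      unsigned long flags;

      /* irq-safe: callers may hold the scheduler rq lock */
      spin_lock_irqsave(&vc->stoltb_lock, flags);
      if (vc->preempt_tb) {
          vc->stolen_tb += now - vc->preempt_tb;
          vc->preempt_tb = 0;
      }
      spin_unlock_irqrestore(&vc->stoltb_lock, flags);
  }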

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
10 years ago arch: powerpc: kvm: book3s_paired_singles.c: Remove unused function
Rickard Strandqvist [Sun, 7 Dec 2014 22:29:14 +0000 (23:29 +0100)]
arch: powerpc: kvm: book3s_paired_singles.c: Remove unused function

Remove the function inst_set_field() that is not used anywhere.

This was partially found by using a static code analysis program called cppcheck.

Signed-off-by: Rickard Strandqvist <rickard_strandqvist@spectrumdigital.se>
Signed-off-by: Alexander Graf <agraf@suse.de>
10 years ago arch: powerpc: kvm: book3s_pr.c: Remove unused function
Rickard Strandqvist [Sun, 7 Dec 2014 18:11:48 +0000 (19:11 +0100)]
arch: powerpc: kvm: book3s_pr.c: Remove unused function

Remove the function get_fpr_index() that is not used anywhere.

This was partially found by using a static code analysis program called cppcheck.

Signed-off-by: Rickard Strandqvist <rickard_strandqvist@spectrumdigital.se>
Signed-off-by: Alexander Graf <agraf@suse.de>
10 years ago arch: powerpc: kvm: book3s.c: Remove some unused functions
Rickard Strandqvist [Sun, 7 Dec 2014 17:28:54 +0000 (18:28 +0100)]
arch: powerpc: kvm: book3s.c: Remove some unused functions

Removes some functions that are not used anywhere:
kvmppc_core_load_guest_debugstate() and kvmppc_core_load_host_debugstate().

This was partially found by using a static code analysis program called cppcheck.

Signed-off-by: Rickard Strandqvist <rickard_strandqvist@spectrumdigital.se>
Signed-off-by: Alexander Graf <agraf@suse.de>
10 years ago arch: powerpc: kvm: book3s_32_mmu.c: Remove unused function
Rickard Strandqvist [Sun, 7 Dec 2014 17:20:46 +0000 (18:20 +0100)]
arch: powerpc: kvm: book3s_32_mmu.c: Remove unused function

Remove the function sr_nx() that is not used anywhere.

This was partially found by using a static code analysis program called cppcheck.

Signed-off-by: Rickard Strandqvist <rickard_strandqvist@spectrumdigital.se>
Signed-off-by: Alexander Graf <agraf@suse.de>
10 years ago KVM: PPC: Book3S HV: Check wait conditions before sleeping in kvmppc_vcore_blocked
Suresh E. Warrier [Mon, 3 Nov 2014 04:52:00 +0000 (15:52 +1100)]
KVM: PPC: Book3S HV: Check wait conditions before sleeping in kvmppc_vcore_blocked

The kvmppc_vcore_blocked() code does not check for the wait condition
after putting the process on the wait queue. This means that it is
possible for an external interrupt to become pending, but the vcpu to
remain asleep until the next decrementer interrupt.  The fix is to
make one last check for pending exceptions and ceded state before
calling schedule().
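This is the classic prepare-to-wait idiom; a sketch with an illustrative
condition helper:

  #include <linux/wait.h>
  #include <linux/sched.h>

  static void vcore_sleep_sketch(wait_queue_head_t *wq, bool nothing_pending)
  {
      DEFINE_WAIT(wait);

      prepare_to_wait(wq, &wait, TASK_INTERRUPTIBLE);
      /* Final re-check after queueing: an exception may have become
       * pending (or the ceded state changed) since the last check. */
      if (nothing_pending)
          schedule();
      finish_wait(wq, &wait);
  }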

Signed-off-by: Suresh Warrier <warrier@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
10 years ago KVM: PPC: Book3S HV: ptes are big endian
Cédric Le Goater [Thu, 20 Nov 2014 23:45:59 +0000 (00:45 +0100)]
KVM: PPC: Book3S HV: ptes are big endian

When being restored from qemu, the kvm_get_htab_header structures are
in native endian, but the ptes are big endian.

This patch fixes restore on a KVM LE host. Qemu also needs a fix for
this:

     http://lists.nongnu.org/archive/html/qemu-ppc/2014-11/msg00008.html
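A sketch of the conversion on the restore path (buffer layout illustrative):

  #include <linux/types.h>
  #include <asm/byteorder.h>

  static void read_hpte_sketch(const __be64 *buf, u64 *hpte_v, u64 *hpte_r)
  {
      /* The kvm_get_htab_header stays in native endian; only the HPTE
       * doublewords need converting on a little-endian host. */
      *hpte_v = be64_to_cpu(buf[0]);
      *hpte_r = be64_to_cpu(buf[1]);
  }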

Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
10 years ago KVM: PPC: Book3S HV: Fix inaccuracies in ICP emulation for H_IPI
Suresh E. Warrier [Mon, 3 Nov 2014 04:51:59 +0000 (15:51 +1100)]
KVM: PPC: Book3S HV: Fix inaccuracies in ICP emulation for H_IPI

This fixes some inaccuracies in the state machine for the virtualized
ICP when implementing the H_IPI hcall (Set_MFRR and related states):

1. The old code wipes out any pending interrupts when the new MFRR is
   more favored than the CPPR but less favored than a pending
   interrupt (by always modifying xisr and the pending_pri). This can
   cause us to lose a pending external interrupt.

   The correct code here is to only modify the pending_pri and xisr in
   the ICP if the MFRR is equal to or more favored than the current
   pending pri (since in this case, it is guaranteed that there
   cannot be a pending external interrupt). The code changes are
   required in both kvmppc_rm_h_ipi and kvmppc_h_ipi.

2. Again, in both kvmppc_rm_h_ipi and kvmppc_h_ipi, there is a check
   for whether the MFRR is being made less favored AND, further, if the
   new MFRR is also less favored than the current CPPR, we check for
   any resends pending in the ICP. These checks look like they are
   designed to cover the case where, if the MFRR is being made less
   favored, we opportunistically trigger a resend of any interrupts
   that had been previously rejected. Although this is not a state
   described by PAPR, it is an action we actually need to take,
   especially if the CPPR is already at 0xFF, because in that case the
   resend bit will stay on until another ICP state change, which may be
   a long time coming, and the interrupt stays pending until then. The
   current code, which checks for MFRR < CPPR, is broken when CPPR is
   0xFF since it will not get triggered in that case. (A sketch of the
   decisions in points 1 and 2 follows this list.)

   Ideally, we would want to do a resend only if

    prio(pending_interrupt) < mfrr && prio(pending_interrupt) < cppr

   where pending interrupt is the one that was rejected. But we don't
   have the priority of the pending interrupt state saved, so we
   simply trigger a resend whenever the MFRR is made less favored.

3. In kvmppc_rm_h_ipi, where we save state to pass resends to the
   virtual mode, we also need to save the ICP whose need_resend we
   reset since this does not need to be my ICP (vcpu->arch.icp) as is
   incorrectly assumed by the current code. A new field rm_resend_icp
   is added to the kvmppc_icp structure for this purpose.
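A rough sketch of the two decisions above, with illustrative names (in
XICS, a lower numeric value is a more favored priority):

  /* Point 1: only displace the pending source if the IPI is at least
   * as favored, so a pending external interrupt cannot be lost. */
  if (mfrr <= icp->pending_pri) {
      icp->pending_pri = mfrr;
      icp->xisr = XICS_IPI;       /* the IPI becomes the pending source */
  }

  /* Point 2: MFRR made less favored -- opportunistically resend
   * previously rejected interrupts, even when the CPPR is 0xFF. */
  if (mfrr > old_mfrr && icp->need_resend)
      icp_resend_sketch(icp);     /* illustrative helper */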

Signed-off-by: Suresh Warrier <warrier@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
10 years ago KVM: PPC: Book3S HV: Fix KSM memory corruption
Paul Mackerras [Mon, 3 Nov 2014 04:51:58 +0000 (15:51 +1100)]
KVM: PPC: Book3S HV: Fix KSM memory corruption

Testing with KSM active in the host showed occasional corruption of
guest memory.  Typically a page that should have contained zeroes
would contain values that look like the contents of a user process
stack (values such as 0x0000_3fff_xxxx_xxx).

Code inspection in kvmppc_h_protect revealed that there was a race
condition with the possibility of granting write access to a page
which is read-only in the host page tables.  The code attempts to keep
the host mapping read-only if the host userspace PTE is read-only, but
if that PTE had been temporarily made invalid for any reason, the
read-only check would not trigger and the host HPTE could end up
read-write.  Examination of the guest HPT in the failure situation
revealed that there were indeed shared pages which should have been
read-only that were mapped read-write.

To close this race, we don't let a page go from being read-only to
being read-write, as far as the real HPTE mapping the page is
concerned (the guest view can go to read-write, but the actual mapping
stays read-only).  When the guest tries to write to the page, we take
an HDSI and let kvmppc_book3s_hv_page_fault take care of providing a
writable HPTE for the page.

This eliminates the occasional corruption of shared pages
that was previously seen with KSM active.
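In sketch form, the rule applied when updating the real HPTE (the helper
names follow the kvm_book3s_64.h style but are illustrative here):

  /* Never upgrade the hardware mapping in place: if the old HPTE was
   * read-only, the new one stays read-only, and the guest's first
   * write takes an HDSI so the page fault path can install a
   * writable HPTE safely. */
  if (hpte_is_writable(new_r) && !hpte_is_writable(old_r))
      new_r = hpte_make_readonly(new_r);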

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
10 years ago KVM: PPC: Book3S HV: Fix an issue where guest is paused on receiving HMI
Mahesh Salgaonkar [Mon, 3 Nov 2014 04:51:57 +0000 (15:51 +1100)]
KVM: PPC: Book3S HV: Fix an issue where guest is paused on receiving HMI

When we get an HMI (hypervisor maintenance interrupt) while in a
guest, the guest enters the paused state.  The reason is that
kvmppc_handle_exit_hv falls through the default path and returns to
the host instead of resuming the guest, which causes the guest to
enter the paused state.  HMI is a hypervisor-only interrupt and it is
safe to resume the guest since the host has already handled it.  This
patch adds a switch case to resume the guest.

Without this patch the guest enters the paused state with the
following console messages:

[ 3003.329351] Severe Hypervisor Maintenance interrupt [Recovered]
[ 3003.329356]  Error detail: Timer facility experienced an error
[ 3003.329359]  HMER: 0840000000000000
[ 3003.329360]  TFMR: 4a12000980a84000
[ 3003.329366] vcpu c0000007c35094c0 (40):
[ 3003.329368] pc  = c0000000000c2ba0  msr = 8000000000009032  trap = e60
[ 3003.329370] r 0 = c00000000021ddc0  r16 = 0000000000000046
[ 3003.329372] r 1 = c00000007a02bbd0  r17 = 00003ffff27d5d98
[ 3003.329375] r 2 = c0000000010980b8  r18 = 00001fffffc9a0b0
[ 3003.329377] r 3 = c00000000142d6b8  r19 = c00000000142d6b8
[ 3003.329379] r 4 = 0000000000000002  r20 = 0000000000000000
[ 3003.329381] r 5 = c00000000524a110  r21 = 0000000000000000
[ 3003.329383] r 6 = 0000000000000001  r22 = 0000000000000000
[ 3003.329386] r 7 = 0000000000000000  r23 = c00000000524a110
[ 3003.329388] r 8 = 0000000000000000  r24 = 0000000000000001
[ 3003.329391] r 9 = 0000000000000001  r25 = c00000007c31da38
[ 3003.329393] r10 = c0000000014280b8  r26 = 0000000000000002
[ 3003.329395] r11 = 746f6f6c2f68656c  r27 = c00000000524a110
[ 3003.329397] r12 = 0000000028004484  r28 = c00000007c31da38
[ 3003.329399] r13 = c00000000fe01400  r29 = 0000000000000002
[ 3003.329401] r14 = 0000000000000046  r30 = c000000003011e00
[ 3003.329403] r15 = ffffffffffffffba  r31 = 0000000000000002
[ 3003.329404] ctr = c00000000041a670  lr  = c000000000272520
[ 3003.329405] srr0 = c00000000007e8d8 srr1 = 9000000000001002
[ 3003.329406] sprg0 = 0000000000000000 sprg1 = c00000000fe01400
[ 3003.329407] sprg2 = c00000000fe01400 sprg3 = 0000000000000005
[ 3003.329408] cr = 48004482  xer = 2000000000000000  dsisr = 42000000
[ 3003.329409] dar = 0000010015020048
[ 3003.329410] fault dar = 0000010015020048 dsisr = 42000000
[ 3003.329411] SLB (8 entries):
[ 3003.329412]   ESID = c000000008000000 VSID = 40016e7779000510
[ 3003.329413]   ESID = d000000008000001 VSID = 400142add1000510
[ 3003.329414]   ESID = f000000008000004 VSID = 4000eb1a81000510
[ 3003.329415]   ESID = 00001f000800000b VSID = 40004fda0a000d90
[ 3003.329416]   ESID = 00003f000800000c VSID = 400039f536000d90
[ 3003.329417]   ESID = 000000001800000d VSID = 0001251b35150d90
[ 3003.329417]   ESID = 000001000800000e VSID = 4001e46090000d90
[ 3003.329418]   ESID = d000080008000019 VSID = 40013d349c000400
[ 3003.329419] lpcr = c048800001847001 sdr1 = 0000001b19000006 last_inst = ffffffff
[ 3003.329421] trap=0xe60 | pc=0xc0000000000c2ba0 | msr=0x8000000000009032
[ 3003.329524] Severe Hypervisor Maintenance interrupt [Recovered]
[ 3003.329526]  Error detail: Timer facility experienced an error
[ 3003.329527]  HMER: 0840000000000000
[ 3003.329527]  TFMR: 4a12000980a94000
[ 3006.359786] Severe Hypervisor Maintenance interrupt [Recovered]
[ 3006.359792]  Error detail: Timer facility experienced an error
[ 3006.359795]  HMER: 0840000000000000
[ 3006.359797]  TFMR: 4a12000980a84000

 Id    Name                           State
----------------------------------------------------
 2     guest2                         running
 3     guest3                         paused
 4     guest4                         running

Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
10 years ago KVM: PPC: Book3S HV: Fix computation of tlbie operand
Paul Mackerras [Mon, 3 Nov 2014 04:51:56 +0000 (15:51 +1100)]
KVM: PPC: Book3S HV: Fix computation of tlbie operand

The B (segment size) field in the RB operand for the tlbie
instruction is two bits, which we get from the top two bits of
the first doubleword of the HPT entry to be invalidated.  These
bits go in bits 8 and 9 of the RB operand (bits 54 and 55 in IBM
bit numbering).

The compute_tlbie_rb() function gets these bits as v >> (62 - 8),
which is not correct as it will bring in the top 10 bits, not
just the top two.  These extra bits could corrupt the AP, AVAL
and L fields in the RB value.  To fix this we shift right 62 bits
and then shift left 8 bits, so we only get the two bits of the
B field.

The first doubleword of the HPT entry is under the control of the
guest kernel.  In fact, Linux guests will always put zeroes in bits
54 -- 61 (IBM bits 2 -- 9), but we should not rely on guests doing
this.
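The before/after of the extraction, as a sketch:

  #include <stdint.h>

  static uint64_t rb_b_field_old(uint64_t v)
  {
      return v >> (62 - 8);       /* buggy: brings in the top 10 bits */
  }

  static uint64_t rb_b_field_new(uint64_t v)
  {
      return (v >> 62) << 8;      /* only the two B bits, at RB bits 8-9 */
  }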

Signed-off-by: Paul Mackerras <paulus@samba.org>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
10 years ago KVM: PPC: Book3S HV: Add missing HPTE unlock
Aneesh Kumar K.V [Wed, 5 Nov 2014 01:21:13 +0000 (12:21 +1100)]
KVM: PPC: Book3S HV: Add missing HPTE unlock

In kvm_test_clear_dirty(), if we find an invalid HPTE we move on to the
next HPTE without unlocking the invalid one.  In fact we should never
find an invalid and unlocked HPTE in the rmap chain, but for robustness
we should unlock it.  This adds the missing unlock.

Reported-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
10 years ago KVM: PPC: BookE: Improve irq inject tracepoint
Alexander Graf [Mon, 10 Nov 2014 23:15:53 +0000 (00:15 +0100)]
KVM: PPC: BookE: Improve irq inject tracepoint

When injecting an IRQ, we only document which IRQ priority (which translates
to IRQ type) gets injected. However, when reading traces you don't necessarily
have all the numbers in your head to know which IRQ really is meant.

This patch converts the IRQ number field to a symbolic name that is in sync
with the respective define. That way it's a lot easier for readers to figure
out what interrupt gets injected.
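With ftrace, such a conversion is typically done with __print_symbolic();
a sketch of the idea (the field names are illustrative and the patch's
actual priority list is longer):

  TP_printk("vcpu=%x prio=%s pending=%x",
            __entry->cpu_nr,
            __print_symbolic(__entry->priority,
                             { BOOKE_IRQPRIO_DECREMENTER, "DECREMENTER" },
                             { BOOKE_IRQPRIO_EXTERNAL,    "EXTERNAL" }),
            __entry->pending)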

Signed-off-by: Alexander Graf <agraf@suse.de>
10 years ago KVM: cpuid: recompute CPUID 0xD.0:EBX,ECX
Radim Krčmář [Thu, 4 Dec 2014 17:30:41 +0000 (18:30 +0100)]
KVM: cpuid: recompute CPUID 0xD.0:EBX,ECX

We reused host EBX and ECX, but KVM might not support all features;
emulated XSAVE size should be smaller.

EBX depends on unknown XCR0, so we default to ECX.

SDM CPUID (EAX = 0DH, ECX = 0):
 EBX Bits 31-00: Maximum size (bytes, from the beginning of the
     XSAVE/XRSTOR save area) required by enabled features in XCR0. May
     be different than ECX if some features at the end of the XSAVE save
     area are not enabled.

 ECX Bit 31-00: Maximum size (bytes, from the beginning of the
     XSAVE/XRSTOR save area) of the XSAVE/XRSTOR save area required by
     all supported features in the processor, i.e all the valid bit
     fields in XCR0.
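A sketch of recomputing the size from a feature mask instead of trusting
the host's registers (cpuid_count_0d() is an assumed wrapper around the
CPUID 0xD subleaves):

  #include <stdint.h>

  /* Assumed wrapper: size (eax) and offset (ebx) of one XSAVE state
   * component, from CPUID(EAX=0xD, ECX=feature). */
  extern void cpuid_count_0d(int feature, uint32_t *size, uint32_t *offset);

  static uint32_t xstate_size_sketch(uint64_t xcr0)
  {
      uint32_t ret = 512 + 64;    /* legacy region + XSAVE header */
      int feature;

      for (feature = 2; feature < 64; feature++) {
          uint32_t size, offset;

          if (!(xcr0 & (1ULL << feature)))
              continue;
          cpuid_count_0d(feature, &size, &offset);
          if (offset + size > ret)
              ret = offset + size;    /* area ends after this state */
      }
      return ret;
  }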

Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Tested-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago kvm: vmx: add nested virtualization support for xsaves
Wanpeng Li [Thu, 4 Dec 2014 11:11:07 +0000 (19:11 +0800)]
kvm: vmx: add nested virtualization support for xsaves

Add nested virtualization support for xsaves.

Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago kvm: vmx: add MSR logic for XSAVES
Wanpeng Li [Tue, 2 Dec 2014 11:14:59 +0000 (19:14 +0800)]
kvm: vmx: add MSR logic for XSAVES

Add logic to get/set the XSS model-specific register.

Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago kvm: x86: handle XSAVES vmcs and vmexit
Wanpeng Li [Tue, 2 Dec 2014 11:14:58 +0000 (19:14 +0800)]
kvm: x86: handle XSAVES vmcs and vmexit

Initialize the XSS exit bitmap.  It is zero so there should be no XSAVES
or XRSTORS exits.

Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago KVM: cpuid: mask more bits in leaf 0xd and subleaves
Paolo Bonzini [Thu, 4 Dec 2014 14:11:11 +0000 (15:11 +0100)]
KVM: cpuid: mask more bits in leaf 0xd and subleaves

- EAX=0Dh, ECX=1: output registers EBX/ECX/EDX are reserved.

- EAX=0Dh, ECX>1: output register ECX bit 0 is clear for all the CPUID
leaves we support, because variable "supported" comes from XCR0 and not
XSS.  Bits above 0 are reserved, so ECX is overall zero.  Output register
EDX is reserved.

Source: Intel Architecture Instruction Set Extensions Programming
Reference, ref. number 319433-022

Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
Tested-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago KVM: cpuid: set CPUID(EAX=0xd,ECX=1).EBX correctly
Paolo Bonzini [Wed, 3 Dec 2014 13:38:01 +0000 (14:38 +0100)]
KVM: cpuid: set CPUID(EAX=0xd,ECX=1).EBX correctly

This is the size of the XSAVES area.  This starts providing guest support
for XSAVES (with no support yet for supervisor states, i.e. XSS == 0
always in guests for now).

Wanpeng Li suggested testing XSAVEC as well as XSAVES, since in practice
no real processor exists that only has one of them, and there is no
other way for userspace programs to compute the size of the XSAVEC
save area.  CPUID(EAX=0xd,ECX=1).EBX provides an upper bound.

Suggested-by: Radim Krčmář <rkrcmar@redhat.com>
Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
Tested-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago kvm: x86: Add kvm_x86_ops hook that enables XSAVES for guest
Wanpeng Li [Tue, 2 Dec 2014 11:21:30 +0000 (19:21 +0800)]
kvm: x86: Add kvm_x86_ops hook that enables XSAVES for guest

Expose the XSAVES feature to the guest if the kvm_x86_ops say it is
available.

Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago KVM: x86: use F() macro throughout cpuid.c
Paolo Bonzini [Wed, 3 Dec 2014 13:34:47 +0000 (14:34 +0100)]
KVM: x86: use F() macro throughout cpuid.c

For code that deals with cpuid, this makes things a bit more readable.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago KVM: x86: support XSAVES usage in the host
Paolo Bonzini [Fri, 21 Nov 2014 18:05:07 +0000 (19:05 +0100)]
KVM: x86: support XSAVES usage in the host

Userspace is expecting non-compacted format for KVM_GET_XSAVE, but
struct xsave_struct might be using the compacted format.  Convert
in order to preserve userspace ABI.

Likewise, userspace is passing non-compacted format for KVM_SET_XSAVE
but the kernel will pass it to XRSTORS, and we need to convert back.

Fixes: f31a9f7c71691569359fa7fb8b0acaa44bce0324
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: stable@vger.kernel.org
Cc: H. Peter Anvin <hpa@linux.intel.com>
Tested-by: Nadav Amit <namit@cs.technion.ac.il>
Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago x86: export get_xsave_addr
Paolo Bonzini [Mon, 24 Nov 2014 09:57:42 +0000 (10:57 +0100)]
x86: export get_xsave_addr

get_xsave_addr is the API to access XSAVE states, and KVM would
like to use it.  Export it.

Cc: stable@vger.kernel.org
Cc: x86@kernel.org
Cc: H. Peter Anvin <hpa@linux.intel.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago Merge tag 'kvm-s390-next-20141204' of git://git.kernel.org/pub/scm/linux/kernel/git...
Paolo Bonzini [Fri, 5 Dec 2014 12:55:40 +0000 (13:55 +0100)]
Merge tag 'kvm-s390-next-20141204' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into HEAD

KVM: s390: Fixups for kvm/next (3.19)

Here we have two fixups of the latest interrupt rework and
one architectural fixup.

10 years ago KVM: s390: clean up return code handling in irq delivery code
Jens Freimann [Mon, 1 Dec 2014 16:05:39 +0000 (17:05 +0100)]
KVM: s390: clean up return code handling in irq delivery code

Instead of returning a possibly random or'ed together value, let's
always return -EFAULT if rc is set.
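In sketch form (the guest-write helpers are illustrative):

  #include <linux/errno.h>

  extern int write_guest_field_a(void);   /* assumed helpers */
  extern int write_guest_field_b(void);

  static int deliver_sketch(void)
  {
      int rc = 0;

      rc |= write_guest_field_a();
      rc |= write_guest_field_b();

      /* Never leak the or'ed-together value to callers. */
      return rc ? -EFAULT : 0;
  }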

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
10 years ago KVM: s390: use atomic bitops to access pending_irqs bitmap
Jens Freimann [Mon, 1 Dec 2014 15:43:40 +0000 (16:43 +0100)]
KVM: s390: use atomic bitops to access pending_irqs bitmap

Currently we use a mixture of atomic/non-atomic bitops
and the local_int spin lock to protect the pending_irqs bitmap
and interrupt payload data.

We need to use atomic bitops for the pending_irqs bitmap everywhere
and in addition acquire the local_int lock where interrupt data needs
to be protected.
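A sketch of the convention (struct and bit names illustrative):

  #include <linux/spinlock.h>
  #include <linux/bitops.h>

  struct local_int_sketch {
      spinlock_t lock;            /* protects the payload data */
      unsigned long pending_irqs; /* accessed with atomic bitops only */
      unsigned long ext_payload;  /* illustrative payload field */
  };

  static void inject_ext_sketch(struct local_int_sketch *li,
                                unsigned long payload, int bit)
  {
      spin_lock(&li->lock);
      li->ext_payload = payload;          /* payload needs the lock */
      set_bit(bit, &li->pending_irqs);    /* bitmap update is atomic */
      spin_unlock(&li->lock);
  }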

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
10 years ago KVM: s390: some ext irqs have to clear the ext cpu addr
David Hildenbrand [Mon, 1 Dec 2014 11:02:44 +0000 (12:02 +0100)]
KVM: s390: some ext irqs have to clear the ext cpu addr

The cpu address of a source cpu (responsible for an external irq) is only to
be stored if bit 6 of the ext irq code is set.

If bit 6 is not set, it is to be zeroed out.

The special external irq code used for virtio and pfault uses the cpu addr as a
parameter field. As bit 6 is set, this implementation is correct.

Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
10 years ago KVM: track pid for VCPU only on KVM_RUN ioctl
Christian Borntraeger [Tue, 5 Aug 2014 14:44:14 +0000 (16:44 +0200)]
KVM: track pid for VCPU only on KVM_RUN ioctl

We currently track the pid of the task that runs the VCPU in vcpu_load.
If a yield to that VCPU is triggered while the PID of the wrong thread
is active, the wrong thread might receive a yield, but this will most
likely not help the executing thread at all.  Instead, if we only track
the pid on the KVM_RUN ioctl, there are two possibilities:

1) the thread that did a non-KVM_RUN ioctl is holding a mutex that
the VCPU thread is waiting for.  In this case, the VCPU thread is not
runnable, but we also do not do a wrong yield.

2) the thread that did a non-KVM_RUN ioctl is sleeping, or doing
something that does not block the VCPU thread.  In this case, the
VCPU thread can receive the directed yield correctly.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
CC: Rik van Riel <riel@redhat.com>
CC: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
CC: Michael Mueller <mimu@linux.vnet.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago KVM: don't check for PF_VCPU when yielding
David Hildenbrand [Tue, 25 Nov 2014 16:04:08 +0000 (17:04 +0100)]
KVM: don't check for PF_VCPU when yielding

kvm_enter_guest() has to be called with preemption disabled and will
set PF_VCPU.  Current code takes PF_VCPU as a hint that the VCPU thread
is running and therefore needs no yield.

However, the check on PF_VCPU is wrong on s390, where preemption has
to stay enabled in order to correctly process page faults.  Thus,
s390 reenables preemption and starts to execute the guest.  The thread
might be scheduled out between kvm_enter_guest() and kvm_exit_guest(),
resulting in PF_VCPU being set while the thread is not actually
running.  When this happens, the opportunity for directed yield is
missed.

However, this check is done already in kvm_vcpu_on_spin before calling
kvm_vcpu_yield_loop:

        if (!ACCESS_ONCE(vcpu->preempted))
                continue;

so the check on PF_VCPU is superfluous in general, and this patch
removes it.

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago kvm: optimize GFN to memslot lookup with large slots amount
Igor Mammedov [Mon, 1 Dec 2014 17:29:27 +0000 (17:29 +0000)]
kvm: optimize GFN to memslot lookup with large slots amount

The current linear search doesn't scale well when a large number of
memslots is used and the looked-up slot is not at the beginning of the
memslots array.
Taking into account that memslots don't overlap, it's possible to
switch the sorting order of the memslots array from 'npages' to
'base_gfn' and use binary search for memslot lookup by GFN.
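A sketch of such a lookup over an array sorted by base_gfn, highest
first (types simplified):

  #include <stdint.h>
  #include <stddef.h>

  struct slot_sketch {
      uint64_t base_gfn;
      uint64_t npages;
  };

  static struct slot_sketch *gfn_to_slot_sketch(struct slot_sketch *slots,
                                                int used, uint64_t gfn)
  {
      int start = 0, end = used;

      while (start < end) {
          int mid = start + (end - start) / 2;

          if (gfn >= slots[mid].base_gfn)
              end = mid;              /* answer is at mid or before it */
          else
              start = mid + 1;
      }

      if (start < used &&
          gfn >= slots[start].base_gfn &&
          gfn < slots[start].base_gfn + slots[start].npages)
          return &slots[start];
      return NULL;
  }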

As a result of switching to binary search, lookup times are reduced
with a large number of memslots.

Following is a table of search_memslot() cycles during WS2008R2 guest
boot ("boot" = mostly the same slot looked up while booting;
"randomized" = randomized lookups after ~10 min of use):

                           max      average cycles  average cycles
                           cycles   (boot)          (randomized)

  13 slots, linear search  1450     28              30
  13 slots, binary search  1400     30              40
 117 slots, linear search  13000    30              460
 117 slots, binary search  2000     35              180

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago kvm: change memslot sorting rule from size to GFN
Igor Mammedov [Mon, 1 Dec 2014 17:29:26 +0000 (17:29 +0000)]
kvm: change memslot sorting rule from size to GFN

This will allow using binary search for GFN -> memslot lookups,
reducing lookup cost with a large number of slots.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago kvm: search_memslots: add simple LRU memslot caching
Igor Mammedov [Mon, 1 Dec 2014 17:29:25 +0000 (17:29 +0000)]
kvm: search_memslots: add simple LRU memslot caching

In a typical guest boot workload only 2-3 memslots are used
extensively, and most lookups are for the same memslot.

Adding LRU cache improves average lookup time from
46 to 28 cycles (~40%) for this workload.
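A sketch of the fast path in front of the full search, reusing the types
from the binary-search sketch above (the lru_slot field is illustrative):

  struct slots_sketch {
      struct slot_sketch *memslots;   /* sorted by base_gfn, descending */
      int used;
      int lru_slot;                   /* index of the last slot that hit */
  };

  static struct slot_sketch *lookup_sketch(struct slots_sketch *s,
                                           uint64_t gfn)
  {
      struct slot_sketch *slot = &s->memslots[s->lru_slot];

      /* Fast path: most lookups during boot hit the cached slot. */
      if (gfn >= slot->base_gfn && gfn < slot->base_gfn + slot->npages)
          return slot;

      slot = gfn_to_slot_sketch(s->memslots, s->used, gfn);
      if (slot)
          s->lru_slot = slot - s->memslots;   /* refresh the cache */
      return slot;
  }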

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago kvm: update_memslots: drop not needed check for the same slot
Igor Mammedov [Mon, 1 Dec 2014 17:29:24 +0000 (17:29 +0000)]
kvm: update_memslots: drop not needed check for the same slot

The UP/DOWN shift loops will shift the array in the needed direction
and stop at the place where the new slot should be placed, regardless
of the old slot's size.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago kvm: update_memslots: drop not needed check for the same number of pages
Igor Mammedov [Mon, 1 Dec 2014 17:29:23 +0000 (17:29 +0000)]
kvm: update_memslots: drop not needed check for the same number of pages

If the number of pages hasn't changed, the sorting algorithm will do
nothing, so there is no need for an extra check to avoid entering the
sorting logic.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago KVM: x86: allow 256 logical x2APICs again
Radim Krčmář [Thu, 27 Nov 2014 19:03:13 +0000 (20:03 +0100)]
KVM: x86: allow 256 logical x2APICs again

While fixing an x2apic bug,
 17d68b7 KVM: x86: fix guest-initiated crash with x2apic (CVE-2013-6376)
we've made only one cluster available.  This means that the number of
logically addressable x2APICs was reduced to 16, and VCPUs kept
overwriting themselves in that region, so even the first cluster wasn't
set up correctly.

This patch extends x2APIC support back to the logical_map's limit, and
keeps the CVE fixed as messages for non-present APICs are dropped.

Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago KVM: x86: check bounds of APIC maps
Radim Krčmář [Thu, 27 Nov 2014 22:30:19 +0000 (23:30 +0100)]
KVM: x86: check bounds of APIC maps

They can't be violated now, but play it safe for the future.

Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago KVM: x86: fix APIC physical destination wrapping
Radim Krčmář [Thu, 27 Nov 2014 19:03:12 +0000 (20:03 +0100)]
KVM: x86: fix APIC physical destination wrapping

x2apic allows destinations > 0xff and we don't want them delivered to
lower APICs.  They are correctly handled by doing nothing.

Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago KVM: x86: deliver phys lowest-prio
Radim Krčmář [Thu, 27 Nov 2014 19:03:11 +0000 (20:03 +0100)]
KVM: x86: deliver phys lowest-prio

Physical mode can't address more than one APIC, but lowest-prio is
allowed, so we just reuse our paths.

SDM 10.6.2.1 Physical Destination:
  Also, for any non-broadcast IPI or I/O subsystem initiated interrupt
  with lowest priority delivery mode, software must ensure that APICs
  defined in the interrupt address are present and enabled to receive
  interrupts.

We could warn on top of that.

Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago KVM: x86: don't retry hopeless APIC delivery
Radim Krčmář [Thu, 27 Nov 2014 19:03:14 +0000 (20:03 +0100)]
KVM: x86: don't retry hopeless APIC delivery

False from kvm_irq_delivery_to_apic_fast() means that we don't handle the
interrupt in the fast path, but we were still returning false in cases
that had been perfectly handled; fix that.

Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago KVM: x86: use MSR_ICR instead of a number
Radim Krčmář [Wed, 26 Nov 2014 16:07:05 +0000 (17:07 +0100)]
KVM: x86: use MSR_ICR instead of a number

MSR 0x830 corresponds to xAPIC MMIO offset 0x300, which is MSR_ICR.

Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago KVM: x86: Fix reserved x2apic registers
Nadav Amit [Wed, 26 Nov 2014 15:56:25 +0000 (17:56 +0200)]
KVM: x86: Fix reserved x2apic registers

x2APIC has no registers for DFR and ICR2 (see Intel SDM 10.12.1.2 "x2APIC
Register Address Space"). KVM needs to cause #GP on such accesses.

Fix it (DFR and ICR2 on read, ICR2 on write, DFR already handled on writes).

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago KVM: x86: Generate #UD when memory operand is required
Nadav Amit [Wed, 26 Nov 2014 13:47:18 +0000 (15:47 +0200)]
KVM: x86: Generate #UD when memory operand is required

Certain x86 instructions that use modrm operands only allow memory operand
(i.e., mod012), and cause a #UD exception otherwise. KVM ignores this fact.
Currently, the instructions that are such and are emulated by KVM are MOVBE,
MOVNTPS, MOVNTPD and MOVNTI.  MOVBE is the most blunt example, since it may be
emulated by the host regardless of MMIO.

The fix introduces a new group for handling such instructions, marking mod3 as
illegal instruction.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago Merge tag 'kvm-s390-next-20141128' of git://git.kernel.org/pub/scm/linux/kernel/git...
Paolo Bonzini [Wed, 3 Dec 2014 14:20:11 +0000 (15:20 +0100)]
Merge tag 'kvm-s390-next-20141128' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into HEAD

KVM: s390: Several fixes, cleanups and reworks

Here is a bunch of fixes that deal mostly with architectural compliance:
- interrupt priorities
- interrupt handling
- instruction exit handling

We also provide a helper function for getting the guest visible storage key.

10 years ago KVM: s390: allow injecting all kinds of machine checks
Jens Freimann [Wed, 13 Aug 2014 08:09:04 +0000 (10:09 +0200)]
KVM: s390: allow injecting all kinds of machine checks

Allow specifying CR14, the logout area, the external damage code
and the failed storage address.

Since more than one machine check can be indicated to the guest at
a time, we need to combine all indication bits with already pending
requests.

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
10 years ago KVM: s390: handle pending local interrupts via bitmap
Jens Freimann [Tue, 29 Jul 2014 13:11:49 +0000 (15:11 +0200)]
KVM: s390: handle pending local interrupts via bitmap

This patch adapts handling of local interrupts to be more compliant
with the z/Architecture Principles of Operation and introduces a data
structure which allows more efficient handling of interrupts.

* get rid of li->active flag, use bitmap instead
* Keep interrupts in a bitmap instead of a list
* Deliver interrupts in the order of their priority as defined in the
  PoP
* Use a second bitmap for sigp emergency requests, as a CPU can have
  one request pending from every other CPU in the system.

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
10 years ago KVM: s390: add bitmap for handling cpu-local interrupts
Jens Freimann [Mon, 29 Jul 2013 18:54:15 +0000 (20:54 +0200)]
KVM: s390: add bitmap for handling cpu-local interrupts

Add a bitmap to the vcpu structure which is used to keep track of
local pending interrupts. Also add an enum with all interrupt types,
sorted in order of priority (highest to lowest).

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
10 years ago KVM: s390: refactor interrupt delivery code
Jens Freimann [Tue, 29 Jul 2014 11:45:21 +0000 (13:45 +0200)]
KVM: s390: refactor interrupt delivery code

Move the delivery code for cpu-local interrupts from the huge
do_deliver_interrupt() to smaller functions which each handle one
type of interrupt.

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
10 years ago KVM: s390: add defines for virtio and pfault interrupt code
Jens Freimann [Mon, 10 Nov 2014 16:20:07 +0000 (17:20 +0100)]
KVM: s390: add defines for virtio and pfault interrupt code

Get rid of open coded value for virtio and pfault completion interrupts.

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
10 years ago KVM: s390: external param not valid for cpu timer and ckc
David Hildenbrand [Fri, 7 Nov 2014 13:35:55 +0000 (14:35 +0100)]
KVM: s390: external param not valid for cpu timer and ckc

The 32bit external interrupt parameter is only valid for timing-alert and
service-signal interrupts.

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
10 years ago KVM: s390: refactor interrupt injection code
Jens Freimann [Mon, 28 Jul 2014 13:37:58 +0000 (15:37 +0200)]
KVM: s390: refactor interrupt injection code

In preparation for the rework of the local interrupt injection code,
factor out injection routines from kvm_s390_inject_vcpu().

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
10 years ago KVM: S390: Create helper function get_guest_storage_key
Jason J. Herne [Tue, 23 Sep 2014 13:18:57 +0000 (09:18 -0400)]
KVM: S390: Create helper function get_guest_storage_key

Define get_guest_storage_key which can be used to get the value of a guest
storage key. This complements the functionality provided by the helper function
set_guest_storage_key. Both functions are needed for live migration of s390
guests that use storage keys.

Signed-off-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
10 years ago KVM: s390: trigger the right CPU exit for floating interrupts
Christian Borntraeger [Fri, 21 Nov 2014 08:38:12 +0000 (09:38 +0100)]
KVM: s390: trigger the right CPU exit for floating interrupts

When injecting a floating interrupt and no CPU is idle we
kick one CPU to do an external exit. In case of I/O we
should trigger an I/O exit instead. This does not matter
for Linux guests as external and I/O interrupts are
enabled/disabled at the same time, but play safe anyway.

The same holds true for machine checks. Since there is no
special exit, just reuse the generic stop exit. The injection
code inside the VCPU loop will recheck anyway and rearm the
proper exits (e.g. control registers) if necessary.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
10 years ago KVM: s390: Fix rewinding of the PSW pointing to an EXECUTE instruction
Thomas Huth [Wed, 12 Nov 2014 16:13:29 +0000 (17:13 +0100)]
KVM: s390: Fix rewinding of the PSW pointing to an EXECUTE instruction

A couple of our interception handlers rewind the PSW to the beginning
of the instruction to run the intercepted instruction again during the
next SIE entry. This normally works fine, but there is also the
possibility that the instruction did not get run directly but via an
EXECUTE instruction.
In this case, the PSW does not point to the instruction that caused the
interception, but to the EXECUTE instruction! So we've got to rewind the
PSW to the beginning of the EXECUTE instruction instead.
This is now accomplished with a new helper function kvm_s390_rewind_psw().

Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
10 years ago KVM: s390: Small fixes for the PFMF handler
Thomas Huth [Mon, 10 Nov 2014 14:59:32 +0000 (15:59 +0100)]
KVM: s390: Small fixes for the PFMF handler

This patch includes two small fixes for the PFMF handler: First, the
start address for PFMF has to be masked according to the current
addressing mode, which is now done with kvm_s390_logical_to_effective().
Second, the protection exceptions have a lower priority than the
specification exceptions, so the check for low-address protection
has to be moved after the last spot where we inject a specification
exception.

Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
10 years ago kvm: x86: avoid warning about potential shift wrapping bug
Paolo Bonzini [Mon, 24 Nov 2014 13:35:24 +0000 (14:35 +0100)]
kvm: x86: avoid warning about potential shift wrapping bug

cs.base is declared as a __u64 variable and vector is a u32 so this
causes a static checker warning.  The user indeed can set "sipi_vector"
to any u32 value in kvm_vcpu_ioctl_x86_set_vcpu_events(), but the
value should really have 8-bit precision only.

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago KVM: x86: move device assignment out of kvm_host.h
Paolo Bonzini [Mon, 24 Nov 2014 14:27:17 +0000 (15:27 +0100)]
KVM: x86: move device assignment out of kvm_host.h

Create a new header, and hide the device assignment functions there.
Move struct kvm_assigned_dev_kernel to assigned-dev.c by modifying
arch/x86/kvm/iommu.c to take a PCI device struct.

Based on a patch by Radim Krcmar <rkrcmark@redhat.com>.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago kvm: x86: mask out XSAVES
Paolo Bonzini [Fri, 21 Nov 2014 17:13:26 +0000 (18:13 +0100)]
kvm: x86: mask out XSAVES

This feature is not supported inside KVM guests yet, because we do not emulate
MSR_IA32_XSS.  Mask it out.

Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago kvm: x86: move assigned-dev.c and iommu.c to arch/x86/
Radim Krčmář [Fri, 21 Nov 2014 21:21:50 +0000 (22:21 +0100)]
kvm: x86: move assigned-dev.c and iommu.c to arch/x86/

Now that ia64 is gone, we can hide deprecated device assignment in x86.

Notable changes:
 - kvm_vm_ioctl_assigned_device() was moved to x86/kvm_arch_vm_ioctl()

The easy parts were removed from generic kvm code, remaining
 - kvm_iommu_(un)map_pages() would require new code to be moved
 - struct kvm_assigned_dev_kernel depends on struct kvm_irq_ack_notifier

Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago kvm: remove IA64 ioctls
Radim Krčmář [Thu, 20 Nov 2014 13:43:18 +0000 (14:43 +0100)]
kvm: remove IA64 ioctls

KVM ia64 is no longer present, so new applications shouldn't use these ioctls.
The main problem is that they most likely didn't work even before,
because of a conflict in the #defines:

  #define KVM_SET_GUEST_DEBUG       _IOW(KVMIO,  0x9b, struct kvm_guest_debug)
  #define KVM_IA64_VCPU_SET_STACK   _IOW(KVMIO,  0x9b, void *)

The argument to KVM_SET_GUEST_DEBUG is:

  struct kvm_guest_debug {
   __u32 control;
   __u32 pad;
   struct kvm_guest_debug_arch arch;
  };

  struct kvm_guest_debug_arch {
  };

meaning that sizeof(struct kvm_guest_debug) == sizeof(void *) == 8
and KVM_SET_GUEST_DEBUG == KVM_IA64_VCPU_SET_STACK.

KVM_SET_GUEST_DEBUG is handled in virt/kvm/kvm_main.c before even calling
kvm_arch_vcpu_ioctl (which would have handled KVM_IA64_VCPU_SET_STACK),
so KVM_IA64_VCPU_SET_STACK would just return -EINVAL.

Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago kvm: remove CONFIG_X86 #ifdefs from files formerly shared with ia64
Radim Krcmar [Fri, 21 Nov 2014 17:06:19 +0000 (18:06 +0100)]
kvm: remove CONFIG_X86 #ifdefs from files formerly shared with ia64

Signed-off-by: Radim Krcmar <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago kvm: x86: move ioapic.c and irq_comm.c back to arch/x86/
Paolo Bonzini [Thu, 20 Nov 2014 12:45:31 +0000 (13:45 +0100)]
kvm: x86: move ioapic.c and irq_comm.c back to arch/x86/

ia64 does not need them anymore.  Ack notifiers become x86-specific
too.

Suggested-by: Gleb Natapov <gleb@kernel.org>
Reviewed-by: Radim Krcmar <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago kvm: Documentation: remove ia64
Tiejun Chen [Thu, 20 Nov 2014 10:07:18 +0000 (11:07 +0100)]
kvm: Documentation: remove ia64

kvm/ia64 is gone, clean up Documentation too.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago KVM: ia64: remove
Paolo Bonzini [Wed, 19 Nov 2014 15:22:52 +0000 (16:22 +0100)]
KVM: ia64: remove

KVM for ia64 has been marked as broken not just once, but twice even,
and the last patch from the maintainer is now roughly 5 years old.
Time for it to rest in peace.

Acked-by: Gleb Natapov <gleb@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago KVM: x86: Remove FIXMEs in emulate.c
Nicholas Krause [Wed, 19 Nov 2014 12:48:18 +0000 (07:48 -0500)]
KVM: x86: Remove FIXMEs in emulate.c

Remove FIXME comments about needing fault addresses to be returned.  These
are propagated from walk_addr_generic to gva_to_gpa and from there to
ops->read_std and ops->write_std.

Signed-off-by: Nicholas Krause <xerofoify@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago KVM: emulator: remove duplicated limit check
Paolo Bonzini [Wed, 19 Nov 2014 17:33:38 +0000 (18:33 +0100)]
KVM: emulator: remove duplicated limit check

The check on the higher limit of the segment, and the check on the
maximum accessible size, is the same for both expand-up and
expand-down segments.  Only the computation of "lim" varies.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago KVM: emulator: remove code duplication in register_address{,_increment}
Paolo Bonzini [Wed, 19 Nov 2014 17:25:08 +0000 (18:25 +0100)]
KVM: emulator: remove code duplication in register_address{,_increment}

register_address has been a duplicate of address_mask ever since the
ancestor of __linearize was born in 90de84f50b42 (KVM: x86 emulator:
preserve an operand's segment identity, 2010-11-17).

However, we can put it to a better use by including the call to reg_read
in register_address.  Similarly, the call to reg_rmw can be moved to
register_address_increment.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago KVM: x86: Move __linearize masking of la into switch
Nadav Amit [Wed, 19 Nov 2014 15:43:13 +0000 (17:43 +0200)]
KVM: x86: Move __linearize masking of la into switch

In __linearize there is a check of whether masking of the linear address
is needed.  It occurs immediately after a switch that evaluates the same
condition.  Merge them.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago KVM: x86: Non-canonical access using SS should cause #SS
Nadav Amit [Wed, 19 Nov 2014 15:43:12 +0000 (17:43 +0200)]
KVM: x86: Non-canonical access using SS should cause #SS

When SS is used with a non-canonical address, an #SS exception is generated
on real hardware.  The KVM emulator causes a #GP instead. Fix it to behave
as a real x86 CPU does.
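In sketch form (the seg/ctxt parameters and the emulate_*() helpers follow
the emulator's style but are assumptions here):

  /* A fault on a non-canonical address: the vector depends on which
   * segment was used for the access. */
  if (is_noncanonical_sketch(la))
      return (seg == VCPU_SREG_SS) ? emulate_ss(ctxt, 0)
                                   : emulate_gp(ctxt, 0);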

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago KVM: x86: Perform limit checks when assigning EIP
Nadav Amit [Wed, 19 Nov 2014 15:43:11 +0000 (17:43 +0200)]
KVM: x86: Perform limit checks when assigning EIP

If a branch (e.g., jmp, ret) causes a limit violation, since the target IP >
limit, the #GP exception occurs before the branch.  In other words, the RIP
pushed on the stack should be that of the branch and not that of the target.

To do so, we can call __linearize with the new EIP, which also saves us the
code that performs the canonical address checks. In the case of assigning an
EIP >= 2^32 (when switching cs.l), we are also safe, as __linearize will
check that the new EIP does not exceed the limit and will trigger #GP(0)
otherwise.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago KVM: x86: Emulator performs privilege checks on __linearize
Nadav Amit [Wed, 19 Nov 2014 15:43:10 +0000 (17:43 +0200)]
KVM: x86: Emulator performs privilege checks on __linearize

When a segment is accessed, real hardware does not perform any privilege
level checks.  In contrast, the KVM emulator does. This causes some
discrepancies from real hardware. For instance, reading from a readable
code segment may fail due to incorrect segment checks. In addition, it
introduces unnecessary overhead.

To reference Intel SDM 5.5 ("Privilege Levels"): "Privilege levels are checked
when the segment selector of a segment descriptor is loaded into a segment
register." The SDM never mentions privilege level checks during memory access,
except for loading far pointers in section 5.10 ("Pointer Validation"). Those
are actually segment selector loads and are emulated similarly (i.e.,
regardless of the __linearize checks).

This behavior was also checked using sysexit. A data-segment whose DPL=0 was
loaded, and after sysexit (CPL=3) it is still accessible.

Therefore, all the privilege level checks in __linearize are removed.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago KVM: x86: Stack size is overridden by __linearize
Nadav Amit [Wed, 19 Nov 2014 15:43:09 +0000 (17:43 +0200)]
KVM: x86: Stack size is overridden by __linearize

When performing a segmented read/write in the emulator for stack operations,
it ignores the stack size, and uses ad_bytes as an indication of the pointer
size. As a result, a wrong address may be accessed.

To fix this behavior, we can remove the masking of the address in
__linearize and perform it beforehand.  It is already done for the operands
(so currently it is inefficiently done twice). It is missing in two cases:
1. When using rip_relative
2. In fetch_bit_operand, which changes the address.

This patch masks the address on these two occasions, and removes the masking
from __linearize.

Note that it does not mask EIP during fetch: in protected/legacy mode, a
code fetch when RIP >= 2^32 should result in #GP, not a wrap-around.
Since we make limit checks within __linearize, this is the expected
behavior.

Partial revert of commit 518547b32ab4 (KVM: x86: Emulator does not
calculate address correctly, 2014-09-30).
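
A sketch of the caller-side masking (illustrative helper, not the
exact kernel code):

    /* Mask an effective address down to the current address or stack
     * size: 2, 4 or 8 bytes.  Full-width addresses pass through. */
    static inline unsigned long mask_ea(unsigned int size_bytes,
                                        unsigned long ea)
    {
            if (size_bytes == sizeof(unsigned long))
                    return ea;
            return ea & ((1UL << (size_bytes * 8)) - 1);
    }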

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years agoKVM: x86: Revert NoBigReal patch in the emulator
Nadav Amit [Wed, 19 Nov 2014 15:43:08 +0000 (17:43 +0200)]
KVM: x86: Revert NoBigReal patch in the emulator

Commit 10e38fc7cab6 ("KVM: x86: Emulator flag for instruction that only
support 16-bit addresses in real mode") introduced NoBigReal for
instructions such as MONITOR.  Apparently, the Intel SDM description that
led to this patch is misleading.  Since no instruction is using NoBigReal,
it is safe to remove it until we fully understand what the SDM means.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years agokvm: x86: vmx: remove MMIO_MAX_GEN
Tiejun Chen [Tue, 18 Nov 2014 09:14:38 +0000 (17:14 +0800)]
kvm: x86: vmx: remove MMIO_MAX_GEN

MMIO_MAX_GEN is the same as MMIO_GEN_MASK.  Use only one.

Signed-off-by: Tiejun Chen <tiejun.chen@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years agokvm: x86: vmx: cleanup handle_ept_violation
Tiejun Chen [Tue, 18 Nov 2014 09:12:56 +0000 (17:12 +0800)]
kvm: x86: vmx: cleanup handle_ept_violation

Just use PFERR_{FETCH, PRESENT, WRITE}_MASK directly
inside handle_ept_violation() for slightly better code.
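
Roughly, the relevant exit-qualification bits map straight onto
page-fault error-code bits; a sketch of the translation (bit layout
per the SDM, mask names from the KVM sources):

    /* EPT violation exit qualification: bit 1 = write access,
     * bit 2 = instruction fetch, bit 3 = EPT entry was readable.
     * PFERR_PRESENT_MASK is bit 0, PFERR_WRITE_MASK bit 1 and
     * PFERR_FETCH_MASK bit 4, so two shifts line everything up. */
    error_code  = exit_qualification & PFERR_WRITE_MASK;
    error_code |= (exit_qualification << 2) & PFERR_FETCH_MASK;
    error_code |= (exit_qualification >> 3) & PFERR_PRESENT_MASK;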

Signed-off-by: Tiejun Chen <tiejun.chen@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years agoKVM: x86: Fix lost interrupt on irr_pending race
Nadav Amit [Sun, 16 Nov 2014 21:49:07 +0000 (23:49 +0200)]
KVM: x86: Fix lost interrupt on irr_pending race

apic_find_highest_irr assumes irr_pending is set if any vector in APIC_IRR is
set.  If this assumption is broken and apicv is disabled, the injection of
interrupts may be deferred until another interrupt is delivered to the guest.
Ultimately, if no other interrupt should be injected to that vCPU, the pending
interrupt may be lost.

commit 56cc2406d68c ("KVM: nVMX: fix "acknowledge interrupt on exit" when APICv
is in use") changed the behavior of apic_clear_irr so that irr_pending is
updated only after the APIC_IRR vector has been cleared.  After this
commit, if apic_set_irr and apic_clear_irr run simultaneously, a race may
occur, leaving an APIC_IRR vector set while irr_pending is cleared.  In
the following example, assume a single vector is set in IRR prior to
calling apic_clear_irr:

     apic_set_irr                    apic_clear_irr
     ------------                    --------------
     apic->irr_pending = true;
                                     apic_clear_vector(...);
                                     vec = apic_search_irr(apic);
                                     // => vec == -1
     apic_set_vector(...);
                                     apic->irr_pending = (vec != -1);
                                     // => apic->irr_pending == false

Nonetheless, it appears the race might even have occurred prior to this commit:

     apic_set_irr                    apic_clear_irr
     ------------                    --------------
     apic->irr_pending = true;
                                     apic->irr_pending = false;
                                     apic_clear_vector(...);
                                     if (apic_search_irr(apic) != -1)
                                             apic->irr_pending = true;
                                     // => apic->irr_pending == false
     apic_set_vector(...);

Fix this issue by:
1. Restoring the previous behavior of apic_clear_irr: clear irr_pending,
   call apic_clear_vector, and then, if APIC_IRR is non-zero, set
   irr_pending.
2. In apic_set_irr: first call apic_set_vector, then set irr_pending
   (a sketch follows below).
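
In sketch form, the resulting ordering (simplified; the real
functions also handle the APICv case, which is omitted here):

    static inline void apic_set_irr(int vec, struct kvm_lapic *apic)
    {
            apic_set_vector(vec, apic->regs + APIC_IRR);
            /* Only flag the interrupt once it is visible in IRR, so a
             * concurrent apic_clear_irr cannot end up with a vector
             * set in IRR but irr_pending == false. */
            apic->irr_pending = true;
    }

    static inline void apic_clear_irr(int vec, struct kvm_lapic *apic)
    {
            apic->irr_pending = false;
            apic_clear_vector(vec, apic->regs + APIC_IRR);
            /* Re-set the flag if any vector is still pending. */
            if (apic_search_irr(apic) != -1)
                    apic->irr_pending = true;
    }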

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years agoKVM: compute correct map even if all APICs are software disabled
Paolo Bonzini [Thu, 6 Nov 2014 09:51:45 +0000 (10:51 +0100)]
KVM: compute correct map even if all APICs are software disabled

Logical destination mode can be used to send NMI IPIs even when all
APICs are software disabled, so if all APICs are software disabled we
should still look at the DFRs.

Ideally the DFRs would then all be the same, whether or not some APICs
are software disabled; however, the SDM does not guarantee this, so
tweak the logic as follows:

- if one APIC is software enabled and has LDR != 0, use that one to
  build the map.  This picks the right DFR in case an OS is only setting
  it for the software-enabled APICs, or in case an OS is using logical
  addressing on some APICs while leaving the rest in reset state (using
  LDR was suggested by Radim).

- if all APICs are software disabled, pick a random one to build the
  map.  We use the last one with LDR != 0 for simplicity.
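
A sketch of the selection (illustrative; the helper names only
approximate the lapic code):

    /* Choose the APIC whose DFR defines the logical map: prefer a
     * software-enabled APIC with LDR != 0, else fall back to any
     * APIC with LDR != 0. */
    struct kvm_lapic *ldr_apic = NULL;

    kvm_for_each_vcpu(i, vcpu, kvm) {
            struct kvm_lapic *apic = vcpu->arch.apic;

            if (kvm_apic_get_reg(apic, APIC_LDR) == 0)
                    continue;
            ldr_apic = apic;
            if (kvm_apic_sw_enabled(apic))
                    break;          /* an enabled APIC wins outright */
    }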

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years agoKVM: x86: Software disabled APIC should still deliver NMIs
Nadav Amit [Sun, 2 Nov 2014 09:54:54 +0000 (11:54 +0200)]
KVM: x86: Software disabled APIC should still deliver NMIs

Currently, the APIC logical map does not consider VCPUs whose local APIC
is software disabled.  However, NMIs, INIT, etc. should still be delivered
to such VCPUs.

To address this issue, first determine the APIC mode, and only then
construct the logical map, considering all VCPUs.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years agokvm: simplify update_memslots invocation
Paolo Bonzini [Fri, 14 Nov 2014 09:55:31 +0000 (10:55 +0100)]
kvm: simplify update_memslots invocation

The update_memslots invocation is only needed in one case.  Make
the code clearer by moving it to __kvm_set_memory_region, and
removing the wrapper around insert_memslot.

Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years agokvm: commonize allocation of the new memory slots
Paolo Bonzini [Fri, 14 Nov 2014 09:46:45 +0000 (10:46 +0100)]
kvm: commonize allocation of the new memory slots

The two kmemdup invocations can be unified.  I find that the new
placement of the comment makes it easier to see what happens.

Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years agokvm: memslots: track id_to_index changes during the insertion sort
Paolo Bonzini [Fri, 14 Nov 2014 09:22:07 +0000 (10:22 +0100)]
kvm: memslots: track id_to_index changes during the insertion sort

This completes the optimization from the previous patch, by
removing the KVM_MEM_SLOTS_NUM-iteration loop from insert_memslot.

Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years agokvm: memslots: replace heap sort with an insertion sort pass
Igor Mammedov [Thu, 13 Nov 2014 23:00:13 +0000 (23:00 +0000)]
kvm: memslots: replace heap sort with an insertion sort pass

memslots is a sorted array.  When a slot is changed, heapsort (lib/sort.c)
would take O(n log n) time to update it; an optimized insertion sort will
only cost O(n) on an array with just one item out of order.

Replace sort() with a custom sort that takes advantage of the memslots
usage pattern and the known position of the changed slot.

Performance change for 128 memslot insertions with gradually increasing
size (the worst case):

              heap sort    custom sort
      max:    249747       2500 cycles

i.e. the custom sort algorithm takes ~98% less time than the original
update.
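
A self-contained sketch of the idea on a plain array (the kernel
version also keeps id_to_index in sync while it shifts entries;
memslots are kept sorted by npages, largest first):

    struct slot { unsigned long npages; int id; };

    /* slots[i] just changed size; restore descending order by npages
     * with a single pass of shifts: O(n) instead of O(n log n). */
    static void resort_one(struct slot *slots, int n, int i)
    {
            struct slot tmp = slots[i];

            while (i < n - 1 && tmp.npages < slots[i + 1].npages) {
                    slots[i] = slots[i + 1];        /* shift up */
                    i++;
            }
            while (i > 0 && tmp.npages > slots[i - 1].npages) {
                    slots[i] = slots[i - 1];        /* shift down */
                    i--;
            }
            slots[i] = tmp;
    }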

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years agokvm: x86: increase user memory slots to 509
Igor Mammedov [Thu, 6 Nov 2014 15:52:47 +0000 (15:52 +0000)]
kvm: x86: increase user memory slots to 509

With the 3 private slots, this gives us 512 slots total.  The motivation
is to support more memory hotplug slots in addition to assigned devices,
where 1 slot is used by each hotplugged memory stick.  This allows up to
256 hotplug memory slots while leaving 253 slots for assigned devices and
other users.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years agokvm: svm: move WARN_ON in svm_adjust_tsc_offset
Chris J Arges [Thu, 13 Nov 2014 03:00:39 +0000 (21:00 -0600)]
kvm: svm: move WARN_ON in svm_adjust_tsc_offset

When running the tsc_adjust kvm-unit-test on an AMD processor with the
IA32_TSC_ADJUST feature enabled, the WARN_ON in svm_adjust_tsc_offset can be
triggered. This WARN_ON checks for a negative adjustment in case __scale_tsc
is called; however it may trigger unnecessary warnings.

This patch moves the WARN_ON so that it triggers only if __scale_tsc will
actually be called from svm_adjust_tsc_offset.  In addition, make adj in
kvm_set_msr_common an s64, since it can hold signed values.
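
Schematically, the guarded path looks like this (simplified from
svm.c; details may differ):

    if (host) {
            /* Scaling only does real work when a non-default TSC
             * ratio is in use, so only then is a negative adjustment
             * worth warning about. */
            if (svm->tsc_ratio != TSC_RATIO_DEFAULT)
                    WARN_ON(adjustment < 0);
            adjustment = svm_scale_tsc(vcpu, (u64)adjustment);
    }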

Signed-off-by: Chris J Arges <chris.j.arges@canonical.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years agox86, kvm, vmx: Don't set LOAD_IA32_EFER when host and guest match
Andy Lutomirski [Mon, 10 Nov 2014 19:19:15 +0000 (11:19 -0800)]
x86, kvm, vmx: Don't set LOAD_IA32_EFER when host and guest match

There's nothing to switch if the host and guest values are the same.
I am unable to find evidence that this makes any difference
whatsoever.
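
The gist, as a rough sketch (hypothetical condition; the real check
sits where KVM chooses between VM_ENTRY_LOAD_IA32_EFER and the MSR
autoload list):

    /* Nothing to switch: the guest can run on the host's EFER, so
     * skip both the VMCS EFER controls and the autoload entry. */
    if (guest_efer == host_efer)
            return false;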

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
[I could see a difference on Nehalem.  From 5 runs:

 userspace exit, guest!=host   12200 11772 12130 12164 12327
 userspace exit, guest=host    11983 11780 11920 11919 12040
 lightweight exit, guest!=host  3214  3220  3238  3218  3337
 lightweight exit, guest=host   3178  3193  3193  3187  3220

 This passes the t-test with 99% confidence for userspace exit,
 98.5% confidence for lightweight exit. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years agox86, kvm, vmx: Always use LOAD_IA32_EFER if available
Andy Lutomirski [Sat, 8 Nov 2014 02:25:18 +0000 (18:25 -0800)]
x86, kvm, vmx: Always use LOAD_IA32_EFER if available

At least on Sandy Bridge, letting the CPU switch IA32_EFER is much
faster than switching it manually.

I benchmarked this using the vmexit kvm-unit-test (single run, but
GOAL multiplied by 5 to do more iterations):

Test                                  Before      After    Change
cpuid                                   2000       1932    -3.40%
vmcall                                  1914       1817    -5.07%
mov_from_cr8                              13         13     0.00%
mov_to_cr8                                19         19     0.00%
inl_from_pmtimer                       19164      10619   -44.59%
inl_from_qemu                          15662      10302   -34.22%
inl_from_kernel                         3916       3802    -2.91%
outl_to_kernel                          2230       2194    -1.61%
mov_dr                                   172        176     2.33%
ipi                                (skipped)  (skipped)
ipi+halt                           (skipped)  (skipped)
ple-round-robin                           13         13     0.00%
wr_tsc_adjust_msr                       1920       1845    -3.91%
rd_tsc_adjust_msr                       1892       1814    -4.12%
mmio-no-eventfd:pci-mem                16394      11165   -31.90%
mmio-wildcard-eventfd:pci-mem           4607       4645     0.82%
mmio-datamatch-eventfd:pci-mem          4601       4610     0.20%
portio-no-eventfd:pci-io               11507       7942   -30.98%
portio-wildcard-eventfd:pci-io          2239       2225    -0.63%
portio-datamatch-eventfd:pci-io         2250       2234    -0.71%

I haven't explicitly computed the significance of these numbers,
but this isn't subtle.

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
[The results were reproducible on all of Nehalem, Sandy Bridge and
 Ivy Bridge.  The slowness of manual switching is because writing
 to EFER with WRMSR triggers a TLB flush, even if the only bit you're
 touching is SCE (so the page table format is not affected).  Doing
 the write as part of vmentry/vmexit, instead, does not flush the TLB,
 probably because all processors that have EPT also have VPID. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years agoKVM: x86: fix warning on 32-bit compilation
Paolo Bonzini [Mon, 10 Nov 2014 12:53:25 +0000 (13:53 +0100)]
KVM: x86: fix warning on 32-bit compilation

PCIDs are only supported in 64-bit mode.  No need to clear bit 63
of CR3 unless the host is 64-bit.

Reported by Fengguang Wu's autobuilder.
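
One way to express the fix (sketch; CR3_PCID_INVD is bit 63, which a
32-bit unsigned long cannot even represent):

    #ifdef CONFIG_X86_64
            cr3 &= ~CR3_PCID_INVD;  /* PCID invalidate-disable bit */
    #endif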

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years agokvm: x86: add trace event for pvclock updates
David Matlack [Wed, 5 Nov 2014 19:46:42 +0000 (11:46 -0800)]
kvm: x86: add trace event for pvclock updates

The new trace event records:
  * the id of vcpu being updated
  * the pvclock_vcpu_time_info struct being written to guest memory

This is useful for debugging pvclock bugs, such as the bug fixed by
"[PATCH] kvm: x86: Fix kvm clock versioning.".

Signed-off-by: David Matlack <dmatlack@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years agokvm: x86: Fix kvm clock versioning.
Owen Hofmann [Tue, 4 Nov 2014 00:57:18 +0000 (16:57 -0800)]
kvm: x86: Fix kvm clock versioning.

kvm updates the version number of the guest's paravirt clock structure by
incrementing the version of its private copy.  It does not read the guest
version, so it will write version = 2 in the first update for every new
VM, including after restoring a saved state.  If guest state is saved
while the guest is reading the clock, the restored guest could read and
accept struct fields and a guest TSC from two different updates.  This
changes the code to increment the guest version and write it back.
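
In sketch form (per the pvclock protocol, version is odd while the
structure is being updated and even when it is consistent; the fix
continues from the version the guest currently sees):

    struct pvclock_vcpu_time_info guest_hv_clock;

    if (kvm_read_guest_cached(v->kvm, &vcpu->pv_time,
                              &guest_hv_clock, sizeof(guest_hv_clock)))
            return 0;

    /* Continue from the guest-visible version: odd while we write,
     * even again once the update is complete. */
    vcpu->hv_clock.version = guest_hv_clock.version + 1;
    /* ...write version, barrier, write fields, barrier, version+1. */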

Signed-off-by: Owen Hofmann <osh@google.com>
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years agoKVM: x86: MOVNTI emulation min opsize is not respected
Nadav Amit [Sun, 2 Nov 2014 09:55:00 +0000 (11:55 +0200)]
KVM: x86: MOVNTI emulation min opsize is not respected

Commit 3b32004a66e9 ("KVM: x86: movnti minimum op size of 32-bit is not
kept") did not fully fix the minimum operand size of MOVNTI emulation:
MOVNTI may still be mistakenly performed using a 16-bit opsize.

This patch adds a No16 flag to mark instructions that do not support a
16-bit operand size.
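
The decode-side effect, sketched (flag name from the patch; the
surrounding operand-size logic is simplified):

    /* Instruction cannot use a 16-bit operand size: promote it. */
    if ((ctxt->d & No16) && ctxt->op_bytes == 2)
            ctxt->op_bytes = 4;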

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years agoKVM: x86: update masterclock values on TSC writes
Marcelo Tosatti [Tue, 4 Nov 2014 23:30:44 +0000 (21:30 -0200)]
KVM: x86: update masterclock values on TSC writes

When the guest writes to the TSC, the masterclock TSC copy must be
updated as well along with the TSC_OFFSET update, otherwise a negative
tsc_timestamp is calculated at kvm_guest_time_update.

Once "if (!vcpus_matched && ka->use_master_clock)" is simplified to
"if (ka->use_master_clock)", the corresponding "if (!ka->use_master_clock)"
becomes redundant, so remove the do_request boolean and collapse
everything into a single condition.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years agoKVM: x86: Return UNHANDLABLE on unsupported SYSENTER
Nadav Amit [Sun, 2 Nov 2014 09:55:01 +0000 (11:55 +0200)]
KVM: x86: Return UNHANDLABLE on unsupported SYSENTER

Now that KVM injects #UD on an "unhandleable" error, it makes better
sense to return such an error on sysenter instead of directly injecting
#UD into the guest.  This makes it easier to track the unhandleable cases
the emulator does not support.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years agoKVM: x86: Warn on APIC base relocation
Nadav Amit [Sun, 2 Nov 2014 09:54:59 +0000 (11:54 +0200)]
KVM: x86: Warn on APIC base relocation

APIC base relocation is unsupported by KVM.  If anyone uses it, the least
we should do is report a warning in the hypervisor.

Note that kvm-unit-tests uses this feature for some reason, so running the
tests triggers the warning.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years agoKVM: x86: Emulator mis-decodes VEX instructions on real-mode
Nadav Amit [Sun, 2 Nov 2014 09:54:58 +0000 (11:54 +0200)]
KVM: x86: Emulator mis-decodes VEX instructions on real-mode

Commit 7fe864dc942c (KVM: x86: Mark VEX-prefix instructions emulation as
unimplemented, 2014-06-02) marked VEX instructions as such in protected
mode.  VEX-prefix instructions are not supported in real mode and VM86
either, and should cause #UD there instead of being decoded as LES/LDS.

Fix this behavior to be consistent with real hardware.
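
Sketched decode logic (illustrative, with a hypothetical peek_modrm()
helper; per Paolo's note below, mod == 3 is what separates a VEX
prefix from a legal LES/LDS, since LES/LDS require a memory operand):

    case 0xc4:      /* 3-byte VEX, or LES */
    case 0xc5:      /* 2-byte VEX, or LDS */
            if ((peek_modrm(ctxt) & 0xc0) == 0xc0)
                    /* mod == 3: can only be VEX, which the emulator
                     * does not implement, so the guest gets #UD in
                     * real mode and VM86 just as in protected mode. */
                    return EMULATION_FAILED;
            break;  /* otherwise decode as LES/LDS as before */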

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
[Check for mod == 3, rather than 2 or 3. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years agoKVM: x86: Remove redundant and incorrect cpl check on task-switch
Nadav Amit [Sun, 2 Nov 2014 09:54:57 +0000 (11:54 +0200)]
KVM: x86: Remove redundant and incorrect cpl check on task-switch

Task-switch emulation checks the privilege level prior to performing the
task switch.  This check is incorrect in the case of task gates, in which
the tss.dpl is ignored, and can cause superfluous exceptions.  Moreover,
this check is unnecessary, since the CPU checks the privilege levels prior
to exiting.
Intel SDM 25.4.2 says "If CALL or JMP accesses a TSS descriptor directly
outside IA-32e mode, privilege levels are checked on the TSS descriptor" prior
to exiting.  AMD 15.14.1 says "The intercept is checked before the task switch
takes place but after the incoming TSS and task gate (if one was involved) have
been checked for correctness."

This patch removes the CPL checks for CALL and JMP.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years agoKVM: x86: Inject #GP when loading system segments with non-canonical base
Nadav Amit [Sun, 2 Nov 2014 09:54:56 +0000 (11:54 +0200)]
KVM: x86: Inject #GP when loading system segments with non-canonical base

When emulating LTR/LDTR/LGDT/LIDT, #GP should be injected if the base is
non-canonical. Otherwise, VM-entry will fail.
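
As a sketch, for the LGDT/LIDT case (close to the emulator code, but
simplified):

    if (ctxt->mode == X86EMUL_MODE_PROT64 &&
        is_noncanonical_address(desc_ptr.address))
            return emulate_gp(ctxt, 0);     /* #GP(0) before loading */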

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years agoKVM: x86: Combine the lgdt and lidt emulation logic
Nadav Amit [Sun, 2 Nov 2014 09:54:55 +0000 (11:54 +0200)]
KVM: x86: Combine the lgdt and lidt emulation logic

LGDT and LIDT emulation logic is almost identical. Merge the logic into a
single point to avoid redundancy. This will be used by the next patch that
will ensure the bases of the loaded GDTR and IDTR are canonical.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>