openwrt/staging/blogic.git
10 years ago  arm: kvm: STRICT_MM_TYPECHECKS fix for user_mem_abort
Steve Capper [Tue, 14 Oct 2014 14:02:15 +0000 (15:02 +0100)]
arm: kvm: STRICT_MM_TYPECHECKS fix for user_mem_abort

Commit:
b886576 ARM: KVM: user_mem_abort: support stage 2 MMIO page mapping

introduced some code in user_mem_abort that failed to compile if
STRICT_MM_TYPECHECKS was enabled.

This patch fixes up the failing comparison.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Reviewed-by: Kim Phillips <kim.phillips@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
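
For context, STRICT_MM_TYPECHECKS makes the arm page table entry types one-member structs instead of plain integers, so entries have to be compared through pte_val(); a minimal sketch of the pattern (not the actual diff):

    /* On arm, pteval_t is u32 (u64 with LPAE); under
     * STRICT_MM_TYPECHECKS, pte_t wraps it in a struct: */
    #ifdef STRICT_MM_TYPECHECKS
    typedef struct { pteval_t pte; } pte_t;
    #define pte_val(x)      ((x).pte)
    #else
    typedef pteval_t pte_t;
    #define pte_val(x)      (x)
    #endif

    static bool ptes_equal(pte_t a, pte_t b)
    {
            /* 'return a == b;' only compiles in the non-strict case */
            return pte_val(a) == pte_val(b);
    }
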
10 years ago  arm/arm64: KVM: Ensure memslots are within KVM_PHYS_SIZE
Christoffer Dall [Fri, 10 Oct 2014 10:14:29 +0000 (12:14 +0200)]
arm/arm64: KVM: Ensure memslots are within KVM_PHYS_SIZE

When creating or moving a memslot, make sure the IPA space is within the
addressable range of the guest.  Otherwise, user space can create too
large a memslot and KVM would try to access potentially unallocated page
table entries when inserting entries in the Stage-2 page tables.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
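
The check amounts to rejecting any slot whose last page would land above the IPA space; roughly (a sketch, not necessarily the exact diff):

    /* in kvm_arch_prepare_memory_region(): */
    if ((memslot->base_gfn + memslot->npages) >
        (KVM_PHYS_SIZE >> PAGE_SHIFT))
            return -EFAULT;
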
10 years ago  arm64: KVM: Implement 48 VA support for KVM EL2 and Stage-2
Christoffer Dall [Fri, 10 Oct 2014 10:14:28 +0000 (12:14 +0200)]
arm64: KVM: Implement 48 VA support for KVM EL2 and Stage-2

This patch adds the necessary support for all host kernel PGSIZE and
VA_SPACE configuration options for both EL2 and the Stage-2 page tables.

However, for 40bit and 42bit PARange systems, the architecture mandates
that VTCR_EL2.SL0 is maximum 1, resulting in fewer levels of stage-2
page tables than levels of host kernel page tables.  At the same time,
for systems with a PARange > 42bit, we limit the IPA range by always
setting VTCR_EL2.T0SZ to 24.

To solve the situation with different levels of page tables for Stage-2
translation than the host kernel page tables, we allocate a dummy PGD
with pointers to our actual initial level Stage-2 page table, in order
for us to reuse the kernel pgtable manipulation primitives.  Reproducing
all these in KVM does not look pretty and unnecessarily complicates the
32-bit side.

Systems with a PARange < 40bits are not yet supported.

 [ I have reworked this patch from its original form submitted by
   Jungseok to take the architecture constraints into consideration.
   There were too many changes from the original patch for me to
   preserve the authorship.  Thanks to Catalin Marinas for his help in
   figuring out a good solution to this challenge.  I have also fixed
   various bugs and missing error code handling from the original
   patch. - Christoffer ]

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
10 years ago  arm/arm64: KVM: map MMIO regions at creation time
Ard Biesheuvel [Fri, 10 Oct 2014 15:00:32 +0000 (17:00 +0200)]
arm/arm64: KVM: map MMIO regions at creation time

There is really no point in faulting in memory regions page by page
if they are not backed by demand paged system RAM but by a linear
passthrough mapping of a host MMIO region. So instead, detect such
regions at setup time and install the mappings for the backing memory
all at once.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
10 years ago  arm64: kvm: define PAGE_S2_DEVICE as read-only by default
Ard Biesheuvel [Wed, 17 Sep 2014 21:56:20 +0000 (14:56 -0700)]
arm64: kvm: define PAGE_S2_DEVICE as read-only by default

Now that we support read-only memslots, we need to make sure that
pass-through device mappings are not mapped writable if the guest
has requested them to be read-only. The existing implementation
already honours this by calling kvm_set_s2pte_writable() on the new
pte in case of writable mappings, so all we need to do is define
the default pgprot_t value used for devices to be PTE_S2_RDONLY.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
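
On arm64 this boils down to swapping the stage-2 permission bits in the device pgprot; a hedged before/after sketch:

    /* before */
    #define PAGE_S2_DEVICE  __pgprot(PROT_DEFAULT | PTE_ATTRINDX(MT_DEVICE_nGnRE) | \
                                     PTE_S2_RDWR | PTE_UXN)
    /* after */
    #define PAGE_S2_DEVICE  __pgprot(PROT_DEFAULT | PTE_ATTRINDX(MT_DEVICE_nGnRE) | \
                                     PTE_S2_RDONLY | PTE_UXN)
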
10 years ago  ARM: kvm: define PAGE_S2_DEVICE as read-only by default
Ard Biesheuvel [Wed, 17 Sep 2014 21:56:19 +0000 (14:56 -0700)]
ARM: kvm: define PAGE_S2_DEVICE as read-only by default

Now that we support read-only memslots, we need to make sure that
pass-through device mappings are not mapped writable if the guest
has requested them to be read-only. The existing implementation
already honours this by calling kvm_set_s2pte_writable() on the new
pte in case of writable mappings, so all we need to do is define
the default pgprot_t value used for devices to be PTE_S2_RDONLY.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
10 years ago  arm/arm64: KVM: add 'writable' parameter to kvm_phys_addr_ioremap
Ard Biesheuvel [Wed, 17 Sep 2014 21:56:18 +0000 (14:56 -0700)]
arm/arm64: KVM: add 'writable' parameter to kvm_phys_addr_ioremap

Add support for read-only MMIO passthrough mappings by adding a
'writable' parameter to kvm_phys_addr_ioremap. For the moment,
mappings will be read-write even if 'writable' is false, but once
the definition of PAGE_S2_DEVICE gets changed, those mappings will
be created read-only.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
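
A sketch of the new prototype and of how the flag is meant to be consumed (helper names as described above):

    int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
                              phys_addr_t pa, unsigned long size, bool writable);

    /* inside the mapping loop: */
    pte_t pte = pfn_pte(pfn, PAGE_S2_DEVICE);

    if (writable)
            kvm_set_s2pte_writable(&pte);
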
10 years ago  arm/arm64: KVM: fix potential NULL dereference in user_mem_abort()
Ard Biesheuvel [Wed, 17 Sep 2014 21:56:17 +0000 (14:56 -0700)]
arm/arm64: KVM: fix potential NULL dereference in user_mem_abort()

Handle the potential NULL return value of find_vma_intersection()
before dereferencing it.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
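
The shape of the fix is roughly this (error message and locking details are illustrative):

    vma = find_vma_intersection(current->mm, hva, hva + 1);
    if (unlikely(!vma)) {
            kvm_err("Failed to find VMA for hva 0x%lx\n", hva);
            up_read(&current->mm->mmap_sem);
            return -EFAULT;
    }
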
10 years ago  arm/arm64: KVM: use __GFP_ZERO not memset() to get zeroed pages
Ard Biesheuvel [Wed, 17 Sep 2014 21:56:16 +0000 (14:56 -0700)]
arm/arm64: KVM: use __GFP_ZERO not memset() to get zeroed pages

Pass __GFP_ZERO to __get_free_pages() instead of calling memset()
explicitly.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
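
A sketch of the pattern being replaced (variable names are illustrative):

    /* before */
    hyp_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL, hyp_pgd_order);
    memset(hyp_pgd, 0, PTRS_PER_PGD * sizeof(pgd_t));

    /* after */
    hyp_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, hyp_pgd_order);
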
10 years ago  ARM: KVM: fix vgic-disabled build
Arnd Bergmann [Tue, 30 Sep 2014 11:38:20 +0000 (13:38 +0200)]
ARM: KVM: fix vgic-disabled build

The vgic code can be disabled in Kconfig and there are dummy implementations
of most of the provided API functions for the disabled case.

However, the newly introduced kvm_vgic_destroy/kvm_vgic_vcpu_destroy
functions are lacking those dummies, resulting in this build error:

arch/arm/kvm/arm.c: In function 'kvm_arch_destroy_vm':
arch/arm/kvm/arm.c:165:2: error: implicit declaration of function 'kvm_vgic_destroy' [-Werror=implicit-function-declaration]
  kvm_vgic_destroy(kvm);
  ^
arch/arm/kvm/arm.c: In function 'kvm_arch_vcpu_free':
arch/arm/kvm/arm.c:248:2: error: implicit declaration of function 'kvm_vgic_vcpu_destroy' [-Werror=implicit-function-declaration]
  kvm_vgic_vcpu_destroy(vcpu);
  ^

This adds two inline helpers to get it to build again in this configuration.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: c1bfb577add ("arm/arm64: KVM: vgic: switch to dynamic allocation")
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
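
The added stubs are of this form (sketch):

    #else /* !CONFIG_KVM_ARM_VGIC */
    static inline void kvm_vgic_destroy(struct kvm *kvm)
    {
    }

    static inline void kvm_vgic_vcpu_destroy(struct kvm_vcpu *vcpu)
    {
    }
    #endif
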
10 years ago  arm: kvm: fix CPU hotplug
Vladimir Murzin [Mon, 22 Sep 2014 14:52:48 +0000 (15:52 +0100)]
arm: kvm: fix CPU hotplug

On some platforms with no power management capabilities, the hotplug
implementation is allowed to return from a smp_ops.cpu_die() call as a
function return. Upon a CPU onlining event, the KVM CPU notifier tries
to reinstall the hyp stub, which fails on platforms where no reset took
place following a hotplug event, with the message:

CPU1: smp_ops.cpu_die() returned, trying to resuscitate
CPU1: Booted secondary processor
Kernel panic - not syncing: unexpected prefetch abort in Hyp mode at: 0x80409540
unexpected data abort in Hyp mode at: 0x80401fe8
unexpected HVC/SVC trap in Hyp mode at: 0x805c6170

since KVM code is trying to reinstall the stub on a system where it is
already configured.

To prevent this issue, this patch adds a check in the KVM hotplug
notifier that detects if the HYP stub really needs re-installing when a
CPU is onlined and skips the installation call if the stub is already in
place, which means that the CPU has not been reset.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
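
A rough sketch of the notifier with the added check (function names follow the arm KVM code of that era, but treat them as assumptions):

    static int hyp_init_cpu_notify(struct notifier_block *self,
                                   unsigned long action, void *cpu)
    {
            switch (action) {
            case CPU_STARTING:
            case CPU_STARTING_FROZEN:
                    /* re-install only if the stub vectors are still in
                     * place, i.e. the CPU was actually reset */
                    if (__hyp_get_vectors() == hyp_default_vectors)
                            cpu_init_hyp_mode(NULL);
                    break;
            }
            return NOTIFY_OK;
    }
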
10 years ago  arm/arm64: KVM: Report correct FSC for unsupported fault types
Christoffer Dall [Fri, 26 Sep 2014 10:29:34 +0000 (12:29 +0200)]
arm/arm64: KVM: Report correct FSC for unsupported fault types

When we catch something that's not a permission fault or a translation
fault, we log the unsupported FSC in the kernel log, but we were masking
off the bottom bits of the FSC, which was not very helpful.

Also correctly report the FSC for data and instruction faults rather
than telling people it was a DFCS, which doesn't exist in the ARM ARM.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
10 years ago  arm/arm64: KVM: Fix VTTBR_BADDR_MASK and pgd alloc
Joel Schopp [Wed, 9 Jul 2014 16:17:04 +0000 (11:17 -0500)]
arm/arm64: KVM: Fix VTTBR_BADDR_MASK and pgd alloc

The current aarch64 calculation for VTTBR_BADDR_MASK masks only 39 bits
and not all the bits in the PA range. This is clearly a bug that
manifests itself on systems that allocate memory in the higher address
space range.

 [ Modified from Joel's original patch to be based on PHYS_MASK_SHIFT
   instead of a hard-coded value and to move the alignment check of the
   allocation to mmu.c.  Also added a comment explaining why we hardcode
   the IPA range and changed the stage-2 pgd allocation to be based on
   the 40 bit IPA range instead of the maximum possible 48 bit PA range.
   - Christoffer ]

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Joel Schopp <joel.schopp@amd.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
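
The mask is now derived from PHYS_MASK_SHIFT rather than a hard-coded width; roughly:

    #define VTTBR_BADDR_SHIFT (VTTBR_X - 1)
    #define VTTBR_BADDR_MASK  (((1LLU << (PHYS_MASK_SHIFT - VTTBR_X)) - 1) \
                               << VTTBR_BADDR_SHIFT)
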
10 years ago  arm/arm64: KVM: Fix set_clear_sgi_pend_reg offset
Christoffer Dall [Thu, 25 Sep 2014 16:41:07 +0000 (18:41 +0200)]
arm/arm64: KVM: Fix set_clear_sgi_pend_reg offset

The sgi values calculated in read_set_clear_sgi_pend_reg() and
write_set_clear_sgi_pend_reg() were horribly incorrectly multiplied by 4
with catastrophic results in that subfunctions ended up overwriting
memory not allocated for the expected purpose.

This showed up as bugs in kfree() and the kernel complaining a lot if
you turn on memory debugging.

This addresses: http://marc.info/?l=kvm&m=141164910007868&w=2

Reported-by: Shannon Zhao <zhaoshenglong@huawei.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
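
A sketch of the faulty scaling and its fix:

    /* before: the byte offset was scaled a second time,
     * running past the allocated bitmaps */
    min_sgi = (offset & ~0x3) * 4;
    max_sgi = min_sgi + 3;

    /* after: the register byte offset already selects the SGI */
    min_sgi = (offset & ~0x3);
    max_sgi = min_sgi + 3;
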
10 years ago  arm/arm64: KVM: vgic: make number of irqs a configurable attribute
Marc Zyngier [Tue, 8 Jul 2014 11:09:07 +0000 (12:09 +0100)]
arm/arm64: KVM: vgic: make number of irqs a configurable attribute

In order to make the number of interrupts configurable, use the new
fancy device management API to add KVM_DEV_ARM_VGIC_GRP_NR_IRQS as
a VGIC configurable attribute.

Userspace can now specify the exact size of the GIC (by increments
of 32 interrupts).

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
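
A hedged userspace sketch of sizing the GIC through the new attribute (the device fd comes from KVM_CREATE_DEVICE):

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    static int set_nr_irqs(int vgic_fd)
    {
            __u32 nr_irqs = 128;    /* must be a multiple of 32 */
            struct kvm_device_attr attr = {
                    .group = KVM_DEV_ARM_VGIC_GRP_NR_IRQS,
                    .attr  = 0,
                    .addr  = (__u64)(unsigned long)&nr_irqs,
            };

            return ioctl(vgic_fd, KVM_SET_DEVICE_ATTR, &attr);
    }
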
10 years ago  arm/arm64: KVM: vgic: delay vgic allocation until init time
Marc Zyngier [Tue, 8 Jul 2014 11:09:06 +0000 (12:09 +0100)]
arm/arm64: KVM: vgic: delay vgic allocation until init time

It is now quite easy to delay the allocation of the vgic tables
until we actually require it to be up and running (when the first
vcpu is kicking around, or someone tries to access the GIC registers).

This allows us to allocate memory for the exact number of CPUs we
have. As nobody configures the number of interrupts just yet,
use a fallback to VGIC_NR_IRQS_LEGACY.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
10 years ago  arm/arm64: KVM: vgic: kill VGIC_NR_IRQS
Marc Zyngier [Tue, 8 Jul 2014 11:09:05 +0000 (12:09 +0100)]
arm/arm64: KVM: vgic: kill VGIC_NR_IRQS

Nuke VGIC_NR_IRQS entirely, now that the distributor instance
contains the number of IRQs allocated to this GIC.

Also add VGIC_NR_IRQS_LEGACY to preserve the current API.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
10 years ago  arm/arm64: KVM: vgic: handle out-of-range MMIO accesses
Marc Zyngier [Tue, 8 Jul 2014 11:09:04 +0000 (12:09 +0100)]
arm/arm64: KVM: vgic: handle out-of-range MMIO accesses

Now that we can (almost) dynamically size the number of interrupts,
we're facing an interesting issue:

We have to evaluate at runtime whether or not an access hits a valid
register, based on the sizing of this particular instance of the
distributor. Furthermore, the GIC spec says that accessing a reserved
register is RAZ/WI.

For this, add a new field to our range structure, indicating the number
of bits a single interrupt uses. That allows us to find out whether or
not the access is in range.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
10 years ago  arm/arm64: KVM: vgic: kill VGIC_MAX_CPUS
Marc Zyngier [Tue, 8 Jul 2014 11:09:03 +0000 (12:09 +0100)]
arm/arm64: KVM: vgic: kill VGIC_MAX_CPUS

We now have the information about the number of CPU interfaces in
the distributor itself. Let's get rid of VGIC_MAX_CPUS, and just
rely on KVM_MAX_VCPUS where we don't have the choice. Yet.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
10 years ago  arm/arm64: KVM: vgic: Parametrize VGIC_NR_SHARED_IRQS
Marc Zyngier [Tue, 8 Jul 2014 11:09:02 +0000 (12:09 +0100)]
arm/arm64: KVM: vgic: Parametrize VGIC_NR_SHARED_IRQS

Having a dynamic number of supported interrupts means that we
cannot rely on VGIC_NR_SHARED_IRQS being fixed anymore.

Instead, make it take the distributor structure as a parameter,
so it can return the right value.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
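
The helper then looks something like:

    static inline int vgic_nr_shared_irqs(struct vgic_dist *dist)
    {
            return dist->nr_irqs - VGIC_NR_PRIVATE_IRQS;
    }
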
10 years ago  arm/arm64: KVM: vgic: switch to dynamic allocation
Marc Zyngier [Tue, 8 Jul 2014 11:09:01 +0000 (12:09 +0100)]
arm/arm64: KVM: vgic: switch to dynamic allocation

So far, all the VGIC data structures are statically defined by the
*maximum* number of vcpus and interrupts it supports. It means that
we always have to oversize it to cater for the worst case.

Start by changing the data structures to be dynamically sizeable,
and allocate them at runtime.

The sizes are still very static though.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
10 years ago  KVM: ARM: vgic: plug irq injection race
Marc Zyngier [Tue, 8 Jul 2014 11:09:00 +0000 (12:09 +0100)]
KVM: ARM: vgic: plug irq injection race

As it stands, nothing prevents userspace from injecting an interrupt
before the guest's GIC is actually initialized.

This goes unnoticed so far (as everything is pretty much statically
allocated), but ends up exploding in a spectacular way once we switch
to a more dynamic allocation (the GIC data structure isn't there yet).

The fix is to test for the "ready" flag in the VGIC distributor before
trying to inject the interrupt. Note that in order to avoid breaking
userspace, we have to ignore what is essentially an error.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
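
A rough sketch of the resulting injection path (predicate and helper names are assumptions based on the description):

    int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int irq_num,
                            bool level)
    {
            /* silently ignore injections until the distributor is ready,
             * so that existing userspace keeps working */
            if (likely(vgic_initialized(kvm)) &&
                vgic_update_irq_pending(kvm, cpuid, irq_num, level))
                    vgic_kick_vcpus(kvm);

            return 0;
    }
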
10 years ago  arm/arm64: KVM: vgic: Clarify and correct vgic documentation
Christoffer Dall [Sat, 14 Jun 2014 20:34:04 +0000 (22:34 +0200)]
arm/arm64: KVM: vgic: Clarify and correct vgic documentation

The VGIC virtual distributor implementation documentation was written a
very long time ago, before the true nature of the beast had been
partially absorbed into my bloodstream.  Clarify the docs.

Plus, it fixes an actual bug.  ICFRn, pfff.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
10 years ago  arm/arm64: KVM: vgic: Fix SGI writes to GICD_I{CS}PENDR0
Christoffer Dall [Sat, 14 Jun 2014 20:30:45 +0000 (22:30 +0200)]
arm/arm64: KVM: vgic: Fix SGI writes to GICD_I{CS}PENDR0

Writes to GICD_ISPENDR0 and GICD_ICPENDR0 ignore all settings of the
pending state for SGIs.  Make sure the implementation handles this
correctly.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
10 years ago  arm/arm64: KVM: vgic: Improve handling of GICD_I{CS}PENDRn
Christoffer Dall [Sat, 14 Jun 2014 19:54:51 +0000 (21:54 +0200)]
arm/arm64: KVM: vgic: Improve handling of GICD_I{CS}PENDRn

Writes to GICD_ISPENDRn and GICD_ICPENDRn are currently not handled
correctly for level-triggered interrupts.  The spec states that for
level-triggered interrupts, writes to the GICD_ISPENDRn activate the
output of a flip-flop which is in turn or'ed with the actual input
interrupt signal.  Correspondingly, writes to GICD_ICPENDRn simply
deactivate the output of that flip-flop, but do not (of course) affect
the external input signal.  Reads from GICC_IAR will also deactivate the
flip-flop output.

This requires us to track the state of the level-input separately from
the state in the flip-flop.  We therefore introduce two new variables on
the distributor struct to track these two states.  Astute readers may
notice that this is introducing more state than required (because an OR
of the two states gives you the pending state), but the remaining vgic
code uses the pending bitmap for optimized operations to figure out, at
the end of the day, if an interrupt is pending or not on the distributor
side.  Refactoring the code to consider the two state variables in all
the places where we currently access the precomputed pending value did
not look pretty.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
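
A sketch of the resulting pending computation; the field names irq_soft_pend and irq_level are assumptions based on the description above:

    /* the distributor-side pending state becomes the OR of the latched
     * (ISPENDR-written) state and the sampled input level */
    static bool irq_is_pending(struct vgic_dist *dist, int cpuid, int irq)
    {
            return vgic_bitmap_get_irq_val(&dist->irq_soft_pend, cpuid, irq) ||
                   vgic_bitmap_get_irq_val(&dist->irq_level, cpuid, irq);
    }
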
10 years ago  arm/arm64: KVM: vgic: Clear queued flags on unqueue
Christoffer Dall [Sat, 14 Jun 2014 20:37:33 +0000 (22:37 +0200)]
arm/arm64: KVM: vgic: Clear queued flags on unqueue

If we unqueue a level-triggered interrupt completely, and the LR does
not stick around in the active state (and will therefore no longer
generate a maintenance interrupt), then we should clear the queued flag
so that the vgic can actually queue this level-triggered interrupt at a
later time and deal with its pending state then.

Note: This should actually be properly fixed to handle the active state
on the distributor.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
10 years ago  arm/arm64: KVM: Rename irq_active to irq_queued
Christoffer Dall [Mon, 9 Jun 2014 10:55:13 +0000 (12:55 +0200)]
arm/arm64: KVM: Rename irq_active to irq_queued

We have a special bitmap on the distributor struct to keep track of when
level-triggered interrupts are queued on the list registers.  This was
named irq_active, which is confusing, because the active state of an
interrupt as per the GIC spec is a different thing, not specifically
related to edge-triggered/level-triggered configurations but rather
indicates an interrupt which has been ack'ed but not yet eoi'ed.

Rename the bitmap and the corresponding accessor functions to irq_queued
to clarify what this is actually used for.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
10 years ago  arm/arm64: KVM: Rename irq_state to irq_pending
Christoffer Dall [Mon, 9 Jun 2014 10:27:18 +0000 (12:27 +0200)]
arm/arm64: KVM: Rename irq_state to irq_pending

The irq_state field on the distributor struct is ambiguous in its
meaning; the comment says it's the level of the input, but that
doesn't make much sense for edge-triggered interrupts.  The code
actually uses this state variable to check if the interrupt is in the
pending state on the distributor so clarify the comment and rename the
actual variable and accessor methods.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
10 years ago  Merge remote-tracking branch 'kvm/next' into queue
Christoffer Dall [Fri, 19 Sep 2014 01:15:32 +0000 (18:15 -0700)]
Merge remote-tracking branch 'kvm/next' into queue

Conflicts:
arch/arm64/include/asm/kvm_host.h
virt/kvm/arm/vgic.c

10 years ago  kvm: Make init_rmode_identity_map() return 0 on success.
Tang Chen [Tue, 16 Sep 2014 10:41:59 +0000 (18:41 +0800)]
kvm: Make init_rmode_identity_map() return 0 on success.

In init_rmode_identity_map(), there are two variables indicating the return
value, r and ret, and it returns 0 on error, 1 on success. The function
is only called by vmx_create_vcpu(), and ret is redundant.

This patch removes the redundant variable, and makes init_rmode_identity_map()
return 0 on success, -errno on failure.

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  kvm: Remove ept_identity_pagetable from struct kvm_arch.
Tang Chen [Tue, 16 Sep 2014 10:41:58 +0000 (18:41 +0800)]
kvm: Remove ept_identity_pagetable from struct kvm_arch.

kvm_arch->ept_identity_pagetable holds the ept identity pagetable page. But
it is never used to refer to the page at all.

In vcpu initialization, it indicates two things:
1. if the ept identity page is allocated
2. if a memory slot for the identity page is initialized

Actually, kvm_arch->ept_identity_pagetable_done is enough to tell if the ept
identity pagetable is initialized. So we can remove ept_identity_pagetable.

NOTE: In the original code, the ept identity pagetable page is pinned in memory.
      As a result, it cannot be migrated/hot-removed. After this patch, since
      kvm_arch->ept_identity_pagetable is removed, ept identity pagetable page
      is no longer pinned in memory. And it can be migrated/hot-removed.

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
Reviewed-by: Gleb Natapov <gleb@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  KVM: VFIO: register kvm_device_ops dynamically
Will Deacon [Tue, 2 Sep 2014 09:27:36 +0000 (10:27 +0100)]
KVM: VFIO: register kvm_device_ops dynamically

Now that we have a dynamic means to register kvm_device_ops, use that
for the VFIO kvm device, instead of relying on the static table.

This is achieved by a module_init call to register the ops with KVM.

Cc: Gleb Natapov <gleb@kernel.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Alex Williamson <Alex.Williamson@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  KVM: s390: register flic ops dynamically
Cornelia Huck [Tue, 2 Sep 2014 09:27:35 +0000 (10:27 +0100)]
KVM: s390: register flic ops dynamically

Using the new kvm_register_device_ops() interface lets us get rid of
an #ifdef in common code.

Cc: Gleb Natapov <gleb@kernel.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  KVM: ARM: vgic: register kvm_device_ops dynamically
Will Deacon [Tue, 2 Sep 2014 09:27:34 +0000 (10:27 +0100)]
KVM: ARM: vgic: register kvm_device_ops dynamically

Now that we have a dynamic means to register kvm_device_ops, use that
for the ARM VGIC, instead of relying on the static table.

Cc: Gleb Natapov <gleb@kernel.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  KVM: device: add simple registration mechanism for kvm_device_ops
Will Deacon [Tue, 2 Sep 2014 09:27:33 +0000 (10:27 +0100)]
KVM: device: add simple registration mechanism for kvm_device_ops

kvm_ioctl_create_device currently has knowledge of all the device types
and their associated ops. This is fairly inflexible when adding support
for new in-kernel device emulations, so move what we currently have out
into a table, which can support dynamic registration of ops by new
drivers for virtual hardware.

Cc: Alex Williamson <Alex.Williamson@redhat.com>
Cc: Alex Graf <agraf@suse.de>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
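
A sketch of the new interface and of how a driver would use it (the init-hook name is illustrative):

    int kvm_register_device_ops(struct kvm_device_ops *ops, u32 type);

    /* e.g. from the VFIO device code: */
    static int __init kvm_vfio_ops_init(void)
    {
            return kvm_register_device_ops(&kvm_vfio_ops, KVM_DEV_TYPE_VFIO);
    }
    module_init(kvm_vfio_ops_init);
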
10 years ago  kvm: ioapic: conditionally delay irq delivery during eoi broadcast
Zhang Haoyu [Thu, 11 Sep 2014 08:47:04 +0000 (16:47 +0800)]
kvm: ioapic: conditionally delay irq delivery during eoi broadcast

Currently, we call ioapic_service() immediately when we find the irq is still
active during eoi broadcast. But for real hardware, there's some delay between
the EOI writing and irq delivery.  If we do not emulate this behavior, and
re-inject the interrupt immediately after the guest sends an EOI and re-enables
interrupts, a guest might spend all its time in the ISR if it has a broken
handler for a level-triggered interrupt.

Such livelock actually happens with Windows guests when resuming from
hibernation.

As there's no way to recognize a broken handler from newly raised ones, this patch
delays an interrupt if 10,000 consecutive EOIs found that the interrupt was
still high.  The guest can then make a little forward progress, until a proper
IRQ handler is set or until some detection routine in the guest (such as
Linux's note_interrupt()) recognizes the situation.

Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Zhang Haoyu <zhanghy@sangfor.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
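
A hedged sketch of the delay logic; the constant and field names are assumptions based on the description:

    /* in the EOI broadcast handler, for pin i: */
    if (ioapic->irq_eoi[i] == IOAPIC_SUCCESSIVE_IRQ_MAX_COUNT) {
            /* real hardware does not redeliver instantly; backing off
             * lets a guest with a broken handler make progress */
            schedule_delayed_work(&ioapic->eoi_inject, HZ / 100);
            ioapic->irq_eoi[i] = 0;
    } else {
            ioapic_service(ioapic, i, false);
    }
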
10 years ago  KVM: x86: Use kvm_make_request when applicable
Guo Hui Liu [Fri, 12 Sep 2014 05:43:19 +0000 (13:43 +0800)]
KVM: x86: Use kvm_make_request when applicable

This patch replace the set_bit method by kvm_make_request
to make code more readable and consistent.

Signed-off-by: Guo Hui Liu <liuguohui@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
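
The transformation is mechanical; for example:

    /* before */
    set_bit(KVM_REQ_EVENT, &vcpu->requests);

    /* after */
    kvm_make_request(KVM_REQ_EVENT, vcpu);
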
10 years ago  KVM: EVENTFD: remove inclusion of irq.h
Eric Auger [Mon, 1 Sep 2014 08:36:08 +0000 (09:36 +0100)]
KVM: EVENTFD: remove inclusion of irq.h

No longer needed; irq.h would be empty on ARM.

Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Eric Auger <eric.auger@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
10 years ago  ARM/arm64: KVM: fix use of WnR bit in kvm_is_write_fault()
Ard Biesheuvel [Tue, 9 Sep 2014 10:27:09 +0000 (11:27 +0100)]
ARM/arm64: KVM: fix use of WnR bit in kvm_is_write_fault()

The ISS encoding for an exception from a Data Abort has a WnR
bit[6] that indicates whether the Data Abort was caused by a
read or a write instruction. While there are several fields
in the encoding that are only valid if the ISV bit[24] is set,
WnR is not one of them, so we can read it unconditionally.

Instead of fixing both implementations of kvm_is_write_fault()
in place, reimplement it just once using kvm_vcpu_dabt_iswrite(),
which already does the right thing with respect to the WnR bit.
Also fix up the callers to pass 'vcpu'.

Acked-by: Laszlo Ersek <lersek@redhat.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
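
The shared implementation then reads roughly:

    static bool kvm_is_write_fault(struct kvm_vcpu *vcpu)
    {
            if (kvm_vcpu_trap_is_iabt(vcpu))
                    return false;

            return kvm_vcpu_dabt_iswrite(vcpu);
    }
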
10 years ago  KVM: x86: make apic_accept_irq tracepoint more generic
Paolo Bonzini [Thu, 11 Sep 2014 09:51:02 +0000 (11:51 +0200)]
KVM: x86: make apic_accept_irq tracepoint more generic

Initially the tracepoint was added only to the APIC_DM_FIXED case,
also because it reported coalesced interrupts that only made sense
for that case.  However, the coalesced argument is not used anymore
and tracing other delivery modes is useful, so hoist the call out
of the switch statement.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  kvm: Use APIC_DEFAULT_PHYS_BASE macro as the apic access page address.
Tang Chen [Thu, 11 Sep 2014 05:38:00 +0000 (13:38 +0800)]
kvm: Use APIC_DEFAULT_PHYS_BASE macro as the apic access page address.

We have APIC_DEFAULT_PHYS_BASE defined as 0xfee00000, which is also the address of
apic access page. So use this macro.

Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
Reviewed-by: Gleb Natapov <gleb@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  Merge tag 'kvm-s390-next-20140910' of git://git.kernel.org/pub/scm/linux/kernel/git...
Paolo Bonzini [Thu, 11 Sep 2014 09:09:33 +0000 (11:09 +0200)]
Merge tag 'kvm-s390-next-20140910' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into kvm-next

KVM: s390: Fixes and features for next (3.18)

1. Crypto/CPACF support: To enable the MSA4 instructions we have to
   provide a common control structure for each SIE control block
2. Two cleanups found by a static code checker: one redundant assignment
   and one useless if
3. Fix the page handling of the diag10 ballooning interface. If the
   guest freed the pages at absolute 0 some checks and frees were
   incorrect
4. Limit guests to 16TB
5. Add __must_check to interrupt injection code

10 years ago  KVM: s390/interrupt: remove double assignment
Christian Borntraeger [Wed, 3 Sep 2014 14:16:47 +0000 (16:16 +0200)]
KVM: s390/interrupt: remove double assignment

r is already initialized to 0.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
10 years ago  KVM: s390/cmm: Fix prefix handling for diag 10 balloon
Christian Borntraeger [Wed, 3 Sep 2014 19:23:13 +0000 (21:23 +0200)]
KVM: s390/cmm: Fix prefix handling for diag 10 balloon

The old handling of prefix pages was broken in the diag10 ballooner.
We now rely on gmap_discard to check for start > end and do a
slow path if the prefix swap pages are affected:
1. discard the pages from start to prefix
2. discard the absolute 0 pages
3. discard the pages after prefix swap to end

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
10 years ago  KVM: s390: get rid of constant condition in ipte_unlock_simple
Christian Borntraeger [Wed, 3 Sep 2014 19:17:03 +0000 (21:17 +0200)]
KVM: s390: get rid of constant condition in ipte_unlock_simple

Due to the earlier check we know that ipte_lock_count must be 0.
No need to add a useless if. Let's make it clear that we are going
to always wake up when we execute that code.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
10 years ago  KVM: s390: unintended fallthrough for external call
Christian Borntraeger [Wed, 3 Sep 2014 14:21:32 +0000 (16:21 +0200)]
KVM: s390: unintended fallthrough for external call

We must not fallthrough if the conditions for external call are not met.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Cc: stable@vger.kernel.org
10 years ago  KVM: s390: Limit guest size to 16TB
Christian Borntraeger [Mon, 25 Aug 2014 10:38:57 +0000 (12:38 +0200)]
KVM: s390: Limit guest size to 16TB

Currently we fill up a full 5 level page table to hold the guest
mapping. Since commit "support gmap page tables with less than 5
levels" we can do better.
Having more than 4 TB might be useful for some testing scenarios,
so let's just limit ourselves to 16TB guest size.
Having more than that is totally untested as I do not have enough
swap space/memory.

We continue to allow ucontrol guests the full size.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
10 years ago  KVM: s390: add __must_check to interrupt deliver functions
Christian Borntraeger [Mon, 25 Aug 2014 10:27:29 +0000 (12:27 +0200)]
KVM: s390: add __must_check to interrupt deliver functions

We now propagate interrupt injection errors back to the ioctl. We
should mark functions that might fail with __must_check.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
10 years ago  KVM: CPACF: Enable MSA4 instructions for kvm guest
Tony Krowiak [Fri, 27 Jun 2014 18:46:01 +0000 (14:46 -0400)]
KVM: CPACF: Enable MSA4 instructions for kvm guest

We have to provide a per guest crypto block for the CPUs to
enable MSA4 instructions. According to icainfo on z196 or
later this enables CCM-AES-128, CMAC-AES-128, CMAC-AES-192
and CMAC-AES-256.

Signed-off-by: Tony Krowiak <akrowiak@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Michael Mueller <mimu@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
[split MSA4/protected key into two patches]

10 years ago  KVM: fix api documentation of KVM_GET_EMULATED_CPUID
Alex Bennée [Tue, 9 Sep 2014 16:27:19 +0000 (17:27 +0100)]
KVM: fix api documentation of KVM_GET_EMULATED_CPUID

It looks like when this was initially merged it got accidentally included
in the following section. I've just moved it back into the correct section
and re-numbered it as other ioctls have been added since.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Acked-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  KVM: document KVM_SET_GUEST_DEBUG api
Alex Bennée [Tue, 9 Sep 2014 16:27:18 +0000 (17:27 +0100)]
KVM: document KVM_SET_GUEST_DEBUG api

In preparation for working on the ARM implementation I noticed the debug
interface was missing from the API document. I've pieced together the
expected behaviour from the code and commit messages and written it up as
best I can.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  KVM: remove redundant assignments in __kvm_set_memory_region
Christian Borntraeger [Thu, 4 Sep 2014 19:13:33 +0000 (21:13 +0200)]
KVM: remove redundant assignments in __kvm_set_memory_region

__kvm_set_memory_region sets r to -EINVAL very early.
Doing it again is not necessary. The same is true later on, where
r is assigned -ENOMEM twice.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  KVM: remove redundant assignment of return value in kvm_dev_ioctl
Christian Borntraeger [Thu, 4 Sep 2014 19:13:32 +0000 (21:13 +0200)]
KVM: remove redundant assignment of return value in kvm_dev_ioctl

The first statement of kvm_dev_ioctl is
        long r = -EINVAL;

No need to reassign the same value.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  KVM: remove redundant check of in_spin_loop
Christian Borntraeger [Thu, 4 Sep 2014 19:13:31 +0000 (21:13 +0200)]
KVM: remove redundant check of in_spin_loop

The expression `vcpu->spin_loop.in_spin_loop' is always true,
because it is evaluated only when the condition
`!vcpu->spin_loop.in_spin_loop' is false.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  KVM: x86: propagate exception from permission checks on the nested page fault
Paolo Bonzini [Tue, 2 Sep 2014 11:23:06 +0000 (13:23 +0200)]
KVM: x86: propagate exception from permission checks on the nested page fault

Currently, if a permission error happens during the translation of
the final GPA to HPA, walk_addr_generic returns 0 but does not fill
in walker->fault.  To avoid this, add an x86_exception* argument
to the translate_gpa function, and let it fill in walker->fault.
The nested_page_fault field will be true, since the walk_mmu is the
nested_mmu and translate_gpa instead operates on the "outer" (NPT)
instance.

Reported-by: Valentine Sinitsyn <valentine.sinitsyn@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  KVM: x86: skip writeback on injection of nested exception
Paolo Bonzini [Thu, 4 Sep 2014 17:46:15 +0000 (19:46 +0200)]
KVM: x86: skip writeback on injection of nested exception

If a nested page fault happens during emulation, we will inject a vmexit,
not a page fault.  However because writeback happens after the injection,
we will write ctxt->eip from L2 into the L1 EIP.  We do not write back
if an instruction caused an interception vmexit---do the same for page
faults.

Suggested-by: Gleb Natapov <gleb@kernel.org>
Reviewed-by: Gleb Natapov <gleb@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  KVM: nSVM: propagate the NPF EXITINFO to the guest
Paolo Bonzini [Tue, 2 Sep 2014 11:18:37 +0000 (13:18 +0200)]
KVM: nSVM: propagate the NPF EXITINFO to the guest

This is similar to what the EPT code does with the exit qualification.
This allows the guest to see a valid value for bits 33:32.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  KVM: x86: reserve bit 8 of non-leaf PDPEs and PML4Es in 64-bit mode on AMD
Paolo Bonzini [Tue, 2 Sep 2014 11:24:12 +0000 (13:24 +0200)]
KVM: x86: reserve bit 8 of non-leaf PDPEs and PML4Es in 64-bit mode on AMD

Bit 8 would be the "global" bit, which does not quite make sense for non-leaf
page table entries.  Intel ignores it; AMD ignores it in PDEs, but reserves it
in PDPEs and PML4Es.  The SVM test is relying on this behavior, so enforce it.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  KVM: mmio: cleanup kvm_set_mmio_spte_mask
Tiejun Chen [Mon, 1 Sep 2014 10:44:04 +0000 (18:44 +0800)]
KVM: mmio: cleanup kvm_set_mmio_spte_mask

Just reuse rsvd_bits() inside kvm_set_mmio_spte_mask()
for slightly better code.

Signed-off-by: Tiejun Chen <tiejun.chen@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
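
For reference, rsvd_bits() builds a mask of bits s..e; a sketch of the helper and the kind of call that replaces the open-coded math:

    static inline u64 rsvd_bits(int s, int e)
    {
            return ((1ULL << (e - s + 1)) - 1) << s;
    }

    /* e.g. the mask of reserved physical-address bits becomes: */
    mask = rsvd_bits(maxphyaddr, 51);
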
10 years ago  kvm: x86: fix stale mmio cache bug
David Matlack [Mon, 18 Aug 2014 22:46:07 +0000 (15:46 -0700)]
kvm: x86: fix stale mmio cache bug

The following events can lead to an incorrect KVM_EXIT_MMIO bubbling
up to userspace:

(1) Guest accesses gpa X without a memory slot. The gfn is cached in
struct kvm_vcpu_arch (mmio_gfn). On Intel EPT-enabled hosts, KVM sets
the SPTE write-execute-noread so that future accesses cause
EPT_MISCONFIGs.

(2) Host userspace creates a memory slot via KVM_SET_USER_MEMORY_REGION
covering the page just accessed.

(3) Guest attempts to read or write to gpa X again. On Intel, this
generates an EPT_MISCONFIG. The memory slot generation number that
was incremented in (2) would normally take care of this but we fast
path mmio faults through quickly_check_mmio_pf(), which only checks
the per-vcpu mmio cache. Since we hit the cache, KVM passes a
KVM_EXIT_MMIO up to userspace.

This patch fixes the issue by using the memslot generation number
to validate the mmio cache.

Cc: stable@vger.kernel.org
Signed-off-by: David Matlack <dmatlack@google.com>
[xiaoguangrong: adjust the code to make it simpler for stable-tree fix.]
Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Reviewed-by: David Matlack <dmatlack@google.com>
Reviewed-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Tested-by: David Matlack <dmatlack@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  kvm: fix potentially corrupt mmio cache
David Matlack [Mon, 18 Aug 2014 22:46:06 +0000 (15:46 -0700)]
kvm: fix potentially corrupt mmio cache

vcpu exits and memslot mutations can run concurrently as long as the
vcpu does not acquire the slots mutex. Thus it is theoretically possible
for memslots to change underneath a vcpu that is handling an exit.

If we increment the memslot generation number again after
synchronize_srcu_expedited(), vcpus can safely cache memslot generation
without maintaining a single rcu_dereference through an entire vm exit.
And much of the x86/kvm code does not maintain a single rcu_dereference
of the current memslots during each exit.

We can prevent the following case:

   vcpu (CPU 0)                             | thread (CPU 1)
--------------------------------------------+--------------------------
1  vm exit                                  |
2  srcu_read_unlock(&kvm->srcu)             |
3  decide to cache something based on       |
     old memslots                           |
4                                           | change memslots
                                            | (increments generation)
5                                           | synchronize_srcu(&kvm->srcu);
6  retrieve generation # from new memslots  |
7  tag cache with new memslot generation    |
8  srcu_read_unlock(&kvm->srcu)             |
...                                         |
   <action based on cache occurs even       |
    though the caching decision was based   |
    on the old memslots>                    |
...                                         |
   <action *continues* to occur until next  |
    memslot generation change, which may    |
    be never>                               |
                                            |

By incrementing the generation after synchronizing with kvm->srcu readers,
we ensure that the generation retrieved in (6) will become invalid soon
after (8).

Keeping the existing increment is not strictly necessary, but we
do keep it and just move it for consistency from update_memslots to
install_new_memslots.  It invalidates old cached MMIOs immediately,
instead of having to wait for the end of synchronize_srcu_expedited,
which makes the code more clearly correct in case CPU 1 is preempted
right after synchronize_srcu() returns.

To avoid halving the generation space in SPTEs, always presume that the
low bit of the generation is zero when reconstructing a generation number
out of an SPTE.  This effectively disables MMIO caching in SPTEs during
the call to synchronize_srcu_expedited.  Using the low bit this way is
somewhat like a seqcount---where the protected thing is a cache, and
instead of retrying we can simply punt if we observe the low bit to be 1.

Cc: stable@vger.kernel.org
Signed-off-by: David Matlack <dmatlack@google.com>
Reviewed-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Reviewed-by: David Matlack <dmatlack@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
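
A hedged sketch of the resulting ordering in install_new_memslots:

    static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
                                                     struct kvm_memslots *slots)
    {
            struct kvm_memslots *old_memslots = kvm->memslots;

            slots->generation = old_memslots->generation + 1;
            rcu_assign_pointer(kvm->memslots, slots);
            synchronize_srcu_expedited(&kvm->srcu);

            /* bump again after the grace period so that generations cached
             * while the update was in flight immediately go stale */
            slots->generation++;

            return old_memslots;
    }
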
10 years ago  KVM: do not bias the generation number in kvm_current_mmio_generation
Paolo Bonzini [Wed, 20 Aug 2014 12:29:21 +0000 (14:29 +0200)]
KVM: do not bias the generation number in kvm_current_mmio_generation

The next patch will give a meaning (a la seqcount) to the low bit of the
generation number.  Ensure that it matches between kvm->memslots->generation
and kvm_current_mmio_generation().

Cc: stable@vger.kernel.org
Reviewed-by: David Matlack <dmatlack@google.com>
Reviewed-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  KVM: x86: use guest maxphyaddr to check MTRR values
Paolo Bonzini [Fri, 29 Aug 2014 16:56:01 +0000 (18:56 +0200)]
KVM: x86: use guest maxphyaddr to check MTRR values

The check introduced in commit d7a2a246a1b5 (KVM: x86: #GP when attempts to write reserved bits of Variable Range MTRRs, 2014-08-19)
will break if the guest maxphyaddr is higher than the host's (which
sometimes happens depending on your hardware and how QEMU is
configured).

To fix this, use cpuid_maxphyaddr similar to how the APIC_BASE MSR
does already.

Reported-by: Jan Kiszka <jan.kiszka@siemens.com>
Tested-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  KVM: remove garbage arg to *hardware_{en,dis}able
Radim Krčmář [Thu, 28 Aug 2014 13:13:03 +0000 (15:13 +0200)]
KVM: remove garbage arg to *hardware_{en,dis}able

In the beginning was on_each_cpu(), which required an unused argument to
kvm_arch_ops.hardware_{en,dis}able, but this was soon forgotten.

Remove unnecessary arguments that stem from this.

Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  KVM: static inline empty kvm_arch functions
Radim Krčmář [Thu, 28 Aug 2014 13:13:02 +0000 (15:13 +0200)]
KVM: static inline empty kvm_arch functions

Using static inline is going to save a few bytes and cycles.
For example on powerpc, the difference is 700 B after stripping.
(5 kB before)

This patch also deals with two overlooked empty functions:
kvm_arch_flush_shadow was not removed from arch/mips/kvm/mips.c
  2df72e9bc KVM: split kvm_arch_flush_shadow
and kvm_arch_sched_in never made it into arch/ia64/kvm/kvm-ia64.c.
  e790d9ef6 KVM: add kvm_arch_sched_in

Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  KVM: forward declare structs in kvm_types.h
Paolo Bonzini [Fri, 29 Aug 2014 12:01:17 +0000 (14:01 +0200)]
KVM: forward declare structs in kvm_types.h

Opaque KVM structs are useful for prototypes in asm/kvm_host.h, to avoid
"'struct foo' declared inside parameter list" warnings (and consequent
breakage due to conflicting types).

Move them from individual files to a generic place in linux/kvm_types.h.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  KVM: x86: remove Aligned bit from movntps/movntpd
Paolo Bonzini [Mon, 14 Jul 2014 10:54:48 +0000 (12:54 +0200)]
KVM: x86: remove Aligned bit from movntps/movntpd

These are not explicitly aligned, and do not require alignment on AVX.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  KVM: x86 emulator: emulate MOVNTDQ
Alex Williamson [Fri, 11 Jul 2014 17:56:31 +0000 (11:56 -0600)]
KVM: x86 emulator: emulate MOVNTDQ

Windows 8.1 guest with NVIDIA driver and GPU fails to boot with an
emulation failure.  The KVM spew suggests the fault is with lack of
movntdq emulation (courtesy of Paolo):

Code=02 00 00 b8 08 00 00 00 f3 0f 6f 44 0a f0 f3 0f 6f 4c 0a e0 <66> 0f e7 41 f0 66 0f e7 49 e0 48 83 e9 40 f3 0f 6f 44 0a 10 f3 0f 6f 0c 0a 66 0f e7 41 10

$ as -o a.out
        .section .text
        .byte 0x66, 0x0f, 0xe7, 0x41, 0xf0
        .byte 0x66, 0x0f, 0xe7, 0x49, 0xe0
$ objdump -d a.out
    0:  66 0f e7 41 f0          movntdq %xmm0,-0x10(%rcx)
    5:  66 0f e7 49 e0          movntdq %xmm1,-0x20(%rcx)

Add the necessary emulation.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  KVM: vmx: VMXOFF emulation in vm86 should cause #UD
Nadav Amit [Fri, 29 Aug 2014 08:26:55 +0000 (11:26 +0300)]
KVM: vmx: VMXOFF emulation in vm86 should cause #UD

Unlike VMCALL, the instructions VMXOFF, VMLAUNCH and VMRESUME should cause a #UD
exception in real mode or vm86.  However, the emulator considers all these
instructions the same as far as mode checks and emulation upon exit due to a
#UD exception are concerned.

As a result, the hypervisor behaves incorrectly in vm86 mode. VMXOFF, VMLAUNCH
or VMRESUME executed in vm86 cause an exit due to #UD. The hypervisor then
emulates these instructions and injects #GP to the guest instead of #UD.

This patch creates a new group for these instructions and marks only VMCALL as
an instruction which can be emulated.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  KVM: x86: fix some sparse warnings
Paolo Bonzini [Tue, 26 Aug 2014 11:27:46 +0000 (13:27 +0200)]
KVM: x86: fix some sparse warnings

Sparse reports the following easily fixed warnings:

   arch/x86/kvm/vmx.c:8795:48: sparse: Using plain integer as NULL pointer
   arch/x86/kvm/vmx.c:2138:5: sparse: symbol vmx_read_l1_tsc was not declared. Should it be static?
   arch/x86/kvm/vmx.c:6151:48: sparse: Using plain integer as NULL pointer
   arch/x86/kvm/vmx.c:8851:6: sparse: symbol vmx_sched_in was not declared. Should it be static?

   arch/x86/kvm/svm.c:2162:5: sparse: symbol svm_read_l1_tsc was not declared. Should it be static?

Cc: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  KVM: nVMX: nested TPR shadow/threshold emulation
Wanpeng Li [Thu, 21 Aug 2014 11:46:50 +0000 (19:46 +0800)]
KVM: nVMX: nested TPR shadow/threshold emulation

This patch fix bug https://bugzilla.kernel.org/show_bug.cgi?id=61411

The TPR shadow/threshold feature is important to speed up Windows guests.
Besides, it is a required feature for certain VMMs.

We map the virtual APIC page address and TPR threshold from the L1 VMCS. If
a TPR_BELOW_THRESHOLD VM exit is triggered by the L2 guest and L1 is
interested in it, we inject it into the L1 VMM for handling.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
[Add PAGE_ALIGNED check, do not write useless virtual APIC page address
 if TPR shadowing is disabled. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  KVM: nVMX: introduce nested_get_vmcs12_pages
Wanpeng Li [Thu, 21 Aug 2014 11:46:49 +0000 (19:46 +0800)]
KVM: nVMX: introduce nested_get_vmcs12_pages

Introduce the function nested_get_vmcs12_pages() to check the validity
of the nested apic access page and virtual apic page earlier.

Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  KVM: Unconditionally export KVM_CAP_USER_NMI
Christoffer Dall [Tue, 26 Aug 2014 12:00:38 +0000 (14:00 +0200)]
KVM: Unconditionally export KVM_CAP_USER_NMI

The idea behind capabilities and the KVM_CHECK_EXTENSION ioctl is that
userspace can, at run-time, determine if a feature is supported or not.
This allows KVM to begin supporting a new feature with a new kernel
version without any need to update user space.  Unfortunately, since the
definition of KVM_CAP_USER_NMI was guarded by #ifdef
__KVM_HAVE_USER_NMI, such discovery still required a user space update.

Therefore, unconditionally export KVM_CAP_USER_NMI and fix the
typo in the comment for the IOCTL number definition as well.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  KVM: Unconditionally export KVM_CAP_READONLY_MEM
Christoffer Dall [Tue, 26 Aug 2014 12:00:37 +0000 (14:00 +0200)]
KVM: Unconditionally export KVM_CAP_READONLY_MEM

The idea behind capabilities and the KVM_CHECK_EXTENSION ioctl is that
userspace can, at run-time, determine if a feature is supported or not.
This allows KVM to begin supporting a new feature with a new kernel
version without any need to update user space.  Unfortunately, since the
definition of KVM_CAP_READONLY_MEM was guarded by #ifdef
__KVM_HAVE_READONLY_MEM, such discovery still required a user space
update.

Therefore, unconditionally export KVM_CAP_READONLY_MEM and change the
in-kernel conditional to rely on __KVM_HAVE_READONLY_MEM.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
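
The uapi change is essentially the following; the capability number is quoted from memory, so treat it as an assumption:

    /* before: invisible to KVM_CHECK_EXTENSION unless the arch set
     * __KVM_HAVE_READONLY_MEM */
    #ifdef __KVM_HAVE_READONLY_MEM
    #define KVM_CAP_READONLY_MEM 81
    #endif

    /* after: always exported; the #ifdef moves to the in-kernel user */
    #define KVM_CAP_READONLY_MEM 81
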
10 years ago  KVM: s390/mm: fix up indentation of set_guest_storage_key
Christian Borntraeger [Wed, 27 Aug 2014 10:20:02 +0000 (12:20 +0200)]
KVM: s390/mm: fix up indentation of set_guest_storage_key

commit ab3f285f227f ("KVM: s390/mm: try a cow on read only pages for
key ops")' misaligned a code block. Let's fixup the indentation.

Reported-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years ago  KVM: vgic: declare probe function pointer as const
Will Deacon [Tue, 26 Aug 2014 14:13:25 +0000 (15:13 +0100)]
KVM: vgic: declare probe function pointer as const

We extract the vgic probe function from the of_device_id data pointer,
which is const. Kill the sparse warning by ensuring that the local
function pointer is also marked as const.

Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
10 years ago  KVM: vgic: return int instead of bool when checking I/O ranges
Will Deacon [Tue, 26 Aug 2014 14:13:24 +0000 (15:13 +0100)]
KVM: vgic: return int instead of bool when checking I/O ranges

vgic_ioaddr_overlap claims to return a bool, but in reality it returns
an int. Shut sparse up by fixing the type signature.

Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
10 years ago  KVM: ARM/arm64: return -EFAULT if copy_from_user fails in set_timer_reg
Will Deacon [Tue, 26 Aug 2014 14:13:23 +0000 (15:13 +0100)]
KVM: ARM/arm64: return -EFAULT if copy_from_user fails in set_timer_reg

We currently return the number of bytes not copied if set_timer_reg
fails, which is almost certainly not what userspace would like.

This patch returns -EFAULT instead.

Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
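
A sketch of the fixed pattern:

    ret = copy_from_user(&val, uaddr, KVM_REG_SIZE(reg->id));
    if (ret != 0)
            return -EFAULT; /* was: return ret, the bytes-not-copied count */
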
10 years ago  KVM: ARM/arm64: avoid returning negative error code as bool
Will Deacon [Tue, 26 Aug 2014 14:13:22 +0000 (15:13 +0100)]
KVM: ARM/arm64: avoid returning negative error code as bool

is_valid_cache returns true if the specified cache is valid.
Unfortunately, if the parameter passed is out of range, we return
-ENOENT, which ends up as true leading to potential hilarity.

This patch returns false on the failure path instead.

Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
10 years ago  KVM: ARM/arm64: fix broken __percpu annotation
Will Deacon [Tue, 26 Aug 2014 14:13:21 +0000 (15:13 +0100)]
KVM: ARM/arm64: fix broken __percpu annotation

Running sparse results in a bunch of noisy address space mismatches
thanks to the broken __percpu annotation on kvm_get_running_vcpus.

This function returns a pcpu pointer to a pointer, not a pointer to a
pcpu pointer. This patch fixes the annotation, which kills the warnings
from sparse.

Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
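
The annotation change, for reference:

    /* before: claims a pcpu pointer to a pointer */
    struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);

    /* after: a pointer to a pcpu pointer */
    struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);
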
10 years ago  KVM: ARM/arm64: fix non-const declaration of function returning const
Will Deacon [Tue, 26 Aug 2014 14:13:20 +0000 (15:13 +0100)]
KVM: ARM/arm64: fix non-const declaration of function returning const

Sparse kicks up about a type mismatch for kvm_target_cpu:

arch/arm64/kvm/guest.c:271:25: error: symbol 'kvm_target_cpu' redeclared with different type (originally declared at ./arch/arm64/include/asm/kvm_host.h:45) - different modifiers

so fix this by adding the missing const attribute to the function
declaration.

Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
10 years ago  arm/arm64: KVM: Support KVM_CAP_READONLY_MEM
Christoffer Dall [Tue, 19 Aug 2014 10:18:04 +0000 (12:18 +0200)]
arm/arm64: KVM: Support KVM_CAP_READONLY_MEM

When userspace loads code and data in read-only memory regions, KVM
needs to be able to handle this on arm and arm64.  Specifically this is
used when running code directly from a read-only flash device; the
common scenario is a UEFI blob loaded with the -bios option in QEMU.

Note that the MMIO exit on writes to read-only memory is ABI and can
be used to emulate block-erase style flash devices.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
10 years ago  KVM: Introduce gfn_to_hva_memslot_prot
Christoffer Dall [Tue, 19 Aug 2014 10:15:00 +0000 (12:15 +0200)]
KVM: Introduce gfn_to_hva_memslot_prot

To support read-only memory regions on arm and arm64, we need to
resolve a gfn to an hva given a pointer to a memslot, both to avoid
looping through the memslots twice and to reuse the hva error checking
of gfn_to_hva_prot().  Add a new gfn_to_hva_memslot_prot() function and
refactor gfn_to_hva_prot() to use it.
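
A sketch of the resulting shape, lightly simplified from virt/kvm/kvm_main.c:

    unsigned long gfn_to_hva_memslot_prot(struct kvm_memory_slot *slot,
                                          gfn_t gfn, bool *writable)
    {
        unsigned long hva = __gfn_to_hva_many(slot, gfn, NULL, false);

        if (!kvm_is_error_hva(hva) && writable)
            *writable = !memslot_is_readonly(slot);

        return hva;
    }

    unsigned long gfn_to_hva_prot(struct kvm *kvm, gfn_t gfn, bool *writable)
    {
        struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn);

        return gfn_to_hva_memslot_prot(slot, gfn, writable);
    }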

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
10 years agoMerge tag 'kvm-s390-next-20140825' of git://git.kernel.org/pub/scm/linux/kernel/git...
Paolo Bonzini [Tue, 26 Aug 2014 12:31:44 +0000 (14:31 +0200)]
Merge tag 'kvm-s390-next-20140825' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into HEAD

KVM: s390: Fixes and features for 3.18 part 1

1. The usual cleanups: get rid of duplicate code, use defines, factor
   out the sync_reg handling, additional docs for sync_regs, better
   error handling on interrupt injection
2. We use KVM_REQ_TLB_FLUSH instead of open-coding TLB flushes
3. Additional registers for the kvm_run sync regs. This is usually not
   needed in the fast path due to eventfd/irqfd, but kvm_stat claims
   that we reduced the overhead of console output by ~50% on my system
4. A rework of the gmap infrastructure. This is the 2nd step towards
   host large page support (after getting rid of the storage key
   dependency). We introduce two radix trees to store the guest-to-host
   and host-to-guest translations, which gets rid of most of the
   page-table walks in the gmap code. Only one, in __gmap_link, is
   left; it is required to link the shadow page table to the process
   page table. Finally, this contains the plumbing to support gmap
   page tables with fewer than 5 levels.

10 years agoKVM: s390/mm: remove outdated gmap data structures
Martin Schwidefsky [Fri, 1 Aug 2014 13:03:33 +0000 (15:03 +0200)]
KVM: s390/mm: remove outdated gmap data structures

The radix tree rework removed all code that uses the gmap_rmap
and gmap_pgtable data structures. Remove these outdated definitions.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
10 years agoKVM: s390/mm: support gmap page tables with less than 5 levels
Martin Schwidefsky [Tue, 1 Jul 2014 12:36:04 +0000 (14:36 +0200)]
KVM: s390/mm: support gmap page tables with less than 5 levels

Add an addressing limit to the gmap address spaces and only allocate
the page table levels that are needed for the given limit. The limit
is fixed and cannot be changed after a gmap has been created.
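
A hedged sketch of how a creation-time limit can select the table depth
(the level boundaries follow the s390 region/segment table geometry; the
code itself is illustrative, not the exact gmap_alloc() logic):

    /* pick the shallowest top-level table that covers the limit */
    if (limit < (1UL << 31))
        levels = 2;     /* segment table on top */
    else if (limit < (1UL << 42))
        levels = 3;     /* region-third table on top */
    else if (limit < (1UL << 53))
        levels = 4;     /* region-second table on top */
    else
        levels = 5;     /* full region-first table */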

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
10 years agoKVM: s390/mm: use radix trees for guest to host mappings
Martin Schwidefsky [Wed, 30 Apr 2014 14:04:25 +0000 (16:04 +0200)]
KVM: s390/mm: use radix trees for guest to host mappings

Store the target address for the gmap segments in a radix tree
instead of using invalid segment table entries. gmap_translate
becomes a simple radix_tree_lookup; gmap_fault is split into the
address translation (gmap_translate) and the part that links the
gmap shadow page table with the process page table.
A second radix tree is used to keep the pointers to the segment
table entries for segments that are mapped in the guest address
space. On unmap of a segment the pointer is retrieved from the
radix tree and is used to carry out the segment invalidation in
the gmap shadow page table. As the radix tree can only store one
pointer, each host segment may only be mapped to exactly one
guest location.
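
With the tree in place, translation reduces to a lookup keyed by the guest
segment index; a hedged sketch (locking elided, paraphrased from the gmap
code):

    unsigned long __gmap_translate(struct gmap *gmap, unsigned long gaddr)
    {
        unsigned long vmaddr;

        vmaddr = (unsigned long)
            radix_tree_lookup(&gmap->guest_to_host, gaddr >> PMD_SHIFT);
        /* combine the host segment base with the offset in the segment */
        return vmaddr ? (vmaddr | (gaddr & ~PMD_MASK)) : -EFAULT;
    }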

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
10 years agokvm: x86: fix tracing for 32-bit
Paolo Bonzini [Mon, 25 Aug 2014 14:08:21 +0000 (16:08 +0200)]
kvm: x86: fix tracing for 32-bit

Fix commit 7b46268d29543e313e731606d845e65c17f232e4, which mistakenly
included the new tracepoint under #ifdef CONFIG_X86_64.

Reported-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
10 years agoMerge tag 'kvm-s390-20140825' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms39...
Paolo Bonzini [Mon, 25 Aug 2014 13:37:00 +0000 (15:37 +0200)]
Merge tag 'kvm-s390-20140825' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into HEAD

Here are two fixes for the s390 KVM code that prevent:
1. a malicious user from triggering a kernel BUG
2. a malicious user from changing the storage key of read-only pages

10 years agoKVM: s390/mm: cleanup gmap function arguments, variable names
Martin Schwidefsky [Tue, 29 Apr 2014 07:34:41 +0000 (09:34 +0200)]
KVM: s390/mm: cleanup gmap function arguments, variable names

Make the order of arguments for the gmap calls more consistent: if the
gmap pointer is passed, it is always the first argument. In addition,
distinguish between guest addresses and user addresses by naming the
variables gaddr for a guest address and vmaddr for a user address.
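
The resulting convention reads roughly like this (signatures paraphrased
for illustration):

    /* the gmap comes first, then gaddr (guest) or vmaddr (user) as needed */
    unsigned long __gmap_translate(struct gmap *gmap, unsigned long gaddr);
    int __gmap_link(struct gmap *gmap, unsigned long gaddr, unsigned long vmaddr);
    void gmap_discard(struct gmap *gmap, unsigned long from, unsigned long to);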

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
10 years agoKVM: s390/mm: readd address parameter to gmap_do_ipte_notify
Martin Schwidefsky [Wed, 30 Apr 2014 12:46:26 +0000 (14:46 +0200)]
KVM: s390/mm: readd address parameter to gmap_do_ipte_notify

Revert git commit c3a23b9874c1 ("remove unnecessary parameter from
gmap_do_ipte_notify").

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
10 years agoKVM: s390/mm: readd address parameter to pgste_ipte_notify
Martin Schwidefsky [Wed, 30 Apr 2014 12:44:44 +0000 (14:44 +0200)]
KVM: s390/mm: readd address parameter to pgste_ipte_notify

Revert git commit 1b7fd6952063 ("remove unecessary parameter from
pgste_ipte_notify").

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
10 years agoKVM: s390: don't use kvm lock in interrupt injection code
Jens Freimann [Mon, 11 Aug 2014 13:39:43 +0000 (15:39 +0200)]
KVM: s390: don't use kvm lock in interrupt injection code

The kvm lock protects us against vcpus going away, but they only go
away when the virtual machine is shut down. We don't need this
mutex here, so let's get rid of it.
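
The change boils down to dropping the mutex around the injection helper;
a hedged before/after sketch (the helper name follows the s390 interrupt
code, but treat the fragment as illustrative):

    /* before: serialized on kvm->lock for no protective benefit */
    mutex_lock(&kvm->lock);
    rc = __inject_vm(kvm, inti);
    mutex_unlock(&kvm->lock);

    /* after: vcpus cannot disappear while the VM is alive, inject directly */
    rc = __inject_vm(kvm, inti);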

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
10 years agoKVM: s390: return -EFAULT if lowcore is not mapped during irq delivery
Jens Freimann [Thu, 17 Apr 2014 08:10:30 +0000 (10:10 +0200)]
KVM: s390: return -EFAULT if lowcore is not mapped during irq delivery

Currently we just kill the userspace process and exit the thread
immediately, without making sure that we don't hold any locks etc.

Improve this by making KVM_RUN return -EFAULT if the lowcore is not
mapped during interrupt delivery. To achieve this we need to pass
the return code of guest memory access routines used in interrupt
delivery all the way back to the KVM_RUN ioctl.
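
A hedged sketch of the propagation (put_guest_lc is the s390 lowcore write
helper; the surrounding delivery code is elided):

    rc = put_guest_lc(vcpu, pgm_code, (u16 *)__LC_PGM_INT_CODE);
    if (rc)
        return -EFAULT;  /* unmapped lowcore: fail KVM_RUN instead of dying */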

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
10 years agoKVM: s390: implement KVM_REQ_TLB_FLUSH and make use of it
David Hildenbrand [Tue, 29 Jul 2014 06:53:36 +0000 (08:53 +0200)]
KVM: s390: implement KVM_REQ_TLB_FLUSH and make use of it

Use the KVM_REQ_TLB_FLUSH request to trigger TLB flushes instead of
manipulating the SIE control block whenever we need one. Also trigger it
directly for a control register sync instead of (ab)using
kvm_s390_set_prefix().
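
The producer/consumer pattern is the generic kvm request API; a hedged
sketch (the flush helper name is illustrative):

    /* wherever guest mappings are invalidated */
    kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);

    /* checked once before (re)entering SIE */
    if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu))
        kvm_s390_flush_tlb(vcpu);   /* illustrative helper */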

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
10 years agoKVM: s390: synchronize more registers with kvm_run
David Hildenbrand [Thu, 17 Jul 2014 08:47:43 +0000 (10:47 +0200)]
KVM: s390: synchronize more registers with kvm_run

In order to reduce the number of syscalls when dropping to user space, this
patch enables the synchronization of the following "registers" with kvm_run:
- ARCH0: CPU timer, clock comparator, TOD programmable register,
         guest breaking-event register, program parameter
- PFAULT: pfault parameters (token, select, compare)

The registers are grouped to reduce the overhead when syncing.

As this grows the number of sync registers quite a bit, let's move the code
synchronizing registers with kvm_run from kvm_arch_vcpu_ioctl_run() into
separate helper routines.
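
Userspace opts in per group through kvm_run; a hedged sketch of the flag
usage (flag and field names follow the s390 uapi sync_regs layout, values
are illustrative):

    /* mark the groups that were modified before re-entering KVM_RUN */
    run->kvm_dirty_regs |= KVM_SYNC_ARCH0 | KVM_SYNC_PFAULT;
    run->s.regs.cputm = new_cpu_timer;  /* part of the ARCH0 group */
    run->s.regs.pft   = pfault_token;   /* part of the PFAULT group */
    ioctl(vcpu_fd, KVM_RUN, 0);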

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
10 years agoKVM: s390: no special machine check delivery
Christian Borntraeger [Mon, 4 Aug 2014 14:54:22 +0000 (16:54 +0200)]
KVM: s390: no special machine check delivery

The load PSW handler does not have to inject pending machine checks.
This can wait until the CPU runs the generic interrupt injection code.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
10 years agoKVM: s390: clear kvm_dirty_regs when dropping to user space
David Hildenbrand [Tue, 29 Jul 2014 06:22:33 +0000 (08:22 +0200)]
KVM: s390: clear kvm_dirty_regs when dropping to user space

We should make sure that all kvm_dirty_regs bits are cleared before dropping
to user space. Until now, some would remain pending.
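
The fix amounts to an unconditional clear once the dirty groups have been
consumed; a minimal sketch of the run-loop entry path:

    sync_regs(vcpu, kvm_run);        /* consume whatever userspace dirtied */
    kvm_run->kvm_dirty_regs = 0;     /* nothing may remain pending on exit */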

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
10 years agoKVM: clarify the idea of kvm_dirty_regs
David Hildenbrand [Tue, 29 Jul 2014 06:19:26 +0000 (08:19 +0200)]
KVM: clarify the idea of kvm_dirty_regs

This patch clarifies that kvm_dirty_regs is just a hint to the kernel
and that the kernel might ignore some flags and sync the values anyway
(as is done for acrs and gprs now).

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
10 years agoKVM: s390: factor out get_ilc() function
Jens Freimann [Wed, 23 Jul 2014 14:36:06 +0000 (16:36 +0200)]
KVM: s390: factor out get_ilc() function

Let's make this a reusable function.
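
A hedged sketch of the helper's shape (paraphrased; the exact ILC
derivation lives in arch/s390/kvm/interrupt.c):

    static int get_ilc(struct kvm_vcpu *vcpu)
    {
        switch (vcpu->arch.sie_block->icptcode) {
        case ICPT_PROGI:
            /* ILC recorded by SIE for program interceptions */
            return vcpu->arch.sie_block->pgmilc;
        default:
            /* otherwise derive it from the intercepted opcode */
            return insn_length(vcpu->arch.sie_block->ipa >> 8);
        }
    }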

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>