Thierry Reding [Thu, 20 Dec 2018 16:35:47 +0000 (17:35 +0100)]
dma-mapping: fix inverted logic in dma_supported
The cleanup in commit 356da6d0cde3 ("dma-mapping: bypass indirect calls for dma-direct") accidentally inverted the logic in the check for the presence of a ->dma_supported() callback. Switch this back to the way it was to prevent a crash on boot.
Fixes: 356da6d0cde3 ("dma-mapping: bypass indirect calls for dma-direct")
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
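The fix amounts to restoring the lost negation: a missing ->dma_supported() callback means the ops impose no extra addressing limits, so the helper must return 1 only when the callback is absent. A minimal sketch of the corrected check (the dma_is_direct()/dma_direct_supported() names follow the bypass series and are assumptions here):
int dma_supported(struct device *dev, u64 mask)
{
        const struct dma_map_ops *ops = get_dma_ops(dev);

        if (dma_is_direct(ops))
                return dma_direct_supported(dev, mask);
        /* No callback means no additional restrictions from the ops. */
        if (!ops->dma_supported)
                return 1;
        return ops->dma_supported(dev, mask);
}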
Christoph Hellwig [Fri, 14 Dec 2018 08:15:02 +0000 (09:15 +0100)]
dma-mapping: deprecate dma_zalloc_coherent
We now always return zeroed memory from dma_alloc_coherent. Note that
simply passing GFP_ZERO to dma_alloc_coherent wasn't always doing the
right thing to start with given that various allocators are not backed
by the page allocator and thus would ignore GFP_ZERO.
Signed-off-by: Christoph Hellwig <hch@lst.de>
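For reference, a sketch of what the deprecated wrapper reduces to now that dma_alloc_coherent() is guaranteed to return zeroed memory; callers can then be converted to plain dma_alloc_coherent() over time:
static inline void *dma_zalloc_coherent(struct device *dev, size_t size,
                                        dma_addr_t *dma_handle, gfp_t gfp)
{
        /* No __GFP_ZERO games needed any more - the allocation is
         * always zeroed by dma_alloc_coherent() itself.
         */
        return dma_alloc_coherent(dev, size, dma_handle, gfp);
}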
Christoph Hellwig [Fri, 14 Dec 2018 08:00:40 +0000 (09:00 +0100)]
dma-mapping: zero memory returned from dma_alloc_*
If we want to map memory from the DMA allocator to userspace it must be
zeroed at allocation time to prevent stale data leaks. We already do
this on most common architectures, but some architectures don't do this
yet; fix them up, either by passing GFP_ZERO when we use the normal page
allocator or by doing a manual memset otherwise.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
Acked-by: Sam Ravnborg <sam@ravnborg.org> [sparc]
Christoph Hellwig [Sun, 16 Dec 2018 09:23:28 +0000 (10:23 +0100)]
sparc/iommu: fix ->map_sg return value
Just decrementing the sz value will lead to an incorrect return value.
Instead of just introducing a local variable, switch to the standard
for_each_sg helper and standard naming of the arguments.
Fixes: ce65d36f3e ("sparc: remove the sparc32_dma_ops indirection")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
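The pattern the fix switches to, as a hedged sketch rather than the actual sparc code (example_map_one() is a hypothetical per-entry mapping helper): iterate with for_each_sg() and return the caller's nents instead of a decremented byte count.
static dma_addr_t example_map_one(struct device *dev, phys_addr_t paddr,
                                  size_t len);   /* hypothetical helper */

static int example_map_sg(struct device *dev, struct scatterlist *sgl,
                          int nents, enum dma_data_direction dir,
                          unsigned long attrs)
{
        struct scatterlist *sg;
        int i;

        for_each_sg(sgl, sg, nents, i) {
                sg->dma_address = example_map_one(dev, sg_phys(sg),
                                                  sg->length);
                sg_dma_len(sg) = sg->length;
        }
        return nents;   /* number of mapped entries, not a size */
}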
Christoph Hellwig [Sun, 16 Dec 2018 09:20:04 +0000 (10:20 +0100)]
sparc/io-unit: fix ->map_sg return value
Just decrementing the sz value will lead to an incorrect return value.
Instead of just introducing a local variable, switch to the standard
for_each_sg helper and standard naming of the arguments.
Fixes: ce65d36f3e ("sparc: remove the sparc32_dma_ops indirection")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Christoph Hellwig [Fri, 14 Dec 2018 14:18:08 +0000 (15:18 +0100)]
arm64: default to the direct mapping in get_arch_dma_ops
Otherwise the direct mapping won't work at all given that a NULL
dev->dma_ops causes a fallback. Note that we already explicitly set
dev->dma_ops to dma_dummy_ops for dma-incapable devices, so this
fallback should not be needed anyway.
Fixes: 356da6d0cd ("dma-mapping: bypass indirect calls for dma-direct")
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reported-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
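The resulting arch hook is essentially a one-liner; a sketch of the arm64 side (exact surrounding #ifdefs omitted):
static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
{
        /* NULL now means "use dma-direct": the core bypasses indirect
         * calls whenever dev->dma_ops is not set.
         */
        return NULL;
}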
Nathan Chancellor [Sat, 15 Dec 2018 01:49:01 +0000 (18:49 -0700)]
PCI: Remove unused attr variable in pci_dma_configure
Clang warns:
drivers/pci/pci-driver.c:1603:21: error: unused variable 'attr'
[-Werror,-Wunused-variable]
Commit e5361ca29f2f ("ACPI / scan: Refactor _CCA enforcement") removed
attr's use and replaced it with its assigned value, so it is no longer
needed.
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Christoph Hellwig [Sat, 15 Dec 2018 10:01:25 +0000 (11:01 +0100)]
ia64: only select ARCH_HAS_DMA_COHERENT_TO_PFN if swiotlb is enabled
Otherwise we get a build failure in swiotlb-less configs with
non-generic kernels.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Christoph Hellwig [Thu, 6 Dec 2018 21:39:32 +0000 (13:39 -0800)]
dma-mapping: bypass indirect calls for dma-direct
Avoid expensive indirect calls in the fast path DMA mapping
operations by directly calling the dma_direct_* ops if we are using
the directly mapped DMA operations.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Tony Luck <tony.luck@intel.com>
Christoph Hellwig [Thu, 6 Dec 2018 21:37:00 +0000 (13:37 -0800)]
vmd: use the proper dma_* APIs instead of direct methods calls
With the bypass support for the direct mapping we might not always have
methods to call, so use the proper APIs instead. The only downside is
that we will create two dma-debug entries for each mapping if
CONFIG_DMA_DEBUG is enabled.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Tony Luck <tony.luck@intel.com>
Christoph Hellwig [Mon, 3 Dec 2018 10:43:54 +0000 (11:43 +0100)]
dma-direct: merge swiotlb_dma_ops into the dma_direct code
While the dma-direct code is (relatively) clean and simple we actually
have to use the swiotlb ops for the mapping on many architectures due
to devices with addressing limits. Instead of keeping two
implementations around this commit allows the dma-direct
implementation to call the swiotlb bounce buffering functions and
thus share the guts of the mapping implementation. This also
simplifies the dma-mapping setup on a few architectures where we
don't have to differentiate which implementation to use.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Tony Luck <tony.luck@intel.com>
Christoph Hellwig [Mon, 3 Dec 2018 10:14:09 +0000 (11:14 +0100)]
dma-direct: use dma_direct_map_page to implement dma_direct_map_sg
No need to duplicate the mapping logic.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Tony Luck <tony.luck@intel.com>
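A sketch of the resulting dma_direct_map_sg(), close to (but simplified from) the final kernel/dma/direct.c - in particular the error path here just bails out instead of unwinding already-mapped entries:
int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
                enum dma_data_direction dir, unsigned long attrs)
{
        struct scatterlist *sg;
        int i;

        for_each_sg(sgl, sg, nents, i) {
                sg->dma_address = dma_direct_map_page(dev, sg_page(sg),
                                sg->offset, sg->length, dir, attrs);
                if (sg->dma_address == DMA_MAPPING_ERROR)
                        return 0;       /* simplified error handling */
                sg_dma_len(sg) = sg->length;
        }
        return nents;
}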
Christoph Hellwig [Mon, 3 Dec 2018 06:43:05 +0000 (07:43 +0100)]
dma-direct: improve addressability error reporting
Only report a DMA addressability problem once to avoid spewing the
kernel log with repeated messages. Also provide a stack trace to make it
easy to find the actual caller that caused the problem.
Last but not least move the actual check into the fast path and only
leave the error reporting in a helper.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Tony Luck <tony.luck@intel.com>
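A hedged sketch of the reporting side described above (the report_addr() name and message text are illustrative, not the exact upstream code): the addressability check itself stays inline in the fast path, and the out-of-line helper prints once and emits a backtrace.
static void report_addr(struct device *dev, dma_addr_t dma_addr, size_t size)
{
        /* dev_err_once() keeps the log free of repeats ... */
        dev_err_once(dev, "overflow %pad+%zu of DMA mask %llx\n",
                     &dma_addr, size, dev->dma_mask ? *dev->dma_mask : 0);
        /* ... and a one-shot WARN provides the stack trace that points
         * at the caller which triggered the problem.
         */
        WARN_ON_ONCE(1);
}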
Christoph Hellwig [Thu, 6 Dec 2018 15:06:04 +0000 (07:06 -0800)]
swiotlb: remove dma_mark_clean
Instead of providing a special dma_mark_clean hook just for ia64, switch
ia64 to use the normal arch_sync_dma_for_cpu hooks instead.
This means that we now also set the PG_arch_1 bit for pages in the
swiotlb buffer, which isn't strictly needed as we will never execute code
out of the swiotlb buffer, but is otherwise harmless.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Tony Luck <tony.luck@intel.com>
Christoph Hellwig [Mon, 3 Dec 2018 10:42:52 +0000 (11:42 +0100)]
swiotlb: remove SWIOTLB_MAP_ERROR
We can use DMA_MAPPING_ERROR instead, which already maps to the same
value.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Tony Luck <tony.luck@intel.com>
Robin Murphy [Thu, 6 Dec 2018 21:20:49 +0000 (13:20 -0800)]
ACPI / scan: Refactor _CCA enforcement
Rather than checking the DMA attribute at each callsite, just pass it
through for acpi_dma_configure() to handle directly. That can then deal
with the relatively exceptional DEV_DMA_NOT_SUPPORTED case by explicitly
installing dummy DMA ops instead of just skipping setup entirely. This
will then free up the dev->dma_ops == NULL case for some valuable
fastpath optimisations.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Tony Luck <tony.luck@intel.com>
Robin Murphy [Thu, 6 Dec 2018 21:14:44 +0000 (13:14 -0800)]
dma-mapping: factor out dummy DMA ops
The dummy DMA ops are currently used by arm64 for any device which has
an invalid ACPI description and is thus barred from using DMA due to not
knowing whether it is cache-coherent or not. Factor these out into
general dma-mapping code so that they can be referenced from other
common code paths. In the process, we can prune all the optional
callbacks which just do the same thing as the default behaviour, and
fill in .map_resource for completeness.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
[hch: moved to a separate source file]
Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Christoph Hellwig [Thu, 6 Dec 2018 20:50:26 +0000 (12:50 -0800)]
dma-mapping: always build the direct mapping code
All architectures except for sparc64 use the dma-direct code in some
form, and even for sparc64 we had the discussion of a direct mapping
mode a while ago. In preparation for directly calling the direct
mapping code don't bother having it optionally but always build the
code in. This is a minor hardship for some powerpc and arm configs
that don't pull it in yet (although they should in a release or two),
and sparc64 which currently doesn't need it at all, but it will
reduce the ifdef mess we'd otherwise need significantly.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Tony Luck <tony.luck@intel.com>
Christoph Hellwig [Thu, 6 Dec 2018 20:47:50 +0000 (12:47 -0800)]
dma-mapping: move dma_cache_sync out of line
This isn't exactly a slow path routine, but it is not super critical
either, and moving it out of line will help to keep the include chain
clean for the following DMA indirection bypass work.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Tony Luck <tony.luck@intel.com>
Christoph Hellwig [Thu, 6 Dec 2018 20:43:30 +0000 (12:43 -0800)]
dma-mapping: move various slow path functions out of line
There is no need to have all setup and coherent allocation / freeing
routines inline. Move them out of line to keep the implementation
nicely encapsulated and save some kernel text size.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Tony Luck <tony.luck@intel.com>
Christoph Hellwig [Thu, 6 Dec 2018 20:25:54 +0000 (12:25 -0800)]
dma-mapping: move dma_get_required_mask to kernel/dma
dma_get_required_mask should really be with the rest of the DMA mapping
implementation instead of in drivers/base as a lone outlier.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Tony Luck <tony.luck@intel.com>
Christoph Hellwig [Thu, 6 Dec 2018 20:24:27 +0000 (12:24 -0800)]
dma-mapping: merge dma_unmap_page_attrs and dma_unmap_single_attrs
The two functions are exactly the same, so don't bother implementing
them twice.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Tony Luck <tony.luck@intel.com>
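The merge boils down to one of the two becoming a trivial wrapper around the other; a sketch:
static inline void dma_unmap_single_attrs(struct device *dev, dma_addr_t addr,
                size_t size, enum dma_data_direction dir, unsigned long attrs)
{
        /* Identical to the page variant - both end up in ->unmap_page(). */
        dma_unmap_page_attrs(dev, addr, size, dir, attrs);
}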
Christoph Hellwig [Mon, 3 Dec 2018 13:58:59 +0000 (14:58 +0100)]
dma-mapping: simplify the dma_sync_single_range_for_{cpu,device} implementation
We can just call the regular calls after adding the offset to the address
instead of reimplementing them.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Tony Luck <tony.luck@intel.com>
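A sketch of the simplified inlines: just forward to the regular sync helpers with the offset folded into the DMA address.
static inline void dma_sync_single_range_for_cpu(struct device *dev,
                dma_addr_t addr, unsigned long offset, size_t size,
                enum dma_data_direction dir)
{
        dma_sync_single_for_cpu(dev, addr + offset, size, dir);
}

static inline void dma_sync_single_range_for_device(struct device *dev,
                dma_addr_t addr, unsigned long offset, size_t size,
                enum dma_data_direction dir)
{
        dma_sync_single_for_device(dev, addr + offset, size, dir);
}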
Christoph Hellwig [Tue, 4 Dec 2018 16:04:31 +0000 (08:04 -0800)]
dma-mapping: remove a pointless memset in dma_atomic_pool_init
We already zero the memory after allocating it from the pool that
this function fills, and having the memset here in this form means
we can't support CMA highmem allocations.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reported-by: Russell King - ARM Linux <linux@armlinux.org.uk>
Christoph Hellwig [Wed, 12 Dec 2018 16:09:58 +0000 (17:09 +0100)]
sparc: use DT node full_name in sparc_dma_alloc_resource
The sparc tree already has this change for the pre-refactored code,
but pulling it into the dma-mapping tree like this should ease
the merge conflicts a bit.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Acked-by: David Miller <davem@davemloft.net>
Christoph Hellwig [Sat, 8 Dec 2018 17:40:03 +0000 (09:40 -0800)]
sparc: merge 32-bit and 64-bit version of pci.h
There are enough common definitions that a single header seems nicer.
Also drop the pointless <linux/dma-mapping.h> include.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Christoph Hellwig [Sat, 8 Dec 2018 17:39:12 +0000 (09:39 -0800)]
sparc: move the leon PCI memory space comment to <asm/leon.h>
It has nothing to do with the content of the pci.h header.
Suggested-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: David S. Miller <davem@davemloft.net>
Christoph Hellwig [Mon, 3 Dec 2018 13:21:58 +0000 (14:21 +0100)]
sparc: remove not required includes from dma-mapping.h
The only thing we need to explicitly pull in is the defines for the
CPU type.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: David S. Miller <davem@davemloft.net>
Christoph Hellwig [Mon, 3 Dec 2018 13:04:32 +0000 (14:04 +0100)]
sparc: remove the sparc32_dma_ops indirection
There is no good reason to have a double indirection for the sparc32
dma ops, so remove the sparc32_dma_ops and define separate dma_map_ops
instances for the different IOMMU types.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: David S. Miller <davem@davemloft.net>
Christoph Hellwig [Mon, 3 Dec 2018 13:02:26 +0000 (14:02 +0100)]
sparc: factor the dma coherent mapping into helper
Factor the code to remap memory returned from the DMA coherent allocator
into two helpers that can be shared by the IOMMU and direct mapping code.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Christoph Hellwig [Mon, 3 Dec 2018 11:28:35 +0000 (12:28 +0100)]
sparc: remove not needed sbus_dma_ops methods
No need to BUG_ON() on the cache maintenance ops - they are no-ops
by default, and there is nothing in the DMA API contract that prohibits
calling them on sbus devices (even if such drivers are unlikely to
ever appear).
Similarly a dma_supported method that always returns 0 is rather
pointless. The only thing it indicates is that no one ever calls
the method on sbus devices.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: David S. Miller <davem@davemloft.net>
Robin Murphy [Mon, 10 Dec 2018 14:00:33 +0000 (14:00 +0000)]
dma-debug: Batch dma_debug_entry allocation
DMA debug entries are one of those things which aren't that useful
individually - we will always want some larger quantity of them - and
which we don't really need to manage the exact number of - we only care
about having 'enough'. In that regard, the current behaviour of creating
them one-by-one leads to a lot of unwarranted function call overhead and
memory wasted on alignment padding.
Now that we don't have to worry about freeing anything via
dma_debug_resize_entries(), we can optimise the allocation behaviour by
grabbing whole pages at once, which will save considerably on the
aforementioned overheads, and probably offer a little more cache/TLB
locality benefit for traversing the lists under normal operation. This
should also give even less reason for an architecture-level override of
the preallocation size, so make the definition unconditional - if there
is still any desire to change the compile-time value for some platforms
it would be better off as a Kconfig option anyway.
Since freeing a whole page of entries at once becomes enough of a
challenge that it's not really worth complicating dma_debug_init(), we
may as well tweak the preallocation behaviour such that as long as we
manage to allocate *some* pages, we can leave debugging enabled on a
best-effort basis rather than otherwise wasting them.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Qian Cai <cai@lca.pw>
Signed-off-by: Christoph Hellwig <hch@lst.de>
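A hedged sketch of the page-at-a-time allocation this describes; free_entries, num_free_entries and nr_total_entries are named after dma-debug's internals from memory and should be treated as assumptions.
static int dma_debug_add_entry_page(gfp_t gfp)
{
        struct dma_debug_entry *entry;
        int n = PAGE_SIZE / sizeof(*entry);
        int i;

        /* One page worth of entries per allocation: far fewer calls and
         * no per-object alignment padding wasted.
         */
        entry = (void *)get_zeroed_page(gfp);
        if (!entry)
                return -ENOMEM;

        for (i = 0; i < n; i++)
                list_add_tail(&entry[i].list, &free_entries);

        num_free_entries += n;
        nr_total_entries += n;
        return 0;
}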
Robin Murphy [Mon, 10 Dec 2018 14:00:32 +0000 (14:00 +0000)]
dma/debug: Remove dma_debug_resize_entries()
With the only caller now gone, we can clean up this part of dma-debug's
exposed internals and make way to tweak the allocation behaviour.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Qian Cai <cai@lca.pw>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Robin Murphy [Mon, 10 Dec 2018 14:00:31 +0000 (14:00 +0000)]
x86/dma/amd-gart: Stop resizing dma_debug_entry pool
dma-debug is now capable of adding new entries to its pool on-demand if
the initial preallocation was insufficient, so the IOMMU_LEAK logic no
longer needs to explicitly change the pool size. This does lose it the
ability to save a couple of megabytes of RAM by reducing the pool size
below its default, but it seems unlikely that that is a realistic
concern these days (or indeed that anyone is actively debugging AGP
drivers' DMA usage any more). Getting rid of dma_debug_resize_entries()
will make room for further streamlining in the dma-debug code itself.
Removing the call reveals quite a lot of cruft which has been useless
for nearly a decade since commit 19c1a6f5764d ("x86 gart: reimplement
IOMMU_LEAK feature by using DMA_API_DEBUG"), including the entire
'iommu=leak' parameter, which controlled nothing except whether
dma_debug_resize_entries() was called or not.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Qian Cai <cai@lca.pw>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Robin Murphy [Mon, 10 Dec 2018 14:00:30 +0000 (14:00 +0000)]
dma-debug: Make leak-like behaviour apparent
Now that we can dynamically allocate DMA debug entries to cope with
drivers maintaining excessively large numbers of live mappings, a driver
which *does* actually have a bug leaking mappings (and is not unloaded)
will no longer trigger the "DMA-API: debugging out of memory - disabling"
message until it gets to actual kernel OOM conditions, which means it
could go unnoticed for a while. To that end, let's inform the user each
time the pool has grown to a multiple of its initial size, which should
make it apparent that they either have a leak or might want to increase
the preallocation size.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Qian Cai <cai@lca.pw>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Robin Murphy [Mon, 10 Dec 2018 14:00:29 +0000 (14:00 +0000)]
dma-debug: Dynamically expand the dma_debug_entry pool
Certain drivers such as large multi-queue network adapters can use pools
of mapped DMA buffers larger than the default dma_debug_entry pool of
65536 entries, with the result that merely probing such a device can
cause DMA debug to disable itself during boot unless explicitly given an
appropriate "dma_debug_entries=..." option.
Developers trying to debug some other driver on such a system may not be
immediately aware of this, and at worst it can hide bugs if they fail to
realise that dma-debug has already disabled itself unexpectedly by the
time their code of interest gets to run. Even once they do realise, it
can be a bit of a pain to empirically determine a suitable number of
preallocated entries to configure, short of massively over-allocating.
There's really no need for such a static limit, though, since we can
quite easily expand the pool at runtime in those rare cases that the
preallocated entries are insufficient, which is arguably the least
surprising and most useful behaviour. To that end, refactor the
prealloc_memory() logic a little bit to generalise it for runtime
reallocations as well.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Qian Cai <cai@lca.pw>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Robin Murphy [Mon, 10 Dec 2018 14:00:27 +0000 (14:00 +0000)]
dma-debug: Use pr_fmt()
Use pr_fmt() to generate the "DMA-API: " prefix consistently. This
results in it being added to a couple of pr_*() messages which were
missing it before, and for the err_printk() calls moves it to the actual
start of the message instead of somewhere in the middle.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Qian Cai <cai@lca.pw>
Signed-off-by: Christoph Hellwig <hch@lst.de>
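The mechanism for reference: pr_fmt() is expanded by every pr_*() call in the file, so defining it once before the printk include applies the prefix consistently. A minimal sketch:
#define pr_fmt(fmt)     "DMA-API: " fmt

#include <linux/printk.h>

static void example_report(void)
{
        /* Prints "DMA-API: debugging out of memory - disabling" */
        pr_err("debugging out of memory - disabling\n");
}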
Robin Murphy [Mon, 10 Dec 2018 14:00:28 +0000 (14:00 +0000)]
dma-debug: Expose nr_total_entries in debugfs
Expose nr_total_entries in debugfs, so that {num,min}_free_entries
become even more meaningful to users interested in current/maximum
utilisation. This becomes even more relevant once nr_total_entries
may change at runtime beyond just the existing AMD GART debug code.
Suggested-by: John Garry <john.garry@huawei.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Qian Cai <cai@lca.pw>
Signed-off-by: Christoph Hellwig <hch@lst.de>
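A sketch of the debugfs hook, assuming a read-only u32 file placed alongside the existing num_free_entries and min_free_entries files (variable and directory names here are illustrative):
#include <linux/debugfs.h>

static u32 nr_total_entries;    /* stands in for dma-debug's counter */

static void example_debugfs_init(struct dentry *dma_debug_dent)
{
        debugfs_create_u32("nr_total_entries", 0444, dma_debug_dent,
                           &nr_total_entries);
}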
Christoph Hellwig [Fri, 9 Nov 2018 08:51:00 +0000 (09:51 +0100)]
arch: switch the default on ARCH_HAS_SG_CHAIN
These days architectures are mostly out of the business of dealing with
struct scatterlist at all, unless they have architecture specific iommu
drivers. Replace the ARCH_HAS_SG_CHAIN symbol with an ARCH_NO_SG_CHAIN
one only enabled for architectures with horrible legacy iommu drivers
like alpha and parisc, and conditionally for arm which wants to keep it
disabled for legacy platforms.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Christoph Hellwig [Fri, 30 Nov 2018 09:59:37 +0000 (10:59 +0100)]
dma-mapping: return an error code from dma_mapping_error
Currently dma_mapping_error returns a boolean as int, with 1 meaning
error. This is rather unusual and many callers have to convert it to
errno value. The callers are highly inconsistent with error codes
ranging from -ENOMEM over -EIO, -EINVAL and -EFAULT to -EAGAIN.
Return -ENOMEM which seems to be what the largest number of callers
convert it to, and which also matches the typical error case where
we are out of resources.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
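A sketch of the helper after this change - the only behavioural difference for callers is the errno-style return:
static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
        debug_dma_mapping_error(dev, dma_addr);

        if (dma_addr == DMA_MAPPING_ERROR)
                return -ENOMEM;         /* used to be a bare "1" */
        return 0;
}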
Christoph Hellwig [Tue, 4 Dec 2018 22:33:24 +0000 (14:33 -0800)]
dma-mapping: remove the mapping_error dma_map_ops method
No users left except for vmd which just forwards it.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Christoph Hellwig [Wed, 21 Nov 2018 18:38:19 +0000 (19:38 +0100)]
xen-swiotlb: remove the mapping_error dma_map_ops method
Return DMA_MAPPING_ERROR instead of 0 on a dma mapping failure and let
the core dma-mapping code handle the rest.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Christoph Hellwig [Wed, 21 Nov 2018 18:35:19 +0000 (19:35 +0100)]
iommu/dma-iommu: remove the mapping_error dma_map_ops method
Return DMA_MAPPING_ERROR instead of 0 on a dma mapping failure and let
the core dma-mapping code handle the rest.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Christoph Hellwig [Wed, 21 Nov 2018 18:34:10 +0000 (19:34 +0100)]
iommu/vt-d: remove the mapping_error dma_map_ops method
Return DMA_MAPPING_ERROR instead of 0 on a dma mapping failure and let
the core dma-mapping code handle the rest.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Christoph Hellwig [Wed, 21 Nov 2018 18:32:03 +0000 (19:32 +0100)]
iommu/intel: small map_page cleanup
Pass the page + offset to the low-level __iommu_map_single helper
(which gets renamed to fit the new calling conventions) as both
callers have the page at hand.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Christoph Hellwig [Wed, 21 Nov 2018 18:28:34 +0000 (19:28 +0100)]
iommu: remove the mapping_error dma_map_ops method
Return DMA_MAPPING_ERROR instead of 0 on a dma mapping failure and let
the core dma-mapping code handle the rest.
Note that the existing code used AMD_IOMMU_MAPPING_ERROR to check for
a 0 return from the IOVA allocator, which is replaced with an explicit
0 as in the implementation and other users of that interface.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Christoph Hellwig [Wed, 21 Nov 2018 18:24:00 +0000 (19:24 +0100)]
x86/calgary: remove the mapping_error dma_map_ops method
Return DMA_MAPPING_ERROR instead of the magic bad_dma_addr on a dma
mapping failure and let the core dma-mapping code handle the rest.
Remove the magic EMERGENCY_PAGES that the bad_dma_addr gets redirected to.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Christoph Hellwig [Wed, 21 Nov 2018 18:20:44 +0000 (19:20 +0100)]
x86/amd_gart: remove the mapping_error dma_map_ops method
Return DMA_MAPPING_ERROR instead of the magic bad_dma_addr on a dma
mapping failure and let the core dma-mapping code handle the rest.
Remove the magic EMERGENCY_PAGES that the bad_dma_addr gets redirected to.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Christoph Hellwig [Wed, 21 Nov 2018 18:18:10 +0000 (19:18 +0100)]
ia64/sn: remove the mapping_error dma_map_ops method
Return DMA_MAPPING_ERROR instead of 0 on a dma mapping failure and let
the core dma-mapping code handle the rest.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Christoph Hellwig [Wed, 21 Nov 2018 18:16:38 +0000 (19:16 +0100)]
ia64/sba_iommu: remove the mapping_error dma_map_ops method
Return DMA_MAPPING_ERROR instead of 0 on a dma mapping failure and let
the core dma-mapping code handle the rest.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Christoph Hellwig [Wed, 21 Nov 2018 18:06:58 +0000 (19:06 +0100)]
ia64/sba_iommu: improve internal map_page users
Remove the odd sba_{un,}map_single_attrs wrappers, check errors
everywhere.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Christoph Hellwig [Wed, 21 Nov 2018 18:04:03 +0000 (19:04 +0100)]
alpha: remove the mapping_error dma_map_ops method
Return DMA_MAPPING_ERROR instead of 0 on a dma mapping failure and let
the core dma-mapping code handle the rest.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Christoph Hellwig [Wed, 21 Nov 2018 18:04:31 +0000 (19:04 +0100)]
arm64: remove the dummy_dma_ops mapping_error method
Just return DMA_MAPPING_ERROR from __dummy_map_page and let the core
dma-mapping code handle the rest.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Christoph Hellwig [Wed, 21 Nov 2018 18:00:17 +0000 (19:00 +0100)]
parisc/sba_iommu: remove the mapping_error dma_map_ops method
The SBA iommu code already returns (~(dma_addr_t)0x0) on mapping
failures, so we can switch over to returning DMA_MAPPING_ERROR and let
the core dma-mapping code handle the rest.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Christoph Hellwig [Wed, 21 Nov 2018 17:59:50 +0000 (18:59 +0100)]
parisc/ccio: remove the mapping_error dma_map_ops method
The CCIO iommu code already returns (~(dma_addr_t)0x0) on mapping
failures, so we can switch over to returning DMA_MAPPING_ERROR and let
the core dma-mapping code handle the rest.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Christoph Hellwig [Wed, 21 Nov 2018 17:59:05 +0000 (18:59 +0100)]
sparc: remove the mapping_error dma_map_ops method
Sparc already returns (~(dma_addr_t)0x0) on mapping failures, so we can
switch over to returning DMA_MAPPING_ERROR and let the core dma-mapping
code handle the rest.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Christoph Hellwig [Wed, 21 Nov 2018 17:58:15 +0000 (18:58 +0100)]
s390: remove the mapping_error dma_map_ops method
S390 already returns (~(dma_addr_t)0x0) on mapping failures, so we can
switch over to returning DMA_MAPPING_ERROR and let the core dma-mapping
code handle the rest.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Christoph Hellwig [Wed, 21 Nov 2018 18:18:58 +0000 (19:18 +0100)]
mips/jazz: remove the mapping_error dma_map_ops method
The Jazz iommu code already returns (~(dma_addr_t)0x0) on mapping
failures, so we can switch over to returning DMA_MAPPING_ERROR and
let the core dma-mapping code handle the rest.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Christoph Hellwig [Wed, 21 Nov 2018 17:56:25 +0000 (18:56 +0100)]
powerpc/iommu: remove the mapping_error dma_map_ops method
The powerpc iommu code already returns (~(dma_addr_t)0x0) on mapping
failures, so we can switch over to returning DMA_MAPPING_ERROR and let
the core dma-mapping code handle the rest.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Christoph Hellwig [Wed, 21 Nov 2018 17:57:36 +0000 (18:57 +0100)]
arm: remove the mapping_error dma_map_ops method
Arm already returns (~(dma_addr_t)0x0) on mapping failures, so we can
switch over to returning DMA_MAPPING_ERROR and let the core dma-mapping
code handle the rest.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Christoph Hellwig [Wed, 21 Nov 2018 17:52:35 +0000 (18:52 +0100)]
dma-direct: remove the mapping_error dma_map_ops method
The dma-direct code already returns (~(dma_addr_t)0x0) on mapping
failures, so we can switch over to returning DMA_MAPPING_ERROR and let
the core dma-mapping code handle the rest.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Christoph Hellwig [Wed, 21 Nov 2018 17:52:35 +0000 (18:52 +0100)]
dma-mapping: provide a generic DMA_MAPPING_ERROR
Error handling of the dma_map_single and dma_map_page APIs is a little
problematic at the moment, in that we use different encodings in the
returned dma_addr_t to indicate an error. That means we require an
additional indirect call to figure out if a dma mapping call returned
an error, and a lot of boilerplate code to implement these semantics.
Instead return the maximum addressable value as the error. As long
as we don't allow mapping single-byte ranges with single-byte alignment
this value can never be a valid return. Additionally, if drivers do
not check the return value from the dma_map* routines, this value means
they will generally not be pointing to actual memory.
Once the default value is added here we can start removing the
various mapping_error methods and just rely on this generic check.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
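From the consumer side this is the whole point: a single generic check with no per-implementation indirection. A sketch of the caller pattern (example_map() is illustrative):
#include <linux/dma-mapping.h>

static int example_map(struct device *dev, void *buf, size_t size)
{
        dma_addr_t addr = dma_map_single(dev, buf, size, DMA_TO_DEVICE);

        /* DMA_MAPPING_ERROR is ~(dma_addr_t)0, which can never be a
         * valid mapping under the alignment rules above.
         */
        if (dma_mapping_error(dev, addr))
                return -ENOMEM;

        /* ... hand addr to the device ... */

        dma_unmap_single(dev, addr, size, DMA_TO_DEVICE);
        return 0;
}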
Marek Szyprowski [Wed, 5 Dec 2018 10:14:01 +0000 (11:14 +0100)]
dma-mapping: fix lack of DMA address assignment in generic remap allocator
Commit bfd56cd60521 ("dma-mapping: support highmem in the generic remap
allocator") replaced dma_direct_alloc_pages() with __dma_direct_alloc_pages(),
which doesn't set dma_handle or zero the allocated memory. Fix it by doing
both directly in the caller function.
Fixes: bfd56cd60521 ("dma-mapping: support highmem in the generic remap allocator")
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Christoph Hellwig [Sun, 4 Nov 2018 16:47:44 +0000 (17:47 +0100)]
csky: use the generic remapping dma alloc implementation
The csky code was largely copied from arm/arm64, so switch to the
generic arm64-based implementation instead.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Guo Ren <ren_guo@c-sky.com>
Christoph Hellwig [Sun, 4 Nov 2018 16:46:21 +0000 (17:46 +0100)]
csky: don't use GFP_DMA in atomic_pool_init
csky does not implement ZONE_DMA, which means passing GFP_DMA is a no-op.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Guo Ren <ren_guo@c-sky.com>
Christoph Hellwig [Sun, 4 Nov 2018 16:53:47 +0000 (17:53 +0100)]
csky: don't select DMA_NONCOHERENT_OPS
This option is gone past Linux 4.19.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Guo Ren <ren_guo@c-sky.com>
Christoph Hellwig [Sun, 4 Nov 2018 16:51:50 +0000 (17:51 +0100)]
dma-remap: support DMA_ATTR_NO_KERNEL_MAPPING
Do not waste vmalloc space on allocations that do not require a mapping
into the kernel address space.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Christoph Hellwig [Sun, 4 Nov 2018 16:38:39 +0000 (17:38 +0100)]
dma-mapping: support highmem in the generic remap allocator
By using __dma_direct_alloc_pages we can deal entirely with struct page
instead of having to derive a kernel virtual address.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Christoph Hellwig [Sun, 4 Nov 2018 19:29:28 +0000 (20:29 +0100)]
dma-mapping: move the arm64 noncoherent alloc/free support to common code
The arm64 codebase to implement coherent dma allocation for architectures
with non-coherent DMA is a good start for a generic implementation, given
that it uses the generic remap helpers, provides the atomic pool for
allocations that can't sleep and still is relatively simple and well
tested. Move it to kernel/dma and allow architectures to opt into it
using a config symbol. Architectures just need to provide a new
arch_dma_prep_coherent helper to write back and invalidate the caches
for any memory that gets remapped for uncached access.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Christoph Hellwig [Fri, 24 Aug 2018 08:31:08 +0000 (10:31 +0200)]
dma-mapping: move the remap helpers to a separate file
The dma remap code only makes sense for non-cache-coherent architectures
(or possibly the corner case of highmem CMA allocations) and currently
is only used by arm, arm64, csky and xtensa. Split it out into a
separate file with a separate Kconfig symbol, which gets the right
copyright notice given that this code was written by Laura Abbott
working for Code Aurora at that point.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Laura Abbott <labbott@redhat.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Christoph Hellwig [Sat, 22 Sep 2018 18:47:26 +0000 (20:47 +0200)]
dma-direct: reject highmem pages from dma_alloc_from_contiguous
dma_alloc_from_contiguous can return highmem pages depending on the
setup, which a plain non-remapping DMA allocator can't handle. Detect
this case and fail the allocation.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Christoph Hellwig [Sun, 4 Nov 2018 16:27:56 +0000 (17:27 +0100)]
dma-direct: provide page based alloc/free helpers
Some architectures support remapping highmem into DMA coherent
allocations. To use the common code for them we need variants of
dma_direct_{alloc,free}_pages that do not use kernel virtual addresses.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Niklas Söderlund [Wed, 29 Aug 2018 21:29:21 +0000 (23:29 +0200)]
dma-mapping: fix return type of dma_set_max_seg_size()
The function dma_set_max_seg_size() can return either 0 on success or
-EIO on error. Change its return type from unsigned int to int to
capture this.
Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Christoph Hellwig <hch@lst.de>
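For context, a sketch of the helper with the corrected return type (this mirrors the long-standing implementation, modulo details):
static inline int dma_set_max_seg_size(struct device *dev, unsigned int size)
{
        if (dev->dma_parms) {
                dev->dma_parms->max_segment_size = size;
                return 0;
        }
        return -EIO;    /* no dma_parms attached to this device */
}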
Matias Bjørling [Mon, 26 Nov 2018 19:27:03 +0000 (11:27 -0800)]
ia64: export node_distance function
The numa_slit variable used by node_distance is available to a
module as long as it is linked at compile time. However, it is
not available to loadable modules, leading to errors such as:
ERROR: "numa_slit" [drivers/nvme/host/nvme-core.ko] undefined!
The error above is caused by the nvme multipath code that makes
use of node_distance for its path calculation. When the patch was
added, the lightnvm subsystem would select nvme and always compile
it in, so the node_distance call always succeeded.
However, when this requirement was removed, nvme could be compiled
as a module, which exposed this bug.
This patch extracts node_distance to a function and exports it.
Since ACPI depends on node_distance being a simple lookup into
numa_slit, the previous behavior is exposed as slit_distance and its
users updated.
Fixes: f333444708f82 ("nvme: take node locality into account when selecting a path")
Fixes: 73569e11032f ("lightnvm: remove dependencies on BLK_DEV_NVME and PCI")
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Mon, 26 Nov 2018 17:34:31 +0000 (09:34 -0800)]
Merge tag 'hwmon-for-v4.20-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging
Pull hwmon fixes from Guenter Roeck:
- fix temp4_type attribute permissions in w83795 driver
- fix tacho fault detection in mlxreg-fan driver
- fix current value calculations in ina2xx driver
- fix initial notification/warning in raspberrypi driver
- fix a NULL pointer access in ina2xx
* tag 'hwmon-for-v4.20-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging:
hwmon: (w83795) temp4_type has writable permission
hwmon: (mlxreg-fan) Fix macros for tacho fault reading
hwmon: (ina2xx) Fix current value calculation
hwmon: (raspberrypi) Fix initial notify
hwmon (ina2xx) Fix NULL id pointer in probe()
Linus Torvalds [Sun, 25 Nov 2018 22:19:31 +0000 (14:19 -0800)]
Linux 4.20-rc4
Linus Torvalds [Sun, 25 Nov 2018 17:24:40 +0000 (09:24 -0800)]
Merge tag 'dma-mapping-4.20-3' of git://git.infradead.org/users/hch/dma-mapping
Pull dma-mapping fixes from Christoph Hellwig:
"Two dma-direct / swiotlb regressions fixes:
- zero is a valid physical address on some arm boards, we can't use
it as the error value
- don't try to cache flush the error return value (no matter what it
is)"
* tag 'dma-mapping-4.20-3' of git://git.infradead.org/users/hch/dma-mapping:
swiotlb: Skip cache maintenance on map error
dma-direct: Make DIRECT_MAPPING_ERROR viable for SWIOTLB
Linus Torvalds [Sun, 25 Nov 2018 17:19:58 +0000 (09:19 -0800)]
Merge tag 'nfs-for-4.20-4' of git://git.linux-nfs.org/projects/trondmy/linux-nfs
Pull NFS client bugfixes from Trond Myklebust:
- Fix a NFSv4 state manager deadlock when returning a delegation
- NFSv4.2 copy do not allocate memory under the lock
- flexfiles: Use the correct stateid for IO in the tightly coupled case
* tag 'nfs-for-4.20-4' of git://git.linux-nfs.org/projects/trondmy/linux-nfs:
flexfiles: use per-mirror specified stateid for IO
NFSv4.2 copy do not allocate memory under the lock
NFSv4: Fix a NFSv4 state manager deadlock
Luc Van Oostenryck [Sun, 25 Nov 2018 16:34:05 +0000 (17:34 +0100)]
MAINTAINERS: change Sparse's maintainer
I'm taking over the maintenance of Sparse so add myself as
maintainer and move Christopher's info to CREDITS.
Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Sun, 25 Nov 2018 02:44:01 +0000 (18:44 -0800)]
Merge tag 'xarray-4.20-rc4' of git://git.infradead.org/users/willy/linux-dax
Pull XArray updates from Matthew Wilcox:
"We found some bugs in the DAX conversion to XArray (and one bug which
predated the XArray conversion). There were a couple of bugs in some
of the higher-level functions, which aren't actually being called in
today's kernel, but surfaced as a result of converting existing radix
tree & IDR users over to the XArray.
Some of the other changes to how the higher-level APIs work were also
motivated by converting various users; again, they're not in use in
today's kernel, so changing them has a low probability of introducing
a bug.
Dan can still trigger a bug in the DAX code with hot-offline/online,
and we're working on tracking that down"
* tag 'xarray-4.20-rc4' of git://git.infradead.org/users/willy/linux-dax:
XArray tests: Add missing locking
dax: Avoid losing wakeup in dax_lock_mapping_entry
dax: Fix huge page faults
dax: Fix dax_unlock_mapping_entry for PMD pages
dax: Reinstate RCU protection of inode
dax: Make sure the unlocking entry isn't locked
dax: Remove optimisation from dax_lock_mapping_entry
XArray tests: Correct some 64-bit assumptions
XArray: Correct xa_store_range
XArray: Fix Documentation
XArray: Handle NULL pointers differently for allocation
XArray: Unify xa_store and __xa_store
XArray: Add xa_store_bh() and xa_store_irq()
XArray: Turn xa_erase into an exported function
XArray: Unify xa_cmpxchg and __xa_cmpxchg
XArray: Regularise xa_reserve
nilfs2: Use xa_erase_irq
XArray: Export __xa_foo to non-GPL modules
XArray: Fix xa_for_each with a single element at 0
Linus Torvalds [Sat, 24 Nov 2018 20:58:47 +0000 (12:58 -0800)]
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid
Pull HID fixes from Jiri Kosina:
- revert of the high-resolution scrolling feature, as it breaks certain
hardware due to incompatibilities between Logitech and Microsoft
worlds. Peter Hutterer is working on a fixed implementation. Until
that is finished, revert by Benjamin Tissoires.
- revert of incorrect strncpy->strlcpy conversion in uhid, from David
Herrmann
- fix for buggy sendfile() implementation on uhid device node, from
Eric Biggers
- a few assorted device-ID specific quirks
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid:
Revert "Input: Add the `REL_WHEEL_HI_RES` event code"
Revert "HID: input: Create a utility class for counting scroll events"
Revert "HID: logitech: Add function to enable HID++ 1.0 "scrolling acceleration""
Revert "HID: logitech: Enable high-resolution scrolling on Logitech mice"
Revert "HID: logitech: Use LDJ_DEVICE macro for existing Logitech mice"
Revert "HID: logitech: fix a used uninitialized GCC warning"
Revert "HID: input: simplify/fix high-res scroll event handling"
HID: Add quirk for Primax PIXART OEM mice
HID: i2c-hid: Disable runtime PM for LG touchscreen
HID: multitouch: Add pointstick support for Cirque Touchpad
HID: steam: remove input device when a hid client is running.
Revert "HID: uhid: use strlcpy() instead of strncpy()"
HID: uhid: forbid UHID_CREATE under KERNEL_DS or elevated privileges
HID: input: Ignore battery reported by Symbol DS4308
HID: Add quirk for Microsoft PIXART OEM mouse
Linus Torvalds [Sat, 24 Nov 2018 17:42:32 +0000 (09:42 -0800)]
Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes from Catalin Marinas:
- Fix wrong conflict resolution around CONFIG_ARM64_SSBD
- Fix sparse warning on unsigned long constant
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: cpufeature: Fix mismerge of CONFIG_ARM64_SSBD block
arm64: sysreg: fix sparse warnings
Linus Torvalds [Sat, 24 Nov 2018 17:19:38 +0000 (09:19 -0800)]
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Pull networking fixes from David Miller:
1) Need to take mutex in ath9k_add_interface(), from Dan Carpenter.
2) Fix mt76 build without CONFIG_LEDS_CLASS, from Arnd Bergmann.
3) Fix socket wmem accounting in SCTP, from Xin Long.
4) Fix failed resume crash in ena driver, from Arthur Kiyanovski.
5) qed driver passes bytes instead of bits into second arg of
bitmap_weight(). From Denis Bolotin.
6) Fix reset deadlock in ibmvnic, from Juliet Kim.
7) skb_scrub_packet() needs to scrub the fwd marks too, from Petr
Machata.
8) Make sure older TCP stacks see enough dup ACKs, and avoid doing SACK
compression during this period, from Eric Dumazet.
9) Add atomicity to SMC protocol cursor handling, from Ursula Braun.
10) Don't leave dangling error pointer if bpf_prog_add() fails in
thunderx driver, from Lorenzo Bianconi. Also, when we unmap TSO
headers, set sq->tso_hdrs to NULL.
11) Fix race condition over state variables in act_police, from Davide
Caratti.
12) Disable guest csum in the presence of XDP in virtio_net, from Jason
Wang.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (64 commits)
net: gemini: Fix copy/paste error
net: phy: mscc: fix deadlock in vsc85xx_default_config
dt-bindings: dsa: Fix typo in "probed"
net: thunderx: set tso_hdrs pointer to NULL in nicvf_free_snd_queue
net: amd: add missing of_node_put()
team: no need to do team_notify_peers or team_mcast_rejoin when disabling port
virtio-net: fail XDP set if guest csum is negotiated
virtio-net: disable guest csum during XDP set
net/sched: act_police: add missing spinlock initialization
net: don't keep lonely packets forever in the gro hash
net/ipv6: re-do dad when interface has IFF_NOARP flag change
packet: copy user buffers before orphan or clone
ibmvnic: Update driver queues after change in ring size support
ibmvnic: Fix RX queue buffer cleanup
net: thunderx: set xdp_prog to NULL if bpf_prog_add fails
net/dim: Update DIM start sample after each DIM iteration
net: faraday: ftmac100: remove netif_running(netdev) check before disabling interrupts
net/smc: use after free fix in smc_wr_tx_put_slot()
net/smc: atomic SMCD cursor handling
net/smc: add SMC-D shutdown signal
...
Linus Torvalds [Sat, 24 Nov 2018 17:11:52 +0000 (09:11 -0800)]
Merge tag 'xfs-4.20-fixes-2' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux
Pull xfs fixes from Darrick Wong:
"Dave and I have continued our work fixing corruption problems that can
be found when running long-term burn-in exercisers on xfs. Here are
some patches fixing most of the problems, but there will likely be
more. :/
- Numerous corruption fixes for copy on write
- Numerous corruption fixes for blocksize < pagesize writes
- Don't miscalculate AG reservations for small final AGs
- Fix page cache truncation to work properly for reflink and extent
shifting
- Fix use-after-free when retrying failed inode/dquot buffer logging
- Fix corruptions seen when using copy_file_range in directio mode"
* tag 'xfs-4.20-fixes-2' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux:
iomap: readpages doesn't zero page tail beyond EOF
vfs: vfs_dedupe_file_range() doesn't return EOPNOTSUPP
iomap: dio data corruption and spurious errors when pipes fill
iomap: sub-block dio needs to zeroout beyond EOF
iomap: FUA is wrong for DIO O_DSYNC writes into unwritten extents
xfs: delalloc -> unwritten COW fork allocation can go wrong
xfs: flush removing page cache in xfs_reflink_remap_prep
xfs: extent shifting doesn't fully invalidate page cache
xfs: finobt AG reserves don't consider last AG can be a runt
xfs: fix transient reference count error in xfs_buf_resubmit_failed_buffers
xfs: uncached buffer tracing needs to print bno
xfs: make xfs_file_remap_range() static
xfs: fix shared extent data corruption due to missing cow reservation
Andreas Fiedler [Fri, 23 Nov 2018 23:16:34 +0000 (00:16 +0100)]
net: gemini: Fix copy/paste error
The TX stats should be started with the tx_stats_syncp; there seems
to be a copy/paste error in the driver.
Signed-off-by: Andreas Fiedler <andreas.fiedler@gmx.net>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Quentin Schulz [Fri, 23 Nov 2018 18:01:51 +0000 (19:01 +0100)]
net: phy: mscc: fix deadlock in vsc85xx_default_config
The vsc85xx_default_config function, called from the vsc85xx_config_init
function which is used by the VSC8530, VSC8531, VSC8540 and VSC8541 PHYs,
mistakenly calls phy_read and phy_write in between phy_select_page and
phy_restore_page.
phy_select_page and phy_restore_page actually take and release the MDIO
bus lock and phy_write and phy_read take and release the lock to write
or read to a PHY register.
Let's fix this deadlock by using phy_modify_paged, which correctly
handles a read followed by a write in a non-standard page.
Fixes: 6a0bfbbe20b0 ("net: phy: mscc: migrate to phy_select/restore_page functions")
Signed-off-by: Quentin Schulz <quentin.schulz@bootlin.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
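The shape of the fix as a sketch: phy_modify_paged() takes the MDIO bus lock once, switches page, does the read-modify-write and restores the page, so nothing ends up calling phy_read()/phy_write() nested inside phy_select_page()/phy_restore_page(). The page, register, mask and value below are placeholders, not the real VSC85xx numbers.
static int example_default_config(struct phy_device *phydev)
{
        /* One locked, paged read-modify-write instead of open-coded
         * phy_read()/phy_write() between phy_select_page() and
         * phy_restore_page().
         */
        return phy_modify_paged(phydev, 2 /* placeholder page */,
                                0x14 /* placeholder register */,
                                0x0070 /* placeholder mask */,
                                0x0010 /* placeholder value */);
}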
Fabio Estevam [Fri, 23 Nov 2018 17:46:50 +0000 (15:46 -0200)]
dt-bindings: dsa: Fix typo in "probed"
The correct form is "can be probed", so fix the typo.
Signed-off-by: Fabio Estevam <festevam@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lorenzo Bianconi [Fri, 23 Nov 2018 17:28:01 +0000 (18:28 +0100)]
net: thunderx: set tso_hdrs pointer to NULL in nicvf_free_snd_queue
Reset the snd_queue tso_hdrs pointer to NULL in the nicvf_free_snd_queue
routine, since it is used to check whether the tso dma descriptor queue has
been previously allocated. The issue can be triggered with the following
reproducer:
$ip link set dev enP2p1s0v0 xdpdrv obj xdp_dummy.o
$ip link set dev enP2p1s0v0 xdpdrv off
[ 341.467649] WARNING: CPU: 74 PID: 2158 at mm/vmalloc.c:1511 __vunmap+0x98/0xe0
[ 341.515010] Hardware name: GIGABYTE H270-T70/MT70-HD0, BIOS T49 02/02/2018
[ 341.521874] pstate: 60400005 (nZCv daif +PAN -UAO)
[ 341.526654] pc : __vunmap+0x98/0xe0
[ 341.530132] lr : __vunmap+0x98/0xe0
[ 341.533609] sp : ffff00001c5db860
[ 341.536913] x29: ffff00001c5db860 x28: 0000000000020000
[ 341.542214] x27: ffff810feb5090b0 x26: ffff000017e57000
[ 341.547515] x25: 0000000000000000 x24: 00000000fbd00000
[ 341.552816] x23: 0000000000000000 x22: ffff810feb5090b0
[ 341.558117] x21: 0000000000000000 x20: 0000000000000000
[ 341.563418] x19: ffff000017e57000 x18: 0000000000000000
[ 341.568719] x17: 0000000000000000 x16: 0000000000000000
[ 341.574020] x15: 0000000000000010 x14: ffffffffffffffff
[ 341.579321] x13: ffff00008985eb27 x12: ffff00000985eb2f
[ 341.584622] x11: ffff0000096b3000 x10: ffff00001c5db510
[ 341.589923] x9 : 00000000ffffffd0 x8 : ffff0000086868e8
[ 341.595224] x7 : 3430303030303030 x6 : 00000000000006ef
[ 341.600525] x5 : 00000000003fffff x4 : 0000000000000000
[ 341.605825] x3 : 0000000000000000 x2 : ffffffffffffffff
[ 341.611126] x1 : ffff0000096b3728 x0 : 0000000000000038
[ 341.616428] Call trace:
[ 341.618866] __vunmap+0x98/0xe0
[ 341.621997] vunmap+0x3c/0x50
[ 341.624961] arch_dma_free+0x68/0xa0
[ 341.628534] dma_direct_free+0x50/0x80
[ 341.632285] nicvf_free_resources+0x160/0x2d8 [nicvf]
[ 341.637327] nicvf_config_data_transfer+0x174/0x5e8 [nicvf]
[ 341.642890] nicvf_stop+0x298/0x340 [nicvf]
[ 341.647066] __dev_close_many+0x9c/0x108
[ 341.650977] dev_close_many+0xa4/0x158
[ 341.654720] rollback_registered_many+0x140/0x530
[ 341.659414] rollback_registered+0x54/0x80
[ 341.663499] unregister_netdevice_queue+0x9c/0xe8
[ 341.668192] unregister_netdev+0x28/0x38
[ 341.672106] nicvf_remove+0xa4/0xa8 [nicvf]
[ 341.676280] nicvf_shutdown+0x20/0x30 [nicvf]
[ 341.680630] pci_device_shutdown+0x44/0x88
[ 341.684720] device_shutdown+0x144/0x250
[ 341.688640] kernel_restart_prepare+0x44/0x50
[ 341.692986] kernel_restart+0x20/0x68
[ 341.696638] __se_sys_reboot+0x210/0x238
[ 341.700550] __arm64_sys_reboot+0x24/0x30
[ 341.704555] el0_svc_handler+0x94/0x110
[ 341.708382] el0_svc+0x8/0xc
[ 341.711252] ---[ end trace 3f4019c8439959c9 ]---
[ 341.715874] page:ffff7e0003ef4000 count:0 mapcount:0 mapping:0000000000000000 index:0x4
[ 341.723872] flags: 0x1fffe000000000()
[ 341.727527] raw: 001fffe000000000 ffff7e0003f1a008 ffff7e0003ef4048 0000000000000000
[ 341.735263] raw: 0000000000000004 0000000000000000 00000000ffffffff 0000000000000000
[ 341.742994] page dumped because: VM_BUG_ON_PAGE(page_ref_count(page) == 0)
where xdp_dummy.c is a simple bpf program that forwards the incoming
frames to the network stack (available here:
https://github.com/altoor/xdp_walkthrough_examples/blob/master/sample_1/xdp_dummy.c)
Fixes: 05c773f52b96 ("net: thunderx: Add basic XDP support")
Fixes: 4863dea3fab0 ("net: Adding support for Cavium ThunderX network controller")
Signed-off-by: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Yangtao Li [Thu, 22 Nov 2018 12:34:41 +0000 (07:34 -0500)]
net: amd: add missing of_node_put()
of_find_node_by_path() acquires a reference to the node
returned by it and that reference needs to be dropped by its caller.
This place doesn't do that, so fix it.
Signed-off-by: Yangtao Li <tiny.windzz@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
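The underlying rule, sketched with a placeholder path: of_find_node_by_path() hands back a node with its refcount elevated, and the caller owns that reference.
#include <linux/of.h>

static int example_lookup(void)
{
        struct device_node *np = of_find_node_by_path("/placeholder");

        if (!np)
                return -ENODEV;

        /* ... read properties from np ... */

        of_node_put(np);        /* drop the reference the lookup took */
        return 0;
}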
Hangbin Liu [Thu, 22 Nov 2018 08:15:28 +0000 (16:15 +0800)]
team: no need to do team_notify_peers or team_mcast_rejoin when disabling port
team_notify_peers() will send ARP and NA to notify peers. team_mcast_rejoin()
will send a multicast join group message to notify peers. We should do this
when enabling/changing to a new port. But it doesn't make sense to do it when
a port is disabled.
On the other hand, when we set mcast_rejoin_count to 2 and do a failover,
team_port_disable() will increase mcast_rejoin.count_pending to 2 and then
team_port_enable() will increase mcast_rejoin.count_pending to 4. We will end
up sending 4 mcast rejoin messages, which will confuse the user. The same
goes for notify_peers.count.
Fix it by deleting team_notify_peers() and team_mcast_rejoin() in
team_port_disable().
Reported-by: Liang Li <liali@redhat.com>
Fixes: fc423ff00df3a ("team: add peer notification")
Fixes: 492b200efdd20 ("team: add support for sending multicast rejoins")
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jason Wang [Thu, 22 Nov 2018 06:36:31 +0000 (14:36 +0800)]
virtio-net: fail XDP set if guest csum is negotiated
We don't support partially csumed packets since their metadata will be lost
or incorrect during XDP processing. So fail the XDP set if the guest_csum
feature is negotiated.
Fixes: f600b6905015 ("virtio_net: Add XDP support")
Reported-by: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Pavel Popa <pashinho1990@gmail.com>
Cc: David Ahern <dsahern@gmail.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jason Wang [Thu, 22 Nov 2018 06:36:30 +0000 (14:36 +0800)]
virtio-net: disable guest csum during XDP set
We don't disable VIRTIO_NET_F_GUEST_CSUM if XDP was set. This means we
can receive partially csumed packets with metadata kept in the
vnet_hdr. This may have several side effects:
- It could be overridden by header adjustment, thus it might not be
correct after XDP processing.
- There's no way to pass such metadata information through
XDP_REDIRECT to another driver.
- XDP does not support checksum offload right now.
So simply disable guest csum if possible in the case of XDP.
Fixes: 3f93522ffab2d ("virtio-net: switch off offloads on demand if possible on XDP set")
Reported-by: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Pavel Popa <pashinho1990@gmail.com>
Cc: David Ahern <dsahern@gmail.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Linus Torvalds [Fri, 23 Nov 2018 19:24:55 +0000 (11:24 -0800)]
Merge tag 'ceph-for-4.20-rc4' of https://github.com/ceph/ceph-client
Pull ceph fix from Ilya Dryomov:
"A messenger fix, marked for stable"
* tag 'ceph-for-4.20-rc4' of https://github.com/ceph/ceph-client:
libceph: fall back to sendmsg for slab pages
Linus Torvalds [Fri, 23 Nov 2018 19:20:14 +0000 (11:20 -0800)]
Merge tag 'for-linus-20181123' of git://git.kernel.dk/linux-block
Pull block fix from Jens Axboe:
"Just a single fix for this week, fixing an issue with nvme-fc"
* tag 'for-linus-20181123' of git://git.kernel.dk/linux-block:
nvme-fc: resolve io failures during connect
Davide Caratti [Wed, 21 Nov 2018 17:23:53 +0000 (18:23 +0100)]
net/sched: act_police: add missing spinlock initialization
commit f2cbd4852820 ("net/sched: act_police: fix race condition on state
variables") introduces a new spinlock, but forgets its initialization.
Ensure that tcf_police_init() initializes 'tcfp_lock' every time a 'police'
action is newly created, to avoid the following lockdep splat:
INFO: trying to register non-static key.
the code is fine but needs lockdep annotation.
turning off the locking correctness validator.
<...>
Call Trace:
dump_stack+0x85/0xcb
register_lock_class+0x581/0x590
__lock_acquire+0xd4/0x1330
? tcf_police_init+0x2fa/0x650 [act_police]
? lock_acquire+0x9e/0x1a0
lock_acquire+0x9e/0x1a0
? tcf_police_init+0x2fa/0x650 [act_police]
? tcf_police_init+0x55a/0x650 [act_police]
_raw_spin_lock_bh+0x34/0x40
? tcf_police_init+0x2fa/0x650 [act_police]
tcf_police_init+0x2fa/0x650 [act_police]
tcf_action_init_1+0x384/0x4c0
tcf_action_init+0xf6/0x160
tcf_action_add+0x73/0x170
tc_ctl_action+0x122/0x160
rtnetlink_rcv_msg+0x2a4/0x490
? netlink_deliver_tap+0x99/0x400
? validate_linkmsg+0x370/0x370
netlink_rcv_skb+0x4d/0x130
netlink_unicast+0x196/0x230
netlink_sendmsg+0x2e5/0x3e0
sock_sendmsg+0x36/0x40
___sys_sendmsg+0x280/0x2f0
? _raw_spin_unlock+0x24/0x30
? handle_pte_fault+0xafe/0xf30
? find_held_lock+0x2d/0x90
? syscall_trace_enter+0x1df/0x360
? __sys_sendmsg+0x5e/0xa0
__sys_sendmsg+0x5e/0xa0
do_syscall_64+0x60/0x210
entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x7f1841c7cf10
Code: c3 48 8b 05 82 6f 2c 00 f7 db 64 89 18 48 83 cb ff eb dd 0f 1f 80 00 00 00 00 83 3d 8d d0 2c 00 00 75 10 b8 2e 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 31 c3 48 83 ec 08 e8 ae cc 00 00 48 89 04 24
RSP: 002b:00007ffcf9df4d68 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007f1841c7cf10
RDX: 0000000000000000 RSI: 00007ffcf9df4dc0 RDI: 0000000000000003
RBP: 000000005bf56105 R08: 0000000000000002 R09: 00007ffcf9df8edc
R10: 00007ffcf9df47e0 R11: 0000000000000246 R12: 0000000000671be0
R13: 00007ffcf9df4e84 R14: 0000000000000008 R15: 0000000000000000
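A minimal sketch of the kind of fix described above, assuming the lock lives in
struct tcf_police as 'tcfp_lock' (per the message) and that to_police() is the
usual act_police cast helper:

    /* in tcf_police_init(), once the 'police' action instance exists */
    struct tcf_police *police = to_police(*a);

    spin_lock_init(&police->tcfp_lock);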
Fixes: f2cbd4852820 ("net/sched: act_police: fix race condition on state variables")
Reported-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: Davide Caratti <dcaratti@redhat.com>
Acked-by: Cong Wang <xiyou.wangcong@gmail.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Paolo Abeni [Wed, 21 Nov 2018 17:21:35 +0000 (18:21 +0100)]
net: don't keep lonely packets forever in the gro hash
Eric noted that with UDP GRO and NAPI timeout, we could keep a single
UDP packet inside the GRO hash forever, if the related NAPI instance
calls napi_gro_complete() at a higher frequency than the NAPI timeout.
Willem noted that even TCP packets could be trapped there until the
next retransmission.
This patch tries to address the issue, flushing the old packets -
those with a NAPI_GRO_CB age before the current jiffy - before scheduling
the NAPI timeout. The rationale is that such a timeout should be
well below a jiffy and we are not flushing packets eligible for sane GRO.
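A sketch of the approach, assuming napi_gro_flush() takes a "flush old entries
only" flag and that this runs where the NAPI timeout is armed (details may differ
from the actual patch):

    /* in napi_complete_done(): bound how long packets can sit in the
     * GRO hash when a napi timeout is in use */
    if (n->gro_bitmask) {
            unsigned long timeout = 0;

            if (work_done)
                    timeout = n->dev->gro_flush_timeout;

            /* flush only entries older than the current jiffy, so
             * packets still eligible for merging are left alone */
            napi_gro_flush(n, !!timeout);
            if (timeout)
                    hrtimer_start(&n->timer, ns_to_ktime(timeout),
                                  HRTIMER_MODE_REL_PINNED);
    }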
v1 -> v2:
- clarified the commit message and comment
RFC -> v1:
- added 'Fixes tags', cleaned-up the wording.
Reported-by: Eric Dumazet <eric.dumazet@gmail.com>
Fixes: 3b47d30396ba ("net: gro: add a per device gro flush timer")
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hangbin Liu [Wed, 21 Nov 2018 13:52:33 +0000 (21:52 +0800)]
net/ipv6: re-do dad when interface has IFF_NOARP flag change
When we add a new IPv6 address, we should also join the corresponding solicited-node
multicast address, unless the interface has the IFF_NOARP flag, as
addrconf_join_solict() does. But if the IFF_NOARP flag is removed later, we do
not redo DAD or join the mcast address, so we will drop the neighbour
discovery messages that come from other nodes.
A typical example is creating an ipvlan in mode l3, setting up an IPv6
address and then changing the mode to l2. We will not be able to ping this
address because the interface never joined the related solicited-node mcast address.
Fix it by re-doing DAD when the interface's IFF_NOARP flag changes. Then we will join the
corresponding mcast group and check whether there is a duplicate address on the
network.
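A rough sketch of the idea; the helper name is illustrative and locking is
simplified (the real change hooks the existing addrconf notifier path):

    /* on a NETDEV_CHANGE event, if IFF_NOARP was just cleared, redo DAD
     * for every address so the solicited-node groups get joined */
    static void addrconf_redo_dad(struct inet6_dev *idev)
    {
            struct inet6_ifaddr *ifp;

            list_for_each_entry(ifp, &idev->addr_list, if_list)
                    addrconf_dad_start(ifp);  /* joins solicited-node mcast,
                                               * then sends the NS probes */
    }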
Reported-by: Jianlin Shi <jishi@redhat.com>
Reviewed-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Linus Torvalds [Fri, 23 Nov 2018 19:15:27 +0000 (11:15 -0800)]
Merge tag 'iommu-fixes-v4.20-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu
Pull IOMMU fixes from Joerg Roedel:
- Two fixes for the Intel VT-d driver to fix a NULL-ptr dereference and
a mismatch in an allocate/free path (allocated with memremap, freed
with iounmap)
- Fix for a crash in the Renesas IOMMU driver
- Fix for the Advanced Virtual Interrupt Controller (AVIC) code in the
AMD IOMMU driver
* tag 'iommu-fixes-v4.20-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu:
iommu/vt-d: Use memunmap to free memremap
amd/iommu: Fix Guest Virtual APIC Log Tail Address Register
iommu/ipmmu-vmsa: Fix crash on early domain free
iommu/vt-d: Fix NULL pointer dereference in prq_event_thread()
Willem de Bruijn [Tue, 20 Nov 2018 18:00:18 +0000 (13:00 -0500)]
packet: copy user buffers before orphan or clone
tpacket_snd sends packets with user pages linked into skb frags. It
notifies that pages can be reused when the skb is released by setting
skb->destructor to tpacket_destruct_skb.
This can cause data corruption if the skb is orphaned (e.g., on
transmit through veth) or cloned (e.g., on mirror to another psock).
Create a kernel-private copy of data in these cases, same as tun/tap
zerocopy transmission. Reuse that infrastructure: mark the skb as
SKBTX_ZEROCOPY_FRAG, which will trigger copy in skb_orphan_frags(_rx).
Unlike other zerocopy packets, do not set shinfo destructor_arg to
struct ubuf_info. tpacket_destruct_skb already uses that ptr to notify
when the original skb is released and a timestamp is recorded. Do not
change this timestamp behavior. The ubuf_info->callback is not needed
anyway, as no zerocopy notification is expected.
Mark destructor_arg as not-a-uarg by setting the lower bit to 1. The
resulting value is not a valid ubuf_info pointer, nor a valid
tpacket_snd frame address. Add skb_zcopy_.._nouarg helpers for this.
The fix relies on features introduced in commit 52267790ef52 ("sock: add
MSG_ZEROCOPY"), so it can be backported as-is only to 4.14.
Tested with `./in_netns.sh ./txring_overwrite` from
http://github.com/wdebruij/kerneltools/tests
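A sketch of helpers along the lines the message describes, tagging
destructor_arg with its low bit (names follow the skb_zcopy_.._nouarg pattern;
exact details may differ from the patch):

    static inline void skb_zcopy_set_nouarg(struct sk_buff *skb, void *val)
    {
            skb_shinfo(skb)->destructor_arg = (void *)((uintptr_t)val | 0x1UL);
            skb_shinfo(skb)->tx_flags |= SKBTX_ZEROCOPY_FRAG;
    }

    static inline bool skb_zcopy_is_nouarg(struct sk_buff *skb)
    {
            return (uintptr_t)skb_shinfo(skb)->destructor_arg & 0x1UL;
    }

tpacket_destruct_skb can then mask the low bit back off to recover its frame
pointer, while skb_orphan_frags(_rx) sees a zerocopy skb and copies the user pages.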
Fixes: 69e3c75f4d54 ("net: TX_RING and packet mmap")
Reported-by: Anand H. Krishnan <anandhkrishnan@gmail.com>
Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Linus Torvalds [Fri, 23 Nov 2018 18:56:16 +0000 (10:56 -0800)]
Merge tag 'acpi-4.20-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull ACPI fix from Rafael Wysocki:
"Prevent the ACPI core from registering a platform device for the
SMB0001 HID to avoid IRQ allocation issues (Hans de Goede)"
* tag 'acpi-4.20-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
ACPI / platform: Add SMB0001 HID to forbidden_id_list