mirror of https://github.com/Fishwaldo/build.git
synced 2025-07-11 15:38:48 +00:00
* add upstream patches
* Add more patches

Co-authored-by: Igor Pecovnik <igor.pecovnik@gmail.com>
11590 lines
386 KiB
diff --git a/Documentation/ABI/testing/sysfs-devices-memory b/Documentation/ABI/testing/sysfs-devices-memory
index 2da2b1fba2c1c..16a727a611b1e 100644
--- a/Documentation/ABI/testing/sysfs-devices-memory
+++ b/Documentation/ABI/testing/sysfs-devices-memory
@@ -26,8 +26,9 @@ Date: September 2008
Contact: Badari Pulavarty <pbadari@us.ibm.com>
Description:
The file /sys/devices/system/memory/memoryX/phys_device
- is read-only and is designed to show the name of physical
- memory device. Implementation is currently incomplete.
+ is read-only; it is a legacy interface only ever used on s390x
+ to expose the covered storage increment.
+Users: Legacy s390-tools lsmem/chmem

What: /sys/devices/system/memory/memoryX/phys_index
Date: September 2008
diff --git a/Documentation/admin-guide/mm/memory-hotplug.rst b/Documentation/admin-guide/mm/memory-hotplug.rst
index 5c4432c96c4b6..245739f55ac7d 100644
--- a/Documentation/admin-guide/mm/memory-hotplug.rst
+++ b/Documentation/admin-guide/mm/memory-hotplug.rst
@@ -160,8 +160,8 @@ Under each memory block, you can see 5 files:

"online_movable", "online", "offline" command
which will be performed on all sections in the block.
-``phys_device`` read-only: designed to show the name of physical memory
- device. This is not well implemented now.
+``phys_device`` read-only: legacy interface only ever used on s390x to
+ expose the covered storage increment.
``removable`` read-only: contains an integer value indicating
whether the memory block is removable or not
removable. A value of 1 indicates that the memory
diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
index 654649556306f..7272a4bd74dd0 100644
--- a/Documentation/gpu/todo.rst
+++ b/Documentation/gpu/todo.rst
@@ -560,6 +560,27 @@ Some of these date from the very introduction of KMS in 2008 ...

Level: Intermediate

+Remove automatic page mapping from dma-buf importing
+----------------------------------------------------
+
+When importing dma-bufs, the dma-buf and PRIME frameworks automatically map
+imported pages into the importer's DMA area. drm_gem_prime_fd_to_handle() and
+drm_gem_prime_handle_to_fd() require that importers call dma_buf_attach()
+even if they never do actual device DMA, but only CPU access through
+dma_buf_vmap(). This is a problem for USB devices, which do not support DMA
+operations.
+
+To fix the issue, automatic page mappings should be removed from the
+buffer-sharing code. Fixing this is a bit more involved, since the import/export
+cache is also tied to &drm_gem_object.import_attach. Meanwhile we paper over
+this problem for USB devices by fishing out the USB host controller device, as
+long as that supports DMA. Otherwise importing can still needlessly fail.
+
+Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
+
+Level: Advanced
+
+
Better Testing
==============

diff --git a/Documentation/networking/netdev-FAQ.rst b/Documentation/networking/netdev-FAQ.rst
index 4b9ed5874d5ad..be88ab15e53ce 100644
--- a/Documentation/networking/netdev-FAQ.rst
+++ b/Documentation/networking/netdev-FAQ.rst
@@ -144,77 +144,13 @@ Please send incremental versions on top of what has been merged in order to fix
the patches the way they would look like if your latest patch series was to be
merged.

-Q: How can I tell what patches are queued up for backporting to the various stable releases?
---------------------------------------------------------------------------------------------
-A: Normally Greg Kroah-Hartman collects stable commits himself, but for
-networking, Dave collects up patches he deems critical for the
-networking subsystem, and then hands them off to Greg.
-
-There is a patchworks queue that you can see here:
-
- https://patchwork.kernel.org/bundle/netdev/stable/?state=*
-
-It contains the patches which Dave has selected, but not yet handed off
-to Greg. If Greg already has the patch, then it will be here:
-
- https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git
-
-A quick way to find whether the patch is in this stable-queue is to
-simply clone the repo, and then git grep the mainline commit ID, e.g.
-::
-
- stable-queue$ git grep -l 284041ef21fdf2e
- releases/3.0.84/ipv6-fix-possible-crashes-in-ip6_cork_release.patch
- releases/3.4.51/ipv6-fix-possible-crashes-in-ip6_cork_release.patch
- releases/3.9.8/ipv6-fix-possible-crashes-in-ip6_cork_release.patch
- stable/stable-queue$
-
-Q: I see a network patch and I think it should be backported to stable.
------------------------------------------------------------------------
-Q: Should I request it via stable@vger.kernel.org like the references in
-the kernel's Documentation/process/stable-kernel-rules.rst file say?
-A: No, not for networking. Check the stable queues as per above first
-to see if it is already queued. If not, then send a mail to netdev,
-listing the upstream commit ID and why you think it should be a stable
-candidate.
-
-Before you jump to go do the above, do note that the normal stable rules
-in :ref:`Documentation/process/stable-kernel-rules.rst <stable_kernel_rules>`
-still apply. So you need to explicitly indicate why it is a critical
-fix and exactly what users are impacted. In addition, you need to
-convince yourself that you *really* think it has been overlooked,
-vs. having been considered and rejected.
-
-Generally speaking, the longer it has had a chance to "soak" in
-mainline, the better the odds that it is an OK candidate for stable. So
-scrambling to request a commit be added the day after it appears should
-be avoided.
-
-Q: I have created a network patch and I think it should be backported to stable.
---------------------------------------------------------------------------------
-Q: Should I add a Cc: stable@vger.kernel.org like the references in the
-kernel's Documentation/ directory say?
-A: No. See above answer. In short, if you think it really belongs in
-stable, then ensure you write a decent commit log that describes who
-gets impacted by the bug fix and how it manifests itself, and when the
-bug was introduced. If you do that properly, then the commit will get
-handled appropriately and most likely get put in the patchworks stable
-queue if it really warrants it.
-
-If you think there is some valid information relating to it being in
-stable that does *not* belong in the commit log, then use the three dash
-marker line as described in
-:ref:`Documentation/process/submitting-patches.rst <the_canonical_patch_format>`
-to temporarily embed that information into the patch that you send.
-
-Q: Are all networking bug fixes backported to all stable releases?
-------------------------------------------------------------------
-A: Due to capacity, Dave could only take care of the backports for the
-last two stable releases. For earlier stable releases, each stable
-branch maintainer is supposed to take care of them. If you find any
-patch is missing from an earlier stable branch, please notify
-stable@vger.kernel.org with either a commit ID or a formal patch
-backported, and CC Dave and other relevant networking developers.
+Q: Are there special rules regarding stable submissions on netdev?
+---------------------------------------------------------------
+While it used to be the case that netdev submissions were not supposed
+to carry explicit ``CC: stable@vger.kernel.org`` tags that is no longer
+the case today. Please follow the standard stable rules in
+:ref:`Documentation/process/stable-kernel-rules.rst <stable_kernel_rules>`,
+and make sure you include appropriate Fixes tags!

Q: Is the comment style convention different for the networking content?
------------------------------------------------------------------------
diff --git a/Documentation/process/stable-kernel-rules.rst b/Documentation/process/stable-kernel-rules.rst
index 3973556250e17..003c865e9c212 100644
--- a/Documentation/process/stable-kernel-rules.rst
+++ b/Documentation/process/stable-kernel-rules.rst
@@ -35,12 +35,6 @@ Rules on what kind of patches are accepted, and which ones are not, into the
Procedure for submitting patches to the -stable tree
----------------------------------------------------

- - If the patch covers files in net/ or drivers/net please follow netdev stable
- submission guidelines as described in
- :ref:`Documentation/networking/netdev-FAQ.rst <netdev-FAQ>`
- after first checking the stable networking queue at
- https://patchwork.kernel.org/bundle/netdev/stable/?state=*
- to ensure the requested patch is not already queued up.
- Security patches should not be handled (solely) by the -stable review
process but should follow the procedures in
:ref:`Documentation/admin-guide/security-bugs.rst <securitybugs>`.
diff --git a/Documentation/process/submitting-patches.rst b/Documentation/process/submitting-patches.rst
index 83d9a82055a78..5a267f5d1a501 100644
--- a/Documentation/process/submitting-patches.rst
+++ b/Documentation/process/submitting-patches.rst
@@ -250,11 +250,6 @@ should also read
:ref:`Documentation/process/stable-kernel-rules.rst <stable_kernel_rules>`
in addition to this file.

-Note, however, that some subsystem maintainers want to come to their own
-conclusions on which patches should go to the stable trees. The networking
-maintainer, in particular, would rather not see individual developers
-adding lines like the above to their patches.
-
If changes affect userland-kernel interfaces, please send the MAN-PAGES
maintainer (as listed in the MAINTAINERS file) a man-pages patch, or at
least a notification of the change, so that some information makes its way
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 4ba0df574eb25..a5d27553d59c9 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -182,6 +182,9 @@ is dependent on the CPU capability and the kernel configuration. The limit can
be retrieved using KVM_CAP_ARM_VM_IPA_SIZE of the KVM_CHECK_EXTENSION
ioctl() at run-time.

+Creation of the VM will fail if the requested IPA size (whether it is
+implicit or explicit) is unsupported on the host.
+
Please note that configuring the IPA size does not affect the capability
exposed by the guest CPUs in ID_AA64MMFR0_EL1[PARange]. It only affects
size of the address translated by the stage2 level (guest physical to
diff --git a/Makefile b/Makefile
index 7fdb78b48f556..3a435c928e750 100644
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 5
PATCHLEVEL = 10
-SUBLEVEL = 23
+SUBLEVEL = 24
EXTRAVERSION =
NAME = Dare mighty things

@@ -1247,9 +1247,15 @@ define filechk_utsrelease.h
endef

define filechk_version.h
- echo \#define LINUX_VERSION_CODE $(shell \
- expr $(VERSION) \* 65536 + 0$(PATCHLEVEL) \* 256 + 0$(SUBLEVEL)); \
- echo '#define KERNEL_VERSION(a,b,c) (((a) << 16) + ((b) << 8) + (c))'
+ if [ $(SUBLEVEL) -gt 255 ]; then \
+ echo \#define LINUX_VERSION_CODE $(shell \
+ expr $(VERSION) \* 65536 + 0$(PATCHLEVEL) \* 256 + 255); \
+ else \
+ echo \#define LINUX_VERSION_CODE $(shell \
+ expr $(VERSION) \* 65536 + 0$(PATCHLEVEL) \* 256 + $(SUBLEVEL)); \
+ fi; \
+ echo '#define KERNEL_VERSION(a,b,c) (((a) << 16) + ((b) << 8) + \
+ ((c) > 255 ? 255 : (c)))'
endef

$(version_h): FORCE
diff --git a/arch/arm/boot/compressed/head.S b/arch/arm/boot/compressed/head.S
index a0de09f994d88..247ce90559901 100644
--- a/arch/arm/boot/compressed/head.S
+++ b/arch/arm/boot/compressed/head.S
@@ -1440,8 +1440,7 @@ ENTRY(efi_enter_kernel)
mov r4, r0 @ preserve image base
mov r8, r1 @ preserve DT pointer

- ARM( adrl r0, call_cache_fn )
- THUMB( adr r0, call_cache_fn )
+ adr_l r0, call_cache_fn
adr r1, 0f @ clean the region of code we
bl cache_clean_flush @ may run with the MMU off

diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
index feac2c8b86f29..72627c5fb3b2c 100644
--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -494,4 +494,88 @@ THUMB( orr \reg , \reg , #PSR_T_BIT )
#define _ASM_NOKPROBE(entry)
#endif

+ .macro __adldst_l, op, reg, sym, tmp, c
+ .if __LINUX_ARM_ARCH__ < 7
+ ldr\c \tmp, .La\@
+ .subsection 1
+ .align 2
+.La\@: .long \sym - .Lpc\@
+ .previous
+ .else
+ .ifnb \c
+ THUMB( ittt \c )
+ .endif
+ movw\c \tmp, #:lower16:\sym - .Lpc\@
+ movt\c \tmp, #:upper16:\sym - .Lpc\@
+ .endif
+
+#ifndef CONFIG_THUMB2_KERNEL
+ .set .Lpc\@, . + 8 // PC bias
+ .ifc \op, add
+ add\c \reg, \tmp, pc
+ .else
+ \op\c \reg, [pc, \tmp]
+ .endif
+#else
+.Lb\@: add\c \tmp, \tmp, pc
+ /*
+ * In Thumb-2 builds, the PC bias depends on whether we are currently
+ * emitting into a .arm or a .thumb section. The size of the add opcode
+ * above will be 2 bytes when emitting in Thumb mode and 4 bytes when
+ * emitting in ARM mode, so let's use this to account for the bias.
+ */
+ .set .Lpc\@, . + (. - .Lb\@)
+
+ .ifnc \op, add
+ \op\c \reg, [\tmp]
+ .endif
+#endif
+ .endm
+
+ /*
+ * mov_l - move a constant value or [relocated] address into a register
+ */
+ .macro mov_l, dst:req, imm:req
+ .if __LINUX_ARM_ARCH__ < 7
+ ldr \dst, =\imm
+ .else
+ movw \dst, #:lower16:\imm
+ movt \dst, #:upper16:\imm
+ .endif
+ .endm
+
+ /*
+ * adr_l - adr pseudo-op with unlimited range
+ *
+ * @dst: destination register
+ * @sym: name of the symbol
+ * @cond: conditional opcode suffix
+ */
+ .macro adr_l, dst:req, sym:req, cond
+ __adldst_l add, \dst, \sym, \dst, \cond
+ .endm
+
+ /*
+ * ldr_l - ldr <literal> pseudo-op with unlimited range
+ *
+ * @dst: destination register
+ * @sym: name of the symbol
+ * @cond: conditional opcode suffix
+ */
+ .macro ldr_l, dst:req, sym:req, cond
+ __adldst_l ldr, \dst, \sym, \dst, \cond
+ .endm
+
+ /*
+ * str_l - str <literal> pseudo-op with unlimited range
+ *
+ * @src: source register
+ * @sym: name of the symbol
+ * @tmp: mandatory scratch register
+ * @cond: conditional opcode suffix
+ */
+ .macro str_l, src:req, sym:req, tmp:req, cond
+ __adldst_l str, \src, \sym, \tmp, \cond
+ .endm
+
#endif /* __ASM_ASSEMBLER_H__ */
diff --git a/arch/arm/kernel/iwmmxt.S b/arch/arm/kernel/iwmmxt.S
index 0dcae787b004d..d2b4ac06e4ed8 100644
--- a/arch/arm/kernel/iwmmxt.S
+++ b/arch/arm/kernel/iwmmxt.S
@@ -16,6 +16,7 @@
#include <asm/thread_info.h>
#include <asm/asm-offsets.h>
#include <asm/assembler.h>
+#include "iwmmxt.h"

#if defined(CONFIG_CPU_PJ4) || defined(CONFIG_CPU_PJ4B)
#define PJ4(code...) code
@@ -113,33 +114,33 @@ concan_save:

concan_dump:

- wstrw wCSSF, [r1, #MMX_WCSSF]
- wstrw wCASF, [r1, #MMX_WCASF]
- wstrw wCGR0, [r1, #MMX_WCGR0]
- wstrw wCGR1, [r1, #MMX_WCGR1]
- wstrw wCGR2, [r1, #MMX_WCGR2]
- wstrw wCGR3, [r1, #MMX_WCGR3]
+ wstrw wCSSF, r1, MMX_WCSSF
+ wstrw wCASF, r1, MMX_WCASF
+ wstrw wCGR0, r1, MMX_WCGR0
+ wstrw wCGR1, r1, MMX_WCGR1
+ wstrw wCGR2, r1, MMX_WCGR2
+ wstrw wCGR3, r1, MMX_WCGR3

1: @ MUP? wRn
tst r2, #0x2
beq 2f

- wstrd wR0, [r1, #MMX_WR0]
- wstrd wR1, [r1, #MMX_WR1]
- wstrd wR2, [r1, #MMX_WR2]
- wstrd wR3, [r1, #MMX_WR3]
- wstrd wR4, [r1, #MMX_WR4]
- wstrd wR5, [r1, #MMX_WR5]
- wstrd wR6, [r1, #MMX_WR6]
- wstrd wR7, [r1, #MMX_WR7]
- wstrd wR8, [r1, #MMX_WR8]
- wstrd wR9, [r1, #MMX_WR9]
- wstrd wR10, [r1, #MMX_WR10]
- wstrd wR11, [r1, #MMX_WR11]
- wstrd wR12, [r1, #MMX_WR12]
- wstrd wR13, [r1, #MMX_WR13]
- wstrd wR14, [r1, #MMX_WR14]
- wstrd wR15, [r1, #MMX_WR15]
+ wstrd wR0, r1, MMX_WR0
+ wstrd wR1, r1, MMX_WR1
+ wstrd wR2, r1, MMX_WR2
+ wstrd wR3, r1, MMX_WR3
+ wstrd wR4, r1, MMX_WR4
+ wstrd wR5, r1, MMX_WR5
+ wstrd wR6, r1, MMX_WR6
+ wstrd wR7, r1, MMX_WR7
+ wstrd wR8, r1, MMX_WR8
+ wstrd wR9, r1, MMX_WR9
+ wstrd wR10, r1, MMX_WR10
+ wstrd wR11, r1, MMX_WR11
+ wstrd wR12, r1, MMX_WR12
+ wstrd wR13, r1, MMX_WR13
+ wstrd wR14, r1, MMX_WR14
+ wstrd wR15, r1, MMX_WR15

2: teq r0, #0 @ anything to load?
reteq lr @ if not, return
@@ -147,30 +148,30 @@ concan_dump:
concan_load:

@ Load wRn
- wldrd wR0, [r0, #MMX_WR0]
- wldrd wR1, [r0, #MMX_WR1]
- wldrd wR2, [r0, #MMX_WR2]
- wldrd wR3, [r0, #MMX_WR3]
- wldrd wR4, [r0, #MMX_WR4]
- wldrd wR5, [r0, #MMX_WR5]
- wldrd wR6, [r0, #MMX_WR6]
- wldrd wR7, [r0, #MMX_WR7]
- wldrd wR8, [r0, #MMX_WR8]
- wldrd wR9, [r0, #MMX_WR9]
- wldrd wR10, [r0, #MMX_WR10]
- wldrd wR11, [r0, #MMX_WR11]
- wldrd wR12, [r0, #MMX_WR12]
- wldrd wR13, [r0, #MMX_WR13]
- wldrd wR14, [r0, #MMX_WR14]
- wldrd wR15, [r0, #MMX_WR15]
+ wldrd wR0, r0, MMX_WR0
+ wldrd wR1, r0, MMX_WR1
+ wldrd wR2, r0, MMX_WR2
+ wldrd wR3, r0, MMX_WR3
+ wldrd wR4, r0, MMX_WR4
+ wldrd wR5, r0, MMX_WR5
+ wldrd wR6, r0, MMX_WR6
+ wldrd wR7, r0, MMX_WR7
+ wldrd wR8, r0, MMX_WR8
+ wldrd wR9, r0, MMX_WR9
+ wldrd wR10, r0, MMX_WR10
+ wldrd wR11, r0, MMX_WR11
+ wldrd wR12, r0, MMX_WR12
+ wldrd wR13, r0, MMX_WR13
+ wldrd wR14, r0, MMX_WR14
+ wldrd wR15, r0, MMX_WR15

@ Load wCx
- wldrw wCSSF, [r0, #MMX_WCSSF]
- wldrw wCASF, [r0, #MMX_WCASF]
- wldrw wCGR0, [r0, #MMX_WCGR0]
- wldrw wCGR1, [r0, #MMX_WCGR1]
- wldrw wCGR2, [r0, #MMX_WCGR2]
- wldrw wCGR3, [r0, #MMX_WCGR3]
+ wldrw wCSSF, r0, MMX_WCSSF
+ wldrw wCASF, r0, MMX_WCASF
+ wldrw wCGR0, r0, MMX_WCGR0
+ wldrw wCGR1, r0, MMX_WCGR1
+ wldrw wCGR2, r0, MMX_WCGR2
+ wldrw wCGR3, r0, MMX_WCGR3

@ clear CUP/MUP (only if r1 != 0)
teq r1, #0
diff --git a/arch/arm/kernel/iwmmxt.h b/arch/arm/kernel/iwmmxt.h
new file mode 100644
index 0000000000000..fb627286f5bb9
--- /dev/null
+++ b/arch/arm/kernel/iwmmxt.h
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __IWMMXT_H__
+#define __IWMMXT_H__
+
+.irp b, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15
+.set .LwR\b, \b
+.set .Lr\b, \b
+.endr
+
+.set .LwCSSF, 0x2
+.set .LwCASF, 0x3
+.set .LwCGR0, 0x8
+.set .LwCGR1, 0x9
+.set .LwCGR2, 0xa
+.set .LwCGR3, 0xb
+
+.macro wldrd, reg:req, base:req, offset:req
+.inst 0xedd00100 | (.L\reg << 12) | (.L\base << 16) | (\offset >> 2)
+.endm
+
+.macro wldrw, reg:req, base:req, offset:req
+.inst 0xfd900100 | (.L\reg << 12) | (.L\base << 16) | (\offset >> 2)
+.endm
+
+.macro wstrd, reg:req, base:req, offset:req
+.inst 0xedc00100 | (.L\reg << 12) | (.L\base << 16) | (\offset >> 2)
+.endm
+
+.macro wstrw, reg:req, base:req, offset:req
+.inst 0xfd800100 | (.L\reg << 12) | (.L\base << 16) | (\offset >> 2)
+.endm
+
+#ifdef __clang__
+
+#define wCon c1
+
+.macro tmrc, dest:req, control:req
+mrc p1, 0, \dest, \control, c0, 0
+.endm
+
+.macro tmcr, control:req, src:req
+mcr p1, 0, \src, \control, c0, 0
+.endm
+#endif
+
+#endif
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 54387ccd1ab26..044bb9e2cd74f 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -49,7 +49,7 @@
#define __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context 2
#define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa 3
#define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid 4
-#define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_local_vmid 5
+#define __KVM_HOST_SMCCC_FUNC___kvm_flush_cpu_context 5
#define __KVM_HOST_SMCCC_FUNC___kvm_timer_set_cntvoff 6
#define __KVM_HOST_SMCCC_FUNC___kvm_enable_ssbs 7
#define __KVM_HOST_SMCCC_FUNC___vgic_v3_get_ich_vtr_el2 8
@@ -180,10 +180,10 @@ DECLARE_KVM_HYP_SYM(__bp_harden_hyp_vecs);
#define __bp_harden_hyp_vecs CHOOSE_HYP_SYM(__bp_harden_hyp_vecs)

extern void __kvm_flush_vm_context(void);
+extern void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu);
extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa,
int level);
extern void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu);
-extern void __kvm_tlb_flush_local_vmid(struct kvm_s2_mmu *mmu);

extern void __kvm_timer_set_cntvoff(u64 cntvoff);

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 6b664de5ec1f4..123e67cb85050 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -82,6 +82,11 @@ void sysreg_restore_guest_state_vhe(struct kvm_cpu_context *ctxt);
void __debug_switch_to_guest(struct kvm_vcpu *vcpu);
void __debug_switch_to_host(struct kvm_vcpu *vcpu);

+#ifdef __KVM_NVHE_HYPERVISOR__
+void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu);
+void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu);
+#endif
+
void __fpsimd_save_state(struct user_fpsimd_state *fp_regs);
void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);

@@ -94,7 +99,8 @@ u64 __guest_enter(struct kvm_vcpu *vcpu);

void __noreturn hyp_panic(void);
#ifdef __KVM_NVHE_HYPERVISOR__
-void __noreturn __hyp_do_panic(bool restore_host, u64 spsr, u64 elr, u64 par);
+void __noreturn __hyp_do_panic(struct kvm_cpu_context *host_ctxt, u64 spsr,
+ u64 elr, u64 par);
#endif

#endif /* __ARM64_KVM_HYP_H__ */
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 75c8e9a350cc7..505bdd75b5411 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -306,6 +306,11 @@ static inline void *phys_to_virt(phys_addr_t x)
#define ARCH_PFN_OFFSET ((unsigned long)PHYS_PFN_OFFSET)

#if !defined(CONFIG_SPARSEMEM_VMEMMAP) || defined(CONFIG_DEBUG_VIRTUAL)
+#define page_to_virt(x) ({ \
+ __typeof__(x) __page = x; \
+ void *__addr = __va(page_to_phys(__page)); \
+ (void *)__tag_set((const void *)__addr, page_kasan_tag(__page));\
+})
#define virt_to_page(x) pfn_to_page(virt_to_pfn(x))
#else
#define page_to_virt(x) ({ \
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 0672236e1aeab..4e2ba94778450 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -65,10 +65,7 @@ extern u64 idmap_ptrs_per_pgd;

static inline bool __cpu_uses_extended_idmap(void)
{
- if (IS_ENABLED(CONFIG_ARM64_VA_BITS_52))
- return false;
-
- return unlikely(idmap_t0sz != TCR_T0SZ(VA_BITS));
+ return unlikely(idmap_t0sz != TCR_T0SZ(vabits_actual));
}

/*
diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index 046be789fbb47..9a65fb5281100 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -66,7 +66,6 @@ extern bool arm64_use_ng_mappings;
#define _PAGE_DEFAULT (_PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL))

#define PAGE_KERNEL __pgprot(PROT_NORMAL)
-#define PAGE_KERNEL_TAGGED __pgprot(PROT_NORMAL_TAGGED)
#define PAGE_KERNEL_RO __pgprot((PROT_NORMAL & ~PTE_WRITE) | PTE_RDONLY)
#define PAGE_KERNEL_ROX __pgprot((PROT_NORMAL & ~(PTE_WRITE | PTE_PXN)) | PTE_RDONLY)
#define PAGE_KERNEL_EXEC __pgprot(PROT_NORMAL & ~PTE_PXN)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 5628289b9d5e6..717f13d52ecc5 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -484,6 +484,9 @@ static inline pmd_t pmd_mkdevmap(pmd_t pmd)
__pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_NORMAL_NC) | PTE_PXN | PTE_UXN)
#define pgprot_device(prot) \
__pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_DEVICE_nGnRE) | PTE_PXN | PTE_UXN)
+#define pgprot_tagged(prot) \
+ __pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_NORMAL_TAGGED))
+#define pgprot_mhp pgprot_tagged
/*
* DMA allocations for non-coherent devices use what the Arm architecture calls
* "Normal non-cacheable" memory, which permits speculation, unaligned accesses
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index e7550a5289fef..78cdd6b24172c 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -334,7 +334,7 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
*/
adrp x5, __idmap_text_end
clz x5, x5
- cmp x5, TCR_T0SZ(VA_BITS) // default T0SZ small enough?
+ cmp x5, TCR_T0SZ(VA_BITS_MIN) // default T0SZ small enough?
b.ge 1f // .. then skip VA range extension

adr_l x6, idmap_t0sz
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 3605f77ad4df1..11852e05ee32a 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -460,7 +460,7 @@ static inline int armv8pmu_counter_has_overflowed(u32 pmnc, int idx)
return pmnc & BIT(ARMV8_IDX_TO_COUNTER(idx));
}

-static inline u32 armv8pmu_read_evcntr(int idx)
+static inline u64 armv8pmu_read_evcntr(int idx)
{
u32 counter = ARMV8_IDX_TO_COUNTER(idx);

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index c0ffb019ca8be..a1c2c955474e9 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -352,11 +352,16 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
last_ran = this_cpu_ptr(mmu->last_vcpu_ran);

/*
+ * We guarantee that both TLBs and I-cache are private to each
+ * vcpu. If detecting that a vcpu from the same VM has
+ * previously run on the same physical CPU, call into the
+ * hypervisor code to nuke the relevant contexts.
+ *
* We might get preempted before the vCPU actually runs, but
* over-invalidation doesn't affect correctness.
*/
if (*last_ran != vcpu->vcpu_id) {
- kvm_call_hyp(__kvm_tlb_flush_local_vmid, mmu);
+ kvm_call_hyp(__kvm_flush_cpu_context, mmu);
*last_ran = vcpu->vcpu_id;
}

diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index b0afad7a99c6e..0c66a1d408fd7 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -146,7 +146,7 @@ SYM_INNER_LABEL(__guest_exit, SYM_L_GLOBAL)
// Now restore the hyp regs
restore_callee_saved_regs x2

- set_loaded_vcpu xzr, x1, x2
+ set_loaded_vcpu xzr, x2, x3

alternative_if ARM64_HAS_RAS_EXTN
// If we have the RAS extensions we can consume a pending error
diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
index 91a711aa8382e..f401724f12ef7 100644
--- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
@@ -58,16 +58,24 @@ static void __debug_restore_spe(u64 pmscr_el1)
write_sysreg_s(pmscr_el1, SYS_PMSCR_EL1);
}

-void __debug_switch_to_guest(struct kvm_vcpu *vcpu)
+void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu)
{
/* Disable and flush SPE data generation */
__debug_save_spe(&vcpu->arch.host_debug_state.pmscr_el1);
+}
+
+void __debug_switch_to_guest(struct kvm_vcpu *vcpu)
+{
__debug_switch_to_guest_common(vcpu);
}

-void __debug_switch_to_host(struct kvm_vcpu *vcpu)
+void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu)
{
__debug_restore_spe(vcpu->arch.host_debug_state.pmscr_el1);
+}
+
+void __debug_switch_to_host(struct kvm_vcpu *vcpu)
+{
__debug_switch_to_host_common(vcpu);
}

diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
index ed27f06a31ba2..4ce934fc1f72a 100644
--- a/arch/arm64/kvm/hyp/nvhe/host.S
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -64,10 +64,15 @@ __host_enter_without_restoring:
SYM_FUNC_END(__host_exit)

/*
- * void __noreturn __hyp_do_panic(bool restore_host, u64 spsr, u64 elr, u64 par);
+ * void __noreturn __hyp_do_panic(struct kvm_cpu_context *host_ctxt, u64 spsr,
+ * u64 elr, u64 par);
*/
SYM_FUNC_START(__hyp_do_panic)
- /* Load the format arguments into x1-7 */
+ mov x29, x0
+
+ /* Load the format string into x0 and arguments into x1-7 */
+ ldr x0, =__hyp_panic_string
+
mov x6, x3
get_vcpu_ptr x7, x3

@@ -82,13 +87,8 @@ SYM_FUNC_START(__hyp_do_panic)
ldr lr, =panic
msr elr_el2, lr

- /*
- * Set the panic format string and enter the host, conditionally
- * restoring the host context.
- */
- cmp x0, xzr
- ldr x0, =__hyp_panic_string
- b.eq __host_enter_without_restoring
+ /* Enter the host, conditionally restoring the host context. */
+ cbz x29, __host_enter_without_restoring
b __host_enter_for_panic
SYM_FUNC_END(__hyp_do_panic)

@@ -144,7 +144,7 @@ SYM_FUNC_END(__hyp_do_panic)

.macro invalid_host_el1_vect
.align 7
- mov x0, xzr /* restore_host = false */
+ mov x0, xzr /* host_ctxt = NULL */
mrs x1, spsr_el2
mrs x2, elr_el2
mrs x3, par_el1
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index e2eafe2c93aff..3df30b459215b 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -46,11 +46,11 @@ static void handle_host_hcall(unsigned long func_id,
__kvm_tlb_flush_vmid(kern_hyp_va(mmu));
break;
}
- case KVM_HOST_SMCCC_FUNC(__kvm_tlb_flush_local_vmid): {
+ case KVM_HOST_SMCCC_FUNC(__kvm_flush_cpu_context): {
unsigned long r1 = host_ctxt->regs.regs[1];
struct kvm_s2_mmu *mmu = (struct kvm_s2_mmu *)r1;

- __kvm_tlb_flush_local_vmid(kern_hyp_va(mmu));
+ __kvm_flush_cpu_context(kern_hyp_va(mmu));
break;
}
case KVM_HOST_SMCCC_FUNC(__kvm_timer_set_cntvoff): {
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 8ae8160bc93ab..6624596846d3d 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -188,6 +188,14 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
pmu_switch_needed = __pmu_switch_to_guest(host_ctxt);

__sysreg_save_state_nvhe(host_ctxt);
+ /*
+ * We must flush and disable the SPE buffer for nVHE, as
+ * the translation regime(EL1&0) is going to be loaded with
+ * that of the guest. And we must do this before we change the
+ * translation regime to EL2 (via MDCR_EL2_E2PB == 0) and
+ * before we load guest Stage1.
+ */
+ __debug_save_host_buffers_nvhe(vcpu);

/*
* We must restore the 32-bit state before the sysregs, thanks
@@ -228,11 +236,12 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED)
__fpsimd_save_fpexc32(vcpu);

+ __debug_switch_to_host(vcpu);
/*
* This must come after restoring the host sysregs, since a non-VHE
* system may enable SPE here and make use of the TTBRs.
*/
- __debug_switch_to_host(vcpu);
+ __debug_restore_host_buffers_nvhe(vcpu);

if (pmu_switch_needed)
__pmu_switch_to_host(host_ctxt);
@@ -251,7 +260,6 @@ void __noreturn hyp_panic(void)
u64 spsr = read_sysreg_el2(SYS_SPSR);
u64 elr = read_sysreg_el2(SYS_ELR);
u64 par = read_sysreg_par();
- bool restore_host = true;
struct kvm_cpu_context *host_ctxt;
struct kvm_vcpu *vcpu;

@@ -265,7 +273,7 @@ void __noreturn hyp_panic(void)
__sysreg_restore_state_nvhe(host_ctxt);
}

- __hyp_do_panic(restore_host, spsr, elr, par);
+ __hyp_do_panic(host_ctxt, spsr, elr, par);
unreachable();
}

diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
|
|
index fbde89a2c6e83..229b06748c208 100644
|
|
--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
|
|
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
|
|
@@ -123,7 +123,7 @@ void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
|
|
__tlb_switch_to_host(&cxt);
|
|
}
|
|
|
|
-void __kvm_tlb_flush_local_vmid(struct kvm_s2_mmu *mmu)
|
|
+void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu)
|
|
{
|
|
struct tlb_inv_context cxt;
|
|
|
|
@@ -131,6 +131,7 @@ void __kvm_tlb_flush_local_vmid(struct kvm_s2_mmu *mmu)
|
|
__tlb_switch_to_guest(mmu, &cxt);
|
|
|
|
__tlbi(vmalle1);
|
|
+ asm volatile("ic iallu");
|
|
dsb(nsh);
|
|
isb();
|
|
|
|
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index bdf8e55ed308e..4d99d07c610c8 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -225,6 +225,7 @@ static inline int __kvm_pgtable_visit(struct kvm_pgtable_walk_data *data,
 		goto out;
 
 	if (!table) {
+		data->addr = ALIGN_DOWN(data->addr, kvm_granule_size(level));
 		data->addr += kvm_granule_size(level);
 		goto out;
 	}
diff --git a/arch/arm64/kvm/hyp/vhe/tlb.c b/arch/arm64/kvm/hyp/vhe/tlb.c
index fd7895945bbc6..66f17349f0c36 100644
--- a/arch/arm64/kvm/hyp/vhe/tlb.c
+++ b/arch/arm64/kvm/hyp/vhe/tlb.c
@@ -127,7 +127,7 @@ void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
 	__tlb_switch_to_host(&cxt);
 }
 
-void __kvm_tlb_flush_local_vmid(struct kvm_s2_mmu *mmu)
+void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu)
 {
 	struct tlb_inv_context cxt;
 
@@ -135,6 +135,7 @@ void __kvm_tlb_flush_local_vmid(struct kvm_s2_mmu *mmu)
 	__tlb_switch_to_guest(mmu, &cxt);
 
 	__tlbi(vmalle1);
+	asm volatile("ic iallu");
 	dsb(nsh);
 	isb();
 
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 75814a02d1894..26068456ec0f3 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1309,8 +1309,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 	 * Prevent userspace from creating a memory region outside of the IPA
 	 * space addressable by the KVM guest IPA space.
 	 */
-	if (memslot->base_gfn + memslot->npages >=
-	    (kvm_phys_size(kvm) >> PAGE_SHIFT))
+	if ((memslot->base_gfn + memslot->npages) > (kvm_phys_size(kvm) >> PAGE_SHIFT))
 		return -EFAULT;
 
 	mmap_read_lock(current->mm);
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index f32490229a4c7..e911eea36eb0e 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -373,10 +373,9 @@ int kvm_set_ipa_limit(void)
 	}
 
 	kvm_ipa_limit = id_aa64mmfr0_parange_to_phys_shift(parange);
-	WARN(kvm_ipa_limit < KVM_PHYS_SHIFT,
-	     "KVM IPA Size Limit (%d bits) is smaller than default size\n",
-	     kvm_ipa_limit);
-	kvm_info("IPA Size Limit: %d bits\n", kvm_ipa_limit);
+	kvm_info("IPA Size Limit: %d bits%s\n", kvm_ipa_limit,
+		 ((kvm_ipa_limit < KVM_PHYS_SHIFT) ?
+		  " (Reduced IPA size, limited VM/VMM compatibility)" : ""));
 
 	return 0;
 }
@@ -405,6 +404,11 @@ int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type)
 			return -EINVAL;
 	} else {
 		phys_shift = KVM_PHYS_SHIFT;
+		if (phys_shift > kvm_ipa_limit) {
+			pr_warn_once("%s using unsupported default IPA limit, upgrade your VMM\n",
+				     current->comm);
+			return -EINVAL;
+		}
 	}
 
 	mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index b913844ab7404..916e0547fdccf 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -218,6 +218,18 @@ int pfn_valid(unsigned long pfn)
 
 	if (!valid_section(__pfn_to_section(pfn)))
 		return 0;
+
+	/*
+	 * ZONE_DEVICE memory does not have the memblock entries.
+	 * memblock_is_map_memory() check for ZONE_DEVICE based
+	 * addresses will always fail. Even the normal hotplugged
+	 * memory will never have MEMBLOCK_NOMAP flag set in their
+	 * memblock entries. Skip memblock search for all non early
+	 * memory sections covering all of hotplug memory including
+	 * both normal and ZONE_DEVICE based.
+	 */
+	if (!early_section(__pfn_to_section(pfn)))
+		return pfn_section_valid(__pfn_to_section(pfn), pfn);
 #endif
 	return memblock_is_map_memory(addr);
 }
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index ca692a8157315..6aabf1eced31e 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -40,7 +40,7 @@
 #define NO_BLOCK_MAPPINGS	BIT(0)
 #define NO_CONT_MAPPINGS	BIT(1)
 
-u64 idmap_t0sz = TCR_T0SZ(VA_BITS);
+u64 idmap_t0sz = TCR_T0SZ(VA_BITS_MIN);
 u64 idmap_ptrs_per_pgd = PTRS_PER_PGD;
 
 u64 __section(".mmuoff.data.write") vabits_actual;
@@ -502,7 +502,8 @@ static void __init map_mem(pgd_t *pgdp)
 		 * if MTE is present. Otherwise, it has the same attributes as
 		 * PAGE_KERNEL.
 		 */
-		__map_memblock(pgdp, start, end, PAGE_KERNEL_TAGGED, flags);
+		__map_memblock(pgdp, start, end, pgprot_tagged(PAGE_KERNEL),
+			       flags);
 	}
 
 	/*
diff --git a/arch/mips/crypto/Makefile b/arch/mips/crypto/Makefile
index 8e1deaf00e0c0..5e4105cccf9fa 100644
--- a/arch/mips/crypto/Makefile
+++ b/arch/mips/crypto/Makefile
@@ -12,8 +12,8 @@ AFLAGS_chacha-core.o += -O2 # needed to fill branch delay slots
 obj-$(CONFIG_CRYPTO_POLY1305_MIPS) += poly1305-mips.o
 poly1305-mips-y := poly1305-core.o poly1305-glue.o
 
-perlasm-flavour-$(CONFIG_CPU_MIPS32) := o32
-perlasm-flavour-$(CONFIG_CPU_MIPS64) := 64
+perlasm-flavour-$(CONFIG_32BIT) := o32
+perlasm-flavour-$(CONFIG_64BIT) := 64
 
 quiet_cmd_perlasm = PERLASM $@
       cmd_perlasm = $(PERL) $(<) $(perlasm-flavour-y) $(@)
diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
index eacc9102c2515..d5b3c3bb95b40 100644
--- a/arch/powerpc/include/asm/code-patching.h
+++ b/arch/powerpc/include/asm/code-patching.h
@@ -73,7 +73,7 @@ void __patch_exception(int exc, unsigned long addr);
 #endif
 
 #define OP_RT_RA_MASK	0xffff0000UL
-#define LIS_R2		0x3c020000UL
+#define LIS_R2		0x3c400000UL
 #define ADDIS_R2_R12	0x3c4c0000UL
 #define ADDI_R2_R2	0x38420000UL
 
diff --git a/arch/powerpc/include/asm/machdep.h b/arch/powerpc/include/asm/machdep.h
index 475687f24f4ad..d319160d790c0 100644
--- a/arch/powerpc/include/asm/machdep.h
+++ b/arch/powerpc/include/asm/machdep.h
@@ -59,6 +59,9 @@ struct machdep_calls {
 	int		(*pcibios_root_bridge_prepare)(struct pci_host_bridge
 				*bridge);
 
+	/* finds all the pci_controllers present at boot */
+	void 		(*discover_phbs)(void);
+
 	/* To setup PHBs when using automatic OF platform driver for PCI */
 	int		(*pci_setup_phb)(struct pci_controller *host);
 
diff --git a/arch/powerpc/include/asm/ptrace.h b/arch/powerpc/include/asm/ptrace.h
index e2c778c176a3a..d6f262df4f346 100644
--- a/arch/powerpc/include/asm/ptrace.h
+++ b/arch/powerpc/include/asm/ptrace.h
@@ -62,6 +62,9 @@ struct pt_regs
 };
 #endif
 
+
+#define STACK_FRAME_WITH_PT_REGS (STACK_FRAME_OVERHEAD + sizeof(struct pt_regs))
+
 #ifdef __powerpc64__
 
 /*
@@ -190,7 +193,7 @@ extern int ptrace_put_reg(struct task_struct *task, int regno,
 #define TRAP_FLAGS_MASK		0x11
 #define TRAP(regs)		((regs)->trap & ~TRAP_FLAGS_MASK)
 #define FULL_REGS(regs)		(((regs)->trap & 1) == 0)
-#define SET_FULL_REGS(regs)	((regs)->trap |= 1)
+#define SET_FULL_REGS(regs)	((regs)->trap &= ~1)
 #endif
 #define CHECK_FULL_REGS(regs)	BUG_ON(!FULL_REGS(regs))
 #define NV_REG_POISON		0xdeadbeefdeadbeefUL
@@ -205,7 +208,7 @@ extern int ptrace_put_reg(struct task_struct *task, int regno,
 #define TRAP_FLAGS_MASK		0x1F
 #define TRAP(regs)		((regs)->trap & ~TRAP_FLAGS_MASK)
 #define FULL_REGS(regs)		(((regs)->trap & 1) == 0)
-#define SET_FULL_REGS(regs)	((regs)->trap |= 1)
+#define SET_FULL_REGS(regs)	((regs)->trap &= ~1)
 #define IS_CRITICAL_EXC(regs)	(((regs)->trap & 2) != 0)
 #define IS_MCHECK_EXC(regs)	(((regs)->trap & 4) != 0)
 #define IS_DEBUG_EXC(regs)	(((regs)->trap & 8) != 0)
diff --git a/arch/powerpc/include/asm/switch_to.h b/arch/powerpc/include/asm/switch_to.h
index fdab934283721..9d1fbd8be1c74 100644
--- a/arch/powerpc/include/asm/switch_to.h
+++ b/arch/powerpc/include/asm/switch_to.h
@@ -71,6 +71,16 @@ static inline void disable_kernel_vsx(void)
 {
 	msr_check_and_clear(MSR_FP|MSR_VEC|MSR_VSX);
 }
+#else
+static inline void enable_kernel_vsx(void)
+{
+	BUILD_BUG();
+}
+
+static inline void disable_kernel_vsx(void)
+{
+	BUILD_BUG();
+}
 #endif
 
 #ifdef CONFIG_SPE
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index c2722ff36e982..5c125255571cd 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -307,7 +307,7 @@ int main(void)
 
 	/* Interrupt register frame */
 	DEFINE(INT_FRAME_SIZE, STACK_INT_FRAME_SIZE);
-	DEFINE(SWITCH_FRAME_SIZE, STACK_FRAME_OVERHEAD + sizeof(struct pt_regs));
+	DEFINE(SWITCH_FRAME_SIZE, STACK_FRAME_WITH_PT_REGS);
	STACK_PT_REGS_OFFSET(GPR0, gpr[0]);
	STACK_PT_REGS_OFFSET(GPR1, gpr[1]);
	STACK_PT_REGS_OFFSET(GPR2, gpr[2]);
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 3cde2fbd74fce..9d3b468bd2d7a 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -470,7 +470,7 @@ DEFINE_FIXED_SYMBOL(\name\()_common_real)
 
 	ld	r10,PACAKMSR(r13)	/* get MSR value for kernel */
 	/* MSR[RI] is clear iff using SRR regs */
-	.if IHSRR == EXC_HV_OR_STD
+	.if IHSRR_IF_HVMODE
 	BEGIN_FTR_SECTION
 	xori	r10,r10,MSR_RI
 	END_FTR_SECTION_IFCLR(CPU_FTR_HVMODE)
diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
index 2729d8fa6e77c..96b45901da647 100644
--- a/arch/powerpc/kernel/head_book3s_32.S
+++ b/arch/powerpc/kernel/head_book3s_32.S
@@ -461,10 +461,11 @@ InstructionTLBMiss:
 	cmplw	0,r1,r3
 #endif
 	mfspr	r2, SPRN_SPRG_PGDIR
-	li	r1,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC
+	li	r1,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC | _PAGE_USER
 #if defined(CONFIG_MODULES) || defined(CONFIG_DEBUG_PAGEALLOC)
 	bgt-	112f
 	lis	r2, (swapper_pg_dir - PAGE_OFFSET)@ha	/* if kernel address, use */
+	li	r1,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC
 	addi	r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l	/* kernel page table */
 #endif
 112:	rlwimi	r2,r3,12,20,29		/* insert top 10 bits of address */
@@ -523,9 +524,10 @@ DataLoadTLBMiss:
 	lis	r1, TASK_SIZE@h		/* check if kernel address */
 	cmplw	0,r1,r3
 	mfspr	r2, SPRN_SPRG_PGDIR
-	li	r1, _PAGE_PRESENT | _PAGE_ACCESSED
+	li	r1, _PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_USER
 	bgt-	112f
 	lis	r2, (swapper_pg_dir - PAGE_OFFSET)@ha	/* if kernel address, use */
+	li	r1, _PAGE_PRESENT | _PAGE_ACCESSED
 	addi	r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l	/* kernel page table */
 112:	rlwimi	r2,r3,12,20,29		/* insert top 10 bits of address */
 	lwz	r2,0(r2)		/* get pmd entry */
@@ -599,9 +601,10 @@ DataStoreTLBMiss:
 	lis	r1, TASK_SIZE@h		/* check if kernel address */
 	cmplw	0,r1,r3
 	mfspr	r2, SPRN_SPRG_PGDIR
-	li	r1, _PAGE_RW | _PAGE_DIRTY | _PAGE_PRESENT | _PAGE_ACCESSED
+	li	r1, _PAGE_RW | _PAGE_DIRTY | _PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_USER
 	bgt-	112f
 	lis	r2, (swapper_pg_dir - PAGE_OFFSET)@ha	/* if kernel address, use */
+	li	r1, _PAGE_RW | _PAGE_DIRTY | _PAGE_PRESENT | _PAGE_ACCESSED
 	addi	r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l	/* kernel page table */
 112:	rlwimi	r2,r3,12,20,29		/* insert top 10 bits of address */
 	lwz	r2,0(r2)		/* get pmd entry */
diff --git a/arch/powerpc/kernel/pci-common.c b/arch/powerpc/kernel/pci-common.c
index be108616a721f..7920559a1ca81 100644
--- a/arch/powerpc/kernel/pci-common.c
+++ b/arch/powerpc/kernel/pci-common.c
@@ -1625,3 +1625,13 @@ static void fixup_hide_host_resource_fsl(struct pci_dev *dev)
 }
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MOTOROLA, PCI_ANY_ID, fixup_hide_host_resource_fsl);
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_FREESCALE, PCI_ANY_ID, fixup_hide_host_resource_fsl);
+
+
+static int __init discover_phbs(void)
+{
+	if (ppc_md.discover_phbs)
+		ppc_md.discover_phbs();
+
+	return 0;
+}
+core_initcall(discover_phbs);
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index d421a2c7f8224..1a1d2657fe8dd 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -2170,7 +2170,7 @@ void show_stack(struct task_struct *tsk, unsigned long *stack,
 		 * See if this is an exception frame.
 		 * We look for the "regshere" marker in the current frame.
 		 */
-		if (validate_sp(sp, tsk, STACK_INT_FRAME_SIZE)
+		if (validate_sp(sp, tsk, STACK_FRAME_WITH_PT_REGS)
 		    && stack[STACK_FRAME_MARKER] == STACK_FRAME_REGS_MARKER) {
 			struct pt_regs *regs = (struct pt_regs *)
 				(sp + STACK_FRAME_OVERHEAD);
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index 5006dcbe1d9fd..77dffea3d5373 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -509,8 +509,11 @@ out:
 	die("Unrecoverable nested System Reset", regs, SIGABRT);
 #endif
 	/* Must die if the interrupt is not recoverable */
-	if (!(regs->msr & MSR_RI))
+	if (!(regs->msr & MSR_RI)) {
+		/* For the reason explained in die_mce, nmi_exit before die */
+		nmi_exit();
 		die("Unrecoverable System Reset", regs, SIGABRT);
+	}
 
 	if (saved_hsrrs) {
 		mtspr(SPRN_HSRR0, hsrr0);
diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
index 43599e671d383..ded4a3efd3f06 100644
--- a/arch/powerpc/perf/core-book3s.c
+++ b/arch/powerpc/perf/core-book3s.c
@@ -211,7 +211,7 @@ static inline void perf_get_data_addr(struct perf_event *event, struct pt_regs *
 	if (!(mmcra & MMCRA_SAMPLE_ENABLE) || sdar_valid)
 		*addrp = mfspr(SPRN_SDAR);
 
-	if (is_kernel_addr(mfspr(SPRN_SDAR)) && perf_allow_kernel(&event->attr) != 0)
+	if (is_kernel_addr(mfspr(SPRN_SDAR)) && event->attr.exclude_kernel)
 		*addrp = 0;
 }
 
@@ -477,7 +477,7 @@ static void power_pmu_bhrb_read(struct perf_event *event, struct cpu_hw_events *
 			 * addresses, hence include a check before filtering code
 			 */
 			if (!(ppmu->flags & PPMU_ARCH_31) &&
-				is_kernel_addr(addr) && perf_allow_kernel(&event->attr) != 0)
+				is_kernel_addr(addr) && event->attr.exclude_kernel)
 				continue;
 
 			/* Branches are read most recent first (ie. mfbhrb 0 is
@@ -2112,7 +2112,17 @@ static void record_and_restart(struct perf_event *event, unsigned long val,
 			left += period;
 			if (left <= 0)
 				left = period;
-			record = siar_valid(regs);
+
+			/*
+			 * If address is not requested in the sample via
+			 * PERF_SAMPLE_IP, just record that sample irrespective
+			 * of SIAR valid check.
+			 */
+			if (event->attr.sample_type & PERF_SAMPLE_IP)
+				record = siar_valid(regs);
+			else
+				record = 1;
+
 			event->hw.last_period = event->hw.sample_period;
 		}
 		if (left < 0x80000000LL)
@@ -2130,9 +2140,10 @@ static void record_and_restart(struct perf_event *event, unsigned long val,
 	 * MMCR2. Check attr.exclude_kernel and address to drop the sample in
	 * these cases.
	 */
-	if (event->attr.exclude_kernel && record)
-		if (is_kernel_addr(mfspr(SPRN_SIAR)))
-			record = 0;
+	if (event->attr.exclude_kernel &&
+	    (event->attr.sample_type & PERF_SAMPLE_IP) &&
+	    is_kernel_addr(mfspr(SPRN_SIAR)))
+		record = 0;
 
 	/*
 	 * Finally record data if requested.
diff --git a/arch/powerpc/platforms/pseries/msi.c b/arch/powerpc/platforms/pseries/msi.c
index b3ac2455faadc..637300330507f 100644
--- a/arch/powerpc/platforms/pseries/msi.c
+++ b/arch/powerpc/platforms/pseries/msi.c
@@ -4,6 +4,7 @@
  * Copyright 2006-2007 Michael Ellerman, IBM Corp.
  */
 
+#include <linux/crash_dump.h>
 #include <linux/device.h>
 #include <linux/irq.h>
 #include <linux/msi.h>
@@ -458,8 +459,28 @@ again:
 			return hwirq;
 		}
 
-		virq = irq_create_mapping_affinity(NULL, hwirq,
-						   entry->affinity);
+		/*
+		 * Depending on the number of online CPUs in the original
+		 * kernel, it is likely for CPU #0 to be offline in a kdump
+		 * kernel. The associated IRQs in the affinity mappings
+		 * provided by irq_create_affinity_masks() are thus not
+		 * started by irq_startup(), as per-design for managed IRQs.
+		 * This can be a problem with multi-queue block devices driven
+		 * by blk-mq : such a non-started IRQ is very likely paired
+		 * with the single queue enforced by blk-mq during kdump (see
+		 * blk_mq_alloc_tag_set()). This causes the device to remain
+		 * silent and likely hangs the guest at some point.
+		 *
+		 * We don't really care for fine-grained affinity when doing
+		 * kdump actually : simply ignore the pre-computed affinity
+		 * masks in this case and let the default mask with all CPUs
+		 * be used when creating the IRQ mappings.
+		 */
+		if (is_kdump_kernel())
+			virq = irq_create_mapping(NULL, hwirq);
+		else
+			virq = irq_create_mapping_affinity(NULL, hwirq,
+							   entry->affinity);
 
 		if (!virq) {
 			pr_debug("rtas_msi: Failed mapping hwirq %d\n", hwirq);
diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
index 3a0d545f0ce84..791bc373418bd 100644
--- a/arch/s390/kernel/smp.c
+++ b/arch/s390/kernel/smp.c
@@ -775,7 +775,7 @@ static int smp_add_core(struct sclp_core_entry *core, cpumask_t *avail,
 static int __smp_rescan_cpus(struct sclp_core_info *info, bool early)
 {
 	struct sclp_core_entry *core;
-	cpumask_t avail;
+	static cpumask_t avail;
 	bool configured;
 	u16 core_id;
 	int nr, i;
diff --git a/arch/sparc/include/asm/mman.h b/arch/sparc/include/asm/mman.h
index f94532f25db14..274217e7ed702 100644
--- a/arch/sparc/include/asm/mman.h
+++ b/arch/sparc/include/asm/mman.h
@@ -57,35 +57,39 @@ static inline int sparc_validate_prot(unsigned long prot, unsigned long addr)
 {
 	if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_ADI))
 		return 0;
-	if (prot & PROT_ADI) {
-		if (!adi_capable())
-			return 0;
+	return 1;
+}
 
-		if (addr) {
-			struct vm_area_struct *vma;
+#define arch_validate_flags(vm_flags) arch_validate_flags(vm_flags)
+/* arch_validate_flags() - Ensure combination of flags is valid for a
+ *	VMA.
+ */
+static inline bool arch_validate_flags(unsigned long vm_flags)
+{
+	/* If ADI is being enabled on this VMA, check for ADI
+	 * capability on the platform and ensure VMA is suitable
+	 * for ADI
+	 */
+	if (vm_flags & VM_SPARC_ADI) {
+		if (!adi_capable())
+			return false;
 
-			vma = find_vma(current->mm, addr);
-			if (vma) {
-				/* ADI can not be enabled on PFN
-				 * mapped pages
-				 */
-				if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
-					return 0;
+		/* ADI can not be enabled on PFN mapped pages */
+		if (vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
+			return false;
 
-				/* Mergeable pages can become unmergeable
-				 * if ADI is enabled on them even if they
-				 * have identical data on them. This can be
-				 * because ADI enabled pages with identical
-				 * data may still not have identical ADI
-				 * tags on them. Disallow ADI on mergeable
-				 * pages.
-				 */
-				if (vma->vm_flags & VM_MERGEABLE)
-					return 0;
-			}
-		}
+		/* Mergeable pages can become unmergeable
+		 * if ADI is enabled on them even if they
+		 * have identical data on them. This can be
+		 * because ADI enabled pages with identical
+		 * data may still not have identical ADI
+		 * tags on them. Disallow ADI on mergeable
+		 * pages.
+		 */
+		if (vm_flags & VM_MERGEABLE)
+			return false;
 	}
-	return 1;
+	return true;
 }
 #endif /* CONFIG_SPARC64 */
 
diff --git a/arch/sparc/mm/init_32.c b/arch/sparc/mm/init_32.c
index eb2946b1df8a4..6139c5700ccc9 100644
--- a/arch/sparc/mm/init_32.c
+++ b/arch/sparc/mm/init_32.c
@@ -197,6 +197,9 @@ unsigned long __init bootmem_init(unsigned long *pages_avail)
 	size = memblock_phys_mem_size() - memblock_reserved_size();
 	*pages_avail = (size >> PAGE_SHIFT) - high_pages;
 
+	/* Only allow low memory to be allocated via memblock allocation */
+	memblock_set_current_limit(max_low_pfn << PAGE_SHIFT);
+
 	return max_pfn;
 }
 
diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index de5358671750d..2e4d91f3feea4 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -128,7 +128,8 @@ static noinstr bool __do_fast_syscall_32(struct pt_regs *regs)
 		regs->ax = -EFAULT;
 
 		instrumentation_end();
-		syscall_exit_to_user_mode(regs);
+		local_irq_disable();
+		irqentry_exit_to_user_mode(regs);
 		return false;
 	}
 
@@ -213,40 +214,6 @@ SYSCALL_DEFINE0(ni_syscall)
 	return -ENOSYS;
 }
 
-noinstr bool idtentry_enter_nmi(struct pt_regs *regs)
-{
-	bool irq_state = lockdep_hardirqs_enabled();
-
-	__nmi_enter();
-	lockdep_hardirqs_off(CALLER_ADDR0);
-	lockdep_hardirq_enter();
-	rcu_nmi_enter();
-
-	instrumentation_begin();
-	trace_hardirqs_off_finish();
-	ftrace_nmi_enter();
-	instrumentation_end();
-
-	return irq_state;
-}
-
-noinstr void idtentry_exit_nmi(struct pt_regs *regs, bool restore)
-{
-	instrumentation_begin();
-	ftrace_nmi_exit();
-	if (restore) {
-		trace_hardirqs_on_prepare();
-		lockdep_hardirqs_on_prepare(CALLER_ADDR0);
-	}
-	instrumentation_end();
-
-	rcu_nmi_exit();
-	lockdep_hardirq_exit();
-	if (restore)
-		lockdep_hardirqs_on(CALLER_ADDR0);
-	__nmi_exit();
-}
-
 #ifdef CONFIG_XEN_PV
 #ifndef CONFIG_PREEMPTION
 /*
diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
index 541fdaf640453..0051cf5c792d1 100644
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -210,6 +210,8 @@ SYM_CODE_START(entry_SYSCALL_compat)
 	/* Switch to the kernel stack */
 	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp
 
+SYM_INNER_LABEL(entry_SYSCALL_compat_safe_stack, SYM_L_GLOBAL)
+
 	/* Construct struct pt_regs on stack */
 	pushq	$__USER32_DS		/* pt_regs->ss */
 	pushq	%r8			/* pt_regs->sp */
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index aaa7bffdb20f5..4b05c876f9f69 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3565,8 +3565,10 @@ static int intel_pmu_hw_config(struct perf_event *event)
 		if (!(event->attr.freq || (event->attr.wakeup_events && !event->attr.watermark))) {
 			event->hw.flags |= PERF_X86_EVENT_AUTO_RELOAD;
 			if (!(event->attr.sample_type &
-			      ~intel_pmu_large_pebs_flags(event)))
+			      ~intel_pmu_large_pebs_flags(event))) {
 				event->hw.flags |= PERF_X86_EVENT_LARGE_PEBS;
+				event->attach_state |= PERF_ATTACH_SCHED_CB;
+			}
 		}
 		if (x86_pmu.pebs_aliases)
 			x86_pmu.pebs_aliases(event);
@@ -3579,6 +3581,7 @@ static int intel_pmu_hw_config(struct perf_event *event)
 		ret = intel_pmu_setup_lbr_filter(event);
 		if (ret)
 			return ret;
+		event->attach_state |= PERF_ATTACH_SCHED_CB;
 
 		/*
 		 * BTS is set up earlier in this path, so don't account twice
diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
index eb01c2618a9df..f656aabd1545c 100644
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -11,9 +11,6 @@
 
 #include <asm/irq_stack.h>
 
-bool idtentry_enter_nmi(struct pt_regs *regs);
-void idtentry_exit_nmi(struct pt_regs *regs, bool irq_state);
-
 /**
  * DECLARE_IDTENTRY - Declare functions for simple IDT entry points
  *		      No error code pushed by hardware
diff --git a/arch/x86/include/asm/insn-eval.h b/arch/x86/include/asm/insn-eval.h
index a0f839aa144d9..98b4dae5e8bc8 100644
--- a/arch/x86/include/asm/insn-eval.h
+++ b/arch/x86/include/asm/insn-eval.h
@@ -23,6 +23,8 @@ unsigned long insn_get_seg_base(struct pt_regs *regs, int seg_reg_idx);
 int insn_get_code_seg_params(struct pt_regs *regs);
 int insn_fetch_from_user(struct pt_regs *regs,
			 unsigned char buf[MAX_INSN_SIZE]);
+int insn_fetch_from_user_inatomic(struct pt_regs *regs,
+				  unsigned char buf[MAX_INSN_SIZE]);
 bool insn_decode(struct insn *insn, struct pt_regs *regs,
		 unsigned char buf[MAX_INSN_SIZE], int buf_size);
 
diff --git a/arch/x86/include/asm/proto.h b/arch/x86/include/asm/proto.h
index 2c35f1c01a2df..b6a9d51d1d791 100644
--- a/arch/x86/include/asm/proto.h
+++ b/arch/x86/include/asm/proto.h
@@ -25,6 +25,7 @@ void __end_SYSENTER_singlestep_region(void);
 void entry_SYSENTER_compat(void);
 void __end_entry_SYSENTER_compat(void);
 void entry_SYSCALL_compat(void);
+void entry_SYSCALL_compat_safe_stack(void);
 void entry_INT80_compat(void);
 #ifdef CONFIG_XEN_PV
 void xen_entry_INT80_compat(void);
diff --git a/arch/x86/include/asm/ptrace.h b/arch/x86/include/asm/ptrace.h
index d8324a2366961..409f661481e11 100644
--- a/arch/x86/include/asm/ptrace.h
+++ b/arch/x86/include/asm/ptrace.h
@@ -94,6 +94,8 @@ struct pt_regs {
 #include <asm/paravirt_types.h>
 #endif
 
+#include <asm/proto.h>
+
 struct cpuinfo_x86;
 struct task_struct;
 
@@ -175,6 +177,19 @@ static inline bool any_64bit_mode(struct pt_regs *regs)
 #ifdef CONFIG_X86_64
 #define current_user_stack_pointer()	current_pt_regs()->sp
 #define compat_user_stack_pointer()	current_pt_regs()->sp
+
+static inline bool ip_within_syscall_gap(struct pt_regs *regs)
+{
+	bool ret = (regs->ip >= (unsigned long)entry_SYSCALL_64 &&
+		    regs->ip <  (unsigned long)entry_SYSCALL_64_safe_stack);
+
+#ifdef CONFIG_IA32_EMULATION
+	ret = ret || (regs->ip >= (unsigned long)entry_SYSCALL_compat &&
+		      regs->ip <  (unsigned long)entry_SYSCALL_compat_safe_stack);
+#endif
+
+	return ret;
+}
 #endif
 
 static inline unsigned long kernel_stack_pointer(struct pt_regs *regs)
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 311688202ea51..b7a27589dfa0b 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1986,7 +1986,7 @@ void (*machine_check_vector)(struct pt_regs *) = unexpected_machine_check;
 
 static __always_inline void exc_machine_check_kernel(struct pt_regs *regs)
 {
-	bool irq_state;
+	irqentry_state_t irq_state;
 
 	WARN_ON_ONCE(user_mode(regs));
 
@@ -1998,7 +1998,7 @@ static __always_inline void exc_machine_check_kernel(struct pt_regs *regs)
 	    mce_check_crashing_cpu())
 		return;
 
-	irq_state = idtentry_enter_nmi(regs);
+	irq_state = irqentry_nmi_enter(regs);
 	/*
 	 * The call targets are marked noinstr, but objtool can't figure
 	 * that out because it's an indirect call. Annotate it.
@@ -2009,7 +2009,7 @@ static __always_inline void exc_machine_check_kernel(struct pt_regs *regs)
 	if (regs->flags & X86_EFLAGS_IF)
 		trace_hardirqs_on_prepare();
 	instrumentation_end();
-	idtentry_exit_nmi(regs, irq_state);
+	irqentry_nmi_exit(regs, irq_state);
 }
 
 static __always_inline void exc_machine_check_user(struct pt_regs *regs)
diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
index 34b18f6eeb2ce..5ee705b44560b 100644
--- a/arch/x86/kernel/kvmclock.c
+++ b/arch/x86/kernel/kvmclock.c
@@ -269,21 +269,20 @@ static void __init kvmclock_init_mem(void)
 
 static int __init kvm_setup_vsyscall_timeinfo(void)
 {
-#ifdef CONFIG_X86_64
-	u8 flags;
+	kvmclock_init_mem();
 
-	if (!per_cpu(hv_clock_per_cpu, 0) || !kvmclock_vsyscall)
-		return 0;
+#ifdef CONFIG_X86_64
+	if (per_cpu(hv_clock_per_cpu, 0) && kvmclock_vsyscall) {
+		u8 flags;
 
-	flags = pvclock_read_flags(&hv_clock_boot[0].pvti);
-	if (!(flags & PVCLOCK_TSC_STABLE_BIT))
-		return 0;
+		flags = pvclock_read_flags(&hv_clock_boot[0].pvti);
+		if (!(flags & PVCLOCK_TSC_STABLE_BIT))
+			return 0;
 
-	kvm_clock.vdso_clock_mode = VDSO_CLOCKMODE_PVCLOCK;
+		kvm_clock.vdso_clock_mode = VDSO_CLOCKMODE_PVCLOCK;
+	}
 #endif
 
-	kvmclock_init_mem();
-
 	return 0;
 }
 early_initcall(kvm_setup_vsyscall_timeinfo);
diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
index 4bc77aaf13039..bf250a339655f 100644
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -475,7 +475,7 @@ static DEFINE_PER_CPU(unsigned long, nmi_dr7);
 
 DEFINE_IDTENTRY_RAW(exc_nmi)
 {
-	bool irq_state;
+	irqentry_state_t irq_state;
 
 	/*
	 * Re-enable NMIs right here when running as an SEV-ES guest. This might
@@ -502,14 +502,14 @@ nmi_restart:
 
 	this_cpu_write(nmi_dr7, local_db_save());
 
-	irq_state = idtentry_enter_nmi(regs);
+	irq_state = irqentry_nmi_enter(regs);
 
 	inc_irq_stat(__nmi_count);
 
 	if (!ignore_nmis)
 		default_do_nmi(regs);
 
-	idtentry_exit_nmi(regs, irq_state);
+	irqentry_nmi_exit(regs, irq_state);
 
 	local_db_restore(this_cpu_read(nmi_dr7));
 
diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c
index 84c1821819afb..04a780abb512d 100644
--- a/arch/x86/kernel/sev-es.c
+++ b/arch/x86/kernel/sev-es.c
@@ -121,8 +121,18 @@ static void __init setup_vc_stacks(int cpu)
 	cea_set_pte((void *)vaddr, pa, PAGE_KERNEL);
 }
 
-static __always_inline bool on_vc_stack(unsigned long sp)
+static __always_inline bool on_vc_stack(struct pt_regs *regs)
 {
+	unsigned long sp = regs->sp;
+
+	/* User-mode RSP is not trusted */
+	if (user_mode(regs))
+		return false;
+
+	/* SYSCALL gap still has user-mode RSP */
+	if (ip_within_syscall_gap(regs))
+		return false;
+
 	return ((sp >= __this_cpu_ist_bottom_va(VC)) && (sp < __this_cpu_ist_top_va(VC)));
 }
 
@@ -144,7 +154,7 @@ void noinstr __sev_es_ist_enter(struct pt_regs *regs)
 	old_ist = __this_cpu_read(cpu_tss_rw.x86_tss.ist[IST_INDEX_VC]);
 
 	/* Make room on the IST stack */
-	if (on_vc_stack(regs->sp))
+	if (on_vc_stack(regs))
 		new_ist = ALIGN_DOWN(regs->sp, 8) - sizeof(old_ist);
 	else
 		new_ist = old_ist - sizeof(old_ist);
@@ -248,7 +258,7 @@ static enum es_result vc_decode_insn(struct es_em_ctxt *ctxt)
 	int res;
 
 	if (user_mode(ctxt->regs)) {
-		res = insn_fetch_from_user(ctxt->regs, buffer);
+		res = insn_fetch_from_user_inatomic(ctxt->regs, buffer);
 		if (!res) {
 			ctxt->fi.vector = X86_TRAP_PF;
 			ctxt->fi.error_code = X86_PF_INSTR | X86_PF_USER;
@@ -1248,13 +1258,12 @@ static __always_inline bool on_vc_fallback_stack(struct pt_regs *regs)
 DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication)
 {
 	struct sev_es_runtime_data *data = this_cpu_read(runtime_data);
+	irqentry_state_t irq_state;
 	struct ghcb_state state;
 	struct es_em_ctxt ctxt;
 	enum es_result result;
 	struct ghcb *ghcb;
 
-	lockdep_assert_irqs_disabled();
-
 	/*
	 * Handle #DB before calling into !noinstr code to avoid recursive #DB.
	 */
@@ -1263,6 +1272,8 @@ DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication)
 		return;
 	}
 
+	irq_state = irqentry_nmi_enter(regs);
+	lockdep_assert_irqs_disabled();
 	instrumentation_begin();
 
 	/*
@@ -1325,6 +1336,7 @@ DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication)
 
 out:
 	instrumentation_end();
+	irqentry_nmi_exit(regs, irq_state);
 
 	return;
 
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 170c94ec00685..7692bf7908e6c 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -406,7 +406,7 @@ DEFINE_IDTENTRY_DF(exc_double_fault)
 	}
 #endif
 
-	idtentry_enter_nmi(regs);
+	irqentry_nmi_enter(regs);
 	instrumentation_begin();
 	notify_die(DIE_TRAP, str, regs, error_code, X86_TRAP_DF, SIGSEGV);
 
@@ -652,12 +652,13 @@ DEFINE_IDTENTRY_RAW(exc_int3)
 		instrumentation_end();
 		irqentry_exit_to_user_mode(regs);
 	} else {
-		bool irq_state = idtentry_enter_nmi(regs);
+		irqentry_state_t irq_state = irqentry_nmi_enter(regs);
+
 		instrumentation_begin();
 		if (!do_int3(regs))
 			die("int3", regs, 0);
 		instrumentation_end();
-		idtentry_exit_nmi(regs, irq_state);
+		irqentry_nmi_exit(regs, irq_state);
 	}
 }
 
@@ -686,8 +687,7 @@ asmlinkage __visible noinstr struct pt_regs *vc_switch_off_ist(struct pt_regs *r
	 * In the SYSCALL entry path the RSP value comes from user-space - don't
	 * trust it and switch to the current kernel stack
	 */
-	if (regs->ip >= (unsigned long)entry_SYSCALL_64 &&
-	    regs->ip < (unsigned long)entry_SYSCALL_64_safe_stack) {
+	if (ip_within_syscall_gap(regs)) {
 		sp = this_cpu_read(cpu_current_top_of_stack);
 		goto sync;
 	}
@@ -852,7 +852,7 @@ static __always_inline void exc_debug_kernel(struct pt_regs *regs,
	 * includes the entry stack is excluded for everything.
	 */
 	unsigned long dr7 = local_db_save();
-	bool irq_state = idtentry_enter_nmi(regs);
+	irqentry_state_t irq_state = irqentry_nmi_enter(regs);
 	instrumentation_begin();
 
 	/*
@@ -909,7 +909,7 @@ static __always_inline void exc_debug_kernel(struct pt_regs *regs,
 	regs->flags &= ~X86_EFLAGS_TF;
 out:
 	instrumentation_end();
-	idtentry_exit_nmi(regs, irq_state);
+	irqentry_nmi_exit(regs, irq_state);
 
 	local_db_restore(dr7);
 }
@@ -927,7 +927,7 @@ static __always_inline void exc_debug_user(struct pt_regs *regs,
 
 	/*
	 * NB: We can't easily clear DR7 here because
-	 * idtentry_exit_to_usermode() can invoke ptrace, schedule, access
+	 * irqentry_exit_to_usermode() can invoke ptrace, schedule, access
	 * user memory, etc. This means that a recursive #DB is possible. If
	 * this happens, that #DB will hit exc_debug_kernel() and clear DR7.
	 * Since we're not on the IST stack right now, everything will be
diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c
index 73f8001000669..c451d5f6422f6 100644
--- a/arch/x86/kernel/unwind_orc.c
+++ b/arch/x86/kernel/unwind_orc.c
@@ -367,8 +367,8 @@ static bool deref_stack_regs(struct unwind_state *state, unsigned long addr,
 	if (!stack_access_ok(state, addr, sizeof(struct pt_regs)))
 		return false;
 
-	*ip = regs->ip;
-	*sp = regs->sp;
+	*ip = READ_ONCE_NOCHECK(regs->ip);
+	*sp = READ_ONCE_NOCHECK(regs->sp);
 	return true;
 }
 
@@ -380,8 +380,8 @@ static bool deref_stack_iret_regs(struct unwind_state *state, unsigned long addr
 	if (!stack_access_ok(state, addr, IRET_FRAME_SIZE))
 		return false;
 
-	*ip = regs->ip;
-	*sp = regs->sp;
+	*ip = READ_ONCE_NOCHECK(regs->ip);
+	*sp = READ_ONCE_NOCHECK(regs->sp);
 	return true;
 }
 
@@ -402,12 +402,12 @@ static bool get_reg(struct unwind_state *state, unsigned int reg_off,
 		return false;
 
 	if (state->full_regs) {
-		*val = ((unsigned long *)state->regs)[reg];
+		*val = READ_ONCE_NOCHECK(((unsigned long *)state->regs)[reg]);
 		return true;
 	}
 
 	if (state->prev_regs) {
-		*val = ((unsigned long *)state->prev_regs)[reg];
+		*val = READ_ONCE_NOCHECK(((unsigned long *)state->prev_regs)[reg]);
 		return true;
 	}
 
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 86c33d53c90a0..4ca81ae9bc8ad 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -1641,7 +1641,16 @@ static void apic_timer_expired(struct kvm_lapic *apic, bool from_timer_fn)
 	}
 
 	if (kvm_use_posted_timer_interrupt(apic->vcpu)) {
-		kvm_wait_lapic_expire(vcpu);
+		/*
+		 * Ensure the guest's timer has truly expired before posting an
+		 * interrupt. Open code the relevant checks to avoid querying
+		 * lapic_timer_int_injected(), which will be false since the
+		 * interrupt isn't yet injected. Waiting until after injecting
+		 * is not an option since that won't help a posted interrupt.
+		 */
+		if (vcpu->arch.apic->lapic_timer.expired_tscdeadline &&
+		    vcpu->arch.apic->lapic_timer.timer_advance_ns)
+			__kvm_wait_lapic_expire(vcpu);
 		kvm_apic_inject_pending_timer_irqs(apic);
 		return;
 	}
diff --git a/arch/x86/lib/insn-eval.c b/arch/x86/lib/insn-eval.c
index 4229950a5d78c..bb0b3fe1e0a02 100644
--- a/arch/x86/lib/insn-eval.c
+++ b/arch/x86/lib/insn-eval.c
@@ -1415,6 +1415,25 @@ void __user *insn_get_addr_ref(struct insn *insn, struct pt_regs *regs)
 	}
 }
 
+static unsigned long insn_get_effective_ip(struct pt_regs *regs)
+{
+	unsigned long seg_base = 0;
+
+	/*
+	 * If not in user-space long mode, a custom code segment could be in
+	 * use. This is true in protected mode (if the process defined a local
+	 * descriptor table), or virtual-8086 mode. In most of the cases
+	 * seg_base will be zero as in USER_CS.
+	 */
+	if (!user_64bit_mode(regs)) {
+		seg_base = insn_get_seg_base(regs, INAT_SEG_REG_CS);
+		if (seg_base == -1L)
+			return 0;
+	}
+
+	return seg_base + regs->ip;
+}
+
 /**
  * insn_fetch_from_user() - Copy instruction bytes from user-space memory
  * @regs: Structure with register values as seen when entering kernel mode
@@ -1431,24 +1450,43 @@ void __user *insn_get_addr_ref(struct insn *insn, struct pt_regs *regs)
  */
 int insn_fetch_from_user(struct pt_regs *regs, unsigned char buf[MAX_INSN_SIZE])
 {
-	unsigned long seg_base = 0;
+	unsigned long ip;
 	int not_copied;
 
-	/*
-	 * If not in user-space long mode, a custom code segment could be in
-	 * use. This is true in protected mode (if the process defined a local
-	 * descriptor table), or virtual-8086 mode. In most of the cases
-	 * seg_base will be zero as in USER_CS.
-	 */
-	if (!user_64bit_mode(regs)) {
-		seg_base = insn_get_seg_base(regs, INAT_SEG_REG_CS);
-		if (seg_base == -1L)
-			return 0;
-	}
+	ip = insn_get_effective_ip(regs);
+	if (!ip)
+		return 0;
+
+	not_copied = copy_from_user(buf, (void __user *)ip, MAX_INSN_SIZE);
+
+	return MAX_INSN_SIZE - not_copied;
+}
+
+/**
+ * insn_fetch_from_user_inatomic() - Copy instruction bytes from user-space memory
+ *                                   while in atomic code
+ * @regs: Structure with register values as seen when entering kernel mode
+ * @buf: Array to store the fetched instruction
+ *
+ * Gets the linear address of the instruction and copies the instruction bytes
+ * to the buf. This function must be used in atomic context.
+ *
+ * Returns:
+ *
+ * Number of instruction bytes copied.
+ *
+ * 0 if nothing was copied.
+ */
+int insn_fetch_from_user_inatomic(struct pt_regs *regs, unsigned char buf[MAX_INSN_SIZE])
+{
+	unsigned long ip;
+	int not_copied;
+
+	ip = insn_get_effective_ip(regs);
+	if (!ip)
+		return 0;
 
-	not_copied = copy_from_user(buf, (void __user *)(seg_base + regs->ip),
-				    MAX_INSN_SIZE);
+	not_copied = __copy_from_user_inatomic(buf, (void __user *)ip, MAX_INSN_SIZE);
 
 	return MAX_INSN_SIZE - not_copied;
 }
diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 6817a673e5cec..4676c6f00489c 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -318,6 +318,22 @@ int blkdev_report_zones_ioctl(struct block_device *bdev, fmode_t mode,
 	return 0;
 }
 
+static int blkdev_truncate_zone_range(struct block_device *bdev, fmode_t mode,
+				      const struct blk_zone_range *zrange)
+{
+	loff_t start, end;
+
+	if (zrange->sector + zrange->nr_sectors <= zrange->sector ||
+	    zrange->sector + zrange->nr_sectors > get_capacity(bdev->bd_disk))
+		/* Out of range */
+		return -EINVAL;
+
+	start = zrange->sector << SECTOR_SHIFT;
+	end = ((zrange->sector + zrange->nr_sectors) << SECTOR_SHIFT) - 1;
+
+	return truncate_bdev_range(bdev, mode, start, end);
+}
+
 /*
  * BLKRESETZONE, BLKOPENZONE, BLKCLOSEZONE and BLKFINISHZONE ioctl processing.
  * Called from blkdev_ioctl.
@@ -329,6 +345,7 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
 	struct request_queue *q;
 	struct blk_zone_range zrange;
 	enum req_opf op;
+	int ret;
 
 	if (!argp)
 		return -EINVAL;
@@ -352,6 +369,11 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
 	switch (cmd) {
 	case BLKRESETZONE:
 		op = REQ_OP_ZONE_RESET;
+
+		/* Invalidate the page cache, including dirty pages. */
+		ret = blkdev_truncate_zone_range(bdev, mode, &zrange);
+		if (ret)
+			return ret;
 		break;
 	case BLKOPENZONE:
 		op = REQ_OP_ZONE_OPEN;
@@ -366,8 +388,20 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
 		return -ENOTTY;
 	}
 
-	return blkdev_zone_mgmt(bdev, op, zrange.sector, zrange.nr_sectors,
-				GFP_KERNEL);
+	ret = blkdev_zone_mgmt(bdev, op, zrange.sector, zrange.nr_sectors,
+			       GFP_KERNEL);
+
+	/*
+	 * Invalidate the page cache again for zone reset: writes can only be
+	 * direct for zoned devices so concurrent writes would not add any page
+	 * to the page cache after/during reset. The page cache may be filled
+	 * again due to concurrent reads though and dropping the pages for
+	 * these is fine.
+	 */
+	if (!ret && cmd == BLKRESETZONE)
+		ret = blkdev_truncate_zone_range(bdev, mode, &zrange);
+
+	return ret;
 }
 
 static inline unsigned long *blk_alloc_zone_bitmap(int node,
diff --git a/crypto/Kconfig b/crypto/Kconfig
index 37de7d006858d..774adc9846fa8 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -772,7 +772,7 @@ config CRYPTO_POLY1305_X86_64
 
 config CRYPTO_POLY1305_MIPS
 	tristate "Poly1305 authenticator algorithm (MIPS optimized)"
-	depends on CPU_MIPS32 || (CPU_MIPS64 && 64BIT)
+	depends on MIPS
 	select CRYPTO_ARCH_HAVE_LIB_POLY1305
 
 config CRYPTO_MD4
diff --git a/drivers/base/memory.c b/drivers/base/memory.c
index eef4ffb6122c9..de058d15b33ea 100644
--- a/drivers/base/memory.c
+++ b/drivers/base/memory.c
@@ -290,20 +290,20 @@ static ssize_t state_store(struct device *dev, struct device_attribute *attr,
 }
 
 /*
- * phys_device is a bad name for this. What I really want
- * is a way to differentiate between memory ranges that
- * are part of physical devices that constitute
- * a complete removable unit or fru.
- * i.e. do these ranges belong to the same physical device,
- * s.t. if I offline all of these sections I can then
- * remove the physical device?
+ * Legacy interface that we cannot remove: s390x exposes the storage increment
+ * covered by a memory block, allowing for identifying which memory blocks
+ * comprise a storage increment. Since a memory block spans complete
+ * storage increments nowadays, this interface is basically unused. Other
+ * archs never exposed != 0.
 */
 static ssize_t phys_device_show(struct device *dev,
 				struct device_attribute *attr, char *buf)
 {
 	struct memory_block *mem = to_memory_block(dev);
+	unsigned long start_pfn = section_nr_to_pfn(mem->start_section_nr);
 
-	return sysfs_emit(buf, "%d\n", mem->phys_device);
+	return sysfs_emit(buf, "%d\n",
+			  arch_get_memory_phys_device(start_pfn));
 }
 
 #ifdef CONFIG_MEMORY_HOTREMOVE
@@ -488,11 +488,7 @@ static DEVICE_ATTR_WO(soft_offline_page);
 static DEVICE_ATTR_WO(hard_offline_page);
 #endif
 
-/*
- * Note that phys_device is optional. It is here to allow for
- * differentiation between which *physical* devices each
- * section belongs to...
- */
+/* See phys_device_show(). */
 int __weak arch_get_memory_phys_device(unsigned long start_pfn)
 {
 	return 0;
@@ -574,7 +570,6 @@ int register_memory(struct memory_block *memory)
 static int init_memory_block(unsigned long block_id, unsigned long state)
 {
 	struct memory_block *mem;
-	unsigned long start_pfn;
 	int ret = 0;
 
 	mem = find_memory_block_by_id(block_id);
@@ -588,8 +583,6 @@ static int init_memory_block(unsigned long block_id, unsigned long state)
 
 	mem->start_section_nr = block_id * sections_per_block;
 	mem->state = state;
-	start_pfn = section_nr_to_pfn(mem->start_section_nr);
-	mem->phys_device = arch_get_memory_phys_device(start_pfn);
 	mem->nid = NUMA_NO_NODE;
 
 	ret = register_memory(mem);
diff --git a/drivers/base/swnode.c b/drivers/base/swnode.c
index 615a0c93e1166..206bd4d7d7e23 100644
--- a/drivers/base/swnode.c
+++ b/drivers/base/swnode.c
@@ -786,6 +786,9 @@ int software_node_register(const struct software_node *node)
 	if (software_node_to_swnode(node))
 		return -EEXIST;
 
+	if (node->parent && !parent)
+		return -EINVAL;
+
 	return PTR_ERR_OR_ZERO(swnode_register(node, parent, 0));
 }
 EXPORT_SYMBOL_GPL(software_node_register);
diff --git a/drivers/base/test/Makefile b/drivers/base/test/Makefile
index 3ca56367c84b7..2f15fae8625f1 100644
--- a/drivers/base/test/Makefile
+++ b/drivers/base/test/Makefile
@@ -2,3 +2,4 @@
 obj-$(CONFIG_TEST_ASYNC_DRIVER_PROBE)	+= test_async_driver_probe.o
 
 obj-$(CONFIG_KUNIT_DRIVER_PE_TEST) += property-entry-test.o
+CFLAGS_REMOVE_property-entry-test.o += -fplugin-arg-structleak_plugin-byref -fplugin-arg-structleak_plugin-byref-all
diff --git a/drivers/block/rsxx/core.c b/drivers/block/rsxx/core.c
index 5ac1881396afb..227e1be4c6f99 100644
--- a/drivers/block/rsxx/core.c
+++ b/drivers/block/rsxx/core.c
@@ -871,6 +871,7 @@ static int rsxx_pci_probe(struct pci_dev *dev,
 	card->event_wq = create_singlethread_workqueue(DRIVER_NAME"_event");
 	if (!card->event_wq) {
 		dev_err(CARD_TO_DEV(card), "Failed card event setup.\n");
+		st = -ENOMEM;
 		goto failed_event_handler;
 	}
 
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 711168451e9e5..7dce17fd59baa 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -633,7 +633,7 @@ static ssize_t writeback_store(struct device *dev,
 	struct bio_vec bio_vec;
 	struct page *page;
 	ssize_t ret = len;
-	int mode;
+	int mode, err;
 	unsigned long blk_idx = 0;
 
 	if (sysfs_streq(buf, "idle"))
@@ -725,12 +725,17 @@ static ssize_t writeback_store(struct device *dev,
		 * XXX: A single page IO would be inefficient for write
		 * but it would be not bad as starter.
		 */
-		ret = submit_bio_wait(&bio);
-		if (ret) {
+		err = submit_bio_wait(&bio);
+		if (err) {
 			zram_slot_lock(zram, index);
 			zram_clear_flag(zram, index, ZRAM_UNDER_WB);
 			zram_clear_flag(zram, index, ZRAM_IDLE);
 			zram_slot_unlock(zram, index);
+			/*
+			 * Return last IO error unless every IO were
+			 * not suceeded.
+			 */
+			ret = err;
 			continue;
 		}
 
diff --git a/drivers/clk/qcom/gdsc.c b/drivers/clk/qcom/gdsc.c
index af26e0695b866..51ed640e527b4 100644
--- a/drivers/clk/qcom/gdsc.c
+++ b/drivers/clk/qcom/gdsc.c
@@ -183,7 +183,10 @@ static inline int gdsc_assert_reset(struct gdsc *sc)
 static inline void gdsc_force_mem_on(struct gdsc *sc)
 {
 	int i;
-	u32 mask = RETAIN_MEM | RETAIN_PERIPH;
+	u32 mask = RETAIN_MEM;
+
+	if (!(sc->flags & NO_RET_PERIPH))
+		mask |= RETAIN_PERIPH;
 
 	for (i = 0; i < sc->cxc_count; i++)
 		regmap_update_bits(sc->regmap, sc->cxcs[i], mask, mask);
@@ -192,7 +195,10 @@ static inline void gdsc_force_mem_on(struct gdsc *sc)
 static inline void gdsc_clear_mem_on(struct gdsc *sc)
 {
 	int i;
-	u32 mask = RETAIN_MEM | RETAIN_PERIPH;
+	u32 mask = RETAIN_MEM;
+
+	if (!(sc->flags & NO_RET_PERIPH))
+		mask |= RETAIN_PERIPH;
 
 	for (i = 0; i < sc->cxc_count; i++)
 		regmap_update_bits(sc->regmap, sc->cxcs[i], mask, 0);
diff --git a/drivers/clk/qcom/gdsc.h b/drivers/clk/qcom/gdsc.h
index bd537438c7932..5bb396b344d16 100644
--- a/drivers/clk/qcom/gdsc.h
+++ b/drivers/clk/qcom/gdsc.h
@@ -42,7 +42,7 @@ struct gdsc {
 #define PWRSTS_ON		BIT(2)
 #define PWRSTS_OFF_ON		(PWRSTS_OFF | PWRSTS_ON)
 #define PWRSTS_RET_ON		(PWRSTS_RET | PWRSTS_ON)
-	const u8 flags;
+	const u16 flags;
 #define VOTABLE		BIT(0)
 #define CLAMP_IO	BIT(1)
 #define HW_CTRL		BIT(2)
@@ -51,6 +51,7 @@ struct gdsc {
 #define POLL_CFG_GDSCR	BIT(5)
 #define ALWAYS_ON	BIT(6)
 #define RETAIN_FF_ENABLE	BIT(7)
+#define NO_RET_PERIPH	BIT(8)
 	struct reset_controller_dev *rcdev;
 	unsigned int *resets;
 	unsigned int reset_count;
diff --git a/drivers/clk/qcom/gpucc-msm8998.c b/drivers/clk/qcom/gpucc-msm8998.c
index 9b3923af02a14..1a518c4915b4b 100644
--- a/drivers/clk/qcom/gpucc-msm8998.c
+++ b/drivers/clk/qcom/gpucc-msm8998.c
@@ -253,12 +253,16 @@ static struct gdsc gpu_cx_gdsc = {
 static struct gdsc gpu_gx_gdsc = {
 	.gdscr = 0x1094,
 	.clamp_io_ctrl = 0x130,
+	.resets = (unsigned int []){ GPU_GX_BCR },
+	.reset_count = 1,
+	.cxcs = (unsigned int []){ 0x1098 },
+	.cxc_count = 1,
 	.pd = {
 		.name = "gpu_gx",
 	},
 	.parent = &gpu_cx_gdsc.pd,
-	.pwrsts = PWRSTS_OFF_ON,
-	.flags = CLAMP_IO | AON_RESET,
+	.pwrsts = PWRSTS_OFF_ON | PWRSTS_RET,
+	.flags = CLAMP_IO | SW_RESET | AON_RESET | NO_RET_PERIPH,
 };
 
 static struct clk_regmap *gpucc_msm8998_clocks[] = {
diff --git a/drivers/cpufreq/qcom-cpufreq-hw.c b/drivers/cpufreq/qcom-cpufreq-hw.c
index 2726e77c9e5a9..6de07556665b1 100644
--- a/drivers/cpufreq/qcom-cpufreq-hw.c
+++ b/drivers/cpufreq/qcom-cpufreq-hw.c
@@ -317,9 +317,9 @@ static int qcom_cpufreq_hw_cpu_init(struct cpufreq_policy *policy)
 	}
 
 	base = ioremap(res->start, resource_size(res));
-	if (IS_ERR(base)) {
+	if (!base) {
 		dev_err(dev, "failed to map resource %pR\n", res);
-		ret = PTR_ERR(base);
+		ret = -ENOMEM;
 		goto release_region;
 	}
 
@@ -368,7 +368,7 @@ static int qcom_cpufreq_hw_cpu_init(struct cpufreq_policy *policy)
 error:
 	kfree(data);
 unmap_base:
-	iounmap(data->base);
+	iounmap(base);
 release_region:
 	release_mem_region(res->start, resource_size(res));
 	return ret;
diff --git a/drivers/firmware/efi/libstub/efi-stub.c b/drivers/firmware/efi/libstub/efi-stub.c
index 914a343c7785c..0ab439c53eee3 100644
--- a/drivers/firmware/efi/libstub/efi-stub.c
+++ b/drivers/firmware/efi/libstub/efi-stub.c
@@ -96,6 +96,18 @@ static void install_memreserve_table(void)
 		efi_err("Failed to install memreserve config table!\n");
 }
 
+static u32 get_supported_rt_services(void)
+{
+	const efi_rt_properties_table_t *rt_prop_table;
+	u32 supported = EFI_RT_SUPPORTED_ALL;
+
+	rt_prop_table = get_efi_config_table(EFI_RT_PROPERTIES_TABLE_GUID);
+	if (rt_prop_table)
+		supported &= rt_prop_table->runtime_services_supported;
+
+	return supported;
+}
+
 /*
  * EFI entry point for the arm/arm64 EFI stubs. This is the entrypoint
  * that is described in the PE/COFF header. Most of the code is the same
@@ -250,6 +262,10 @@ efi_status_t __efiapi efi_pe_entry(efi_handle_t handle,
 			  (prop_tbl->memory_protection_attribute &
 			   EFI_PROPERTIES_RUNTIME_MEMORY_PROTECTION_NON_EXECUTABLE_PE_DATA);
 
+	/* force efi_novamap if SetVirtualAddressMap() is unsupported */
+	efi_novamap |= !(get_supported_rt_services() &
+			 EFI_RT_SUPPORTED_SET_VIRTUAL_ADDRESS_MAP);
+
 	/* hibernation expects the runtime regions to stay in the same place */
 	if (!IS_ENABLED(CONFIG_HIBERNATION) && !efi_nokaslr && !flat_va_mapping) {
 		/*
diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
index 825b362eb4b7d..6898c27f71f85 100644
--- a/drivers/gpio/gpio-pca953x.c
+++ b/drivers/gpio/gpio-pca953x.c
@@ -112,8 +112,29 @@ MODULE_DEVICE_TABLE(i2c, pca953x_id);
 #ifdef CONFIG_GPIO_PCA953X_IRQ
 
 #include <linux/dmi.h>
-#include <linux/gpio.h>
-#include <linux/list.h>
+
+static const struct acpi_gpio_params pca953x_irq_gpios = { 0, 0, true };
+
+static const struct acpi_gpio_mapping pca953x_acpi_irq_gpios[] = {
+	{ "irq-gpios", &pca953x_irq_gpios, 1, ACPI_GPIO_QUIRK_ABSOLUTE_NUMBER },
+	{ }
+};
+
+static int pca953x_acpi_get_irq(struct device *dev)
+{
+	int ret;
+
+	ret = devm_acpi_dev_add_driver_gpios(dev, pca953x_acpi_irq_gpios);
+	if (ret)
+		dev_warn(dev, "can't add GPIO ACPI mapping\n");
+
+	ret = acpi_dev_gpio_irq_get_by(ACPI_COMPANION(dev), "irq-gpios", 0);
+	if (ret < 0)
+		return ret;
+
+	dev_info(dev, "ACPI interrupt quirk (IRQ %d)\n", ret);
+	return ret;
+}
 
 static const struct dmi_system_id pca953x_dmi_acpi_irq_info[] = {
 	{
@@ -132,59 +153,6 @@ static const struct dmi_system_id pca953x_dmi_acpi_irq_info[] = {
 	},
 	{}
 };
-
-#ifdef CONFIG_ACPI
-static int pca953x_acpi_get_pin(struct acpi_resource *ares, void *data)
-{
-	struct acpi_resource_gpio *agpio;
-	int *pin = data;
-
-	if (acpi_gpio_get_irq_resource(ares, &agpio))
-		*pin = agpio->pin_table[0];
-	return 1;
-}
-
-static int pca953x_acpi_find_pin(struct device *dev)
-{
-	struct acpi_device *adev = ACPI_COMPANION(dev);
-	int pin = -ENOENT, ret;
-	LIST_HEAD(r);
-
-	ret = acpi_dev_get_resources(adev, &r, pca953x_acpi_get_pin, &pin);
-	acpi_dev_free_resource_list(&r);
-	if (ret < 0)
-		return ret;
-
-	return pin;
-}
-#else
-static inline int pca953x_acpi_find_pin(struct device *dev) { return -ENXIO; }
-#endif
-
-static int pca953x_acpi_get_irq(struct device *dev)
-{
-	int pin, ret;
-
-	pin = pca953x_acpi_find_pin(dev);
-	if (pin < 0)
-		return pin;
-
-	dev_info(dev, "Applying ACPI interrupt quirk (GPIO %d)\n", pin);
-
-	if (!gpio_is_valid(pin))
-		return -EINVAL;
-
-	ret = gpio_request(pin, "pca953x interrupt");
-	if (ret)
-		return ret;
-
-	ret = gpio_to_irq(pin);
-
-	/* When pin is used as an IRQ, no need to keep it requested */
-	gpio_free(pin);
-
-	return ret;
-}
 #endif
 
 static const struct acpi_device_id pca953x_acpi_ids[] = {
diff --git a/drivers/gpio/gpiolib-acpi.c b/drivers/gpio/gpiolib-acpi.c
index 834a12f3219e5..49a1f8ce4baa6 100644
--- a/drivers/gpio/gpiolib-acpi.c
+++ b/drivers/gpio/gpiolib-acpi.c
@@ -649,6 +649,7 @@ static int acpi_populate_gpio_lookup(struct acpi_resource *ares, void *data)
 	if (!lookup->desc) {
 		const struct acpi_resource_gpio *agpio = &ares->data.gpio;
 		bool gpioint = agpio->connection_type == ACPI_RESOURCE_GPIO_TYPE_INT;
+		struct gpio_desc *desc;
 		int pin_index;
 
 		if (lookup->info.quirks & ACPI_GPIO_QUIRK_ONLY_GPIOIO && gpioint)
@@ -661,8 +662,12 @@ static int acpi_populate_gpio_lookup(struct acpi_resource *ares, void *data)
 		if (pin_index >= agpio->pin_table_length)
 			return 1;
 
-		lookup->desc = acpi_get_gpiod(agpio->resource_source.string_ptr,
+		if (lookup->info.quirks & ACPI_GPIO_QUIRK_ABSOLUTE_NUMBER)
+			desc = gpio_to_desc(agpio->pin_table[pin_index]);
+		else
+			desc = acpi_get_gpiod(agpio->resource_source.string_ptr,
 					      agpio->pin_table[pin_index]);
+		lookup->desc = desc;
 		lookup->info.pin_config = agpio->pin_config;
 		lookup->info.gpioint = gpioint;
 
@@ -911,8 +916,9 @@ struct gpio_desc *acpi_node_get_gpiod(struct fwnode_handle *fwnode,
 }
 
 /**
- * acpi_dev_gpio_irq_get() - Find GpioInt and translate it to Linux IRQ number
+ * acpi_dev_gpio_irq_get_by() - Find GpioInt and translate it to Linux IRQ number
  * @adev: pointer to a ACPI device to get IRQ from
+ * @name: optional name of GpioInt resource
  * @index: index of GpioInt resource (starting from %0)
  *
  * If the device has one or more GpioInt resources, this function can be
@@ -922,9 +928,12 @@ struct gpio_desc *acpi_node_get_gpiod(struct fwnode_handle *fwnode,
 * The function is idempotent, though each time it runs it will configure GPIO
 * pin direction according to the flags in GpioInt resource.
 *
+ * The function takes optional @name parameter. If the resource has a property
+ * name, then only those will be taken into account.
+ *
 * Return: Linux IRQ number (> %0) on success, negative errno on failure.
 */
-int acpi_dev_gpio_irq_get(struct acpi_device *adev, int index)
+int acpi_dev_gpio_irq_get_by(struct acpi_device *adev, const char *name, int index)
 {
 	int idx, i;
 	unsigned int irq_flags;
@@ -934,7 +943,7 @@ int acpi_dev_gpio_irq_get(struct acpi_device *adev, int index)
 		struct acpi_gpio_info info;
 		struct gpio_desc *desc;
 
-		desc = acpi_get_gpiod_by_index(adev, NULL, i, &info);
+		desc = acpi_get_gpiod_by_index(adev, name, i, &info);
 
 		/* Ignore -EPROBE_DEFER, it only matters if idx matches */
 		if (IS_ERR(desc) && PTR_ERR(desc) != -EPROBE_DEFER)
@@ -971,7 +980,7 @@ int acpi_dev_gpio_irq_get(struct acpi_device *adev, int index)
 	}
 	return -ENOENT;
 }
-EXPORT_SYMBOL_GPL(acpi_dev_gpio_irq_get);
+EXPORT_SYMBOL_GPL(acpi_dev_gpio_irq_get_by);
 
 static acpi_status
 acpi_gpio_adr_space_handler(u32 function, acpi_physical_address address,
diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
index 7e17d4edccb12..7f557ea905424 100644
--- a/drivers/gpio/gpiolib.c
+++ b/drivers/gpio/gpiolib.c
@@ -472,8 +472,12 @@ EXPORT_SYMBOL_GPL(gpiochip_line_is_valid);
 static void gpiodevice_release(struct device *dev)
 {
 	struct gpio_device *gdev = dev_get_drvdata(dev);
+	unsigned long flags;
 
+	spin_lock_irqsave(&gpio_lock, flags);
 	list_del(&gdev->list);
+	spin_unlock_irqrestore(&gpio_lock, flags);
+
 	ida_free(&gpio_ida, gdev->id);
 	kfree_const(gdev->label);
 	kfree(gdev->descs);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 87f095dc385c7..76c31aa7b84df 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -178,6 +178,7 @@ extern uint amdgpu_smu_memory_pool_size;
 extern uint amdgpu_dc_feature_mask;
 extern uint amdgpu_dc_debug_mask;
 extern uint amdgpu_dm_abm_level;
+extern int amdgpu_backlight;
 extern struct amdgpu_mgpu_info mgpu_info;
 extern int amdgpu_ras_enable;
 extern uint amdgpu_ras_mask;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 0b786d8dd8bc7..1a880cb48d19e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -768,6 +768,10 @@ uint amdgpu_dm_abm_level = 0;
 MODULE_PARM_DESC(abmlevel, "ABM level (0 = off (default), 1-4 = backlight reduction level) ");
 module_param_named(abmlevel, amdgpu_dm_abm_level, uint, 0444);
 
+int amdgpu_backlight = -1;
+MODULE_PARM_DESC(backlight, "Backlight control (0 = pwm, 1 = aux, -1 auto (default))");
+module_param_named(backlight, amdgpu_backlight, bint, 0444);
+
 /**
  * DOC: tmz (int)
  * Trusted Memory Zone (TMZ) is a method to protect data being written
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index bffaefaf5a292..ea1ea147f6073 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -2140,6 +2140,11 @@ static void update_connector_ext_caps(struct amdgpu_dm_connector *aconnector)
 	    caps->ext_caps->bits.hdr_aux_backlight_control == 1)
 		caps->aux_support = true;
 
+	if (amdgpu_backlight == 0)
+		caps->aux_support = false;
+	else if (amdgpu_backlight == 1)
+		caps->aux_support = true;
+
 	/* From the specification (CTA-861-G), for calculating the maximum
	 * luminance we need to use:
	 *	Luminance = 50*2**(CV/32)
@@ -3038,19 +3043,6 @@ static void amdgpu_dm_update_backlight_caps(struct amdgpu_display_manager *dm)
 #endif
 }
 
-static int set_backlight_via_aux(struct dc_link *link, uint32_t brightness)
-{
-	bool rc;
-
-	if (!link)
-		return 1;
-
-	rc = dc_link_set_backlight_level_nits(link, true, brightness,
-					      AUX_BL_DEFAULT_TRANSITION_TIME_MS);
-
-	return rc ? 0 : 1;
-}
-
 static int get_brightness_range(const struct amdgpu_dm_backlight_caps *caps,
 				unsigned *min, unsigned *max)
 {
@@ -3113,9 +3105,10 @@ static int amdgpu_dm_backlight_update_status(struct backlight_device *bd)
 	brightness = convert_brightness_from_user(&caps, bd->props.brightness);
 	// Change brightness based on AUX property
 	if (caps.aux_support)
-		return set_backlight_via_aux(link, brightness);
-
-	rc = dc_link_set_backlight_level(dm->backlight_link, brightness, 0);
+		rc = dc_link_set_backlight_level_nits(link, true, brightness,
+						      AUX_BL_DEFAULT_TRANSITION_TIME_MS);
+	else
+		rc = dc_link_set_backlight_level(dm->backlight_link, brightness, 0);
 
 	return rc ? 0 : 1;
 }
@@ -3123,11 +3116,27 @@ static int amdgpu_dm_backlight_update_status(struct backlight_device *bd)
 static int amdgpu_dm_backlight_get_brightness(struct backlight_device *bd)
 {
 	struct amdgpu_display_manager *dm = bl_get_data(bd);
-	int ret = dc_link_get_backlight_level(dm->backlight_link);
+	struct amdgpu_dm_backlight_caps caps;
+
+	amdgpu_dm_update_backlight_caps(dm);
+	caps = dm->backlight_caps;
+
+	if (caps.aux_support) {
+		struct dc_link *link = (struct dc_link *)dm->backlight_link;
+		u32 avg, peak;
+		bool rc;
 
-	if (ret == DC_ERROR_UNEXPECTED)
-		return bd->props.brightness;
-	return convert_brightness_to_user(&dm->backlight_caps, ret);
+		rc = dc_link_get_backlight_level_nits(link, &avg, &peak);
+		if (!rc)
+			return bd->props.brightness;
+		return convert_brightness_to_user(&caps, avg);
+	} else {
+		int ret = dc_link_get_backlight_level(dm->backlight_link);
+
+		if (ret == DC_ERROR_UNEXPECTED)
+			return bd->props.brightness;
+		return convert_brightness_to_user(&caps, ret);
+	}
 }
 
 static const struct backlight_ops amdgpu_dm_backlight_ops = {
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
index 21c7b642a8b4e..f0039599e02f7 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
@@ -2555,7 +2555,6 @@ bool dc_link_set_backlight_level(const struct dc_link *link,
 			if (pipe_ctx->plane_state == NULL)
 				frame_ramp = 0;
 		} else {
-			ASSERT(false);
 			return false;
 		}

diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
index 4e2dcf259428f..b5fe2a008bd47 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
@@ -1058,8 +1058,6 @@ static void patch_bounding_box(struct dc *dc, struct _vcs_dpi_soc_bounding_box_s
 {
 	int i;

-	DC_FP_START();
-
 	if (dc->bb_overrides.sr_exit_time_ns) {
 		for (i = 0; i < WM_SET_COUNT; i++) {
 			dc->clk_mgr->bw_params->wm_table.entries[i].sr_exit_time_us =
@@ -1084,8 +1082,6 @@ static void patch_bounding_box(struct dc *dc, struct _vcs_dpi_soc_bounding_box_s
 					dc->bb_overrides.dram_clock_change_latency_ns / 1000.0;
 		}
 	}
-
-	DC_FP_END();
 }

 void dcn21_calculate_wm(
@@ -1183,7 +1179,7 @@ static noinline bool dcn21_validate_bandwidth_fp(struct dc *dc,
 	int vlevel = 0;
 	int pipe_split_from[MAX_PIPES];
 	int pipe_cnt = 0;
-	display_e2e_pipe_params_st *pipes = kzalloc(dc->res_pool->pipe_count * sizeof(display_e2e_pipe_params_st), GFP_KERNEL);
+	display_e2e_pipe_params_st *pipes = kzalloc(dc->res_pool->pipe_count * sizeof(display_e2e_pipe_params_st), GFP_ATOMIC);
 	DC_LOGGER_INIT(dc->ctx->logger);

 	BW_VAL_TRACE_COUNT();
diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
index 7eada3098ffcc..18e4eb8884c26 100644
--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
+++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
@@ -1506,6 +1506,48 @@ static int vega10_populate_single_lclk_level(struct pp_hwmgr *hwmgr,
 	return 0;
 }

+static int vega10_override_pcie_parameters(struct pp_hwmgr *hwmgr)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)(hwmgr->adev);
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+	uint32_t pcie_gen = 0, pcie_width = 0;
+	PPTable_t *pp_table = &(data->smc_state_table.pp_table);
+	int i;
+
+	if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN4)
+		pcie_gen = 3;
+	else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3)
+		pcie_gen = 2;
+	else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2)
+		pcie_gen = 1;
+	else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN1)
+		pcie_gen = 0;
+
+	if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X16)
+		pcie_width = 6;
+	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X12)
+		pcie_width = 5;
+	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X8)
+		pcie_width = 4;
+	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X4)
+		pcie_width = 3;
+	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X2)
+		pcie_width = 2;
+	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X1)
+		pcie_width = 1;
+
+	for (i = 0; i < NUM_LINK_LEVELS; i++) {
+		if (pp_table->PcieGenSpeed[i] > pcie_gen)
+			pp_table->PcieGenSpeed[i] = pcie_gen;
+
+		if (pp_table->PcieLaneCount[i] > pcie_width)
+			pp_table->PcieLaneCount[i] = pcie_width;
+	}
+
+	return 0;
+}
+
 static int vega10_populate_smc_link_levels(struct pp_hwmgr *hwmgr)
 {
 	int result = -1;
@@ -2557,6 +2599,11 @@ static int vega10_init_smc_table(struct pp_hwmgr *hwmgr)
 			"Failed to initialize Link Level!",
 			return result);

+	result = vega10_override_pcie_parameters(hwmgr);
+	PP_ASSERT_WITH_CODE(!result,
+			"Failed to override pcie parameters!",
+			return result);
+
 	result = vega10_populate_all_graphic_levels(hwmgr);
 	PP_ASSERT_WITH_CODE(!result,
 			"Failed to initialize Graphics Level!",
@@ -2923,6 +2970,7 @@ static int vega10_start_dpm(struct pp_hwmgr *hwmgr, uint32_t bitmap)
 	return 0;
 }

+
 static int vega10_enable_disable_PCC_limit_feature(struct pp_hwmgr *hwmgr, bool enable)
 {
 	struct vega10_hwmgr *data = hwmgr->backend;
diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega12_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega12_hwmgr.c
index dc206fa88c5e5..62076035029ac 100644
--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega12_hwmgr.c
+++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega12_hwmgr.c
@@ -481,6 +481,67 @@ static void vega12_init_dpm_state(struct vega12_dpm_state *dpm_state)
 	dpm_state->hard_max_level = 0xffff;
 }

+static int vega12_override_pcie_parameters(struct pp_hwmgr *hwmgr)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)(hwmgr->adev);
+	struct vega12_hwmgr *data =
+			(struct vega12_hwmgr *)(hwmgr->backend);
+	uint32_t pcie_gen = 0, pcie_width = 0, smu_pcie_arg, pcie_gen_arg, pcie_width_arg;
+	PPTable_t *pp_table = &(data->smc_state_table.pp_table);
+	int i;
+	int ret;
+
+	if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN4)
+		pcie_gen = 3;
+	else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3)
+		pcie_gen = 2;
+	else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2)
+		pcie_gen = 1;
+	else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN1)
+		pcie_gen = 0;
+
+	if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X16)
+		pcie_width = 6;
+	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X12)
+		pcie_width = 5;
+	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X8)
+		pcie_width = 4;
+	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X4)
+		pcie_width = 3;
+	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X2)
+		pcie_width = 2;
+	else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X1)
+		pcie_width = 1;
+
+	/* Bit 31:16: LCLK DPM level. 0 is DPM0, and 1 is DPM1
+	 * Bit 15:8:  PCIE GEN, 0 to 3 corresponds to GEN1 to GEN4
+	 * Bit 7:0:   PCIE lane width, 1 to 7 corresponds is x1 to x32
+	 */
+	for (i = 0; i < NUM_LINK_LEVELS; i++) {
+		pcie_gen_arg = (pp_table->PcieGenSpeed[i] > pcie_gen) ? pcie_gen :
+			pp_table->PcieGenSpeed[i];
+		pcie_width_arg = (pp_table->PcieLaneCount[i] > pcie_width) ? pcie_width :
+			pp_table->PcieLaneCount[i];
+
+		if (pcie_gen_arg != pp_table->PcieGenSpeed[i] || pcie_width_arg !=
+		    pp_table->PcieLaneCount[i]) {
+			smu_pcie_arg = (i << 16) | (pcie_gen_arg << 8) | pcie_width_arg;
+			ret = smum_send_msg_to_smc_with_parameter(hwmgr,
+				PPSMC_MSG_OverridePcieParameters, smu_pcie_arg,
+				NULL);
+			PP_ASSERT_WITH_CODE(!ret,
+				"[OverridePcieParameters] Attempt to override pcie params failed!",
+				return ret);
+		}
+
+		/* update the pptable */
+		pp_table->PcieGenSpeed[i] = pcie_gen_arg;
+		pp_table->PcieLaneCount[i] = pcie_width_arg;
+	}
+
+	return 0;
+}
+
 static int vega12_get_number_of_dpm_level(struct pp_hwmgr *hwmgr,
 		PPCLK_e clk_id, uint32_t *num_of_levels)
 {
@@ -969,6 +1030,11 @@ static int vega12_enable_dpm_tasks(struct pp_hwmgr *hwmgr)
 			"Failed to enable all smu features!",
 			return result);

+	result = vega12_override_pcie_parameters(hwmgr);
+	PP_ASSERT_WITH_CODE(!result,
+			"[EnableDPMTasks] Failed to override pcie parameters!",
+			return result);
+
 	tmp_result = vega12_power_control_set_level(hwmgr);
 	PP_ASSERT_WITH_CODE(!tmp_result,
 			"Failed to power control set level!",
diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c
index da84012b7fd51..251979c059c8b 100644
--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c
+++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c
@@ -832,7 +832,9 @@ static int vega20_override_pcie_parameters(struct pp_hwmgr *hwmgr)
 	struct amdgpu_device *adev = (struct amdgpu_device *)(hwmgr->adev);
 	struct vega20_hwmgr *data =
 			(struct vega20_hwmgr *)(hwmgr->backend);
-	uint32_t pcie_gen = 0, pcie_width = 0, smu_pcie_arg;
+	uint32_t pcie_gen = 0, pcie_width = 0, smu_pcie_arg, pcie_gen_arg, pcie_width_arg;
+	PPTable_t *pp_table = &(data->smc_state_table.pp_table);
+	int i;
 	int ret;

 	if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN4)
@@ -861,17 +863,27 @@ static int vega20_override_pcie_parameters(struct pp_hwmgr *hwmgr)
 	 * Bit 15:8:  PCIE GEN, 0 to 3 corresponds to GEN1 to GEN4
 	 * Bit 7:0:   PCIE lane width, 1 to 7 corresponds is x1 to x32
 	 */
-	smu_pcie_arg = (1 << 16) | (pcie_gen << 8) | pcie_width;
-	ret = smum_send_msg_to_smc_with_parameter(hwmgr,
-			PPSMC_MSG_OverridePcieParameters, smu_pcie_arg,
-			NULL);
-	PP_ASSERT_WITH_CODE(!ret,
-		"[OverridePcieParameters] Attempt to override pcie params failed!",
-		return ret);
+	for (i = 0; i < NUM_LINK_LEVELS; i++) {
+		pcie_gen_arg = (pp_table->PcieGenSpeed[i] > pcie_gen) ? pcie_gen :
+			pp_table->PcieGenSpeed[i];
+		pcie_width_arg = (pp_table->PcieLaneCount[i] > pcie_width) ? pcie_width :
+			pp_table->PcieLaneCount[i];
+
+		if (pcie_gen_arg != pp_table->PcieGenSpeed[i] || pcie_width_arg !=
+		    pp_table->PcieLaneCount[i]) {
+			smu_pcie_arg = (i << 16) | (pcie_gen_arg << 8) | pcie_width_arg;
+			ret = smum_send_msg_to_smc_with_parameter(hwmgr,
+				PPSMC_MSG_OverridePcieParameters, smu_pcie_arg,
+				NULL);
+			PP_ASSERT_WITH_CODE(!ret,
+				"[OverridePcieParameters] Attempt to override pcie params failed!",
+				return ret);
+		}

-	data->pcie_parameters_override = true;
-	data->pcie_gen_level1 = pcie_gen;
-	data->pcie_width_level1 = pcie_width;
+		/* update the pptable */
+		pp_table->PcieGenSpeed[i] = pcie_gen_arg;
+		pp_table->PcieLaneCount[i] = pcie_width_arg;
+	}

 	return 0;
 }
@@ -3320,9 +3332,7 @@ static int vega20_print_clock_levels(struct pp_hwmgr *hwmgr,
 			data->od8_settings.od8_settings_array;
 	OverDriveTable_t *od_table =
 			&(data->smc_state_table.overdrive_table);
-	struct phm_ppt_v3_information *pptable_information =
-		(struct phm_ppt_v3_information *)hwmgr->pptable;
-	PPTable_t *pptable = (PPTable_t *)pptable_information->smc_pptable;
+	PPTable_t *pptable = &(data->smc_state_table.pp_table);
 	struct pp_clock_levels_with_latency clocks;
 	struct vega20_single_dpm_table *fclk_dpm_table =
 			&(data->dpm_table.fclk_table);
@@ -3421,13 +3431,9 @@ static int vega20_print_clock_levels(struct pp_hwmgr *hwmgr,
 		current_lane_width =
 			vega20_get_current_pcie_link_width_level(hwmgr);
 		for (i = 0; i < NUM_LINK_LEVELS; i++) {
-			if (i == 1 && data->pcie_parameters_override) {
-				gen_speed = data->pcie_gen_level1;
-				lane_width = data->pcie_width_level1;
-			} else {
-				gen_speed = pptable->PcieGenSpeed[i];
-				lane_width = pptable->PcieLaneCount[i];
-			}
+			gen_speed = pptable->PcieGenSpeed[i];
+			lane_width = pptable->PcieLaneCount[i];
+
 			size += sprintf(buf + size, "%d: %s %s %dMhz %s\n", i,
 					(gen_speed == 0) ? "2.5GT/s," :
 					(gen_speed == 1) ? "5.0GT/s," :
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index e00616d94f26e..cfacce0418a49 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -340,13 +340,14 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
 	if (--shmem->vmap_use_count > 0)
 		return;

-	if (obj->import_attach)
+	if (obj->import_attach) {
 		dma_buf_vunmap(obj->import_attach->dmabuf, shmem->vaddr);
-	else
+	} else {
 		vunmap(shmem->vaddr);
+		drm_gem_shmem_put_pages(shmem);
+	}

 	shmem->vaddr = NULL;
-	drm_gem_shmem_put_pages(shmem);
 }

 /*
@@ -534,14 +535,28 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	struct drm_gem_object *obj = vma->vm_private_data;
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 	loff_t num_pages = obj->size >> PAGE_SHIFT;
+	vm_fault_t ret;
 	struct page *page;
+	pgoff_t page_offset;

-	if (vmf->pgoff >= num_pages || WARN_ON_ONCE(!shmem->pages))
-		return VM_FAULT_SIGBUS;
+	/* We don't use vmf->pgoff since that has the fake offset */
+	page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;

-	page = shmem->pages[vmf->pgoff];
+	mutex_lock(&shmem->pages_lock);

-	return vmf_insert_page(vma, vmf->address, page);
+	if (page_offset >= num_pages ||
+	    WARN_ON_ONCE(!shmem->pages) ||
+	    shmem->madv < 0) {
+		ret = VM_FAULT_SIGBUS;
+	} else {
+		page = shmem->pages[page_offset];
+
+		ret = vmf_insert_page(vma, vmf->address, page);
+	}
+
+	mutex_unlock(&shmem->pages_lock);
+
+	return ret;
 }

 static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
@@ -590,9 +605,6 @@ int drm_gem_shmem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 	struct drm_gem_shmem_object *shmem;
 	int ret;

-	/* Remove the fake offset */
-	vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
-
 	if (obj->import_attach) {
 		/* Drop the reference drm_gem_mmap_obj() acquired.*/
 		drm_gem_object_put(obj);
diff --git a/drivers/gpu/drm/drm_ioc32.c b/drivers/gpu/drm/drm_ioc32.c
index f86448ab1fe04..dc734d4828a17 100644
--- a/drivers/gpu/drm/drm_ioc32.c
+++ b/drivers/gpu/drm/drm_ioc32.c
@@ -99,6 +99,8 @@ static int compat_drm_version(struct file *file, unsigned int cmd,
 	if (copy_from_user(&v32, (void __user *)arg, sizeof(v32)))
 		return -EFAULT;

+	memset(&v, 0, sizeof(v));
+
 	v = (struct drm_version) {
 		.name_len = v32.name_len,
 		.name = compat_ptr(v32.name),
@@ -137,6 +139,9 @@ static int compat_drm_getunique(struct file *file, unsigned int cmd,

 	if (copy_from_user(&uq32, (void __user *)arg, sizeof(uq32)))
 		return -EFAULT;
+
+	memset(&uq, 0, sizeof(uq));
+
 	uq = (struct drm_unique){
 		.unique_len = uq32.unique_len,
 		.unique = compat_ptr(uq32.unique),
@@ -265,6 +270,8 @@ static int compat_drm_getclient(struct file *file, unsigned int cmd,
 	if (copy_from_user(&c32, argp, sizeof(c32)))
 		return -EFAULT;

+	memset(&client, 0, sizeof(client));
+
 	client.idx = c32.idx;

 	err = drm_ioctl_kernel(file, drm_getclient, &client, 0);
@@ -852,6 +859,8 @@ static int compat_drm_wait_vblank(struct file *file, unsigned int cmd,
 	if (copy_from_user(&req32, argp, sizeof(req32)))
 		return -EFAULT;

+	memset(&req, 0, sizeof(req));
+
 	req.request.type = req32.request.type;
 	req.request.sequence = req32.request.sequence;
 	req.request.signal = req32.request.signal;
@@ -889,6 +898,8 @@ static int compat_drm_mode_addfb2(struct file *file, unsigned int cmd,
 	struct drm_mode_fb_cmd2 req64;
 	int err;

+	memset(&req64, 0, sizeof(req64));
+
 	if (copy_from_user(&req64, argp,
 			   offsetof(drm_mode_fb_cmd232_t, modifier)))
 		return -EFAULT;
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index efdeb7b7b2a0a..a19537706ed1f 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -708,9 +708,12 @@ static int engine_setup_common(struct intel_engine_cs *engine)
 		goto err_status;
 	}

+	err = intel_engine_init_cmd_parser(engine);
+	if (err)
+		goto err_cmd_parser;
+
 	intel_engine_init_active(engine, ENGINE_PHYSICAL);
 	intel_engine_init_execlists(engine);
-	intel_engine_init_cmd_parser(engine);
 	intel_engine_init__pm(engine);
 	intel_engine_init_retire(engine);

@@ -724,6 +727,8 @@ static int engine_setup_common(struct intel_engine_cs *engine)

 	return 0;

+err_cmd_parser:
+	intel_breadcrumbs_free(engine->breadcrumbs);
 err_status:
 	cleanup_status_page(engine);
 	return err;
diff --git a/drivers/gpu/drm/i915/i915_cmd_parser.c b/drivers/gpu/drm/i915/i915_cmd_parser.c
index e7362ec22aded..9ce174950340b 100644
--- a/drivers/gpu/drm/i915/i915_cmd_parser.c
+++ b/drivers/gpu/drm/i915/i915_cmd_parser.c
@@ -939,7 +939,7 @@ static void fini_hash_table(struct intel_engine_cs *engine)
 * struct intel_engine_cs based on whether the platform requires software
 * command parsing.
 */
-void intel_engine_init_cmd_parser(struct intel_engine_cs *engine)
+int intel_engine_init_cmd_parser(struct intel_engine_cs *engine)
 {
 	const struct drm_i915_cmd_table *cmd_tables;
 	int cmd_table_count;
@@ -947,7 +947,7 @@ void intel_engine_init_cmd_parser(struct intel_engine_cs *engine)

 	if (!IS_GEN(engine->i915, 7) && !(IS_GEN(engine->i915, 9) &&
 					  engine->class == COPY_ENGINE_CLASS))
-		return;
+		return 0;

 	switch (engine->class) {
 	case RENDER_CLASS:
@@ -1012,19 +1012,19 @@ void intel_engine_init_cmd_parser(struct intel_engine_cs *engine)
 		break;
 	default:
 		MISSING_CASE(engine->class);
-		return;
+		goto out;
 	}

 	if (!validate_cmds_sorted(engine, cmd_tables, cmd_table_count)) {
 		drm_err(&engine->i915->drm,
 			"%s: command descriptions are not sorted\n",
 			engine->name);
-		return;
+		goto out;
 	}
 	if (!validate_regs_sorted(engine)) {
 		drm_err(&engine->i915->drm,
 			"%s: registers are not sorted\n", engine->name);
-		return;
+		goto out;
 	}

 	ret = init_hash_table(engine, cmd_tables, cmd_table_count);
@@ -1032,10 +1032,17 @@ void intel_engine_init_cmd_parser(struct intel_engine_cs *engine)
 		drm_err(&engine->i915->drm,
 			"%s: initialised failed!\n", engine->name);
 		fini_hash_table(engine);
-		return;
+		goto out;
 	}

 	engine->flags |= I915_ENGINE_USING_CMD_PARSER;
+
+out:
+	if (intel_engine_requires_cmd_parser(engine) &&
+	    !intel_engine_using_cmd_parser(engine))
+		return -EINVAL;
+
+	return 0;
 }

 /**
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index fa830e77bb648..6909901b35513 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1946,7 +1946,7 @@ const char *i915_cache_level_str(struct drm_i915_private *i915, int type);

 /* i915_cmd_parser.c */
 int i915_cmd_parser_get_version(struct drm_i915_private *dev_priv);
-void intel_engine_init_cmd_parser(struct intel_engine_cs *engine);
+int intel_engine_init_cmd_parser(struct intel_engine_cs *engine);
 void intel_engine_cleanup_cmd_parser(struct intel_engine_cs *engine);
 int intel_engine_cmd_parser(struct intel_engine_cs *engine,
 			    struct i915_vma *batch,
diff --git a/drivers/gpu/drm/meson/meson_drv.c b/drivers/gpu/drm/meson/meson_drv.c
index 3d1de9cbb1c8d..db56732bdd260 100644
--- a/drivers/gpu/drm/meson/meson_drv.c
+++ b/drivers/gpu/drm/meson/meson_drv.c
@@ -482,6 +482,16 @@ static int meson_probe_remote(struct platform_device *pdev,
 	return count;
 }

+static void meson_drv_shutdown(struct platform_device *pdev)
+{
+	struct meson_drm *priv = dev_get_drvdata(&pdev->dev);
+	struct drm_device *drm = priv->drm;
+
+	DRM_DEBUG_DRIVER("\n");
+	drm_kms_helper_poll_fini(drm);
+	drm_atomic_helper_shutdown(drm);
+}
+
 static int meson_drv_probe(struct platform_device *pdev)
 {
 	struct component_match *match = NULL;
@@ -553,6 +563,7 @@ static const struct dev_pm_ops meson_drv_pm_ops = {

 static struct platform_driver meson_drm_platform_driver = {
 	.probe      = meson_drv_probe,
+	.shutdown   = meson_drv_shutdown,
 	.driver     = {
 		.name	= "meson-drm",
 		.of_match_table = dt_match,
diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
index 6063f3a153290..862ef59d4d033 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -327,6 +327,7 @@ static void qxl_crtc_update_monitors_config(struct drm_crtc *crtc,

 	head.id = i;
 	head.flags = 0;
+	head.surface_id = 0;
 	oldcount = qdev->monitors_config->count;
 	if (crtc->state->active) {
 		struct drm_display_mode *mode = &crtc->mode;
diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c
index cc397671f6898..0f5d1e598d75f 100644
--- a/drivers/gpu/drm/tiny/gm12u320.c
+++ b/drivers/gpu/drm/tiny/gm12u320.c
@@ -83,6 +83,7 @@ MODULE_PARM_DESC(eco_mode, "Turn on Eco mode (less bright, more silent)");

 struct gm12u320_device {
 	struct drm_device                dev;
+	struct device                   *dmadev;
 	struct drm_simple_display_pipe   pipe;
 	struct drm_connector             conn;
 	struct usb_device               *udev;
@@ -598,6 +599,22 @@ static const uint64_t gm12u320_pipe_modifiers[] = {
 	DRM_FORMAT_MOD_INVALID
 };

+/*
+ * FIXME: Dma-buf sharing requires DMA support by the importing device.
+ *        This function is a workaround to make USB devices work as well.
+ *        See todo.rst for how to fix the issue in the dma-buf framework.
+ */
+static struct drm_gem_object *gm12u320_gem_prime_import(struct drm_device *dev,
+							struct dma_buf *dma_buf)
+{
+	struct gm12u320_device *gm12u320 = to_gm12u320(dev);
+
+	if (!gm12u320->dmadev)
+		return ERR_PTR(-ENODEV);
+
+	return drm_gem_prime_import_dev(dev, dma_buf, gm12u320->dmadev);
+}
+
 DEFINE_DRM_GEM_FOPS(gm12u320_fops);

 static struct drm_driver gm12u320_drm_driver = {
@@ -611,6 +628,7 @@ static struct drm_driver gm12u320_drm_driver = {

 	.fops		 = &gm12u320_fops,
 	DRM_GEM_SHMEM_DRIVER_OPS,
+	.gem_prime_import = gm12u320_gem_prime_import,
 };

 static const struct drm_mode_config_funcs gm12u320_mode_config_funcs = {
@@ -637,16 +655,19 @@ static int gm12u320_usb_probe(struct usb_interface *interface,
 				      struct gm12u320_device, dev);
 	if (IS_ERR(gm12u320))
 		return PTR_ERR(gm12u320);
+	dev = &gm12u320->dev;
+
+	gm12u320->dmadev = usb_intf_get_dma_device(to_usb_interface(dev->dev));
+	if (!gm12u320->dmadev)
+		drm_warn(dev, "buffer sharing not supported"); /* not an error */

 	gm12u320->udev = interface_to_usbdev(interface);
 	INIT_DELAYED_WORK(&gm12u320->fb_update.work, gm12u320_fb_update_work);
 	mutex_init(&gm12u320->fb_update.lock);

-	dev = &gm12u320->dev;
-
 	ret = drmm_mode_config_init(dev);
 	if (ret)
-		return ret;
+		goto err_put_device;

 	dev->mode_config.min_width = GM12U320_USER_WIDTH;
 	dev->mode_config.max_width = GM12U320_USER_WIDTH;
@@ -656,15 +677,15 @@ static int gm12u320_usb_probe(struct usb_interface *interface,

 	ret = gm12u320_usb_alloc(gm12u320);
 	if (ret)
-		return ret;
+		goto err_put_device;

 	ret = gm12u320_set_ecomode(gm12u320);
 	if (ret)
-		return ret;
+		goto err_put_device;

 	ret = gm12u320_conn_init(gm12u320);
 	if (ret)
-		return ret;
+		goto err_put_device;

 	ret = drm_simple_display_pipe_init(&gm12u320->dev,
 					   &gm12u320->pipe,
@@ -674,24 +695,31 @@ static int gm12u320_usb_probe(struct usb_interface *interface,
 					   gm12u320_pipe_modifiers,
 					   &gm12u320->conn);
 	if (ret)
-		return ret;
+		goto err_put_device;

 	drm_mode_config_reset(dev);

 	usb_set_intfdata(interface, dev);
 	ret = drm_dev_register(dev, 0);
 	if (ret)
-		return ret;
+		goto err_put_device;

 	drm_fbdev_generic_setup(dev, 0);

 	return 0;
+
+err_put_device:
+	put_device(gm12u320->dmadev);
+	return ret;
 }

 static void gm12u320_usb_disconnect(struct usb_interface *interface)
 {
 	struct drm_device *dev = usb_get_intfdata(interface);
+	struct gm12u320_device *gm12u320 = to_gm12u320(dev);

+	put_device(gm12u320->dmadev);
+	gm12u320->dmadev = NULL;
 	drm_dev_unplug(dev);
 	drm_atomic_helper_shutdown(dev);
 }
diff --git a/drivers/gpu/drm/udl/udl_drv.c b/drivers/gpu/drm/udl/udl_drv.c
index 96d4317a2c1bd..bcf32d188c1b1 100644
--- a/drivers/gpu/drm/udl/udl_drv.c
+++ b/drivers/gpu/drm/udl/udl_drv.c
@@ -32,6 +32,22 @@ static int udl_usb_resume(struct usb_interface *interface)
 	return drm_mode_config_helper_resume(dev);
 }

+/*
+ * FIXME: Dma-buf sharing requires DMA support by the importing device.
+ *        This function is a workaround to make USB devices work as well.
+ *        See todo.rst for how to fix the issue in the dma-buf framework.
+ */
+static struct drm_gem_object *udl_driver_gem_prime_import(struct drm_device *dev,
+							  struct dma_buf *dma_buf)
+{
+	struct udl_device *udl = to_udl(dev);
+
+	if (!udl->dmadev)
+		return ERR_PTR(-ENODEV);
+
+	return drm_gem_prime_import_dev(dev, dma_buf, udl->dmadev);
+}
+
 DEFINE_DRM_GEM_FOPS(udl_driver_fops);

 static struct drm_driver driver = {
@@ -42,6 +58,7 @@ static struct drm_driver driver = {

 	.fops = &udl_driver_fops,
 	DRM_GEM_SHMEM_DRIVER_OPS,
+	.gem_prime_import = udl_driver_gem_prime_import,

 	.name = DRIVER_NAME,
 	.desc = DRIVER_DESC,
diff --git a/drivers/gpu/drm/udl/udl_drv.h b/drivers/gpu/drm/udl/udl_drv.h
index b1461f30780bc..8aab14871e1b7 100644
--- a/drivers/gpu/drm/udl/udl_drv.h
+++ b/drivers/gpu/drm/udl/udl_drv.h
@@ -50,6 +50,7 @@ struct urb_list {
 struct udl_device {
 	struct drm_device drm;
 	struct device *dev;
+	struct device *dmadev;
 	struct usb_device *udev;

 	struct drm_simple_display_pipe display_pipe;
diff --git a/drivers/gpu/drm/udl/udl_main.c b/drivers/gpu/drm/udl/udl_main.c
index f5d27f2a56543..5f1d3891ed549 100644
--- a/drivers/gpu/drm/udl/udl_main.c
+++ b/drivers/gpu/drm/udl/udl_main.c
@@ -314,6 +314,10 @@ int udl_init(struct udl_device *udl)

 	DRM_DEBUG("\n");

+	udl->dmadev = usb_intf_get_dma_device(to_usb_interface(dev->dev));
+	if (!udl->dmadev)
+		drm_warn(dev, "buffer sharing not supported"); /* not an error */
+
 	mutex_init(&udl->gem_lock);

 	if (!udl_parse_vendor_descriptor(dev, udl->udev)) {
@@ -342,12 +346,18 @@ int udl_init(struct udl_device *udl)
 err:
 	if (udl->urbs.count)
 		udl_free_urb_list(dev);
+	put_device(udl->dmadev);
 	DRM_ERROR("%d\n", ret);
 	return ret;
 }

 int udl_drop_usb(struct drm_device *dev)
 {
+	struct udl_device *udl = to_udl(dev);
+
 	udl_free_urb_list(dev);
+	put_device(udl->dmadev);
+	udl->dmadev = NULL;
+
 	return 0;
 }
|
|
index fcdc922bc9733..271bd8d243395 100644
|
|
--- a/drivers/hid/hid-logitech-dj.c
|
|
+++ b/drivers/hid/hid-logitech-dj.c
|
|
@@ -995,7 +995,12 @@ static void logi_hidpp_recv_queue_notif(struct hid_device *hdev,
|
|
workitem.reports_supported |= STD_KEYBOARD;
|
|
break;
|
|
case 0x0d:
|
|
- device_type = "eQUAD Lightspeed 1_1";
|
|
+ device_type = "eQUAD Lightspeed 1.1";
|
|
+ logi_hidpp_dev_conn_notif_equad(hdev, hidpp_report, &workitem);
|
|
+ workitem.reports_supported |= STD_KEYBOARD;
|
|
+ break;
|
|
+ case 0x0f:
|
|
+ device_type = "eQUAD Lightspeed 1.2";
|
|
logi_hidpp_dev_conn_notif_equad(hdev, hidpp_report, &workitem);
|
|
workitem.reports_supported |= STD_KEYBOARD;
|
|
break;
|
|
diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c
|
|
index 217def2d7cb44..ad6630e3cc779 100644
|
|
--- a/drivers/i2c/busses/i2c-rcar.c
|
|
+++ b/drivers/i2c/busses/i2c-rcar.c
|
|
@@ -91,7 +91,6 @@
|
|
|
|
#define RCAR_BUS_PHASE_START (MDBS | MIE | ESG)
|
|
#define RCAR_BUS_PHASE_DATA (MDBS | MIE)
|
|
-#define RCAR_BUS_MASK_DATA (~(ESG | FSB) & 0xFF)
|
|
#define RCAR_BUS_PHASE_STOP (MDBS | MIE | FSB)
|
|
|
|
#define RCAR_IRQ_SEND (MNR | MAL | MST | MAT | MDE)
|
|
@@ -120,6 +119,7 @@ enum rcar_i2c_type {
|
|
};
|
|
|
|
struct rcar_i2c_priv {
|
|
+ u32 flags;
|
|
void __iomem *io;
|
|
struct i2c_adapter adap;
|
|
struct i2c_msg *msg;
|
|
@@ -130,7 +130,6 @@ struct rcar_i2c_priv {
|
|
|
|
int pos;
|
|
u32 icccr;
|
|
- u32 flags;
|
|
u8 recovery_icmcr; /* protected by adapter lock */
|
|
enum rcar_i2c_type devtype;
|
|
struct i2c_client *slave;
|
|
@@ -621,7 +620,7 @@ static bool rcar_i2c_slave_irq(struct rcar_i2c_priv *priv)
|
|
/*
|
|
* This driver has a lock-free design because there are IP cores (at least
|
|
* R-Car Gen2) which have an inherent race condition in their hardware design.
|
|
- * There, we need to clear RCAR_BUS_MASK_DATA bits as soon as possible after
|
|
+ * There, we need to switch to RCAR_BUS_PHASE_DATA as soon as possible after
|
|
* the interrupt was generated, otherwise an unwanted repeated message gets
|
|
* generated. It turned out that taking a spinlock at the beginning of the ISR
|
|
* was already causing repeated messages. Thus, this driver was converted to
|
|
@@ -630,13 +629,11 @@ static bool rcar_i2c_slave_irq(struct rcar_i2c_priv *priv)
|
|
static irqreturn_t rcar_i2c_irq(int irq, void *ptr)
|
|
{
|
|
struct rcar_i2c_priv *priv = ptr;
|
|
- u32 msr, val;
|
|
+ u32 msr;
|
|
|
|
/* Clear START or STOP immediately, except for REPSTART after read */
|
|
- if (likely(!(priv->flags & ID_P_REP_AFTER_RD))) {
|
|
- val = rcar_i2c_read(priv, ICMCR);
|
|
- rcar_i2c_write(priv, ICMCR, val & RCAR_BUS_MASK_DATA);
|
|
- }
|
|
+ if (likely(!(priv->flags & ID_P_REP_AFTER_RD)))
|
|
+ rcar_i2c_write(priv, ICMCR, RCAR_BUS_PHASE_DATA);
|
|
|
|
msr = rcar_i2c_read(priv, ICMSR);
|
|
|
|
diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
|
|
index 5157ae29a4460..076526710fe30 100644
|
|
--- a/drivers/infiniband/core/umem.c
|
|
+++ b/drivers/infiniband/core/umem.c
|
|
@@ -220,10 +220,10 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
|
|
|
|
cur_base += ret * PAGE_SIZE;
|
|
npages -= ret;
|
|
- sg = __sg_alloc_table_from_pages(
|
|
- &umem->sg_head, page_list, ret, 0, ret << PAGE_SHIFT,
|
|
- dma_get_max_seg_size(device->dma_device), sg, npages,
|
|
- GFP_KERNEL);
|
|
+ sg = __sg_alloc_table_from_pages(&umem->sg_head, page_list, ret,
|
|
+ 0, ret << PAGE_SHIFT,
|
|
+ ib_dma_max_seg_size(device), sg, npages,
|
|
+ GFP_KERNEL);
|
|
umem->sg_nents = umem->sg_head.nents;
|
|
if (IS_ERR(sg)) {
|
|
unpin_user_pages_dirty_lock(page_list, ret, 0);
|
|
diff --git a/drivers/input/keyboard/applespi.c b/drivers/input/keyboard/applespi.c
index 14362ebab9a9d..0b46bc014cde7 100644
--- a/drivers/input/keyboard/applespi.c
+++ b/drivers/input/keyboard/applespi.c
@@ -48,6 +48,7 @@
#include <linux/efi.h>
#include <linux/input.h>
#include <linux/input/mt.h>
+#include <linux/ktime.h>
#include <linux/leds.h>
#include <linux/module.h>
#include <linux/spinlock.h>
@@ -400,7 +401,7 @@ struct applespi_data {
unsigned int cmd_msg_cntr;
/* lock to protect the above parameters and flags below */
spinlock_t cmd_msg_lock;
- bool cmd_msg_queued;
+ ktime_t cmd_msg_queued;
enum applespi_evt_type cmd_evt_type;

struct led_classdev backlight_info;
@@ -716,7 +717,7 @@ static void applespi_msg_complete(struct applespi_data *applespi,
wake_up_all(&applespi->drain_complete);

if (is_write_msg) {
- applespi->cmd_msg_queued = false;
+ applespi->cmd_msg_queued = 0;
applespi_send_cmd_msg(applespi);
}

@@ -758,8 +759,16 @@ static int applespi_send_cmd_msg(struct applespi_data *applespi)
return 0;

/* check whether send is in progress */
- if (applespi->cmd_msg_queued)
- return 0;
+ if (applespi->cmd_msg_queued) {
+ if (ktime_ms_delta(ktime_get(), applespi->cmd_msg_queued) < 1000)
+ return 0;
+
+ dev_warn(&applespi->spi->dev, "Command %d timed out\n",
+ applespi->cmd_evt_type);
+
+ applespi->cmd_msg_queued = 0;
+ applespi->write_active = false;
+ }

/* set up packet */
memset(packet, 0, APPLESPI_PACKET_SIZE);
@@ -856,7 +865,7 @@ static int applespi_send_cmd_msg(struct applespi_data *applespi)
return sts;
}

- applespi->cmd_msg_queued = true;
+ applespi->cmd_msg_queued = ktime_get_coarse();
applespi->write_active = true;

return 0;
@@ -1908,7 +1917,7 @@ static int __maybe_unused applespi_resume(struct device *dev)
applespi->drain = false;
applespi->have_cl_led_on = false;
applespi->have_bl_level = 0;
- applespi->cmd_msg_queued = false;
+ applespi->cmd_msg_queued = 0;
applespi->read_active = false;
applespi->write_active = false;

diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
index c842545368fdd..3c215f0a6052b 100644
--- a/drivers/iommu/amd/init.c
+++ b/drivers/iommu/amd/init.c
@@ -12,6 +12,7 @@
#include <linux/acpi.h>
#include <linux/list.h>
#include <linux/bitmap.h>
+#include <linux/delay.h>
#include <linux/slab.h>
#include <linux/syscore_ops.h>
#include <linux/interrupt.h>
@@ -254,6 +255,8 @@ static enum iommu_init_state init_state = IOMMU_START_STATE;
static int amd_iommu_enable_interrupts(void);
static int __init iommu_go_to_state(enum iommu_init_state state);
static void init_device_table_dma(void);
+static int iommu_pc_get_set_reg(struct amd_iommu *iommu, u8 bank, u8 cntr,
+ u8 fxn, u64 *value, bool is_write);

static bool amd_iommu_pre_enabled = true;

@@ -1717,13 +1720,11 @@ static int __init init_iommu_all(struct acpi_table_header *table)
return 0;
}

-static int iommu_pc_get_set_reg(struct amd_iommu *iommu, u8 bank, u8 cntr,
- u8 fxn, u64 *value, bool is_write);
-
-static void init_iommu_perf_ctr(struct amd_iommu *iommu)
+static void __init init_iommu_perf_ctr(struct amd_iommu *iommu)
{
+ int retry;
struct pci_dev *pdev = iommu->dev;
- u64 val = 0xabcd, val2 = 0, save_reg = 0;
+ u64 val = 0xabcd, val2 = 0, save_reg, save_src;

if (!iommu_feature(iommu, FEATURE_PC))
return;
@@ -1731,17 +1732,39 @@ static void init_iommu_perf_ctr(struct amd_iommu *iommu)
amd_iommu_pc_present = true;

/* save the value to restore, if writable */
- if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, false))
+ if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, false) ||
+ iommu_pc_get_set_reg(iommu, 0, 0, 8, &save_src, false))
goto pc_false;

- /* Check if the performance counters can be written to */
- if ((iommu_pc_get_set_reg(iommu, 0, 0, 0, &val, true)) ||
- (iommu_pc_get_set_reg(iommu, 0, 0, 0, &val2, false)) ||
- (val != val2))
+ /*
+ * Disable power gating by programing the performance counter
+ * source to 20 (i.e. counts the reads and writes from/to IOMMU
+ * Reserved Register [MMIO Offset 1FF8h] that are ignored.),
+ * which never get incremented during this init phase.
+ * (Note: The event is also deprecated.)
+ */
+ val = 20;
+ if (iommu_pc_get_set_reg(iommu, 0, 0, 8, &val, true))
goto pc_false;

+ /* Check if the performance counters can be written to */
+ val = 0xabcd;
+ for (retry = 5; retry; retry--) {
+ if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &val, true) ||
+ iommu_pc_get_set_reg(iommu, 0, 0, 0, &val2, false) ||
+ val2)
+ break;
+
+ /* Wait about 20 msec for power gating to disable and retry. */
+ msleep(20);
+ }
+
/* restore */
- if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, true))
+ if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, true) ||
+ iommu_pc_get_set_reg(iommu, 0, 0, 8, &save_src, true))
+ goto pc_false;
+
+ if (val != val2)
goto pc_false;

pci_info(pdev, "IOMMU performance counters supported\n");
diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
index 43f392d27d318..b200a3acc6ed9 100644
--- a/drivers/iommu/intel/svm.c
+++ b/drivers/iommu/intel/svm.c
@@ -1079,8 +1079,17 @@ prq_advance:
* Clear the page request overflow bit and wake up all threads that
* are waiting for the completion of this handling.
*/
- if (readl(iommu->reg + DMAR_PRS_REG) & DMA_PRS_PRO)
- writel(DMA_PRS_PRO, iommu->reg + DMAR_PRS_REG);
+ if (readl(iommu->reg + DMAR_PRS_REG) & DMA_PRS_PRO) {
+ pr_info_ratelimited("IOMMU: %s: PRQ overflow detected\n",
+ iommu->name);
+ head = dmar_readq(iommu->reg + DMAR_PQH_REG) & PRQ_RING_MASK;
+ tail = dmar_readq(iommu->reg + DMAR_PQT_REG) & PRQ_RING_MASK;
+ if (head == tail) {
+ writel(DMA_PRS_PRO, iommu->reg + DMAR_PRS_REG);
+ pr_info_ratelimited("IOMMU: %s: PRQ overflow cleared",
+ iommu->name);
+ }
+ }

if (!completion_done(&iommu->prq_complete))
complete(&iommu->prq_complete);
diff --git a/drivers/media/platform/vsp1/vsp1_drm.c b/drivers/media/platform/vsp1/vsp1_drm.c
index 86d5e3f4b1ffc..06f74d410973e 100644
--- a/drivers/media/platform/vsp1/vsp1_drm.c
+++ b/drivers/media/platform/vsp1/vsp1_drm.c
@@ -245,7 +245,7 @@ static int vsp1_du_pipeline_setup_brx(struct vsp1_device *vsp1,
brx = &vsp1->bru->entity;
else if (pipe->brx && !drm_pipe->force_brx_release)
brx = pipe->brx;
- else if (!vsp1->bru->entity.pipe)
+ else if (vsp1_feature(vsp1, VSP1_HAS_BRU) && !vsp1->bru->entity.pipe)
brx = &vsp1->bru->entity;
else
brx = &vsp1->brs->entity;
@@ -462,9 +462,9 @@ static int vsp1_du_pipeline_setup_inputs(struct vsp1_device *vsp1,
* make sure it is present in the pipeline's list of entities if it
* wasn't already.
*/
- if (!use_uif) {
+ if (drm_pipe->uif && !use_uif) {
drm_pipe->uif->pipe = NULL;
- } else if (!drm_pipe->uif->pipe) {
+ } else if (drm_pipe->uif && !drm_pipe->uif->pipe) {
drm_pipe->uif->pipe = pipe;
list_add_tail(&drm_pipe->uif->list_pipe, &pipe->entities);
}
diff --git a/drivers/media/rc/Makefile b/drivers/media/rc/Makefile
index 5bb2932ab1195..ff6a8fc4c38e5 100644
--- a/drivers/media/rc/Makefile
+++ b/drivers/media/rc/Makefile
@@ -5,6 +5,7 @@ obj-y += keymaps/
obj-$(CONFIG_RC_CORE) += rc-core.o
rc-core-y := rc-main.o rc-ir-raw.o
rc-core-$(CONFIG_LIRC) += lirc_dev.o
+rc-core-$(CONFIG_MEDIA_CEC_RC) += keymaps/rc-cec.o
rc-core-$(CONFIG_BPF_LIRC_MODE2) += bpf-lirc.o
obj-$(CONFIG_IR_NEC_DECODER) += ir-nec-decoder.o
obj-$(CONFIG_IR_RC5_DECODER) += ir-rc5-decoder.o
diff --git a/drivers/media/rc/keymaps/Makefile b/drivers/media/rc/keymaps/Makefile
index aaa1bf81d00d4..3581761e9797d 100644
--- a/drivers/media/rc/keymaps/Makefile
+++ b/drivers/media/rc/keymaps/Makefile
@@ -21,7 +21,6 @@ obj-$(CONFIG_RC_MAP) += rc-adstech-dvb-t-pci.o \
rc-behold.o \
rc-behold-columbus.o \
rc-budget-ci-old.o \
- rc-cec.o \
rc-cinergy-1400.o \
rc-cinergy.o \
rc-d680-dmb.o \
diff --git a/drivers/media/rc/keymaps/rc-cec.c b/drivers/media/rc/keymaps/rc-cec.c
index 3e3bd11092b45..068e22aeac8c3 100644
--- a/drivers/media/rc/keymaps/rc-cec.c
+++ b/drivers/media/rc/keymaps/rc-cec.c
@@ -1,5 +1,15 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/* Keytable for the CEC remote control
+ *
+ * This keymap is unusual in that it can't be built as a module,
+ * instead it is registered directly in rc-main.c if CONFIG_MEDIA_CEC_RC
+ * is set. This is because it can be called from drm_dp_cec_set_edid() via
+ * cec_register_adapter() in an asynchronous context, and it is not
+ * allowed to use request_module() to load rc-cec.ko in that case.
+ *
+ * Since this keymap is only used if CONFIG_MEDIA_CEC_RC is set, we
+ * just compile this keymap into the rc-core module and never as a
+ * separate module.
*
* Copyright (c) 2015 by Kamil Debski
*/
@@ -152,7 +162,7 @@ static struct rc_map_table cec[] = {
/* 0x77-0xff: Reserved */
};

-static struct rc_map_list cec_map = {
+struct rc_map_list cec_map = {
.map = {
.scan = cec,
.size = ARRAY_SIZE(cec),
@@ -160,19 +170,3 @@ static struct rc_map_list cec_map = {
.name = RC_MAP_CEC,
}
};
-
-static int __init init_rc_map_cec(void)
-{
- return rc_map_register(&cec_map);
-}
-
-static void __exit exit_rc_map_cec(void)
-{
- rc_map_unregister(&cec_map);
-}
-
-module_init(init_rc_map_cec);
-module_exit(exit_rc_map_cec);
-
-MODULE_LICENSE("GPL");
-MODULE_AUTHOR("Kamil Debski");
diff --git a/drivers/media/rc/rc-main.c b/drivers/media/rc/rc-main.c
index 1fd62c1dac768..8e88dc8ea6c5e 100644
--- a/drivers/media/rc/rc-main.c
+++ b/drivers/media/rc/rc-main.c
@@ -2069,6 +2069,9 @@ static int __init rc_core_init(void)

led_trigger_register_simple("rc-feedback", &led_feedback);
rc_map_register(&empty_map);
+#ifdef CONFIG_MEDIA_CEC_RC
+ rc_map_register(&cec_map);
+#endif

return 0;
}
@@ -2078,6 +2081,9 @@ static void __exit rc_core_exit(void)
lirc_dev_exit();
class_unregister(&rc_class);
led_trigger_unregister_simple(led_feedback);
+#ifdef CONFIG_MEDIA_CEC_RC
+ rc_map_unregister(&cec_map);
+#endif
rc_map_unregister(&empty_map);
}

diff --git a/drivers/media/usb/usbtv/usbtv-audio.c b/drivers/media/usb/usbtv/usbtv-audio.c
index b57e94fb19770..333bd305a4f9f 100644
--- a/drivers/media/usb/usbtv/usbtv-audio.c
+++ b/drivers/media/usb/usbtv/usbtv-audio.c
@@ -371,7 +371,7 @@ void usbtv_audio_free(struct usbtv *usbtv)
cancel_work_sync(&usbtv->snd_trigger);

if (usbtv->snd && usbtv->udev) {
- snd_card_free(usbtv->snd);
+ snd_card_free_when_closed(usbtv->snd);
usbtv->snd = NULL;
}
}
diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
index 815d01f785dff..273d9c1591793 100644
--- a/drivers/misc/fastrpc.c
+++ b/drivers/misc/fastrpc.c
@@ -948,6 +948,11 @@ static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel,
if (!fl->cctx->rpdev)
return -EPIPE;

+ if (handle == FASTRPC_INIT_HANDLE && !kernel) {
+ dev_warn_ratelimited(fl->sctx->dev, "user app trying to send a kernel RPC message (%d)\n", handle);
+ return -EPERM;
+ }
+
ctx = fastrpc_context_alloc(fl, kernel, sc, args);
if (IS_ERR(ctx))
return PTR_ERR(ctx);
diff --git a/drivers/misc/pvpanic.c b/drivers/misc/pvpanic.c
index e16a5e51006e5..d9140e75602d7 100644
--- a/drivers/misc/pvpanic.c
+++ b/drivers/misc/pvpanic.c
@@ -166,6 +166,7 @@ static const struct of_device_id pvpanic_mmio_match[] = {
{ .compatible = "qemu,pvpanic-mmio", },
{}
};
+MODULE_DEVICE_TABLE(of, pvpanic_mmio_match);

static struct platform_driver pvpanic_mmio_driver = {
.driver = {
diff --git a/drivers/mmc/core/bus.c b/drivers/mmc/core/bus.c
index c2e70b757dd12..4383c262b3f5a 100644
--- a/drivers/mmc/core/bus.c
+++ b/drivers/mmc/core/bus.c
@@ -399,11 +399,6 @@ void mmc_remove_card(struct mmc_card *card)
mmc_remove_card_debugfs(card);
#endif

- if (host->cqe_enabled) {
- host->cqe_ops->cqe_disable(host);
- host->cqe_enabled = false;
- }
-
if (mmc_card_present(card)) {
if (mmc_host_is_spi(card->host)) {
pr_info("%s: SPI card removed\n",
@@ -416,6 +411,10 @@ void mmc_remove_card(struct mmc_card *card)
of_node_put(card->dev.of_node);
}

+ if (host->cqe_enabled) {
+ host->cqe_ops->cqe_disable(host);
+ host->cqe_enabled = false;
+ }
+
put_device(&card->dev);
}
-
diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
index ff3063ce2acda..9ce34e8800335 100644
--- a/drivers/mmc/core/mmc.c
+++ b/drivers/mmc/core/mmc.c
@@ -423,10 +423,6 @@ static int mmc_decode_ext_csd(struct mmc_card *card, u8 *ext_csd)

/* EXT_CSD value is in units of 10ms, but we store in ms */
card->ext_csd.part_time = 10 * ext_csd[EXT_CSD_PART_SWITCH_TIME];
- /* Some eMMC set the value too low so set a minimum */
- if (card->ext_csd.part_time &&
- card->ext_csd.part_time < MMC_MIN_PART_SWITCH_TIME)
- card->ext_csd.part_time = MMC_MIN_PART_SWITCH_TIME;

/* Sleep / awake timeout in 100ns units */
if (sa_shift > 0 && sa_shift <= 0x17)
@@ -616,6 +612,17 @@ static int mmc_decode_ext_csd(struct mmc_card *card, u8 *ext_csd)
card->ext_csd.data_sector_size = 512;
}

+ /*
+ * GENERIC_CMD6_TIME is to be used "unless a specific timeout is defined
+ * when accessing a specific field", so use it here if there is no
+ * PARTITION_SWITCH_TIME.
+ */
+ if (!card->ext_csd.part_time)
+ card->ext_csd.part_time = card->ext_csd.generic_cmd6_time;
+ /* Some eMMC set the value too low so set a minimum */
+ if (card->ext_csd.part_time < MMC_MIN_PART_SWITCH_TIME)
+ card->ext_csd.part_time = MMC_MIN_PART_SWITCH_TIME;
+
/* eMMC v5 or later */
if (card->ext_csd.rev >= 7) {
memcpy(card->ext_csd.fwrev, &ext_csd[EXT_CSD_FIRMWARE_VERSION],
diff --git a/drivers/mmc/host/mmci.c b/drivers/mmc/host/mmci.c
index b5a41a7ce1658..9bde0def114b5 100644
--- a/drivers/mmc/host/mmci.c
+++ b/drivers/mmc/host/mmci.c
@@ -1241,7 +1241,11 @@ mmci_start_command(struct mmci_host *host, struct mmc_command *cmd, u32 c)
if (!cmd->busy_timeout)
cmd->busy_timeout = 10 * MSEC_PER_SEC;

- clks = (unsigned long long)cmd->busy_timeout * host->cclk;
+ if (cmd->busy_timeout > host->mmc->max_busy_timeout)
+ clks = (unsigned long long)host->mmc->max_busy_timeout * host->cclk;
+ else
+ clks = (unsigned long long)cmd->busy_timeout * host->cclk;
+
do_div(clks, MSEC_PER_SEC);
writel_relaxed(clks, host->base + MMCIDATATIMER);
}
@@ -2091,6 +2095,10 @@ static int mmci_probe(struct amba_device *dev,
mmc->caps |= MMC_CAP_WAIT_WHILE_BUSY;
}

+ /* Variants with mandatory busy timeout in HW needs R1B responses. */
+ if (variant->busy_timeout)
+ mmc->caps |= MMC_CAP_NEED_RSP_BUSY;
+
/* Prepare a CMD12 - needed to clear the DPSM on some variants. */
host->stop_abort.opcode = MMC_STOP_TRANSMISSION;
host->stop_abort.arg = 0;
diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c
index 004fbfc236721..dc84e2dff4085 100644
--- a/drivers/mmc/host/mtk-sd.c
+++ b/drivers/mmc/host/mtk-sd.c
@@ -1101,13 +1101,13 @@ static void msdc_track_cmd_data(struct msdc_host *host,
static void msdc_request_done(struct msdc_host *host, struct mmc_request *mrq)
{
unsigned long flags;
- bool ret;

- ret = cancel_delayed_work(&host->req_timeout);
- if (!ret) {
- /* delay work already running */
- return;
- }
+ /*
+ * No need check the return value of cancel_delayed_work, as only ONE
+ * path will go here!
+ */
+ cancel_delayed_work(&host->req_timeout);
+
spin_lock_irqsave(&host->lock, flags);
host->mrq = NULL;
spin_unlock_irqrestore(&host->lock, flags);
@@ -1129,7 +1129,7 @@ static bool msdc_cmd_done(struct msdc_host *host, int events,
bool done = false;
bool sbc_error;
unsigned long flags;
- u32 *rsp = cmd->resp;
+ u32 *rsp;

if (mrq->sbc && cmd == mrq->cmd &&
(events & (MSDC_INT_ACMDRDY | MSDC_INT_ACMDCRCERR
@@ -1150,6 +1150,7 @@ static bool msdc_cmd_done(struct msdc_host *host, int events,

if (done)
return true;
+ rsp = cmd->resp;

sdr_clr_bits(host->base + MSDC_INTEN, cmd_ints_mask);

@@ -1337,7 +1338,7 @@ static void msdc_data_xfer_next(struct msdc_host *host,
static bool msdc_data_xfer_done(struct msdc_host *host, u32 events,
struct mmc_request *mrq, struct mmc_data *data)
{
- struct mmc_command *stop = data->stop;
+ struct mmc_command *stop;
unsigned long flags;
bool done;
unsigned int check_data = events &
@@ -1353,6 +1354,7 @@ static bool msdc_data_xfer_done(struct msdc_host *host, u32 events,

if (done)
return true;
+ stop = data->stop;

if (check_data || (stop && stop->error)) {
dev_dbg(host->dev, "DMA status: 0x%8X\n",
diff --git a/drivers/mmc/host/mxs-mmc.c b/drivers/mmc/host/mxs-mmc.c
index 75007f61df972..4fbbff03137c3 100644
--- a/drivers/mmc/host/mxs-mmc.c
+++ b/drivers/mmc/host/mxs-mmc.c
@@ -643,7 +643,7 @@ static int mxs_mmc_probe(struct platform_device *pdev)

ret = mmc_of_parse(mmc);
if (ret)
- goto out_clk_disable;
+ goto out_free_dma;

mmc->ocr_avail = MMC_VDD_32_33 | MMC_VDD_33_34;

diff --git a/drivers/mmc/host/sdhci-iproc.c b/drivers/mmc/host/sdhci-iproc.c
index c9434b461aabc..ddeaf8e1f72f9 100644
--- a/drivers/mmc/host/sdhci-iproc.c
+++ b/drivers/mmc/host/sdhci-iproc.c
@@ -296,9 +296,27 @@ static const struct of_device_id sdhci_iproc_of_match[] = {
MODULE_DEVICE_TABLE(of, sdhci_iproc_of_match);

#ifdef CONFIG_ACPI
+/*
+ * This is a duplicate of bcm2835_(pltfrm_)data without caps quirks
+ * which are provided by the ACPI table.
+ */
+static const struct sdhci_pltfm_data sdhci_bcm_arasan_data = {
+ .quirks = SDHCI_QUIRK_BROKEN_CARD_DETECTION |
+ SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK |
+ SDHCI_QUIRK_NO_HISPD_BIT,
+ .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
+ .ops = &sdhci_iproc_32only_ops,
+};
+
+static const struct sdhci_iproc_data bcm_arasan_data = {
+ .pdata = &sdhci_bcm_arasan_data,
+};
+
static const struct acpi_device_id sdhci_iproc_acpi_ids[] = {
{ .id = "BRCM5871", .driver_data = (kernel_ulong_t)&iproc_cygnus_data },
{ .id = "BRCM5872", .driver_data = (kernel_ulong_t)&iproc_data },
+ { .id = "BCM2847", .driver_data = (kernel_ulong_t)&bcm_arasan_data },
+ { .id = "BRCME88C", .driver_data = (kernel_ulong_t)&bcm2711_data },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(acpi, sdhci_iproc_acpi_ids);
diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index 3561ae8a481a0..6edf9fffd934a 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -3994,10 +3994,10 @@ void __sdhci_read_caps(struct sdhci_host *host, const u16 *ver,
if (host->v4_mode)
sdhci_do_enable_v4_mode(host);

- of_property_read_u64(mmc_dev(host->mmc)->of_node,
- "sdhci-caps-mask", &dt_caps_mask);
- of_property_read_u64(mmc_dev(host->mmc)->of_node,
- "sdhci-caps", &dt_caps);
+ device_property_read_u64_array(mmc_dev(host->mmc),
+ "sdhci-caps-mask", &dt_caps_mask, 1);
+ device_property_read_u64_array(mmc_dev(host->mmc),
+ "sdhci-caps", &dt_caps, 1);

v = ver ? *ver : sdhci_readw(host, SDHCI_HOST_VERSION);
host->version = (v & SDHCI_SPEC_VER_MASK) >> SDHCI_SPEC_VER_SHIFT;
diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
index 13e0a8caf3b6f..600b9d09ec087 100644
--- a/drivers/net/Kconfig
+++ b/drivers/net/Kconfig
@@ -92,7 +92,7 @@ config WIREGUARD
select CRYPTO_POLY1305_ARM if ARM
select CRYPTO_CURVE25519_NEON if ARM && KERNEL_MODE_NEON
select CRYPTO_CHACHA_MIPS if CPU_MIPS32_R2
- select CRYPTO_POLY1305_MIPS if CPU_MIPS32 || (CPU_MIPS64 && 64BIT)
+ select CRYPTO_POLY1305_MIPS if MIPS
help
WireGuard is a secure, fast, and easy to use replacement for IPSec
that uses modern cryptography and clever networking tricks. It's
diff --git a/drivers/net/can/flexcan.c b/drivers/net/can/flexcan.c
index 99e5f272205d3..d712c6fdbc87d 100644
--- a/drivers/net/can/flexcan.c
+++ b/drivers/net/can/flexcan.c
@@ -662,7 +662,7 @@ static int flexcan_chip_freeze(struct flexcan_priv *priv)
u32 reg;

reg = priv->read(&regs->mcr);
- reg |= FLEXCAN_MCR_HALT;
+ reg |= FLEXCAN_MCR_FRZ | FLEXCAN_MCR_HALT;
priv->write(reg, &regs->mcr);

while (timeout-- && !(priv->read(&regs->mcr) & FLEXCAN_MCR_FRZ_ACK))
@@ -1375,10 +1375,13 @@ static int flexcan_chip_start(struct net_device *dev)

flexcan_set_bittiming(dev);

+ /* set freeze, halt */
+ err = flexcan_chip_freeze(priv);
+ if (err)
+ goto out_chip_disable;
+
/* MCR
*
- * enable freeze
- * halt now
* only supervisor access
* enable warning int
* enable individual RX masking
@@ -1387,9 +1390,8 @@ static int flexcan_chip_start(struct net_device *dev)
*/
reg_mcr = priv->read(&regs->mcr);
reg_mcr &= ~FLEXCAN_MCR_MAXMB(0xff);
- reg_mcr |= FLEXCAN_MCR_FRZ | FLEXCAN_MCR_HALT | FLEXCAN_MCR_SUPV |
- FLEXCAN_MCR_WRN_EN | FLEXCAN_MCR_IRMQ | FLEXCAN_MCR_IDAM_C |
- FLEXCAN_MCR_MAXMB(priv->tx_mb_idx);
+ reg_mcr |= FLEXCAN_MCR_SUPV | FLEXCAN_MCR_WRN_EN | FLEXCAN_MCR_IRMQ |
+ FLEXCAN_MCR_IDAM_C | FLEXCAN_MCR_MAXMB(priv->tx_mb_idx);

/* MCR
*
@@ -1800,10 +1802,14 @@ static int register_flexcandev(struct net_device *dev)
if (err)
goto out_chip_disable;

- /* set freeze, halt and activate FIFO, restrict register access */
+ /* set freeze, halt */
+ err = flexcan_chip_freeze(priv);
+ if (err)
+ goto out_chip_disable;
+
+ /* activate FIFO, restrict register access */
reg = priv->read(&regs->mcr);
- reg |= FLEXCAN_MCR_FRZ | FLEXCAN_MCR_HALT |
- FLEXCAN_MCR_FEN | FLEXCAN_MCR_SUPV;
+ reg |= FLEXCAN_MCR_FEN | FLEXCAN_MCR_SUPV;
priv->write(reg, &regs->mcr);

/* Currently we only support newer versions of this core
diff --git a/drivers/net/can/m_can/tcan4x5x.c b/drivers/net/can/m_can/tcan4x5x.c
index f726c5112294f..01f5b6e03a2dd 100644
--- a/drivers/net/can/m_can/tcan4x5x.c
+++ b/drivers/net/can/m_can/tcan4x5x.c
@@ -328,14 +328,14 @@ static int tcan4x5x_init(struct m_can_classdev *cdev)
if (ret)
return ret;

+ /* Zero out the MCAN buffers */
+ m_can_init_ram(cdev);
+
ret = regmap_update_bits(tcan4x5x->regmap, TCAN4X5X_CONFIG,
TCAN4X5X_MODE_SEL_MASK, TCAN4X5X_MODE_NORMAL);
if (ret)
return ret;

- /* Zero out the MCAN buffers */
- m_can_init_ram(cdev);
-
return ret;
}

diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c
index 4ca0296509936..1a855816cbc9d 100644
--- a/drivers/net/dsa/sja1105/sja1105_main.c
+++ b/drivers/net/dsa/sja1105/sja1105_main.c
@@ -1834,7 +1834,7 @@ out_unlock_ptp:
speed = SPEED_1000;
else if (bmcr & BMCR_SPEED100)
speed = SPEED_100;
- else if (bmcr & BMCR_SPEED10)
+ else
speed = SPEED_10;

sja1105_sgmii_pcs_force_speed(priv, speed);
diff --git a/drivers/net/ethernet/atheros/alx/main.c b/drivers/net/ethernet/atheros/alx/main.c
index 9b7f1af5f5747..9e02f88645931 100644
--- a/drivers/net/ethernet/atheros/alx/main.c
+++ b/drivers/net/ethernet/atheros/alx/main.c
@@ -1894,13 +1894,16 @@ static int alx_resume(struct device *dev)

if (!netif_running(alx->dev))
return 0;
- netif_device_attach(alx->dev);

rtnl_lock();
err = __alx_open(alx, true);
rtnl_unlock();
+ if (err)
+ return err;

- return err;
+ netif_device_attach(alx->dev);
+
+ return 0;
}

static SIMPLE_DEV_PM_OPS(alx_pm_ops, alx_suspend, alx_resume);
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index c7c5c01a783a0..a59c1f1fb31ed 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -8430,10 +8430,18 @@ static void bnxt_setup_inta(struct bnxt *bp)
bp->irq_tbl[0].handler = bnxt_inta;
}

+static int bnxt_init_int_mode(struct bnxt *bp);
+
static int bnxt_setup_int_mode(struct bnxt *bp)
{
int rc;

+ if (!bp->irq_tbl) {
+ rc = bnxt_init_int_mode(bp);
+ if (rc || !bp->irq_tbl)
+ return rc ?: -ENODEV;
+ }
+
if (bp->flags & BNXT_FLAG_USING_MSIX)
bnxt_setup_msix(bp);
else
@@ -8618,7 +8626,7 @@ static int bnxt_init_inta(struct bnxt *bp)

static int bnxt_init_int_mode(struct bnxt *bp)
{
- int rc = 0;
+ int rc = -ENODEV;

if (bp->flags & BNXT_FLAG_MSIX_CAP)
rc = bnxt_init_msix(bp);
@@ -9339,7 +9347,8 @@ static int bnxt_hwrm_if_change(struct bnxt *bp, bool up)
{
struct hwrm_func_drv_if_change_output *resp = bp->hwrm_cmd_resp_addr;
struct hwrm_func_drv_if_change_input req = {0};
- bool resc_reinit = false, fw_reset = false;
+ bool fw_reset = !bp->irq_tbl;
+ bool resc_reinit = false;
u32 flags = 0;
int rc;

@@ -9367,6 +9376,7 @@ static int bnxt_hwrm_if_change(struct bnxt *bp, bool up)

if (test_bit(BNXT_STATE_IN_FW_RESET, &bp->state) && !fw_reset) {
netdev_err(bp->dev, "RESET_DONE not set during FW reset.\n");
+ set_bit(BNXT_STATE_ABORT_ERR, &bp->state);
return -ENODEV;
}
if (resc_reinit || fw_reset) {
diff --git a/drivers/net/ethernet/davicom/dm9000.c b/drivers/net/ethernet/davicom/dm9000.c
index 5c6c8c5ec7471..ba7f857d1710d 100644
--- a/drivers/net/ethernet/davicom/dm9000.c
+++ b/drivers/net/ethernet/davicom/dm9000.c
@@ -133,6 +133,8 @@ struct board_info {
u32 wake_state;

int ip_summed;
+
+ struct regulator *power_supply;
};

/* debug code */
@@ -1452,7 +1454,7 @@ dm9000_probe(struct platform_device *pdev)
if (ret) {
dev_err(dev, "failed to request reset gpio %d: %d\n",
reset_gpios, ret);
- return -ENODEV;
+ goto out_regulator_disable;
}

/* According to manual PWRST# Low Period Min 1ms */
@@ -1464,8 +1466,10 @@ dm9000_probe(struct platform_device *pdev)

if (!pdata) {
pdata = dm9000_parse_dt(&pdev->dev);
- if (IS_ERR(pdata))
- return PTR_ERR(pdata);
+ if (IS_ERR(pdata)) {
+ ret = PTR_ERR(pdata);
+ goto out_regulator_disable;
+ }
}

/* Init network device */
@@ -1482,6 +1486,8 @@ dm9000_probe(struct platform_device *pdev)

db->dev = &pdev->dev;
db->ndev = ndev;
+ if (!IS_ERR(power))
+ db->power_supply = power;

spin_lock_init(&db->lock);
mutex_init(&db->addr_lock);
@@ -1706,6 +1712,10 @@ out:
dm9000_release_board(pdev, db);
free_netdev(ndev);

+out_regulator_disable:
+ if (!IS_ERR(power))
+ regulator_disable(power);
+
return ret;
}

@@ -1763,10 +1773,13 @@ static int
dm9000_drv_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
+ struct board_info *dm = to_dm9000_board(ndev);

unregister_netdev(ndev);
- dm9000_release_board(pdev, netdev_priv(ndev));
+ dm9000_release_board(pdev, dm);
free_netdev(ndev); /* free device structure */
+ if (dm->power_supply)
+ regulator_disable(dm->power_supply);

dev_dbg(&pdev->dev, "released and freed device\n");
return 0;
diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
index fc2075ea57fea..df4a858c80015 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc.c
@@ -321,6 +321,8 @@ static int enetc_poll(struct napi_struct *napi, int budget)
int work_done;
int i;

+ enetc_lock_mdio();
+
for (i = 0; i < v->count_tx_rings; i++)
if (!enetc_clean_tx_ring(&v->tx_ring[i], budget))
complete = false;
@@ -331,8 +333,10 @@ static int enetc_poll(struct napi_struct *napi, int budget)
if (work_done)
v->rx_napi_work = true;

- if (!complete)
+ if (!complete) {
+ enetc_unlock_mdio();
return budget;
+ }

napi_complete_done(napi, work_done);

@@ -341,8 +345,6 @@ static int enetc_poll(struct napi_struct *napi, int budget)

v->rx_napi_work = false;

- enetc_lock_mdio();
-
/* enable interrupts */
enetc_wr_reg_hot(v->rbier, ENETC_RBIER_RXTIE);

@@ -367,8 +369,8 @@ static void enetc_get_tx_tstamp(struct enetc_hw *hw, union enetc_tx_bd *txbd,
{
u32 lo, hi, tstamp_lo;

- lo = enetc_rd(hw, ENETC_SICTR0);
- hi = enetc_rd(hw, ENETC_SICTR1);
+ lo = enetc_rd_hot(hw, ENETC_SICTR0);
+ hi = enetc_rd_hot(hw, ENETC_SICTR1);
tstamp_lo = le32_to_cpu(txbd->wb.tstamp);
if (lo <= tstamp_lo)
hi -= 1;
@@ -382,6 +384,12 @@ static void enetc_tstamp_tx(struct sk_buff *skb, u64 tstamp)
if (skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS) {
memset(&shhwtstamps, 0, sizeof(shhwtstamps));
shhwtstamps.hwtstamp = ns_to_ktime(tstamp);
+ /* Ensure skb_mstamp_ns, which might have been populated with
+ * the txtime, is not mistaken for a software timestamp,
+ * because this will prevent the dispatch of our hardware
+ * timestamp to the socket.
+ */
+ skb->tstamp = ktime_set(0, 0);
skb_tstamp_tx(skb, &shhwtstamps);
}
}
@@ -398,9 +406,7 @@ static bool enetc_clean_tx_ring(struct enetc_bdr *tx_ring, int napi_budget)
i = tx_ring->next_to_clean;
tx_swbd = &tx_ring->tx_swbd[i];

- enetc_lock_mdio();
bds_to_clean = enetc_bd_ready_count(tx_ring, i);
- enetc_unlock_mdio();

do_tstamp = false;

@@ -443,8 +449,6 @@ static bool enetc_clean_tx_ring(struct enetc_bdr *tx_ring, int napi_budget)
tx_swbd = tx_ring->tx_swbd;
}

- enetc_lock_mdio();
-
/* BD iteration loop end */
if (is_eof) {
tx_frm_cnt++;
@@ -455,8 +459,6 @@ static bool enetc_clean_tx_ring(struct enetc_bdr *tx_ring, int napi_budget)

if (unlikely(!bds_to_clean))
bds_to_clean = enetc_bd_ready_count(tx_ring, i);
-
- enetc_unlock_mdio();
}

tx_ring->next_to_clean = i;
@@ -567,9 +569,8 @@ static void enetc_get_rx_tstamp(struct net_device *ndev,
static void enetc_get_offloads(struct enetc_bdr *rx_ring,
union enetc_rx_bd *rxbd, struct sk_buff *skb)
{
-#ifdef CONFIG_FSL_ENETC_PTP_CLOCK
struct enetc_ndev_priv *priv = netdev_priv(rx_ring->ndev);
-#endif
+
/* TODO: hashing */
if (rx_ring->ndev->features & NETIF_F_RXCSUM) {
u16 inet_csum = le16_to_cpu(rxbd->r.inet_csum);
@@ -578,12 +579,31 @@ static void enetc_get_offloads(struct enetc_bdr *rx_ring,
skb->ip_summed = CHECKSUM_COMPLETE;
}

- /* copy VLAN to skb, if one is extracted, for now we assume it's a
- * standard TPID, but HW also supports custom values
- */
- if (le16_to_cpu(rxbd->r.flags) & ENETC_RXBD_FLAG_VLAN)
- __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
- le16_to_cpu(rxbd->r.vlan_opt));
+ if (le16_to_cpu(rxbd->r.flags) & ENETC_RXBD_FLAG_VLAN) {
+ __be16 tpid = 0;
+
+ switch (le16_to_cpu(rxbd->r.flags) & ENETC_RXBD_FLAG_TPID) {
+ case 0:
+ tpid = htons(ETH_P_8021Q);
+ break;
+ case 1:
+ tpid = htons(ETH_P_8021AD);
+ break;
+ case 2:
+ tpid = htons(enetc_port_rd(&priv->si->hw,
+ ENETC_PCVLANR1));
+ break;
+ case 3:
+ tpid = htons(enetc_port_rd(&priv->si->hw,
+ ENETC_PCVLANR2));
+ break;
+ default:
+ break;
+ }
+
+ __vlan_hwaccel_put_tag(skb, tpid, le16_to_cpu(rxbd->r.vlan_opt));
+ }
+
#ifdef CONFIG_FSL_ENETC_PTP_CLOCK
if (priv->active_offloads & ENETC_F_RX_TSTAMP)
enetc_get_rx_tstamp(rx_ring->ndev, rxbd, skb);
@@ -700,8 +720,6 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
u32 bd_status;
u16 size;

- enetc_lock_mdio();
-
if (cleaned_cnt >= ENETC_RXBD_BUNDLE) {
int count = enetc_refill_rx_ring(rx_ring, cleaned_cnt);
|
|
|
|
@@ -712,19 +730,15 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
|
|
|
|
rxbd = enetc_rxbd(rx_ring, i);
|
|
bd_status = le32_to_cpu(rxbd->r.lstatus);
|
|
- if (!bd_status) {
|
|
- enetc_unlock_mdio();
|
|
+ if (!bd_status)
|
|
break;
|
|
- }
|
|
|
|
enetc_wr_reg_hot(rx_ring->idr, BIT(rx_ring->index));
|
|
dma_rmb(); /* for reading other rxbd fields */
|
|
size = le16_to_cpu(rxbd->r.buf_len);
|
|
skb = enetc_map_rx_buff_to_skb(rx_ring, i, size);
|
|
- if (!skb) {
|
|
- enetc_unlock_mdio();
|
|
+ if (!skb)
|
|
break;
|
|
- }
|
|
|
|
enetc_get_offloads(rx_ring, rxbd, skb);
|
|
|
|
@@ -736,7 +750,6 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
|
|
|
|
if (unlikely(bd_status &
|
|
ENETC_RXBD_LSTATUS(ENETC_RXBD_ERR_MASK))) {
|
|
- enetc_unlock_mdio();
|
|
dev_kfree_skb(skb);
|
|
while (!(bd_status & ENETC_RXBD_LSTATUS_F)) {
|
|
dma_rmb();
|
|
@@ -776,8 +789,6 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
|
|
|
|
enetc_process_skb(rx_ring, skb);
|
|
|
|
- enetc_unlock_mdio();
|
|
-
|
|
napi_gro_receive(napi, skb);
|
|
|
|
rx_frm_cnt++;
|
|
@@ -1024,7 +1035,7 @@ static void enetc_free_rxtx_rings(struct enetc_ndev_priv *priv)
|
|
enetc_free_tx_ring(priv->tx_ring[i]);
|
|
}
|
|
|
|
-static int enetc_alloc_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
|
|
+int enetc_alloc_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
|
|
{
|
|
int size = cbdr->bd_count * sizeof(struct enetc_cbd);
|
|
|
|
@@ -1045,7 +1056,7 @@ static int enetc_alloc_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
|
|
return 0;
|
|
}
|
|
|
|
-static void enetc_free_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
|
|
+void enetc_free_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
|
|
{
|
|
int size = cbdr->bd_count * sizeof(struct enetc_cbd);
|
|
|
|
@@ -1053,7 +1064,7 @@ static void enetc_free_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
|
|
cbdr->bd_base = NULL;
|
|
}
|
|
|
|
-static void enetc_setup_cbdr(struct enetc_hw *hw, struct enetc_cbdr *cbdr)
|
|
+void enetc_setup_cbdr(struct enetc_hw *hw, struct enetc_cbdr *cbdr)
|
|
{
|
|
/* set CBDR cache attributes */
|
|
enetc_wr(hw, ENETC_SICAR2,
|
|
@@ -1073,7 +1084,7 @@ static void enetc_setup_cbdr(struct enetc_hw *hw, struct enetc_cbdr *cbdr)
|
|
cbdr->cir = hw->reg + ENETC_SICBDRCIR;
|
|
}
|
|
|
|
-static void enetc_clear_cbdr(struct enetc_hw *hw)
|
|
+void enetc_clear_cbdr(struct enetc_hw *hw)
|
|
{
|
|
enetc_wr(hw, ENETC_SICBDRMR, 0);
|
|
}
|
|
@@ -1098,13 +1109,12 @@ static int enetc_setup_default_rss_table(struct enetc_si *si, int num_groups)
|
|
return 0;
|
|
}
|
|
|
|
-static int enetc_configure_si(struct enetc_ndev_priv *priv)
|
|
+int enetc_configure_si(struct enetc_ndev_priv *priv)
|
|
{
|
|
struct enetc_si *si = priv->si;
|
|
struct enetc_hw *hw = &si->hw;
|
|
int err;
|
|
|
|
- enetc_setup_cbdr(hw, &si->cbd_ring);
|
|
/* set SI cache attributes */
|
|
enetc_wr(hw, ENETC_SICAR0,
|
|
ENETC_SICAR_RD_COHERENT | ENETC_SICAR_WR_COHERENT);
|
|
@@ -1152,6 +1162,8 @@ int enetc_alloc_si_resources(struct enetc_ndev_priv *priv)
|
|
if (err)
|
|
return err;
|
|
|
|
+ enetc_setup_cbdr(&si->hw, &si->cbd_ring);
|
|
+
|
|
priv->cls_rules = kcalloc(si->num_fs_entries, sizeof(*priv->cls_rules),
|
|
GFP_KERNEL);
|
|
if (!priv->cls_rules) {
|
|
@@ -1159,14 +1171,8 @@ int enetc_alloc_si_resources(struct enetc_ndev_priv *priv)
|
|
goto err_alloc_cls;
|
|
}
|
|
|
|
- err = enetc_configure_si(priv);
|
|
- if (err)
|
|
- goto err_config_si;
|
|
-
|
|
return 0;
|
|
|
|
-err_config_si:
|
|
- kfree(priv->cls_rules);
|
|
err_alloc_cls:
|
|
enetc_clear_cbdr(&si->hw);
|
|
enetc_free_cbdr(priv->dev, &si->cbd_ring);
|
|
@@ -1252,7 +1258,8 @@ static void enetc_setup_rxbdr(struct enetc_hw *hw, struct enetc_bdr *rx_ring)
|
|
rx_ring->idr = hw->reg + ENETC_SIRXIDR;
|
|
|
|
enetc_refill_rx_ring(rx_ring, enetc_bd_unused(rx_ring));
|
|
- enetc_wr(hw, ENETC_SIRXIDR, rx_ring->next_to_use);
|
|
+ /* update ENETC's consumer index */
|
|
+ enetc_rxbdr_wr(hw, idx, ENETC_RBCIR, rx_ring->next_to_use);
|
|
|
|
/* enable ring */
|
|
enetc_rxbdr_wr(hw, idx, ENETC_RBMR, rbmr);
|
|
diff --git a/drivers/net/ethernet/freescale/enetc/enetc.h b/drivers/net/ethernet/freescale/enetc/enetc.h
index dd0fb0c066d75..15d19cbd5a954 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.h
+++ b/drivers/net/ethernet/freescale/enetc/enetc.h
@@ -293,6 +293,7 @@ void enetc_get_si_caps(struct enetc_si *si);
 void enetc_init_si_rings_params(struct enetc_ndev_priv *priv);
 int enetc_alloc_si_resources(struct enetc_ndev_priv *priv);
 void enetc_free_si_resources(struct enetc_ndev_priv *priv);
+int enetc_configure_si(struct enetc_ndev_priv *priv);
 
 int enetc_open(struct net_device *ndev);
 int enetc_close(struct net_device *ndev);
@@ -310,6 +311,10 @@ int enetc_setup_tc(struct net_device *ndev, enum tc_setup_type type,
 void enetc_set_ethtool_ops(struct net_device *ndev);
 
 /* control buffer descriptor ring (CBDR) */
+int enetc_alloc_cbdr(struct device *dev, struct enetc_cbdr *cbdr);
+void enetc_free_cbdr(struct device *dev, struct enetc_cbdr *cbdr);
+void enetc_setup_cbdr(struct enetc_hw *hw, struct enetc_cbdr *cbdr);
+void enetc_clear_cbdr(struct enetc_hw *hw);
 int enetc_set_mac_flt_entry(struct enetc_si *si, int index,
 			    char *mac_addr, int si_map);
 int enetc_clear_mac_flt_entry(struct enetc_si *si, int index);
diff --git a/drivers/net/ethernet/freescale/enetc/enetc_hw.h b/drivers/net/ethernet/freescale/enetc/enetc_hw.h
index 014ca6ae121f8..21a6ce415cb22 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc_hw.h
+++ b/drivers/net/ethernet/freescale/enetc/enetc_hw.h
@@ -172,6 +172,8 @@ enum enetc_bdr_type {TX, RX};
 #define ENETC_PSIPMAR0(n)	(0x0100 + (n) * 0x8) /* n = SI index */
 #define ENETC_PSIPMAR1(n)	(0x0104 + (n) * 0x8)
 #define ENETC_PVCLCTR	0x0208
+#define ENETC_PCVLANR1	0x0210
+#define ENETC_PCVLANR2	0x0214
 #define ENETC_VLAN_TYPE_C	BIT(0)
 #define ENETC_VLAN_TYPE_S	BIT(1)
 #define ENETC_PVCLCTR_OVTPIDL(bmp)	((bmp) & 0xff) /* VLAN_TYPE */
@@ -236,10 +238,17 @@ enum enetc_bdr_type {TX, RX};
 #define ENETC_PM_IMDIO_BASE	0x8030
 
 #define ENETC_PM0_IF_MODE	0x8300
-#define ENETC_PMO_IFM_RG	BIT(2)
+#define ENETC_PM0_IFM_RG	BIT(2)
 #define ENETC_PM0_IFM_RLP	(BIT(5) | BIT(11))
-#define ENETC_PM0_IFM_RGAUTO	(BIT(15) | ENETC_PMO_IFM_RG | BIT(1))
-#define ENETC_PM0_IFM_XGMII	BIT(12)
+#define ENETC_PM0_IFM_EN_AUTO	BIT(15)
+#define ENETC_PM0_IFM_SSP_MASK	GENMASK(14, 13)
+#define ENETC_PM0_IFM_SSP_1000	(2 << 13)
+#define ENETC_PM0_IFM_SSP_100	(0 << 13)
+#define ENETC_PM0_IFM_SSP_10	(1 << 13)
+#define ENETC_PM0_IFM_FULL_DPX	BIT(12)
+#define ENETC_PM0_IFM_IFMODE_MASK	GENMASK(1, 0)
+#define ENETC_PM0_IFM_IFMODE_XGMII	0
+#define ENETC_PM0_IFM_IFMODE_GMII	2
 #define ENETC_PSIDCAPR	0x1b08
 #define ENETC_PSIDCAPR_MSK	GENMASK(15, 0)
 #define ENETC_PSFCAPR	0x1b18
@@ -453,6 +462,8 @@ static inline u64 _enetc_rd_reg64_wa(void __iomem *reg)
 #define enetc_wr_reg(reg, val)	_enetc_wr_reg_wa((reg), (val))
 #define enetc_rd(hw, off)	enetc_rd_reg((hw)->reg + (off))
 #define enetc_wr(hw, off, val)	enetc_wr_reg((hw)->reg + (off), val)
+#define enetc_rd_hot(hw, off)	enetc_rd_reg_hot((hw)->reg + (off))
+#define enetc_wr_hot(hw, off, val)	enetc_wr_reg_hot((hw)->reg + (off), val)
 #define enetc_rd64(hw, off)	_enetc_rd_reg64_wa((hw)->reg + (off))
 /* port register accessors - PF only */
 #define enetc_port_rd(hw, off)	enetc_rd_reg((hw)->port + (off))
@@ -573,6 +584,7 @@ union enetc_rx_bd {
 #define ENETC_RXBD_LSTATUS(flags)	((flags) << 16)
 #define ENETC_RXBD_FLAG_VLAN	BIT(9)
 #define ENETC_RXBD_FLAG_TSTMP	BIT(10)
+#define ENETC_RXBD_FLAG_TPID	GENMASK(1, 0)
 
 #define ENETC_MAC_ADDR_FILT_CNT	8 /* # of supported entries per port */
 #define EMETC_MAC_ADDR_FILT_RES	3 /* # of reserved entries at the beginning */
diff --git a/drivers/net/ethernet/freescale/enetc/enetc_pf.c b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
index 796e3d6f23f09..83187cd59fddd 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc_pf.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc_pf.c
@@ -190,7 +190,6 @@ static void enetc_pf_set_rx_mode(struct net_device *ndev)
 {
 	struct enetc_ndev_priv *priv = netdev_priv(ndev);
 	struct enetc_pf *pf = enetc_si_priv(priv->si);
-	char vlan_promisc_simap = pf->vlan_promisc_simap;
 	struct enetc_hw *hw = &priv->si->hw;
 	bool uprom = false, mprom = false;
 	struct enetc_mac_filter *filter;
@@ -203,16 +202,12 @@ static void enetc_pf_set_rx_mode(struct net_device *ndev)
 		psipmr = ENETC_PSIPMR_SET_UP(0) | ENETC_PSIPMR_SET_MP(0);
 		uprom = true;
 		mprom = true;
-		/* Enable VLAN promiscuous mode for SI0 (PF) */
-		vlan_promisc_simap |= BIT(0);
 	} else if (ndev->flags & IFF_ALLMULTI) {
 		/* enable multi cast promisc mode for SI0 (PF) */
 		psipmr = ENETC_PSIPMR_SET_MP(0);
 		mprom = true;
 	}
 
-	enetc_set_vlan_promisc(&pf->si->hw, vlan_promisc_simap);
-
 	/* first 2 filter entries belong to PF */
 	if (!uprom) {
 		/* Update unicast filters */
@@ -320,7 +315,7 @@ static void enetc_set_loopback(struct net_device *ndev, bool en)
 	u32 reg;
 
 	reg = enetc_port_rd(hw, ENETC_PM0_IF_MODE);
-	if (reg & ENETC_PMO_IFM_RG) {
+	if (reg & ENETC_PM0_IFM_RG) {
 		/* RGMII mode */
 		reg = (reg & ~ENETC_PM0_IFM_RLP) |
 		      (en ? ENETC_PM0_IFM_RLP : 0);
@@ -499,13 +494,20 @@ static void enetc_configure_port_mac(struct enetc_hw *hw)
 
 static void enetc_mac_config(struct enetc_hw *hw, phy_interface_t phy_mode)
 {
-	/* set auto-speed for RGMII */
-	if (enetc_port_rd(hw, ENETC_PM0_IF_MODE) & ENETC_PMO_IFM_RG ||
-	    phy_interface_mode_is_rgmii(phy_mode))
-		enetc_port_wr(hw, ENETC_PM0_IF_MODE, ENETC_PM0_IFM_RGAUTO);
+	u32 val;
 
-	if (phy_mode == PHY_INTERFACE_MODE_USXGMII)
-		enetc_port_wr(hw, ENETC_PM0_IF_MODE, ENETC_PM0_IFM_XGMII);
+	if (phy_interface_mode_is_rgmii(phy_mode)) {
+		val = enetc_port_rd(hw, ENETC_PM0_IF_MODE);
+		val &= ~ENETC_PM0_IFM_EN_AUTO;
+		val &= ENETC_PM0_IFM_IFMODE_MASK;
+		val |= ENETC_PM0_IFM_IFMODE_GMII | ENETC_PM0_IFM_RG;
+		enetc_port_wr(hw, ENETC_PM0_IF_MODE, val);
+	}
+
+	if (phy_mode == PHY_INTERFACE_MODE_USXGMII) {
+		val = ENETC_PM0_IFM_FULL_DPX | ENETC_PM0_IFM_IFMODE_XGMII;
+		enetc_port_wr(hw, ENETC_PM0_IF_MODE, val);
+	}
 }
 
 static void enetc_mac_enable(struct enetc_hw *hw, bool en)
@@ -857,13 +859,12 @@ static bool enetc_port_has_pcs(struct enetc_pf *pf)
 		pf->if_mode == PHY_INTERFACE_MODE_USXGMII);
 }
 
-static int enetc_mdiobus_create(struct enetc_pf *pf)
+static int enetc_mdiobus_create(struct enetc_pf *pf, struct device_node *node)
 {
-	struct device *dev = &pf->si->pdev->dev;
 	struct device_node *mdio_np;
 	int err;
 
-	mdio_np = of_get_child_by_name(dev->of_node, "mdio");
+	mdio_np = of_get_child_by_name(node, "mdio");
 	if (mdio_np) {
 		err = enetc_mdio_probe(pf, mdio_np);
 
@@ -944,6 +945,34 @@ static void enetc_pl_mac_config(struct phylink_config *config,
 		phylink_set_pcs(priv->phylink, &pf->pcs->pcs);
 }
 
+static void enetc_force_rgmii_mac(struct enetc_hw *hw, int speed, int duplex)
+{
+	u32 old_val, val;
+
+	old_val = val = enetc_port_rd(hw, ENETC_PM0_IF_MODE);
+
+	if (speed == SPEED_1000) {
+		val &= ~ENETC_PM0_IFM_SSP_MASK;
+		val |= ENETC_PM0_IFM_SSP_1000;
+	} else if (speed == SPEED_100) {
+		val &= ~ENETC_PM0_IFM_SSP_MASK;
+		val |= ENETC_PM0_IFM_SSP_100;
+	} else if (speed == SPEED_10) {
+		val &= ~ENETC_PM0_IFM_SSP_MASK;
+		val |= ENETC_PM0_IFM_SSP_10;
+	}
+
+	if (duplex == DUPLEX_FULL)
+		val |= ENETC_PM0_IFM_FULL_DPX;
+	else
+		val &= ~ENETC_PM0_IFM_FULL_DPX;
+
+	if (val == old_val)
+		return;
+
+	enetc_port_wr(hw, ENETC_PM0_IF_MODE, val);
+}
+
 static void enetc_pl_mac_link_up(struct phylink_config *config,
 				 struct phy_device *phy, unsigned int mode,
 				 phy_interface_t interface, int speed,
@@ -956,6 +985,10 @@ static void enetc_pl_mac_link_up(struct phylink_config *config,
 	if (priv->active_offloads & ENETC_F_QBV)
 		enetc_sched_speed_set(priv, speed);
 
+	if (!phylink_autoneg_inband(mode) &&
+	    phy_interface_mode_is_rgmii(interface))
+		enetc_force_rgmii_mac(&pf->si->hw, speed, duplex);
+
 	enetc_mac_enable(&pf->si->hw, true);
 }
 
@@ -975,18 +1008,17 @@ static const struct phylink_mac_ops enetc_mac_phylink_ops = {
 	.mac_link_down = enetc_pl_mac_link_down,
 };
 
-static int enetc_phylink_create(struct enetc_ndev_priv *priv)
+static int enetc_phylink_create(struct enetc_ndev_priv *priv,
+				struct device_node *node)
 {
 	struct enetc_pf *pf = enetc_si_priv(priv->si);
-	struct device *dev = &pf->si->pdev->dev;
 	struct phylink *phylink;
 	int err;
 
 	pf->phylink_config.dev = &priv->ndev->dev;
 	pf->phylink_config.type = PHYLINK_NETDEV;
 
-	phylink = phylink_create(&pf->phylink_config,
-				 of_fwnode_handle(dev->of_node),
+	phylink = phylink_create(&pf->phylink_config, of_fwnode_handle(node),
 				 pf->if_mode, &enetc_mac_phylink_ops);
 	if (IS_ERR(phylink)) {
 		err = PTR_ERR(phylink);
@@ -1049,20 +1081,36 @@ static int enetc_init_port_rss_memory(struct enetc_si *si)
 	return err;
 }
 
+static void enetc_init_unused_port(struct enetc_si *si)
+{
+	struct device *dev = &si->pdev->dev;
+	struct enetc_hw *hw = &si->hw;
+	int err;
+
+	si->cbd_ring.bd_count = ENETC_CBDR_DEFAULT_SIZE;
+	err = enetc_alloc_cbdr(dev, &si->cbd_ring);
+	if (err)
+		return;
+
+	enetc_setup_cbdr(hw, &si->cbd_ring);
+
+	enetc_init_port_rfs_memory(si);
+	enetc_init_port_rss_memory(si);
+
+	enetc_clear_cbdr(hw);
+	enetc_free_cbdr(dev, &si->cbd_ring);
+}
+
 static int enetc_pf_probe(struct pci_dev *pdev,
 			  const struct pci_device_id *ent)
 {
+	struct device_node *node = pdev->dev.of_node;
 	struct enetc_ndev_priv *priv;
 	struct net_device *ndev;
 	struct enetc_si *si;
 	struct enetc_pf *pf;
 	int err;
 
-	if (pdev->dev.of_node && !of_device_is_available(pdev->dev.of_node)) {
-		dev_info(&pdev->dev, "device is disabled, skipping\n");
-		return -ENODEV;
-	}
-
 	err = enetc_pci_probe(pdev, KBUILD_MODNAME, sizeof(*pf));
 	if (err) {
 		dev_err(&pdev->dev, "PCI probing failed\n");
@@ -1076,6 +1124,13 @@ static int enetc_pf_probe(struct pci_dev *pdev,
 		goto err_map_pf_space;
 	}
 
+	if (node && !of_device_is_available(node)) {
+		enetc_init_unused_port(si);
+		dev_info(&pdev->dev, "device is disabled, skipping\n");
+		err = -ENODEV;
+		goto err_device_disabled;
+	}
+
 	pf = enetc_si_priv(si);
 	pf->si = si;
 	pf->total_vfs = pci_sriov_get_totalvfs(pdev);
@@ -1115,18 +1170,24 @@ static int enetc_pf_probe(struct pci_dev *pdev,
 		goto err_init_port_rss;
 	}
 
+	err = enetc_configure_si(priv);
+	if (err) {
+		dev_err(&pdev->dev, "Failed to configure SI\n");
+		goto err_config_si;
+	}
+
 	err = enetc_alloc_msix(priv);
 	if (err) {
 		dev_err(&pdev->dev, "MSIX alloc failed\n");
 		goto err_alloc_msix;
 	}
 
-	if (!of_get_phy_mode(pdev->dev.of_node, &pf->if_mode)) {
-		err = enetc_mdiobus_create(pf);
+	if (!of_get_phy_mode(node, &pf->if_mode)) {
+		err = enetc_mdiobus_create(pf, node);
 		if (err)
 			goto err_mdiobus_create;
 
-		err = enetc_phylink_create(priv);
+		err = enetc_phylink_create(priv, node);
 		if (err)
 			goto err_phylink_create;
 	}
@@ -1143,6 +1204,7 @@ err_phylink_create:
 	enetc_mdiobus_destroy(pf);
 err_mdiobus_create:
 	enetc_free_msix(priv);
+err_config_si:
 err_init_port_rss:
 err_init_port_rfs:
 err_alloc_msix:
@@ -1151,6 +1213,7 @@ err_alloc_si_res:
 	si->ndev = NULL;
 	free_netdev(ndev);
 err_alloc_netdev:
+err_device_disabled:
 err_map_pf_space:
 	enetc_pci_remove(pdev);
 
diff --git a/drivers/net/ethernet/freescale/enetc/enetc_vf.c b/drivers/net/ethernet/freescale/enetc/enetc_vf.c
index 7b5c82c7e4e5a..33c125735db7e 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc_vf.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc_vf.c
@@ -177,6 +177,12 @@ static int enetc_vf_probe(struct pci_dev *pdev,
 		goto err_alloc_si_res;
 	}
 
+	err = enetc_configure_si(priv);
+	if (err) {
+		dev_err(&pdev->dev, "Failed to configure SI\n");
+		goto err_config_si;
+	}
+
 	err = enetc_alloc_msix(priv);
 	if (err) {
 		dev_err(&pdev->dev, "MSIX alloc failed\n");
@@ -193,6 +199,7 @@ static int enetc_vf_probe(struct pci_dev *pdev,
 
 err_reg_netdev:
 	enetc_free_msix(priv);
+err_config_si:
 err_alloc_msix:
 	enetc_free_si_resources(priv);
 err_alloc_si_res:
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
index 096e26a2e16b4..36690fc5c1aff 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
@@ -1031,16 +1031,16 @@ struct hclge_fd_tcam_config_3_cmd {
 #define HCLGE_FD_AD_DROP_B		0
 #define HCLGE_FD_AD_DIRECT_QID_B	1
 #define HCLGE_FD_AD_QID_S		2
-#define HCLGE_FD_AD_QID_M		GENMASK(12, 2)
+#define HCLGE_FD_AD_QID_M		GENMASK(11, 2)
 #define HCLGE_FD_AD_USE_COUNTER_B	12
 #define HCLGE_FD_AD_COUNTER_NUM_S	13
 #define HCLGE_FD_AD_COUNTER_NUM_M	GENMASK(20, 13)
 #define HCLGE_FD_AD_NXT_STEP_B		20
 #define HCLGE_FD_AD_NXT_KEY_S		21
-#define HCLGE_FD_AD_NXT_KEY_M		GENMASK(26, 21)
+#define HCLGE_FD_AD_NXT_KEY_M		GENMASK(25, 21)
 #define HCLGE_FD_AD_WR_RULE_ID_B	0
 #define HCLGE_FD_AD_RULE_ID_S		1
-#define HCLGE_FD_AD_RULE_ID_M		GENMASK(13, 1)
+#define HCLGE_FD_AD_RULE_ID_M		GENMASK(12, 1)
 
 struct hclge_fd_ad_config_cmd {
 	u8 stage;
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index c40820baf48a6..b856dbe4db73b 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -5115,9 +5115,9 @@ static bool hclge_fd_convert_tuple(u32 tuple_bit, u8 *key_x, u8 *key_y,
 	case BIT(INNER_SRC_MAC):
 		for (i = 0; i < ETH_ALEN; i++) {
 			calc_x(key_x[ETH_ALEN - 1 - i], rule->tuples.src_mac[i],
-			       rule->tuples.src_mac[i]);
+			       rule->tuples_mask.src_mac[i]);
 			calc_y(key_y[ETH_ALEN - 1 - i], rule->tuples.src_mac[i],
-			       rule->tuples.src_mac[i]);
+			       rule->tuples_mask.src_mac[i]);
 		}
 
 		return true;
@@ -6183,8 +6183,7 @@ static void hclge_fd_get_ext_info(struct ethtool_rx_flow_spec *fs,
 		fs->h_ext.vlan_tci = cpu_to_be16(rule->tuples.vlan_tag1);
 		fs->m_ext.vlan_tci =
 			rule->unused_tuple & BIT(INNER_VLAN_TAG_FST) ?
-			cpu_to_be16(VLAN_VID_MASK) :
-			cpu_to_be16(rule->tuples_mask.vlan_tag1);
+			0 : cpu_to_be16(rule->tuples_mask.vlan_tag1);
 	}
 
 	if (fs->flow_type & FLOW_MAC_EXT) {
diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 5e1f4e71af7bc..f184f4a79cc39 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -1832,10 +1832,9 @@ static int ibmvnic_set_mac(struct net_device *netdev, void *p)
 	if (!is_valid_ether_addr(addr->sa_data))
 		return -EADDRNOTAVAIL;
 
-	if (adapter->state != VNIC_PROBED) {
-		ether_addr_copy(adapter->mac_addr, addr->sa_data);
+	ether_addr_copy(adapter->mac_addr, addr->sa_data);
+	if (adapter->state != VNIC_PROBED)
 		rc = __ibmvnic_set_mac(netdev, addr->sa_data);
-	}
 
 	return rc;
 }
@@ -5176,16 +5175,14 @@ static int ibmvnic_reset_init(struct ibmvnic_adapter *adapter, bool reset)
 {
 	struct device *dev = &adapter->vdev->dev;
 	unsigned long timeout = msecs_to_jiffies(20000);
-	u64 old_num_rx_queues, old_num_tx_queues;
+	u64 old_num_rx_queues = adapter->req_rx_queues;
+	u64 old_num_tx_queues = adapter->req_tx_queues;
 	int rc;
 
 	adapter->from_passive_init = false;
 
-	if (reset) {
-		old_num_rx_queues = adapter->req_rx_queues;
-		old_num_tx_queues = adapter->req_tx_queues;
+	if (reset)
 		reinit_completion(&adapter->init_done);
-	}
 
 	adapter->init_done_rc = 0;
 	rc = ibmvnic_send_crq_init(adapter);
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index 59971f62e6268..3e4a4d6f0419c 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -15100,6 +15100,8 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 		if (err) {
 			dev_info(&pdev->dev,
 				 "setup of misc vector failed: %d\n", err);
+			i40e_cloud_filter_exit(pf);
+			i40e_fdir_teardown(pf);
 			goto err_vsis;
 		}
 	}
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
index eca73526ac86b..54d47265a7ac1 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
@@ -575,6 +575,11 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs)
 		return -EINVAL;
 	}
 
+	if (xs->props.mode != XFRM_MODE_TRANSPORT) {
+		netdev_err(dev, "Unsupported mode for ipsec offload\n");
+		return -EINVAL;
+	}
+
 	if (ixgbe_ipsec_check_mgmt_ip(xs)) {
 		netdev_err(dev, "IPsec IP addr clash with mgmt filters\n");
 		return -EINVAL;
diff --git a/drivers/net/ethernet/intel/ixgbevf/ipsec.c b/drivers/net/ethernet/intel/ixgbevf/ipsec.c
index 5170dd9d8705b..caaea2c920a6e 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ipsec.c
+++ b/drivers/net/ethernet/intel/ixgbevf/ipsec.c
@@ -272,6 +272,11 @@ static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs)
 		return -EINVAL;
 	}
 
+	if (xs->props.mode != XFRM_MODE_TRANSPORT) {
+		netdev_err(dev, "Unsupported mode for ipsec offload\n");
+		return -EINVAL;
+	}
+
 	if (xs->xso.flags & XFRM_OFFLOAD_INBOUND) {
 		struct rx_sa rsa;
 
diff --git a/drivers/net/ethernet/mediatek/mtk_star_emac.c b/drivers/net/ethernet/mediatek/mtk_star_emac.c
index a8641a407c06a..96d2891f1675a 100644
--- a/drivers/net/ethernet/mediatek/mtk_star_emac.c
+++ b/drivers/net/ethernet/mediatek/mtk_star_emac.c
@@ -1225,8 +1225,6 @@ static int mtk_star_receive_packet(struct mtk_star_priv *priv)
 		goto push_new_skb;
 	}
 
-	desc_data.dma_addr = new_dma_addr;
-
 	/* We can't fail anymore at this point: it's safe to unmap the skb. */
 	mtk_star_dma_unmap_rx(priv, &desc_data);
 
@@ -1236,6 +1234,9 @@ static int mtk_star_receive_packet(struct mtk_star_priv *priv)
 	desc_data.skb->dev = ndev;
 	netif_receive_skb(desc_data.skb);
 
+	/* update dma_addr for new skb */
+	desc_data.dma_addr = new_dma_addr;
+
 push_new_skb:
 	desc_data.len = skb_tailroom(new_skb);
 	desc_data.skb = new_skb;
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
index 23849f2b9c252..1434df66fcf2e 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
@@ -47,7 +47,7 @@
 #define EN_ETHTOOL_SHORT_MASK cpu_to_be16(0xffff)
 #define EN_ETHTOOL_WORD_MASK  cpu_to_be32(0xffffffff)
 
-static int mlx4_en_moderation_update(struct mlx4_en_priv *priv)
+int mlx4_en_moderation_update(struct mlx4_en_priv *priv)
 {
 	int i, t;
 	int err = 0;
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
index 6f290319b6178..d8a20e83d9040 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
@@ -3559,6 +3559,8 @@ int mlx4_en_reset_config(struct net_device *dev,
 			en_err(priv, "Failed starting port\n");
 	}
 
+	if (!err)
+		err = mlx4_en_moderation_update(priv);
 out:
 	mutex_unlock(&mdev->state_lock);
 	kfree(tmp);
diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
index 30378e4c90b5b..0aa4a23ad3def 100644
--- a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
+++ b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
@@ -795,6 +795,7 @@ void mlx4_en_ptp_overflow_check(struct mlx4_en_dev *mdev);
 #define DEV_FEATURE_CHANGED(dev, new_features, feature) \
 	((dev->features & feature) ^ (new_features & feature))
 
+int mlx4_en_moderation_update(struct mlx4_en_priv *priv);
 int mlx4_en_reset_config(struct net_device *dev,
 			 struct hwtstamp_config ts_config,
 			 netdev_features_t new_features);
diff --git a/drivers/net/ethernet/mellanox/mlxsw/reg.h b/drivers/net/ethernet/mellanox/mlxsw/reg.h
index 39eff6a57ba22..3c3069afc0a31 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/reg.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/reg.h
@@ -4208,6 +4208,7 @@ MLXSW_ITEM32(reg, ptys, ext_eth_proto_cap, 0x08, 0, 32);
 #define MLXSW_REG_PTYS_ETH_SPEED_100GBASE_CR4		BIT(20)
 #define MLXSW_REG_PTYS_ETH_SPEED_100GBASE_SR4		BIT(21)
 #define MLXSW_REG_PTYS_ETH_SPEED_100GBASE_KR4		BIT(22)
+#define MLXSW_REG_PTYS_ETH_SPEED_100GBASE_LR4_ER4	BIT(23)
 #define MLXSW_REG_PTYS_ETH_SPEED_25GBASE_CR		BIT(27)
 #define MLXSW_REG_PTYS_ETH_SPEED_25GBASE_KR		BIT(28)
 #define MLXSW_REG_PTYS_ETH_SPEED_25GBASE_SR		BIT(29)
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c
index 540616469e284..68333ecf6151e 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c
@@ -1171,6 +1171,11 @@ static const struct mlxsw_sp1_port_link_mode mlxsw_sp1_port_link_mode[] = {
 		.mask_ethtool	= ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT,
 		.speed		= SPEED_100000,
 	},
+	{
+		.mask		= MLXSW_REG_PTYS_ETH_SPEED_100GBASE_LR4_ER4,
+		.mask_ethtool	= ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT,
+		.speed		= SPEED_100000,
+	},
 };
 
 #define MLXSW_SP1_PORT_LINK_MODE_LEN ARRAY_SIZE(mlxsw_sp1_port_link_mode)
diff --git a/drivers/net/ethernet/mellanox/mlxsw/switchx2.c b/drivers/net/ethernet/mellanox/mlxsw/switchx2.c
index 5023d91269f45..28bfe1ea9d947 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/switchx2.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/switchx2.c
@@ -612,7 +612,8 @@ static const struct mlxsw_sx_port_link_mode mlxsw_sx_port_link_mode[] = {
 	{
 		.mask		= MLXSW_REG_PTYS_ETH_SPEED_100GBASE_CR4 |
 				  MLXSW_REG_PTYS_ETH_SPEED_100GBASE_SR4 |
-				  MLXSW_REG_PTYS_ETH_SPEED_100GBASE_KR4,
+				  MLXSW_REG_PTYS_ETH_SPEED_100GBASE_KR4 |
+				  MLXSW_REG_PTYS_ETH_SPEED_100GBASE_LR4_ER4,
 		.speed		= 100000,
 	},
 };
diff --git a/drivers/net/ethernet/mscc/ocelot_flower.c b/drivers/net/ethernet/mscc/ocelot_flower.c
index 729495a1a77ee..3655503352928 100644
--- a/drivers/net/ethernet/mscc/ocelot_flower.c
+++ b/drivers/net/ethernet/mscc/ocelot_flower.c
@@ -540,13 +540,14 @@ ocelot_flower_parse_key(struct ocelot *ocelot, int port, bool ingress,
 			return -EOPNOTSUPP;
 		}
 
+		flow_rule_match_ipv4_addrs(rule, &match);
+
 		if (filter->block_id == VCAP_IS1 && *(u32 *)&match.mask->dst) {
 			NL_SET_ERR_MSG_MOD(extack,
 					   "Key type S1_NORMAL cannot match on destination IP");
 			return -EOPNOTSUPP;
 		}
 
-		flow_rule_match_ipv4_addrs(rule, &match);
 		tmp = &filter->key.ipv4.sip.value.addr[0];
 		memcpy(tmp, &match.key->src, 4);
 
diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
index b7d5eaa70a67b..1591715c97177 100644
--- a/drivers/net/ethernet/realtek/r8169_main.c
+++ b/drivers/net/ethernet/realtek/r8169_main.c
@@ -1042,7 +1042,7 @@ static void r8168fp_adjust_ocp_cmd(struct rtl8169_private *tp, u32 *cmd, int typ
 {
 	/* based on RTL8168FP_OOBMAC_BASE in vendor driver */
 	if (tp->mac_version == RTL_GIGA_MAC_VER_52 && type == ERIAR_OOB)
-		*cmd |= 0x7f0 << 18;
+		*cmd |= 0xf70 << 18;
 }
 
 DECLARE_RTL_COND(rtl_eriar_cond)
diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c
index d5d236d687e9e..6d84266c03caf 100644
--- a/drivers/net/ethernet/renesas/sh_eth.c
+++ b/drivers/net/ethernet/renesas/sh_eth.c
@@ -560,6 +560,8 @@ static struct sh_eth_cpu_data r7s72100_data = {
 			  EESR_TDE,
 	.fdr_value	= 0x0000070f,
 
+	.trscer_err_mask = DESC_I_RINT8 | DESC_I_RINT5,
+
 	.no_psr		= 1,
 	.apr		= 1,
 	.mpr		= 1,
@@ -780,6 +782,8 @@ static struct sh_eth_cpu_data r7s9210_data = {
 
 	.fdr_value	= 0x0000070f,
 
+	.trscer_err_mask = DESC_I_RINT8 | DESC_I_RINT5,
+
 	.apr		= 1,
 	.mpr		= 1,
 	.tpauser	= 1,
@@ -1089,6 +1093,9 @@ static struct sh_eth_cpu_data sh771x_data = {
 			  EESIPR_CEEFIP | EESIPR_CELFIP |
 			  EESIPR_RRFIP | EESIPR_RTLFIP | EESIPR_RTSFIP |
 			  EESIPR_PREIP | EESIPR_CERFIP,
+
+	.trscer_err_mask = DESC_I_RINT8,
+
 	.tsu		= 1,
 	.dual_port	= 1,
 };
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
index 103d2448e9e0d..a9087dae767de 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
@@ -233,6 +233,7 @@ static void common_default_data(struct plat_stmmacenet_data *plat)
 static int intel_mgbe_common_data(struct pci_dev *pdev,
 				  struct plat_stmmacenet_data *plat)
 {
+	char clk_name[20];
 	int ret;
 	int i;
 
@@ -300,8 +301,10 @@ static int intel_mgbe_common_data(struct pci_dev *pdev,
 	plat->eee_usecs_rate = plat->clk_ptp_rate;
 
 	/* Set system clock */
+	sprintf(clk_name, "%s-%s", "stmmac", pci_name(pdev));
+
 	plat->stmmac_clk = clk_register_fixed_rate(&pdev->dev,
-						   "stmmac-clk", NULL, 0,
+						   clk_name, NULL, 0,
 						   plat->clk_ptp_rate);
 
 	if (IS_ERR(plat->stmmac_clk)) {
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c
|
|
index c6540b003b430..2ecd3a8a690c2 100644
|
|
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c
|
|
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c
|
|
@@ -499,10 +499,15 @@ static void dwmac4_get_rx_header_len(struct dma_desc *p, unsigned int *len)
|
|
*len = le32_to_cpu(p->des2) & RDES2_HL;
|
|
}
|
|
|
|
-static void dwmac4_set_sec_addr(struct dma_desc *p, dma_addr_t addr)
|
|
+static void dwmac4_set_sec_addr(struct dma_desc *p, dma_addr_t addr, bool buf2_valid)
|
|
{
|
|
p->des2 = cpu_to_le32(lower_32_bits(addr));
|
|
- p->des3 = cpu_to_le32(upper_32_bits(addr) | RDES3_BUFFER2_VALID_ADDR);
|
|
+ p->des3 = cpu_to_le32(upper_32_bits(addr));
|
|
+
|
|
+ if (buf2_valid)
|
|
+ p->des3 |= cpu_to_le32(RDES3_BUFFER2_VALID_ADDR);
|
|
+ else
|
|
+ p->des3 &= cpu_to_le32(~RDES3_BUFFER2_VALID_ADDR);
|
|
}
|
|
|
|
static void dwmac4_set_tbs(struct dma_edesc *p, u32 sec, u32 nsec)
|
|
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
|
|
index bb29bfcd62c34..62aa0e95beb70 100644
|
|
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
|
|
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
|
|
@@ -124,6 +124,23 @@ static void dwmac4_dma_init_channel(void __iomem *ioaddr,
|
|
ioaddr + DMA_CHAN_INTR_ENA(chan));
|
|
}
|
|
|
|
+static void dwmac410_dma_init_channel(void __iomem *ioaddr,
|
|
+ struct stmmac_dma_cfg *dma_cfg, u32 chan)
|
|
+{
|
|
+ u32 value;
|
|
+
|
|
+ /* common channel control register config */
|
|
+ value = readl(ioaddr + DMA_CHAN_CONTROL(chan));
|
|
+ if (dma_cfg->pblx8)
|
|
+ value = value | DMA_BUS_MODE_PBL;
|
|
+
|
|
+ writel(value, ioaddr + DMA_CHAN_CONTROL(chan));
|
|
+
|
|
+ /* Mask interrupts by writing to CSR7 */
|
|
+ writel(DMA_CHAN_INTR_DEFAULT_MASK_4_10,
|
|
+ ioaddr + DMA_CHAN_INTR_ENA(chan));
|
|
+}
|
|
+
|
|
static void dwmac4_dma_init(void __iomem *ioaddr,
|
|
struct stmmac_dma_cfg *dma_cfg, int atds)
|
|
{
|
|
@@ -523,7 +540,7 @@ const struct stmmac_dma_ops dwmac4_dma_ops = {
|
|
const struct stmmac_dma_ops dwmac410_dma_ops = {
|
|
.reset = dwmac4_dma_reset,
|
|
.init = dwmac4_dma_init,
|
|
- .init_chan = dwmac4_dma_init_channel,
|
|
+ .init_chan = dwmac410_dma_init_channel,
|
|
.init_rx_chan = dwmac4_dma_init_rx_chan,
|
|
.init_tx_chan = dwmac4_dma_init_tx_chan,
|
|
.axi = dwmac4_dma_axi,
|
|
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
|
|
index 0b4ee2dbb691d..71e50751ef2dc 100644
|
|
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
|
|
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
|
|
@@ -53,10 +53,6 @@ void dwmac4_dma_stop_tx(void __iomem *ioaddr, u32 chan)
|
|
|
|
value &= ~DMA_CONTROL_ST;
|
|
writel(value, ioaddr + DMA_CHAN_TX_CONTROL(chan));
|
|
-
|
|
- value = readl(ioaddr + GMAC_CONFIG);
|
|
- value &= ~GMAC_CONFIG_TE;
|
|
- writel(value, ioaddr + GMAC_CONFIG);
|
|
}
|
|
|
|
void dwmac4_dma_start_rx(void __iomem *ioaddr, u32 chan)
|
|
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_descs.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_descs.c
|
|
index 0aaf19ab56729..ccfb0102dde49 100644
|
|
--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_descs.c
|
|
+++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_descs.c
|
|
@@ -292,7 +292,7 @@ static void dwxgmac2_get_rx_header_len(struct dma_desc *p, unsigned int *len)
|
|
*len = le32_to_cpu(p->des2) & XGMAC_RDES2_HL;
|
|
}
|
|
|
|
-static void dwxgmac2_set_sec_addr(struct dma_desc *p, dma_addr_t addr)
|
|
+static void dwxgmac2_set_sec_addr(struct dma_desc *p, dma_addr_t addr, bool is_valid)
|
|
{
|
|
p->des2 = cpu_to_le32(lower_32_bits(addr));
|
|
p->des3 = cpu_to_le32(upper_32_bits(addr));
|
|
diff --git a/drivers/net/ethernet/stmicro/stmmac/hwif.h b/drivers/net/ethernet/stmicro/stmmac/hwif.h
|
|
index e2dca9b6e9926..afe7ec496545a 100644
|
|
--- a/drivers/net/ethernet/stmicro/stmmac/hwif.h
|
|
+++ b/drivers/net/ethernet/stmicro/stmmac/hwif.h
|
|
@@ -91,7 +91,7 @@ struct stmmac_desc_ops {
|
|
int (*get_rx_hash)(struct dma_desc *p, u32 *hash,
|
|
enum pkt_hash_types *type);
|
|
void (*get_rx_header_len)(struct dma_desc *p, unsigned int *len);
|
|
- void (*set_sec_addr)(struct dma_desc *p, dma_addr_t addr);
|
|
+ void (*set_sec_addr)(struct dma_desc *p, dma_addr_t addr, bool buf2_valid);
|
|
void (*set_sarc)(struct dma_desc *p, u32 sarc_type);
|
|
void (*set_vlan_tag)(struct dma_desc *p, u16 tag, u16 inner_tag,
|
|
u32 inner_type);
|
|
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index b3d6d8e3f4de9..7d01c5cf60c96 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -1279,9 +1279,10 @@ static int stmmac_init_rx_buffers(struct stmmac_priv *priv, struct dma_desc *p,
return -ENOMEM;

buf->sec_addr = page_pool_get_dma_addr(buf->sec_page);
- stmmac_set_desc_sec_addr(priv, p, buf->sec_addr);
+ stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, true);
} else {
buf->sec_page = NULL;
+ stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, false);
}

buf->addr = page_pool_get_dma_addr(buf->page);
@@ -3618,7 +3619,10 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue)
DMA_FROM_DEVICE);

stmmac_set_desc_addr(priv, p, buf->addr);
- stmmac_set_desc_sec_addr(priv, p, buf->sec_addr);
+ if (priv->sph)
+ stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, true);
+ else
+ stmmac_set_desc_sec_addr(priv, p, buf->sec_addr, false);
stmmac_refill_desc3(priv, rx_q, p);

rx_q->rx_count_frames++;
@@ -5114,13 +5118,16 @@ int stmmac_dvr_remove(struct device *dev)
netdev_info(priv->dev, "%s: removing driver", __func__);

stmmac_stop_all_dma(priv);
+ stmmac_mac_set(priv, priv->ioaddr, false);
+ netif_carrier_off(ndev);
+ unregister_netdev(ndev);

+ /* Serdes power down needs to happen after VLAN filter
+ * is deleted that is triggered by unregister_netdev().
+ */
if (priv->plat->serdes_powerdown)
priv->plat->serdes_powerdown(ndev, priv->plat->bsp_priv);

- stmmac_mac_set(priv, priv->ioaddr, false);
- netif_carrier_off(ndev);
- unregister_netdev(ndev);
#ifdef CONFIG_DEBUG_FS
stmmac_exit_fs(ndev);
#endif
@@ -5227,6 +5234,8 @@ static void stmmac_reset_queues_param(struct stmmac_priv *priv)
tx_q->cur_tx = 0;
tx_q->dirty_tx = 0;
tx_q->mss = 0;
+
+ netdev_tx_reset_queue(netdev_get_tx_queue(priv->dev, queue));
}
}

diff --git a/drivers/net/netdevsim/netdev.c b/drivers/net/netdevsim/netdev.c
index 7178468302c8f..ad6dbf0110526 100644
--- a/drivers/net/netdevsim/netdev.c
+++ b/drivers/net/netdevsim/netdev.c
@@ -296,6 +296,7 @@ nsim_create(struct nsim_dev *nsim_dev, struct nsim_dev_port *nsim_dev_port)
dev_net_set(dev, nsim_dev_net(nsim_dev));
ns = netdev_priv(dev);
ns->netdev = dev;
+ u64_stats_init(&ns->syncp);
ns->nsim_dev = nsim_dev;
ns->nsim_dev_port = nsim_dev_port;
ns->nsim_bus_dev = nsim_dev->nsim_bus_dev;
diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
index 35525a671400d..49e96ca585fff 100644
--- a/drivers/net/phy/phy.c
+++ b/drivers/net/phy/phy.c
@@ -293,14 +293,16 @@ int phy_ethtool_ksettings_set(struct phy_device *phydev,

phydev->autoneg = autoneg;

- phydev->speed = speed;
+ if (autoneg == AUTONEG_DISABLE) {
+ phydev->speed = speed;
+ phydev->duplex = duplex;
+ }

linkmode_copy(phydev->advertising, advertising);

linkmode_mod_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
phydev->advertising, autoneg == AUTONEG_ENABLE);

- phydev->duplex = duplex;
phydev->master_slave_set = cmd->base.master_slave_cfg;
phydev->mdix_ctrl = cmd->base.eth_tp_mdix_ctrl;

diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
index dd1f711140c3d..2d4eed2d61ce9 100644
--- a/drivers/net/phy/phy_device.c
+++ b/drivers/net/phy/phy_device.c
@@ -230,7 +230,6 @@ static struct phy_driver genphy_driver;
static LIST_HEAD(phy_fixup_list);
static DEFINE_MUTEX(phy_fixup_lock);

-#ifdef CONFIG_PM
static bool mdio_bus_phy_may_suspend(struct phy_device *phydev)
{
struct device_driver *drv = phydev->mdio.dev.driver;
@@ -270,7 +269,7 @@ out:
return !phydev->suspended;
}

-static int mdio_bus_phy_suspend(struct device *dev)
+static __maybe_unused int mdio_bus_phy_suspend(struct device *dev)
{
struct phy_device *phydev = to_phy_device(dev);

@@ -290,7 +289,7 @@ static int mdio_bus_phy_suspend(struct device *dev)
return phy_suspend(phydev);
}

-static int mdio_bus_phy_resume(struct device *dev)
+static __maybe_unused int mdio_bus_phy_resume(struct device *dev)
{
struct phy_device *phydev = to_phy_device(dev);
int ret;
@@ -316,7 +315,6 @@ no_resume:

static SIMPLE_DEV_PM_OPS(mdio_bus_phy_pm_ops, mdio_bus_phy_suspend,
mdio_bus_phy_resume);
-#endif /* CONFIG_PM */

/**
* phy_register_fixup - creates a new phy_fixup and adds it to the list
diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
index c7320861943b4..6e033ba717030 100644
--- a/drivers/net/usb/qmi_wwan.c
+++ b/drivers/net/usb/qmi_wwan.c
@@ -419,13 +419,6 @@ static ssize_t add_mux_store(struct device *d, struct device_attribute *attr, c
goto err;
}

- /* we don't want to modify a running netdev */
- if (netif_running(dev->net)) {
- netdev_err(dev->net, "Cannot change a running device\n");
- ret = -EBUSY;
- goto err;
- }
-
ret = qmimux_register_device(dev->net, mux_id);
if (!ret) {
info->flags |= QMI_WWAN_FLAG_MUX;
@@ -455,13 +448,6 @@ static ssize_t del_mux_store(struct device *d, struct device_attribute *attr, c
if (!rtnl_trylock())
return restart_syscall();

- /* we don't want to modify a running netdev */
- if (netif_running(dev->net)) {
- netdev_err(dev->net, "Cannot change a running device\n");
- ret = -EBUSY;
- goto err;
- }
-
del_dev = qmimux_find_dev(dev, mux_id);
if (!del_dev) {
netdev_err(dev->net, "mux_id not present\n");
diff --git a/drivers/net/wan/lapbether.c b/drivers/net/wan/lapbether.c
index b6be2454b8bdd..605c01fb73f15 100644
--- a/drivers/net/wan/lapbether.c
+++ b/drivers/net/wan/lapbether.c
@@ -283,7 +283,6 @@ static int lapbeth_open(struct net_device *dev)
return -ENODEV;
}

- netif_start_queue(dev);
return 0;
}

@@ -291,8 +290,6 @@ static int lapbeth_close(struct net_device *dev)
{
int err;

- netif_stop_queue(dev);
-
if ((err = lapb_unregister(dev)) != LAPB_OK)
pr_err("lapb_unregister error: %d\n", err);

diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c
index ebd6886a8c184..a68fe3a45a744 100644
--- a/drivers/net/wireless/ath/ath11k/core.c
+++ b/drivers/net/wireless/ath/ath11k/core.c
@@ -774,6 +774,7 @@ static void ath11k_core_restart(struct work_struct *work)
complete(&ar->scan.started);
complete(&ar->scan.completed);
complete(&ar->peer_assoc_done);
+ complete(&ar->peer_delete_done);
complete(&ar->install_key_done);
complete(&ar->vdev_setup_done);
complete(&ar->bss_survey_done);
diff --git a/drivers/net/wireless/ath/ath11k/core.h b/drivers/net/wireless/ath/ath11k/core.h
index 5a7915f75e1e2..c8e36251068c9 100644
--- a/drivers/net/wireless/ath/ath11k/core.h
+++ b/drivers/net/wireless/ath/ath11k/core.h
@@ -502,6 +502,7 @@ struct ath11k {
u8 lmac_id;

struct completion peer_assoc_done;
+ struct completion peer_delete_done;

int install_key_status;
struct completion install_key_done;
diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
index b5bd9b06da89e..ee0edd9185604 100644
--- a/drivers/net/wireless/ath/ath11k/mac.c
+++ b/drivers/net/wireless/ath/ath11k/mac.c
@@ -2986,6 +2986,7 @@ static int ath11k_mac_station_add(struct ath11k *ar,
}

if (ab->hw_params.vdev_start_delay &&
+ !arvif->is_started &&
arvif->vdev_type != WMI_VDEV_TYPE_AP) {
ret = ath11k_start_vdev_delay(ar->hw, vif);
if (ret) {
@@ -4589,8 +4590,22 @@ static int ath11k_mac_op_add_interface(struct ieee80211_hw *hw,

err_peer_del:
if (arvif->vdev_type == WMI_VDEV_TYPE_AP) {
+ reinit_completion(&ar->peer_delete_done);
+
+ ret = ath11k_wmi_send_peer_delete_cmd(ar, vif->addr,
+ arvif->vdev_id);
+ if (ret) {
+ ath11k_warn(ar->ab, "failed to delete peer vdev_id %d addr %pM\n",
+ arvif->vdev_id, vif->addr);
+ return ret;
+ }
+
+ ret = ath11k_wait_for_peer_delete_done(ar, arvif->vdev_id,
+ vif->addr);
+ if (ret)
+ return ret;
+
ar->num_peers--;
- ath11k_wmi_send_peer_delete_cmd(ar, vif->addr, arvif->vdev_id);
}

err_vdev_del:
@@ -5234,7 +5249,8 @@ ath11k_mac_op_assign_vif_chanctx(struct ieee80211_hw *hw,
/* for QCA6390 bss peer must be created before vdev_start */
if (ab->hw_params.vdev_start_delay &&
arvif->vdev_type != WMI_VDEV_TYPE_AP &&
- arvif->vdev_type != WMI_VDEV_TYPE_MONITOR) {
+ arvif->vdev_type != WMI_VDEV_TYPE_MONITOR &&
+ !ath11k_peer_find_by_vdev_id(ab, arvif->vdev_id)) {
memcpy(&arvif->chanctx, ctx, sizeof(*ctx));
ret = 0;
goto out;
@@ -5245,7 +5261,9 @@ ath11k_mac_op_assign_vif_chanctx(struct ieee80211_hw *hw,
goto out;
}

- if (ab->hw_params.vdev_start_delay) {
+ if (ab->hw_params.vdev_start_delay &&
+ arvif->vdev_type != WMI_VDEV_TYPE_AP &&
+ arvif->vdev_type != WMI_VDEV_TYPE_MONITOR) {
param.vdev_id = arvif->vdev_id;
param.peer_type = WMI_PEER_TYPE_DEFAULT;
param.peer_addr = ar->mac_addr;
@@ -6413,6 +6431,7 @@ int ath11k_mac_allocate(struct ath11k_base *ab)
mutex_init(&ar->conf_mutex);
init_completion(&ar->vdev_setup_done);
init_completion(&ar->peer_assoc_done);
+ init_completion(&ar->peer_delete_done);
init_completion(&ar->install_key_done);
init_completion(&ar->bss_survey_done);
init_completion(&ar->scan.started);
diff --git a/drivers/net/wireless/ath/ath11k/peer.c b/drivers/net/wireless/ath/ath11k/peer.c
index 61ad9300eafb1..b69e7ebfa9303 100644
--- a/drivers/net/wireless/ath/ath11k/peer.c
+++ b/drivers/net/wireless/ath/ath11k/peer.c
@@ -76,6 +76,23 @@ struct ath11k_peer *ath11k_peer_find_by_id(struct ath11k_base *ab,
return NULL;
}

+struct ath11k_peer *ath11k_peer_find_by_vdev_id(struct ath11k_base *ab,
+ int vdev_id)
+{
+ struct ath11k_peer *peer;
+
+ spin_lock_bh(&ab->base_lock);
+
+ list_for_each_entry(peer, &ab->peers, list) {
+ if (vdev_id == peer->vdev_id) {
+ spin_unlock_bh(&ab->base_lock);
+ return peer;
+ }
+ }
+ spin_unlock_bh(&ab->base_lock);
+ return NULL;
+}
+
void ath11k_peer_unmap_event(struct ath11k_base *ab, u16 peer_id)
{
struct ath11k_peer *peer;
@@ -177,12 +194,36 @@ static int ath11k_wait_for_peer_deleted(struct ath11k *ar, int vdev_id, const u8
return ath11k_wait_for_peer_common(ar->ab, vdev_id, addr, false);
}

+int ath11k_wait_for_peer_delete_done(struct ath11k *ar, u32 vdev_id,
+ const u8 *addr)
+{
+ int ret;
+ unsigned long time_left;
+
+ ret = ath11k_wait_for_peer_deleted(ar, vdev_id, addr);
+ if (ret) {
+ ath11k_warn(ar->ab, "failed wait for peer deleted");
+ return ret;
+ }
+
+ time_left = wait_for_completion_timeout(&ar->peer_delete_done,
+ 3 * HZ);
+ if (time_left == 0) {
+ ath11k_warn(ar->ab, "Timeout in receiving peer delete response\n");
+ return -ETIMEDOUT;
+ }
+
+ return 0;
+}
+
int ath11k_peer_delete(struct ath11k *ar, u32 vdev_id, u8 *addr)
{
int ret;

lockdep_assert_held(&ar->conf_mutex);

+ reinit_completion(&ar->peer_delete_done);
+
ret = ath11k_wmi_send_peer_delete_cmd(ar, addr, vdev_id);
if (ret) {
ath11k_warn(ar->ab,
@@ -191,7 +232,7 @@ int ath11k_peer_delete(struct ath11k *ar, u32 vdev_id, u8 *addr)
return ret;
}

- ret = ath11k_wait_for_peer_deleted(ar, vdev_id, addr);
+ ret = ath11k_wait_for_peer_delete_done(ar, vdev_id, addr);
if (ret)
return ret;

@@ -247,8 +288,22 @@ int ath11k_peer_create(struct ath11k *ar, struct ath11k_vif *arvif,
spin_unlock_bh(&ar->ab->base_lock);
ath11k_warn(ar->ab, "failed to find peer %pM on vdev %i after creation\n",
param->peer_addr, param->vdev_id);
- ath11k_wmi_send_peer_delete_cmd(ar, param->peer_addr,
- param->vdev_id);
+
+ reinit_completion(&ar->peer_delete_done);
+
+ ret = ath11k_wmi_send_peer_delete_cmd(ar, param->peer_addr,
+ param->vdev_id);
+ if (ret) {
+ ath11k_warn(ar->ab, "failed to delete peer vdev_id %d addr %pM\n",
+ param->vdev_id, param->peer_addr);
+ return ret;
+ }
+
+ ret = ath11k_wait_for_peer_delete_done(ar, param->vdev_id,
+ param->peer_addr);
+ if (ret)
+ return ret;
+
return -ENOENT;
}

diff --git a/drivers/net/wireless/ath/ath11k/peer.h b/drivers/net/wireless/ath/ath11k/peer.h
index 5d125ce8984e3..8553ed061aeaa 100644
--- a/drivers/net/wireless/ath/ath11k/peer.h
+++ b/drivers/net/wireless/ath/ath11k/peer.h
@@ -41,5 +41,9 @@ void ath11k_peer_cleanup(struct ath11k *ar, u32 vdev_id);
int ath11k_peer_delete(struct ath11k *ar, u32 vdev_id, u8 *addr);
int ath11k_peer_create(struct ath11k *ar, struct ath11k_vif *arvif,
struct ieee80211_sta *sta, struct peer_create_params *param);
+int ath11k_wait_for_peer_delete_done(struct ath11k *ar, u32 vdev_id,
+ const u8 *addr);
+struct ath11k_peer *ath11k_peer_find_by_vdev_id(struct ath11k_base *ab,
+ int vdev_id);

#endif /* _PEER_H_ */
diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c
index 04b8b002edfe0..173ab6ceed1f6 100644
--- a/drivers/net/wireless/ath/ath11k/wmi.c
+++ b/drivers/net/wireless/ath/ath11k/wmi.c
@@ -5532,15 +5532,26 @@ static int ath11k_ready_event(struct ath11k_base *ab, struct sk_buff *skb)
static void ath11k_peer_delete_resp_event(struct ath11k_base *ab, struct sk_buff *skb)
{
struct wmi_peer_delete_resp_event peer_del_resp;
+ struct ath11k *ar;

if (ath11k_pull_peer_del_resp_ev(ab, skb, &peer_del_resp) != 0) {
ath11k_warn(ab, "failed to extract peer delete resp");
return;
}

- /* TODO: Do we need to validate whether ath11k_peer_find() return NULL
- * Why this is needed when there is HTT event for peer delete
- */
+ rcu_read_lock();
+ ar = ath11k_mac_get_ar_by_vdev_id(ab, peer_del_resp.vdev_id);
+ if (!ar) {
+ ath11k_warn(ab, "invalid vdev id in peer delete resp ev %d",
+ peer_del_resp.vdev_id);
+ rcu_read_unlock();
+ return;
+ }
+
+ complete(&ar->peer_delete_done);
+ rcu_read_unlock();
+ ath11k_dbg(ab, ATH11K_DBG_WMI, "peer delete resp for vdev id %d addr %pM\n",
+ peer_del_resp.vdev_id, peer_del_resp.peer_macaddr.addr);
}

static inline const char *ath11k_wmi_vdev_resp_print(u32 vdev_resp_status)
diff --git a/drivers/net/wireless/ath/ath9k/ath9k.h b/drivers/net/wireless/ath/ath9k/ath9k.h
index e06b74a54a697..01d85f2509362 100644
--- a/drivers/net/wireless/ath/ath9k/ath9k.h
+++ b/drivers/net/wireless/ath/ath9k/ath9k.h
@@ -177,7 +177,8 @@ struct ath_frame_info {
s8 txq;
u8 keyix;
u8 rtscts_rate;
- u8 retries : 7;
+ u8 retries : 6;
+ u8 dyn_smps : 1;
u8 baw_tracked : 1;
u8 tx_power;
enum ath9k_key_type keytype:2;
diff --git a/drivers/net/wireless/ath/ath9k/xmit.c b/drivers/net/wireless/ath/ath9k/xmit.c
index e60d4737fc6e4..5691bd6eb82c2 100644
--- a/drivers/net/wireless/ath/ath9k/xmit.c
+++ b/drivers/net/wireless/ath/ath9k/xmit.c
@@ -1271,6 +1271,11 @@ static void ath_buf_set_rate(struct ath_softc *sc, struct ath_buf *bf,
is_40, is_sgi, is_sp);
if (rix < 8 && (tx_info->flags & IEEE80211_TX_CTL_STBC))
info->rates[i].RateFlags |= ATH9K_RATESERIES_STBC;
+ if (rix >= 8 && fi->dyn_smps) {
+ info->rates[i].RateFlags |=
+ ATH9K_RATESERIES_RTS_CTS;
+ info->flags |= ATH9K_TXDESC_CTSENA;
+ }

info->txpower[i] = ath_get_rate_txpower(sc, bf, rix,
is_40, false);
@@ -2114,6 +2119,7 @@ static void setup_frame_info(struct ieee80211_hw *hw,
fi->keyix = an->ps_key;
else
fi->keyix = ATH9K_TXKEYIX_INVALID;
+ fi->dyn_smps = sta && sta->smps_mode == IEEE80211_SMPS_DYNAMIC;
fi->keytype = keytype;
fi->framelen = framelen;
fi->tx_power = txpower;
diff --git a/drivers/net/wireless/mediatek/mt76/dma.c b/drivers/net/wireless/mediatek/mt76/dma.c
index 917617aad8d3c..262c40dc14a63 100644
--- a/drivers/net/wireless/mediatek/mt76/dma.c
+++ b/drivers/net/wireless/mediatek/mt76/dma.c
@@ -521,13 +521,13 @@ mt76_add_fragment(struct mt76_dev *dev, struct mt76_queue *q, void *data,
{
struct sk_buff *skb = q->rx_head;
struct skb_shared_info *shinfo = skb_shinfo(skb);
+ int nr_frags = shinfo->nr_frags;

- if (shinfo->nr_frags < ARRAY_SIZE(shinfo->frags)) {
+ if (nr_frags < ARRAY_SIZE(shinfo->frags)) {
struct page *page = virt_to_head_page(data);
int offset = data - page_address(page) + q->buf_offset;

- skb_add_rx_frag(skb, shinfo->nr_frags, page, offset, len,
- q->buf_size);
+ skb_add_rx_frag(skb, nr_frags, page, offset, len, q->buf_size);
} else {
skb_free_frag(data);
}
@@ -536,7 +536,10 @@ mt76_add_fragment(struct mt76_dev *dev, struct mt76_queue *q, void *data,
return;

q->rx_head = NULL;
- dev->drv->rx_skb(dev, q - dev->q_rx, skb);
+ if (nr_frags < ARRAY_SIZE(shinfo->frags))
+ dev->drv->rx_skb(dev, q - dev->q_rx, skb);
+ else
+ dev_kfree_skb(skb);
}

static int
diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 5ead217ac2bc8..fab068c8ba026 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -2055,7 +2055,7 @@ done:
nvme_fc_complete_rq(rq);

check_error:
- if (terminate_assoc)
+ if (terminate_assoc && ctrl->ctrl.state != NVME_CTRL_RESETTING)
queue_work(nvme_reset_wq, &ctrl->ioerr_work);
}

diff --git a/drivers/pci/controller/pci-xgene-msi.c b/drivers/pci/controller/pci-xgene-msi.c
index 2470782cb01af..1c34c897a7e2a 100644
--- a/drivers/pci/controller/pci-xgene-msi.c
+++ b/drivers/pci/controller/pci-xgene-msi.c
@@ -384,13 +384,9 @@ static int xgene_msi_hwirq_alloc(unsigned int cpu)
if (!msi_group->gic_irq)
continue;

- irq_set_chained_handler(msi_group->gic_irq,
- xgene_msi_isr);
- err = irq_set_handler_data(msi_group->gic_irq, msi_group);
- if (err) {
- pr_err("failed to register GIC IRQ handler\n");
- return -EINVAL;
- }
+ irq_set_chained_handler_and_data(msi_group->gic_irq,
+ xgene_msi_isr, msi_group);
+
/*
* Statically allocate MSI GIC IRQs to each CPU core.
* With 8-core X-Gene v1, 2 MSI GIC IRQs are allocated
diff --git a/drivers/pci/controller/pcie-mediatek.c b/drivers/pci/controller/pcie-mediatek.c
index cf4c18f0c25ab..23548b517e4b6 100644
--- a/drivers/pci/controller/pcie-mediatek.c
+++ b/drivers/pci/controller/pcie-mediatek.c
@@ -1035,14 +1035,14 @@ static int mtk_pcie_setup(struct mtk_pcie *pcie)
err = of_pci_get_devfn(child);
if (err < 0) {
dev_err(dev, "failed to parse devfn: %d\n", err);
- return err;
+ goto error_put_node;
}

slot = PCI_SLOT(err);

err = mtk_pcie_parse_port(pcie, child, slot);
if (err)
- return err;
+ goto error_put_node;
}

err = mtk_pcie_subsys_powerup(pcie);
@@ -1058,6 +1058,9 @@ static int mtk_pcie_setup(struct mtk_pcie *pcie)
mtk_pcie_subsys_powerdown(pcie);

return 0;
+error_put_node:
+ of_node_put(child);
+ return err;
}

static int mtk_pcie_probe(struct platform_device *pdev)
diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index 5c93450725108..9e971fffeb6a3 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -4010,6 +4010,10 @@ int pci_register_io_range(struct fwnode_handle *fwnode, phys_addr_t addr,
ret = logic_pio_register_range(range);
if (ret)
kfree(range);
+
+ /* Ignore duplicates due to deferred probing */
+ if (ret == -EEXIST)
+ ret = 0;
#endif

return ret;
diff --git a/drivers/pci/pcie/Kconfig b/drivers/pci/pcie/Kconfig
index 3946555a60422..45a2ef702b45b 100644
--- a/drivers/pci/pcie/Kconfig
+++ b/drivers/pci/pcie/Kconfig
@@ -133,14 +133,6 @@ config PCIE_PTM
This is only useful if you have devices that support PTM, but it
is safe to enable even if you don't.

-config PCIE_BW
- bool "PCI Express Bandwidth Change Notification"
- depends on PCIEPORTBUS
- help
- This enables PCI Express Bandwidth Change Notification. If
- you know link width or rate changes occur only to correct
- unreliable links, you may answer Y.
-
config PCIE_EDR
bool "PCI Express Error Disconnect Recover support"
depends on PCIE_DPC && ACPI
diff --git a/drivers/pci/pcie/Makefile b/drivers/pci/pcie/Makefile
index 68da9280ff117..9a7085668466f 100644
--- a/drivers/pci/pcie/Makefile
+++ b/drivers/pci/pcie/Makefile
@@ -12,5 +12,4 @@ obj-$(CONFIG_PCIEAER_INJECT) += aer_inject.o
obj-$(CONFIG_PCIE_PME) += pme.o
obj-$(CONFIG_PCIE_DPC) += dpc.o
obj-$(CONFIG_PCIE_PTM) += ptm.o
-obj-$(CONFIG_PCIE_BW) += bw_notification.o
obj-$(CONFIG_PCIE_EDR) += edr.o
diff --git a/drivers/pci/pcie/bw_notification.c b/drivers/pci/pcie/bw_notification.c
deleted file mode 100644
index 565d23cccb8b5..0000000000000
--- a/drivers/pci/pcie/bw_notification.c
+++ /dev/null
@@ -1,138 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0+
-/*
- * PCI Express Link Bandwidth Notification services driver
- * Author: Alexandru Gagniuc <mr.nuke.me@gmail.com>
- *
- * Copyright (C) 2019, Dell Inc
- *
- * The PCIe Link Bandwidth Notification provides a way to notify the
- * operating system when the link width or data rate changes. This
- * capability is required for all root ports and downstream ports
- * supporting links wider than x1 and/or multiple link speeds.
- *
- * This service port driver hooks into the bandwidth notification interrupt
- * and warns when links become degraded in operation.
- */
-
-#define dev_fmt(fmt) "bw_notification: " fmt
-
-#include "../pci.h"
-#include "portdrv.h"
-
-static bool pcie_link_bandwidth_notification_supported(struct pci_dev *dev)
-{
- int ret;
- u32 lnk_cap;
-
- ret = pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &lnk_cap);
- return (ret == PCIBIOS_SUCCESSFUL) && (lnk_cap & PCI_EXP_LNKCAP_LBNC);
-}
-
-static void pcie_enable_link_bandwidth_notification(struct pci_dev *dev)
-{
- u16 lnk_ctl;
-
- pcie_capability_write_word(dev, PCI_EXP_LNKSTA, PCI_EXP_LNKSTA_LBMS);
-
- pcie_capability_read_word(dev, PCI_EXP_LNKCTL, &lnk_ctl);
- lnk_ctl |= PCI_EXP_LNKCTL_LBMIE;
- pcie_capability_write_word(dev, PCI_EXP_LNKCTL, lnk_ctl);
-}
-
-static void pcie_disable_link_bandwidth_notification(struct pci_dev *dev)
-{
- u16 lnk_ctl;
-
- pcie_capability_read_word(dev, PCI_EXP_LNKCTL, &lnk_ctl);
- lnk_ctl &= ~PCI_EXP_LNKCTL_LBMIE;
- pcie_capability_write_word(dev, PCI_EXP_LNKCTL, lnk_ctl);
-}
-
-static irqreturn_t pcie_bw_notification_irq(int irq, void *context)
-{
- struct pcie_device *srv = context;
- struct pci_dev *port = srv->port;
- u16 link_status, events;
- int ret;
-
- ret = pcie_capability_read_word(port, PCI_EXP_LNKSTA, &link_status);
- events = link_status & PCI_EXP_LNKSTA_LBMS;
-
- if (ret != PCIBIOS_SUCCESSFUL || !events)
- return IRQ_NONE;
-
- pcie_capability_write_word(port, PCI_EXP_LNKSTA, events);
- pcie_update_link_speed(port->subordinate, link_status);
- return IRQ_WAKE_THREAD;
-}
-
-static irqreturn_t pcie_bw_notification_handler(int irq, void *context)
-{
- struct pcie_device *srv = context;
- struct pci_dev *port = srv->port;
- struct pci_dev *dev;
-
- /*
- * Print status from downstream devices, not this root port or
- * downstream switch port.
- */
- down_read(&pci_bus_sem);
- list_for_each_entry(dev, &port->subordinate->devices, bus_list)
- pcie_report_downtraining(dev);
- up_read(&pci_bus_sem);
-
- return IRQ_HANDLED;
-}
-
-static int pcie_bandwidth_notification_probe(struct pcie_device *srv)
-{
- int ret;
-
- /* Single-width or single-speed ports do not have to support this. */
- if (!pcie_link_bandwidth_notification_supported(srv->port))
- return -ENODEV;
-
- ret = request_threaded_irq(srv->irq, pcie_bw_notification_irq,
- pcie_bw_notification_handler,
- IRQF_SHARED, "PCIe BW notif", srv);
- if (ret)
- return ret;
-
- pcie_enable_link_bandwidth_notification(srv->port);
- pci_info(srv->port, "enabled with IRQ %d\n", srv->irq);
-
- return 0;
-}
-
-static void pcie_bandwidth_notification_remove(struct pcie_device *srv)
-{
- pcie_disable_link_bandwidth_notification(srv->port);
- free_irq(srv->irq, srv);
-}
-
-static int pcie_bandwidth_notification_suspend(struct pcie_device *srv)
-{
- pcie_disable_link_bandwidth_notification(srv->port);
- return 0;
-}
-
-static int pcie_bandwidth_notification_resume(struct pcie_device *srv)
-{
- pcie_enable_link_bandwidth_notification(srv->port);
- return 0;
-}
-
-static struct pcie_port_service_driver pcie_bandwidth_notification_driver = {
- .name = "pcie_bw_notification",
- .port_type = PCIE_ANY_PORT,
- .service = PCIE_PORT_SERVICE_BWNOTIF,
- .probe = pcie_bandwidth_notification_probe,
- .suspend = pcie_bandwidth_notification_suspend,
- .resume = pcie_bandwidth_notification_resume,
- .remove = pcie_bandwidth_notification_remove,
-};
-
-int __init pcie_bandwidth_notification_init(void)
-{
- return pcie_port_service_register(&pcie_bandwidth_notification_driver);
-}
diff --git a/drivers/pci/pcie/portdrv.h b/drivers/pci/pcie/portdrv.h
|
|
index af7cf237432ac..2ff5724b8f13f 100644
|
|
--- a/drivers/pci/pcie/portdrv.h
|
|
+++ b/drivers/pci/pcie/portdrv.h
|
|
@@ -53,12 +53,6 @@ int pcie_dpc_init(void);
|
|
static inline int pcie_dpc_init(void) { return 0; }
|
|
#endif
|
|
|
|
-#ifdef CONFIG_PCIE_BW
|
|
-int pcie_bandwidth_notification_init(void);
|
|
-#else
|
|
-static inline int pcie_bandwidth_notification_init(void) { return 0; }
|
|
-#endif
|
|
-
|
|
/* Port Type */
|
|
#define PCIE_ANY_PORT (~0)
|
|
|
|
diff --git a/drivers/pci/pcie/portdrv_pci.c b/drivers/pci/pcie/portdrv_pci.c
index 3a3ce40ae1abd..d4559cf88f79d 100644
--- a/drivers/pci/pcie/portdrv_pci.c
+++ b/drivers/pci/pcie/portdrv_pci.c
@@ -248,7 +248,6 @@ static void __init pcie_init_services(void)
pcie_pme_init();
pcie_dpc_init();
pcie_hp_init();
- pcie_bandwidth_notification_init();
}

static int __init pcie_portdrv_init(void)
diff --git a/drivers/platform/olpc/olpc-ec.c b/drivers/platform/olpc/olpc-ec.c
index f64b82824db28..2db7113383fdc 100644
--- a/drivers/platform/olpc/olpc-ec.c
+++ b/drivers/platform/olpc/olpc-ec.c
@@ -426,11 +426,8 @@ static int olpc_ec_probe(struct platform_device *pdev)

/* get the EC revision */
err = olpc_ec_cmd(EC_FIRMWARE_REV, NULL, 0, &ec->version, 1);
- if (err) {
- ec_priv = NULL;
- kfree(ec);
- return err;
- }
+ if (err)
+ goto error;

config.dev = pdev->dev.parent;
config.driver_data = ec;
@@ -440,12 +437,16 @@ static int olpc_ec_probe(struct platform_device *pdev)
if (IS_ERR(ec->dcon_rdev)) {
dev_err(&pdev->dev, "failed to register DCON regulator\n");
err = PTR_ERR(ec->dcon_rdev);
- kfree(ec);
- return err;
+ goto error;
}

ec->dbgfs_dir = olpc_ec_setup_debugfs();

+ return 0;
+
+error:
+ ec_priv = NULL;
+ kfree(ec);
return err;
}

diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
index 217a7b84abdfa..2adfab552e22a 100644
--- a/drivers/s390/block/dasd.c
+++ b/drivers/s390/block/dasd.c
@@ -3087,7 +3087,8 @@ static blk_status_t do_dasd_request(struct blk_mq_hw_ctx *hctx,

basedev = block->base;
spin_lock_irq(&dq->lock);
- if (basedev->state < DASD_STATE_READY) {
+ if (basedev->state < DASD_STATE_READY ||
+ test_bit(DASD_FLAG_OFFLINE, &basedev->flags)) {
DBF_DEV_EVENT(DBF_ERR, basedev,
"device not ready for request %p", req);
rc = BLK_STS_IOERR;
@@ -3522,8 +3523,6 @@ void dasd_generic_remove(struct ccw_device *cdev)
struct dasd_device *device;
struct dasd_block *block;

- cdev->handler = NULL;
-
device = dasd_device_from_cdev(cdev);
if (IS_ERR(device)) {
dasd_remove_sysfs_files(cdev);
@@ -3542,6 +3541,7 @@ void dasd_generic_remove(struct ccw_device *cdev)
* no quite down yet.
*/
dasd_set_target_state(device, DASD_STATE_NEW);
+ cdev->handler = NULL;
/* dasd_delete_device destroys the device reference. */
block = device->block;
dasd_delete_device(device);
diff --git a/drivers/s390/cio/vfio_ccw_ops.c b/drivers/s390/cio/vfio_ccw_ops.c
index 8b3ed5b45277a..1ad5f7018ec2d 100644
--- a/drivers/s390/cio/vfio_ccw_ops.c
+++ b/drivers/s390/cio/vfio_ccw_ops.c
@@ -539,7 +539,7 @@ static ssize_t vfio_ccw_mdev_ioctl(struct mdev_device *mdev,
if (ret)
return ret;

- return copy_to_user((void __user *)arg, &info, minsz);
+ return copy_to_user((void __user *)arg, &info, minsz) ? -EFAULT : 0;
}
case VFIO_DEVICE_GET_REGION_INFO:
{
@@ -557,7 +557,7 @@ static ssize_t vfio_ccw_mdev_ioctl(struct mdev_device *mdev,
if (ret)
return ret;

- return copy_to_user((void __user *)arg, &info, minsz);
+ return copy_to_user((void __user *)arg, &info, minsz) ? -EFAULT : 0;
}
case VFIO_DEVICE_GET_IRQ_INFO:
{
@@ -578,7 +578,7 @@ static ssize_t vfio_ccw_mdev_ioctl(struct mdev_device *mdev,
if (info.count == -1)
return -EINVAL;

- return copy_to_user((void __user *)arg, &info, minsz);
+ return copy_to_user((void __user *)arg, &info, minsz) ? -EFAULT : 0;
}
case VFIO_DEVICE_SET_IRQS:
{
diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
index 7ceb6c433b3ba..72eb8f984534f 100644
--- a/drivers/s390/crypto/vfio_ap_ops.c
+++ b/drivers/s390/crypto/vfio_ap_ops.c
@@ -1279,7 +1279,7 @@ static int vfio_ap_mdev_get_device_info(unsigned long arg)
info.num_regions = 0;
info.num_irqs = 0;

- return copy_to_user((void __user *)arg, &info, minsz);
+ return copy_to_user((void __user *)arg, &info, minsz) ? -EFAULT : 0;
}

static ssize_t vfio_ap_mdev_ioctl(struct mdev_device *mdev,
diff --git a/drivers/s390/net/qeth_core.h b/drivers/s390/net/qeth_core.h
index 2f7e06ec9a30e..bf8404b0e74ff 100644
--- a/drivers/s390/net/qeth_core.h
+++ b/drivers/s390/net/qeth_core.h
@@ -424,8 +424,6 @@ enum qeth_qdio_out_buffer_state {
/* Received QAOB notification on CQ: */
QETH_QDIO_BUF_QAOB_OK,
QETH_QDIO_BUF_QAOB_ERROR,
- /* Handled via transfer pending / completion queue. */
- QETH_QDIO_BUF_HANDLED_DELAYED,
};

struct qeth_qdio_out_buffer {
@@ -438,7 +436,7 @@ struct qeth_qdio_out_buffer {
int is_header[QDIO_MAX_ELEMENTS_PER_BUFFER];

struct qeth_qdio_out_q *q;
- struct qeth_qdio_out_buffer *next_pending;
+ struct list_head list_entry;
};

struct qeth_card;
@@ -502,6 +500,7 @@ struct qeth_qdio_out_q {
struct qdio_buffer *qdio_bufs[QDIO_MAX_BUFFERS_PER_Q];
struct qeth_qdio_out_buffer *bufs[QDIO_MAX_BUFFERS_PER_Q];
struct qdio_outbuf_state *bufstates; /* convenience pointer */
+ struct list_head pending_bufs;
struct qeth_out_q_stats stats;
spinlock_t lock;
unsigned int priority;
diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
index f108232498baf..03f96177e58ee 100644
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -73,9 +73,6 @@ static void qeth_free_qdio_queues(struct qeth_card *card);
static void qeth_notify_skbs(struct qeth_qdio_out_q *queue,
struct qeth_qdio_out_buffer *buf,
enum iucv_tx_notify notification);
-static void qeth_tx_complete_buf(struct qeth_qdio_out_buffer *buf, bool error,
- int budget);
-static int qeth_init_qdio_out_buf(struct qeth_qdio_out_q *, int);

static void qeth_close_dev_handler(struct work_struct *work)
{
@@ -466,42 +463,6 @@ static enum iucv_tx_notify qeth_compute_cq_notification(int sbalf15,
return n;
}

-static void qeth_cleanup_handled_pending(struct qeth_qdio_out_q *q, int bidx,
- int forced_cleanup)
-{
- if (q->card->options.cq != QETH_CQ_ENABLED)
- return;
-
- if (q->bufs[bidx]->next_pending != NULL) {
- struct qeth_qdio_out_buffer *head = q->bufs[bidx];
- struct qeth_qdio_out_buffer *c = q->bufs[bidx]->next_pending;
-
- while (c) {
- if (forced_cleanup ||
- atomic_read(&c->state) ==
- QETH_QDIO_BUF_HANDLED_DELAYED) {
- struct qeth_qdio_out_buffer *f = c;
-
- QETH_CARD_TEXT(f->q->card, 5, "fp");
- QETH_CARD_TEXT_(f->q->card, 5, "%lx", (long) f);
- /* release here to avoid interleaving between
- outbound tasklet and inbound tasklet
- regarding notifications and lifecycle */
- qeth_tx_complete_buf(c, forced_cleanup, 0);
-
- c = f->next_pending;
- WARN_ON_ONCE(head->next_pending != f);
- head->next_pending = c;
- kmem_cache_free(qeth_qdio_outbuf_cache, f);
- } else {
- head = c;
- c = c->next_pending;
- }
-
- }
- }
-}
-
static void qeth_qdio_handle_aob(struct qeth_card *card,
unsigned long phys_aob_addr)
{
@@ -517,18 +478,6 @@ static void qeth_qdio_handle_aob(struct qeth_card *card,
buffer = (struct qeth_qdio_out_buffer *) aob->user1;
QETH_CARD_TEXT_(card, 5, "%lx", aob->user1);

- /* Free dangling allocations. The attached skbs are handled by
- * qeth_cleanup_handled_pending().
- */
- for (i = 0;
- i < aob->sb_count && i < QETH_MAX_BUFFER_ELEMENTS(card);
- i++) {
- void *data = phys_to_virt(aob->sba[i]);
-
- if (data && buffer->is_header[i])
- kmem_cache_free(qeth_core_header_cache, data);
- }
-
if (aob->aorc) {
QETH_CARD_TEXT_(card, 2, "aorc%02X", aob->aorc);
new_state = QETH_QDIO_BUF_QAOB_ERROR;
@@ -536,10 +485,9 @@ static void qeth_qdio_handle_aob(struct qeth_card *card,

switch (atomic_xchg(&buffer->state, new_state)) {
case QETH_QDIO_BUF_PRIMED:
- /* Faster than TX completion code. */
- notification = qeth_compute_cq_notification(aob->aorc, 0);
- qeth_notify_skbs(buffer->q, buffer, notification);
- atomic_set(&buffer->state, QETH_QDIO_BUF_HANDLED_DELAYED);
+ /* Faster than TX completion code, let it handle the async
+ * completion for us.
+ */
break;
case QETH_QDIO_BUF_PENDING:
/* TX completion code is active and will handle the async
@@ -550,7 +498,20 @@ static void qeth_qdio_handle_aob(struct qeth_card *card,
/* TX completion code is already finished. */
notification = qeth_compute_cq_notification(aob->aorc, 1);
qeth_notify_skbs(buffer->q, buffer, notification);
- atomic_set(&buffer->state, QETH_QDIO_BUF_HANDLED_DELAYED);
+
+ /* Free dangling allocations. The attached skbs are handled by
+ * qeth_tx_complete_pending_bufs().
+ */
+ for (i = 0;
+ i < aob->sb_count && i < QETH_MAX_BUFFER_ELEMENTS(card);
+ i++) {
+ void *data = phys_to_virt(aob->sba[i]);
+
+ if (data && buffer->is_header[i])
+ kmem_cache_free(qeth_core_header_cache, data);
+ }
+
+ atomic_set(&buffer->state, QETH_QDIO_BUF_EMPTY);
break;
default:
WARN_ON_ONCE(1);
@@ -1422,9 +1383,6 @@ static void qeth_tx_complete_buf(struct qeth_qdio_out_buffer *buf, bool error,
struct qeth_qdio_out_q *queue = buf->q;
struct sk_buff *skb;

- if (atomic_read(&buf->state) == QETH_QDIO_BUF_PENDING)
- qeth_notify_skbs(queue, buf, TX_NOTIFY_GENERALERROR);
-
/* Empty buffer? */
if (buf->next_element_to_fill == 0)
return;
@@ -1486,14 +1444,38 @@ static void qeth_clear_output_buffer(struct qeth_qdio_out_q *queue,
atomic_set(&buf->state, QETH_QDIO_BUF_EMPTY);
}

+static void qeth_tx_complete_pending_bufs(struct qeth_card *card,
+ struct qeth_qdio_out_q *queue,
+ bool drain)
+{
+ struct qeth_qdio_out_buffer *buf, *tmp;
+
+ list_for_each_entry_safe(buf, tmp, &queue->pending_bufs, list_entry) {
+ if (drain || atomic_read(&buf->state) == QETH_QDIO_BUF_EMPTY) {
+ QETH_CARD_TEXT(card, 5, "fp");
+ QETH_CARD_TEXT_(card, 5, "%lx", (long) buf);
+
+ if (drain)
+ qeth_notify_skbs(queue, buf,
+ TX_NOTIFY_GENERALERROR);
+ qeth_tx_complete_buf(buf, drain, 0);
+
+ list_del(&buf->list_entry);
+ kmem_cache_free(qeth_qdio_outbuf_cache, buf);
+ }
+ }
+}
+
static void qeth_drain_output_queue(struct qeth_qdio_out_q *q, bool free)
{
int j;

+ qeth_tx_complete_pending_bufs(q->card, q, true);
+
for (j = 0; j < QDIO_MAX_BUFFERS_PER_Q; ++j) {
if (!q->bufs[j])
continue;
- qeth_cleanup_handled_pending(q, j, 1);
+
qeth_clear_output_buffer(q, q->bufs[j], true, 0);
if (free) {
kmem_cache_free(qeth_qdio_outbuf_cache, q->bufs[j]);
@@ -2613,7 +2595,6 @@ static int qeth_init_qdio_out_buf(struct qeth_qdio_out_q *q, int bidx)
skb_queue_head_init(&newbuf->skb_list);
lockdep_set_class(&newbuf->skb_list.lock, &qdio_out_skb_queue_key);
newbuf->q = q;
- newbuf->next_pending = q->bufs[bidx];
atomic_set(&newbuf->state, QETH_QDIO_BUF_EMPTY);
q->bufs[bidx] = newbuf;
return 0;
@@ -2632,15 +2613,28 @@ static void qeth_free_output_queue(struct qeth_qdio_out_q *q)
static struct qeth_qdio_out_q *qeth_alloc_output_queue(void)
{
struct qeth_qdio_out_q *q = kzalloc(sizeof(*q), GFP_KERNEL);
+ unsigned int i;

if (!q)
return NULL;

- if (qdio_alloc_buffers(q->qdio_bufs, QDIO_MAX_BUFFERS_PER_Q)) {
- kfree(q);
- return NULL;
+ if (qdio_alloc_buffers(q->qdio_bufs, QDIO_MAX_BUFFERS_PER_Q))
+ goto err_qdio_bufs;
+
+ for (i = 0; i < QDIO_MAX_BUFFERS_PER_Q; i++) {
+ if (qeth_init_qdio_out_buf(q, i))
+ goto err_out_bufs;
}
+
return q;
+
+err_out_bufs:
+ while (i > 0)
+ kmem_cache_free(qeth_qdio_outbuf_cache, q->bufs[--i]);
+ qdio_free_buffers(q->qdio_bufs, QDIO_MAX_BUFFERS_PER_Q);
+err_qdio_bufs:
+ kfree(q);
+ return NULL;
}

static void qeth_tx_completion_timer(struct timer_list *timer)
@@ -2653,7 +2647,7 @@ static void qeth_tx_completion_timer(struct timer_list *timer)

static int qeth_alloc_qdio_queues(struct qeth_card *card)
{
- int i, j;
+ unsigned int i;

QETH_CARD_TEXT(card, 2, "allcqdbf");

@@ -2682,18 +2676,12 @@ static int qeth_alloc_qdio_queues(struct qeth_card *card)
card->qdio.out_qs[i] = queue;
queue->card = card;
queue->queue_no = i;
+ INIT_LIST_HEAD(&queue->pending_bufs);
spin_lock_init(&queue->lock);
timer_setup(&queue->timer, qeth_tx_completion_timer, 0);
queue->coalesce_usecs = QETH_TX_COALESCE_USECS;
queue->max_coalesced_frames = QETH_TX_MAX_COALESCED_FRAMES;
queue->priority = QETH_QIB_PQUE_PRIO_DEFAULT;
-
- /* give outbound qeth_qdio_buffers their qdio_buffers */
- for (j = 0; j < QDIO_MAX_BUFFERS_PER_Q; ++j) {
- WARN_ON(queue->bufs[j]);
- if (qeth_init_qdio_out_buf(queue, j))
- goto out_freeoutqbufs;
- }
}

/* completion */
@@ -2702,13 +2690,6 @@ static int qeth_alloc_qdio_queues(struct qeth_card *card)

return 0;

-out_freeoutqbufs:
- while (j > 0) {
- --j;
- kmem_cache_free(qeth_qdio_outbuf_cache,
- card->qdio.out_qs[i]->bufs[j]);
- card->qdio.out_qs[i]->bufs[j] = NULL;
- }
out_freeoutq:
while (i > 0) {
qeth_free_output_queue(card->qdio.out_qs[--i]);
@@ -5871,9 +5852,13 @@ static void qeth_iqd_tx_complete(struct qeth_qdio_out_q *queue,
QDIO_OUTBUF_STATE_FLAG_PENDING)) {
WARN_ON_ONCE(card->options.cq != QETH_CQ_ENABLED);

- if (atomic_cmpxchg(&buffer->state, QETH_QDIO_BUF_PRIMED,
- QETH_QDIO_BUF_PENDING) ==
- QETH_QDIO_BUF_PRIMED) {
+ QETH_CARD_TEXT_(card, 5, "pel%u", bidx);
+
+ switch (atomic_cmpxchg(&buffer->state,
+ QETH_QDIO_BUF_PRIMED,
+ QETH_QDIO_BUF_PENDING)) {
+ case QETH_QDIO_BUF_PRIMED:
+ /* We have initial ownership, no QAOB (yet): */
qeth_notify_skbs(queue, buffer, TX_NOTIFY_PENDING);

/* Handle race with qeth_qdio_handle_aob(): */
@@ -5881,39 +5866,51 @@ static void qeth_iqd_tx_complete(struct qeth_qdio_out_q *queue,
QETH_QDIO_BUF_NEED_QAOB)) {
case QETH_QDIO_BUF_PENDING:
/* No concurrent QAOB notification. */
- break;
+
+ /* Prepare the queue slot for immediate re-use: */
+ qeth_scrub_qdio_buffer(buffer->buffer, queue->max_elements);
+ if (qeth_init_qdio_out_buf(queue, bidx)) {
+ QETH_CARD_TEXT(card, 2, "outofbuf");
+ qeth_schedule_recovery(card);
+ }
+
+ list_add(&buffer->list_entry,
+ &queue->pending_bufs);
+ /* Skip clearing the buffer: */
+ return;
case QETH_QDIO_BUF_QAOB_OK:
qeth_notify_skbs(queue, buffer,
TX_NOTIFY_DELAYED_OK);
- atomic_set(&buffer->state,
- QETH_QDIO_BUF_HANDLED_DELAYED);
+ error = false;
break;
case QETH_QDIO_BUF_QAOB_ERROR:
qeth_notify_skbs(queue, buffer,
TX_NOTIFY_DELAYED_GENERALERROR);
- atomic_set(&buffer->state,
- QETH_QDIO_BUF_HANDLED_DELAYED);
+ error = true;
break;
default:
WARN_ON_ONCE(1);
}
- }
-
- QETH_CARD_TEXT_(card, 5, "pel%u", bidx);

- /* prepare the queue slot for re-use: */
- qeth_scrub_qdio_buffer(buffer->buffer, queue->max_elements);
- if (qeth_init_qdio_out_buf(queue, bidx)) {
- QETH_CARD_TEXT(card, 2, "outofbuf");
- qeth_schedule_recovery(card);
+ break;
+ case QETH_QDIO_BUF_QAOB_OK:
+ /* qeth_qdio_handle_aob() already received a QAOB: */
+ qeth_notify_skbs(queue, buffer, TX_NOTIFY_OK);
+ error = false;
+ break;
+ case QETH_QDIO_BUF_QAOB_ERROR:
+ /* qeth_qdio_handle_aob() already received a QAOB: */
+ qeth_notify_skbs(queue, buffer, TX_NOTIFY_GENERALERROR);
+ error = true;
+ break;
+ default:
+ WARN_ON_ONCE(1);
}
-
- return;
- }
-
- if (card->options.cq == QETH_CQ_ENABLED)
+ } else if (card->options.cq == QETH_CQ_ENABLED) {
qeth_notify_skbs(queue, buffer,
qeth_compute_cq_notification(sflags, 0));
+ }
+
qeth_clear_output_buffer(queue, buffer, error, budget);
}

@@ -5934,6 +5931,8 @@ static int qeth_tx_poll(struct napi_struct *napi, int budget)
unsigned int bytes = 0;
int completed;

+ qeth_tx_complete_pending_bufs(card, queue, false);
+
if (qeth_out_queue_is_empty(queue)) {
napi_complete(napi);
return 0;
@@ -5966,7 +5965,6 @@ static int qeth_tx_poll(struct napi_struct *napi, int budget)

qeth_handle_send_error(card, buffer, error);
qeth_iqd_tx_complete(queue, bidx, error, budget);
- qeth_cleanup_handled_pending(queue, bidx, false);
}

netdev_tx_completed_queue(txq, packets, bytes);
diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c
index 5125a6c7f70e9..41b8192d207d0 100644
--- a/drivers/scsi/libiscsi.c
+++ b/drivers/scsi/libiscsi.c
@@ -1532,14 +1532,9 @@ check_mgmt:
}
rc = iscsi_prep_scsi_cmd_pdu(conn->task);
if (rc) {
- if (rc == -ENOMEM || rc == -EACCES) {
- spin_lock_bh(&conn->taskqueuelock);
- list_add_tail(&conn->task->running,
- &conn->cmdqueue);
- conn->task = NULL;
- spin_unlock_bh(&conn->taskqueuelock);
- goto done;
- } else
+ if (rc == -ENOMEM || rc == -EACCES)
+ fail_scsi_task(conn->task, DID_IMM_RETRY);
+ else
fail_scsi_task(conn->task, DID_ABORT);
spin_lock_bh(&conn->taskqueuelock);
continue;
diff --git a/drivers/scsi/ufs/ufs-sysfs.c b/drivers/scsi/ufs/ufs-sysfs.c
index bdcd27faa0547..34b424ad96a20 100644
--- a/drivers/scsi/ufs/ufs-sysfs.c
+++ b/drivers/scsi/ufs/ufs-sysfs.c
@@ -785,7 +785,8 @@ static ssize_t _pname##_show(struct device *dev, \
struct scsi_device *sdev = to_scsi_device(dev); \
struct ufs_hba *hba = shost_priv(sdev->host); \
u8 lun = ufshcd_scsi_to_upiu_lun(sdev->lun); \
- if (!ufs_is_valid_unit_desc_lun(&hba->dev_info, lun)) \
+ if (!ufs_is_valid_unit_desc_lun(&hba->dev_info, lun, \
+ _duname##_DESC_PARAM##_puname)) \
return -EINVAL; \
return ufs_sysfs_read_desc_param(hba, QUERY_DESC_IDN_##_duname, \
lun, _duname##_DESC_PARAM##_puname, buf, _size); \
diff --git a/drivers/scsi/ufs/ufs.h b/drivers/scsi/ufs/ufs.h
index f8ab16f30fdca..07ca39008b84b 100644
--- a/drivers/scsi/ufs/ufs.h
+++ b/drivers/scsi/ufs/ufs.h
@@ -551,13 +551,15 @@ struct ufs_dev_info {
* @return: true if the lun has a matching unit descriptor, false otherwise
*/
static inline bool ufs_is_valid_unit_desc_lun(struct ufs_dev_info *dev_info,
- u8 lun)
+ u8 lun, u8 param_offset)
{
if (!dev_info || !dev_info->max_lu_supported) {
pr_err("Max General LU supported by UFS isn't initialized\n");
return false;
}
-
+ /* WB is available only for the logical unit from 0 to 7 */
+ if (param_offset == UNIT_DESC_PARAM_WB_BUF_ALLOC_UNITS)
+ return lun < UFS_UPIU_MAX_WB_LUN_ID;
return lun == UFS_UPIU_RPMB_WLUN || (lun < dev_info->max_lu_supported);
}

diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 5a7cc2e42ffdf..97d9d5d99adcc 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -3378,7 +3378,7 @@ static inline int ufshcd_read_unit_desc_param(struct ufs_hba *hba,
* Unit descriptors are only available for general purpose LUs (LUN id
* from 0 to 7) and RPMB Well known LU.
*/
- if (!ufs_is_valid_unit_desc_lun(&hba->dev_info, lun))
+ if (!ufs_is_valid_unit_desc_lun(&hba->dev_info, lun, param_offset))
return -EOPNOTSUPP;

return ufshcd_read_desc_param(hba, QUERY_DESC_IDN_UNIT, lun,
diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c
index 6eeb39669a866..53c4311cc6ab5 100644
--- a/drivers/spi/spi-stm32.c
+++ b/drivers/spi/spi-stm32.c
@@ -928,8 +928,8 @@ static irqreturn_t stm32h7_spi_irq_thread(int irq, void *dev_id)
mask |= STM32H7_SPI_SR_RXP;

if (!(sr & mask)) {
- dev_dbg(spi->dev, "spurious IT (sr=0x%08x, ier=0x%08x)\n",
- sr, ier);
+ dev_warn(spi->dev, "spurious IT (sr=0x%08x, ier=0x%08x)\n",
+ sr, ier);
spin_unlock_irqrestore(&spi->lock, flags);
return IRQ_NONE;
}
@@ -956,15 +956,8 @@ static irqreturn_t stm32h7_spi_irq_thread(int irq, void *dev_id)
}

if (sr & STM32H7_SPI_SR_OVR) {
- dev_warn(spi->dev, "Overrun: received value discarded\n");
- if (!spi->cur_usedma && (spi->rx_buf && (spi->rx_len > 0)))
- stm32h7_spi_read_rxfifo(spi, false);
- /*
- * If overrun is detected while using DMA, it means that
- * something went wrong, so stop the current transfer
- */
- if (spi->cur_usedma)
- end = true;
+ dev_err(spi->dev, "Overrun: RX data lost\n");
+ end = true;
}

if (sr & STM32H7_SPI_SR_EOT) {
diff --git a/drivers/staging/comedi/drivers/addi_apci_1032.c b/drivers/staging/comedi/drivers/addi_apci_1032.c
index 35b75f0c9200b..81a246fbcc01f 100644
--- a/drivers/staging/comedi/drivers/addi_apci_1032.c
+++ b/drivers/staging/comedi/drivers/addi_apci_1032.c
@@ -260,6 +260,7 @@ static irqreturn_t apci1032_interrupt(int irq, void *d)
struct apci1032_private *devpriv = dev->private;
struct comedi_subdevice *s = dev->read_subdev;
unsigned int ctrl;
+ unsigned short val;

/* check interrupt is from this device */
if ((inl(devpriv->amcc_iobase + AMCC_OP_REG_INTCSR) &
@@ -275,7 +276,8 @@ static irqreturn_t apci1032_interrupt(int irq, void *d)
outl(ctrl & ~APCI1032_CTRL_INT_ENA, dev->iobase + APCI1032_CTRL_REG);

s->state = inl(dev->iobase + APCI1032_STATUS_REG) & 0xffff;
- comedi_buf_write_samples(s, &s->state, 1);
+ val = s->state;
+ comedi_buf_write_samples(s, &val, 1);
comedi_handle_events(dev, s);

/* enable the interrupt */
diff --git a/drivers/staging/comedi/drivers/addi_apci_1500.c b/drivers/staging/comedi/drivers/addi_apci_1500.c
index 11efb21555e39..b04c15dcfb575 100644
--- a/drivers/staging/comedi/drivers/addi_apci_1500.c
+++ b/drivers/staging/comedi/drivers/addi_apci_1500.c
@@ -208,7 +208,7 @@ static irqreturn_t apci1500_interrupt(int irq, void *d)
struct comedi_device *dev = d;
struct apci1500_private *devpriv = dev->private;
struct comedi_subdevice *s = dev->read_subdev;
- unsigned int status = 0;
+ unsigned short status = 0;
unsigned int val;

val = inl(devpriv->amcc + AMCC_OP_REG_INTCSR);
@@ -238,14 +238,14 @@ static irqreturn_t apci1500_interrupt(int irq, void *d)
*
* Mask Meaning
* ---------- ------------------------------------------
- * 0x00000001 Event 1 has occurred
- * 0x00000010 Event 2 has occurred
- * 0x00000100 Counter/timer 1 has run down (not implemented)
- * 0x00001000 Counter/timer 2 has run down (not implemented)
- * 0x00010000 Counter 3 has run down (not implemented)
- * 0x00100000 Watchdog has run down (not implemented)
- * 0x01000000 Voltage error
- * 0x10000000 Short-circuit error
+ * 0b00000001 Event 1 has occurred
+ * 0b00000010 Event 2 has occurred
+ * 0b00000100 Counter/timer 1 has run down (not implemented)
+ * 0b00001000 Counter/timer 2 has run down (not implemented)
+ * 0b00010000 Counter 3 has run down (not implemented)
+ * 0b00100000 Watchdog has run down (not implemented)
+ * 0b01000000 Voltage error
+ * 0b10000000 Short-circuit error
*/
comedi_buf_write_samples(s, &status, 1);
comedi_handle_events(dev, s);
diff --git a/drivers/staging/comedi/drivers/adv_pci1710.c b/drivers/staging/comedi/drivers/adv_pci1710.c
index 692893c7e5c3d..090607760be6b 100644
--- a/drivers/staging/comedi/drivers/adv_pci1710.c
+++ b/drivers/staging/comedi/drivers/adv_pci1710.c
@@ -300,11 +300,11 @@ static int pci1710_ai_eoc(struct comedi_device *dev,
static int pci1710_ai_read_sample(struct comedi_device *dev,
struct comedi_subdevice *s,
unsigned int cur_chan,
- unsigned int *val)
+ unsigned short *val)
{
const struct boardtype *board = dev->board_ptr;
struct pci1710_private *devpriv = dev->private;
- unsigned int sample;
+ unsigned short sample;
unsigned int chan;

sample = inw(dev->iobase + PCI171X_AD_DATA_REG);
@@ -345,7 +345,7 @@ static int pci1710_ai_insn_read(struct comedi_device *dev,
pci1710_ai_setup_chanlist(dev, s, &insn->chanspec, 1, 1);

for (i = 0; i < insn->n; i++) {
- unsigned int val;
+ unsigned short val;

/* start conversion */
outw(0, dev->iobase + PCI171X_SOFTTRG_REG);
@@ -395,7 +395,7 @@ static void pci1710_handle_every_sample(struct comedi_device *dev,
{
struct comedi_cmd *cmd = &s->async->cmd;
unsigned int status;
- unsigned int val;
+ unsigned short val;
int ret;

status = inw(dev->iobase + PCI171X_STATUS_REG);
@@ -455,7 +455,7 @@ static void pci1710_handle_fifo(struct comedi_device *dev,
}

for (i = 0; i < devpriv->max_samples; i++) {
- unsigned int val;
+ unsigned short val;
int ret;

ret = pci1710_ai_read_sample(dev, s, s->async->cur_chan, &val);
diff --git a/drivers/staging/comedi/drivers/das6402.c b/drivers/staging/comedi/drivers/das6402.c
index 04e224f8b7793..96f4107b8054d 100644
--- a/drivers/staging/comedi/drivers/das6402.c
+++ b/drivers/staging/comedi/drivers/das6402.c
@@ -186,7 +186,7 @@ static irqreturn_t das6402_interrupt(int irq, void *d)
if (status & DAS6402_STATUS_FFULL) {
async->events |= COMEDI_CB_OVERFLOW;
} else if (status & DAS6402_STATUS_FFNE) {
- unsigned int val;
+ unsigned short val;

val = das6402_ai_read_sample(dev, s);
comedi_buf_write_samples(s, &val, 1);
diff --git a/drivers/staging/comedi/drivers/das800.c b/drivers/staging/comedi/drivers/das800.c
index 4ea100ff6930f..2881808d6606c 100644
--- a/drivers/staging/comedi/drivers/das800.c
+++ b/drivers/staging/comedi/drivers/das800.c
@@ -427,7 +427,7 @@ static irqreturn_t das800_interrupt(int irq, void *d)
struct comedi_cmd *cmd;
unsigned long irq_flags;
unsigned int status;
- unsigned int val;
+ unsigned short val;
bool fifo_empty;
bool fifo_overflow;
int i;
diff --git a/drivers/staging/comedi/drivers/dmm32at.c b/drivers/staging/comedi/drivers/dmm32at.c
index 17e6018918bbf..56682f01242fd 100644
--- a/drivers/staging/comedi/drivers/dmm32at.c
+++ b/drivers/staging/comedi/drivers/dmm32at.c
@@ -404,7 +404,7 @@ static irqreturn_t dmm32at_isr(int irq, void *d)
{
struct comedi_device *dev = d;
unsigned char intstat;
- unsigned int val;
+ unsigned short val;
int i;

if (!dev->attached) {
diff --git a/drivers/staging/comedi/drivers/me4000.c b/drivers/staging/comedi/drivers/me4000.c
index 726e40dc17b62..0d3d4cafce2e8 100644
--- a/drivers/staging/comedi/drivers/me4000.c
+++ b/drivers/staging/comedi/drivers/me4000.c
@@ -924,7 +924,7 @@ static irqreturn_t me4000_ai_isr(int irq, void *dev_id)
struct comedi_subdevice *s = dev->read_subdev;
int i;
int c = 0;
- unsigned int lval;
+ unsigned short lval;

if (!dev->attached)
return IRQ_NONE;
diff --git a/drivers/staging/comedi/drivers/pcl711.c b/drivers/staging/comedi/drivers/pcl711.c
index 2dbf69e309650..bd6f42fe9e3ca 100644
--- a/drivers/staging/comedi/drivers/pcl711.c
+++ b/drivers/staging/comedi/drivers/pcl711.c
@@ -184,7 +184,7 @@ static irqreturn_t pcl711_interrupt(int irq, void *d)
struct comedi_device *dev = d;
struct comedi_subdevice *s = dev->read_subdev;
struct comedi_cmd *cmd = &s->async->cmd;
- unsigned int data;
+ unsigned short data;

if (!dev->attached) {
dev_err(dev->class_dev, "spurious interrupt\n");
diff --git a/drivers/staging/comedi/drivers/pcl818.c b/drivers/staging/comedi/drivers/pcl818.c
index 63e3011158f23..f4b4a686c710f 100644
--- a/drivers/staging/comedi/drivers/pcl818.c
+++ b/drivers/staging/comedi/drivers/pcl818.c
@@ -423,7 +423,7 @@ static int pcl818_ai_eoc(struct comedi_device *dev,

static bool pcl818_ai_write_sample(struct comedi_device *dev,
struct comedi_subdevice *s,
- unsigned int chan, unsigned int val)
+ unsigned int chan, unsigned short val)
{
struct pcl818_private *devpriv = dev->private;
struct comedi_cmd *cmd = &s->async->cmd;
diff --git a/drivers/staging/ks7010/ks_wlan_net.c b/drivers/staging/ks7010/ks_wlan_net.c
index dc09cc6e1c478..09e7b4cd0138c 100644
--- a/drivers/staging/ks7010/ks_wlan_net.c
+++ b/drivers/staging/ks7010/ks_wlan_net.c
@@ -1120,6 +1120,7 @@ static int ks_wlan_set_scan(struct net_device *dev,
{
struct ks_wlan_private *priv = netdev_priv(dev);
struct iw_scan_req *req = NULL;
+ int len;

if (priv->sleep_mode == SLP_SLEEP)
return -EPERM;
@@ -1129,8 +1130,9 @@ static int ks_wlan_set_scan(struct net_device *dev,
if (wrqu->data.length == sizeof(struct iw_scan_req) &&
wrqu->data.flags & IW_SCAN_THIS_ESSID) {
req = (struct iw_scan_req *)extra;
- priv->scan_ssid_len = req->essid_len;
- memcpy(priv->scan_ssid, req->essid, priv->scan_ssid_len);
+ len = min_t(int, req->essid_len, IW_ESSID_MAX_SIZE);
+ priv->scan_ssid_len = len;
+ memcpy(priv->scan_ssid, req->essid, len);
} else {
priv->scan_ssid_len = 0;
}
diff --git a/drivers/staging/media/rkisp1/rkisp1-params.c b/drivers/staging/media/rkisp1/rkisp1-params.c
index 986d293201e63..3eb3fb2d64bc1 100644
--- a/drivers/staging/media/rkisp1/rkisp1-params.c
+++ b/drivers/staging/media/rkisp1/rkisp1-params.c
@@ -1291,7 +1291,6 @@ static void rkisp1_params_config_parameter(struct rkisp1_params *params)
memset(hst.hist_weight, 0x01, sizeof(hst.hist_weight));
rkisp1_hst_config(params, &hst);
rkisp1_param_set_bits(params, RKISP1_CIF_ISP_HIST_PROP,
- ~RKISP1_CIF_ISP_HIST_PROP_MODE_MASK |
rkisp1_hst_params_default_config.mode);

/* set the range */
diff --git a/drivers/staging/rtl8188eu/core/rtw_ap.c b/drivers/staging/rtl8188eu/core/rtw_ap.c
index 2078d87055bf6..d25a5734249f0 100644
--- a/drivers/staging/rtl8188eu/core/rtw_ap.c
+++ b/drivers/staging/rtl8188eu/core/rtw_ap.c
@@ -791,6 +791,7 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len)
p = rtw_get_ie(ie + _BEACON_IE_OFFSET_, _SSID_IE_, &ie_len,
pbss_network->ie_length - _BEACON_IE_OFFSET_);
if (p && ie_len > 0) {
+ ie_len = min_t(int, ie_len, sizeof(pbss_network->ssid.ssid));
memset(&pbss_network->ssid, 0, sizeof(struct ndis_802_11_ssid));
memcpy(pbss_network->ssid.ssid, p + 2, ie_len);
pbss_network->ssid.ssid_length = ie_len;
@@ -811,6 +812,7 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len)
p = rtw_get_ie(ie + _BEACON_IE_OFFSET_, _SUPPORTEDRATES_IE_, &ie_len,
pbss_network->ie_length - _BEACON_IE_OFFSET_);
if (p) {
+ ie_len = min_t(int, ie_len, NDIS_802_11_LENGTH_RATES_EX);
memcpy(supportRate, p + 2, ie_len);
supportRateNum = ie_len;
}
@@ -819,6 +821,8 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len)
p = rtw_get_ie(ie + _BEACON_IE_OFFSET_, _EXT_SUPPORTEDRATES_IE_,
&ie_len, pbss_network->ie_length - _BEACON_IE_OFFSET_);
if (p) {
+ ie_len = min_t(int, ie_len,
+ NDIS_802_11_LENGTH_RATES_EX - supportRateNum);
memcpy(supportRate + supportRateNum, p + 2, ie_len);
supportRateNum += ie_len;
}
@@ -934,6 +938,7 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len)

pht_cap->mcs.rx_mask[0] = 0xff;
pht_cap->mcs.rx_mask[1] = 0x0;
+ ie_len = min_t(int, ie_len, sizeof(pmlmepriv->htpriv.ht_cap));
memcpy(&pmlmepriv->htpriv.ht_cap, p + 2, ie_len);
}

diff --git a/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c b/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c
index 8e10462f1fbe5..d487528b5fc9a 100644
--- a/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c
+++ b/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c
@@ -1144,9 +1144,11 @@ static int rtw_wx_set_scan(struct net_device *dev, struct iw_request_info *a,
 					break;
 				}
 				sec_len = *(pos++); len -= 1;
-				if (sec_len > 0 && sec_len <= len) {
+				if (sec_len > 0 &&
+				    sec_len <= len &&
+				    sec_len <= 32) {
 					ssid[ssid_index].ssid_length = sec_len;
-					memcpy(ssid[ssid_index].ssid, pos, ssid[ssid_index].ssid_length);
+					memcpy(ssid[ssid_index].ssid, pos, sec_len);
 					ssid_index++;
 				}
 				pos += sec_len;
diff --git a/drivers/staging/rtl8192e/rtl8192e/rtl_wx.c b/drivers/staging/rtl8192e/rtl8192e/rtl_wx.c
index 16bcee13f64b5..407effde5e71a 100644
--- a/drivers/staging/rtl8192e/rtl8192e/rtl_wx.c
+++ b/drivers/staging/rtl8192e/rtl8192e/rtl_wx.c
@@ -406,9 +406,10 @@ static int _rtl92e_wx_set_scan(struct net_device *dev,
 		struct iw_scan_req *req = (struct iw_scan_req *)b;
 
 		if (req->essid_len) {
-			ieee->current_network.ssid_len = req->essid_len;
-			memcpy(ieee->current_network.ssid, req->essid,
-			       req->essid_len);
+			int len = min_t(int, req->essid_len, IW_ESSID_MAX_SIZE);
+
+			ieee->current_network.ssid_len = len;
+			memcpy(ieee->current_network.ssid, req->essid, len);
 		}
 	}
 
diff --git a/drivers/staging/rtl8192u/r8192U_wx.c b/drivers/staging/rtl8192u/r8192U_wx.c
index d853586705fc9..77bf88696a844 100644
--- a/drivers/staging/rtl8192u/r8192U_wx.c
+++ b/drivers/staging/rtl8192u/r8192U_wx.c
@@ -331,8 +331,10 @@ static int r8192_wx_set_scan(struct net_device *dev, struct iw_request_info *a,
 		struct iw_scan_req *req = (struct iw_scan_req *)b;
 
 		if (req->essid_len) {
-			ieee->current_network.ssid_len = req->essid_len;
-			memcpy(ieee->current_network.ssid, req->essid, req->essid_len);
+			int len = min_t(int, req->essid_len, IW_ESSID_MAX_SIZE);
+
+			ieee->current_network.ssid_len = len;
+			memcpy(ieee->current_network.ssid, req->essid, len);
 		}
 	}
 
diff --git a/drivers/staging/rtl8712/rtl871x_cmd.c b/drivers/staging/rtl8712/rtl871x_cmd.c
index 18116469bd316..75716f59044d9 100644
--- a/drivers/staging/rtl8712/rtl871x_cmd.c
+++ b/drivers/staging/rtl8712/rtl871x_cmd.c
@@ -192,8 +192,10 @@ u8 r8712_sitesurvey_cmd(struct _adapter *padapter,
 	psurveyPara->ss_ssidlen = 0;
 	memset(psurveyPara->ss_ssid, 0, IW_ESSID_MAX_SIZE + 1);
 	if (pssid && pssid->SsidLength) {
-		memcpy(psurveyPara->ss_ssid, pssid->Ssid, pssid->SsidLength);
-		psurveyPara->ss_ssidlen = cpu_to_le32(pssid->SsidLength);
+		int len = min_t(int, pssid->SsidLength, IW_ESSID_MAX_SIZE);
+
+		memcpy(psurveyPara->ss_ssid, pssid->Ssid, len);
+		psurveyPara->ss_ssidlen = cpu_to_le32(len);
 	}
 	set_fwstate(pmlmepriv, _FW_UNDER_SURVEY);
 	r8712_enqueue_cmd(pcmdpriv, ph2c);
diff --git a/drivers/staging/rtl8712/rtl871x_ioctl_linux.c b/drivers/staging/rtl8712/rtl871x_ioctl_linux.c
index cbaa7a4897483..2a661b04cd255 100644
--- a/drivers/staging/rtl8712/rtl871x_ioctl_linux.c
+++ b/drivers/staging/rtl8712/rtl871x_ioctl_linux.c
@@ -924,7 +924,7 @@ static int r871x_wx_set_priv(struct net_device *dev,
 	struct iw_point *dwrq = (struct iw_point *)awrq;
 
 	len = dwrq->length;
-	ext = memdup_user(dwrq->pointer, len);
+	ext = strndup_user(dwrq->pointer, len);
 	if (IS_ERR(ext))
 		return PTR_ERR(ext);
 
diff --git a/drivers/target/target_core_pr.c b/drivers/target/target_core_pr.c
index 5f79ea05f9b81..b42193c554fb2 100644
--- a/drivers/target/target_core_pr.c
+++ b/drivers/target/target_core_pr.c
@@ -3738,6 +3738,7 @@ core_scsi3_pri_read_keys(struct se_cmd *cmd)
 	spin_unlock(&dev->t10_pr.registration_lock);
 
 	put_unaligned_be32(add_len, &buf[4]);
+	target_set_cmd_data_length(cmd, 8 + add_len);
 
 	transport_kunmap_data_sg(cmd);
 
@@ -3756,7 +3757,7 @@ core_scsi3_pri_read_reservation(struct se_cmd *cmd)
 	struct t10_pr_registration *pr_reg;
 	unsigned char *buf;
 	u64 pr_res_key;
-	u32 add_len = 16; /* Hardcoded to 16 when a reservation is held. */
+	u32 add_len = 0;
 
 	if (cmd->data_length < 8) {
 		pr_err("PRIN SA READ_RESERVATIONS SCSI Data Length: %u"
@@ -3774,8 +3775,9 @@ core_scsi3_pri_read_reservation(struct se_cmd *cmd)
 	pr_reg = dev->dev_pr_res_holder;
 	if (pr_reg) {
 		/*
-		 * Set the hardcoded Additional Length
+		 * Set the Additional Length to 16 when a reservation is held
 		 */
+		add_len = 16;
 		put_unaligned_be32(add_len, &buf[4]);
 
 		if (cmd->data_length < 22)
@@ -3811,6 +3813,8 @@ core_scsi3_pri_read_reservation(struct se_cmd *cmd)
 			(pr_reg->pr_res_type & 0x0f);
 	}
 
+	target_set_cmd_data_length(cmd, 8 + add_len);
+
 err:
 	spin_unlock(&dev->dev_reservation_lock);
 	transport_kunmap_data_sg(cmd);
@@ -3829,7 +3833,7 @@ core_scsi3_pri_report_capabilities(struct se_cmd *cmd)
 	struct se_device *dev = cmd->se_dev;
 	struct t10_reservation *pr_tmpl = &dev->t10_pr;
 	unsigned char *buf;
-	u16 add_len = 8; /* Hardcoded to 8. */
+	u16 len = 8; /* Hardcoded to 8. */
 
 	if (cmd->data_length < 6) {
 		pr_err("PRIN SA REPORT_CAPABILITIES SCSI Data Length:"
@@ -3841,7 +3845,7 @@ core_scsi3_pri_report_capabilities(struct se_cmd *cmd)
 	if (!buf)
 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 
-	put_unaligned_be16(add_len, &buf[0]);
+	put_unaligned_be16(len, &buf[0]);
 	buf[2] |= 0x10; /* CRH: Compatible Reservation Hanlding bit. */
 	buf[2] |= 0x08; /* SIP_C: Specify Initiator Ports Capable bit */
 	buf[2] |= 0x04; /* ATP_C: All Target Ports Capable bit */
@@ -3870,6 +3874,8 @@ core_scsi3_pri_report_capabilities(struct se_cmd *cmd)
 	buf[4] |= 0x02; /* PR_TYPE_WRITE_EXCLUSIVE */
 	buf[5] |= 0x01; /* PR_TYPE_EXCLUSIVE_ACCESS_ALLREG */
 
+	target_set_cmd_data_length(cmd, len);
+
 	transport_kunmap_data_sg(cmd);
 
 	return 0;
@@ -4030,6 +4036,7 @@ core_scsi3_pri_read_full_status(struct se_cmd *cmd)
 	 * Set ADDITIONAL_LENGTH
 	 */
 	put_unaligned_be32(add_len, &buf[4]);
+	target_set_cmd_data_length(cmd, 8 + add_len);
 
 	transport_kunmap_data_sg(cmd);
 
diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
index ff26ab0a5f600..484f0ba0a65bb 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -873,11 +873,9 @@ void target_complete_cmd(struct se_cmd *cmd, u8 scsi_status)
 }
 EXPORT_SYMBOL(target_complete_cmd);
 
-void target_complete_cmd_with_length(struct se_cmd *cmd, u8 scsi_status, int length)
+void target_set_cmd_data_length(struct se_cmd *cmd, int length)
 {
-	if ((scsi_status == SAM_STAT_GOOD ||
-	     cmd->se_cmd_flags & SCF_TREAT_READ_AS_NORMAL) &&
-	    length < cmd->data_length) {
+	if (length < cmd->data_length) {
 		if (cmd->se_cmd_flags & SCF_UNDERFLOW_BIT) {
 			cmd->residual_count += cmd->data_length - length;
 		} else {
@@ -887,6 +885,15 @@ void target_complete_cmd_with_length(struct se_cmd *cmd, u8 scsi_status, int len
 
 		cmd->data_length = length;
 	}
+}
+EXPORT_SYMBOL(target_set_cmd_data_length);
+
+void target_complete_cmd_with_length(struct se_cmd *cmd, u8 scsi_status, int length)
+{
+	if (scsi_status == SAM_STAT_GOOD ||
+	    cmd->se_cmd_flags & SCF_TREAT_READ_AS_NORMAL) {
+		target_set_cmd_data_length(cmd, length);
+	}
 
 	target_complete_cmd(cmd, scsi_status);
 }
diff --git a/drivers/tty/serial/max310x.c b/drivers/tty/serial/max310x.c
index 21130af106bb6..8434bd5a8ec78 100644
--- a/drivers/tty/serial/max310x.c
+++ b/drivers/tty/serial/max310x.c
@@ -1056,9 +1056,9 @@ static int max310x_startup(struct uart_port *port)
 	max310x_port_update(port, MAX310X_MODE1_REG,
 			    MAX310X_MODE1_TRNSCVCTRL_BIT, 0);
 
-	/* Reset FIFOs */
-	max310x_port_write(port, MAX310X_MODE2_REG,
-			   MAX310X_MODE2_FIFORST_BIT);
+	/* Configure MODE2 register & Reset FIFOs*/
+	val = MAX310X_MODE2_RXEMPTINV_BIT | MAX310X_MODE2_FIFORST_BIT;
+	max310x_port_write(port, MAX310X_MODE2_REG, val);
 	max310x_port_update(port, MAX310X_MODE2_REG,
 			    MAX310X_MODE2_FIFORST_BIT, 0);
 
@@ -1086,27 +1086,8 @@ static int max310x_startup(struct uart_port *port)
 	/* Clear IRQ status register */
 	max310x_port_read(port, MAX310X_IRQSTS_REG);
 
-	/*
-	 * Let's ask for an interrupt after a timeout equivalent to
-	 * the receiving time of 4 characters after the last character
-	 * has been received.
-	 */
-	max310x_port_write(port, MAX310X_RXTO_REG, 4);
-
-	/*
-	 * Make sure we also get RX interrupts when the RX FIFO is
-	 * filling up quickly, so get an interrupt when half of the RX
-	 * FIFO has been filled in.
-	 */
-	max310x_port_write(port, MAX310X_FIFOTRIGLVL_REG,
-			   MAX310X_FIFOTRIGLVL_RX(MAX310X_FIFO_SIZE / 2));
-
-	/* Enable RX timeout interrupt in LSR */
-	max310x_port_write(port, MAX310X_LSR_IRQEN_REG,
-			   MAX310X_LSR_RXTO_BIT);
-
-	/* Enable LSR, RX FIFO trigger, CTS change interrupts */
-	val = MAX310X_IRQ_LSR_BIT | MAX310X_IRQ_RXFIFO_BIT | MAX310X_IRQ_TXEMPTY_BIT;
+	/* Enable RX, TX, CTS change interrupts */
+	val = MAX310X_IRQ_RXEMPTY_BIT | MAX310X_IRQ_TXEMPTY_BIT;
 	max310x_port_write(port, MAX310X_IRQEN_REG, val | MAX310X_IRQ_CTS_BIT);
 
 	return 0;
diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
index 781905745812e..2f4e5174e78c8 100644
--- a/drivers/usb/class/cdc-acm.c
+++ b/drivers/usb/class/cdc-acm.c
@@ -1929,6 +1929,11 @@ static const struct usb_device_id acm_ids[] = {
 	.driver_info = SEND_ZERO_PACKET,
 	},
 
+	/* Exclude Goodix Fingerprint Reader */
+	{ USB_DEVICE(0x27c6, 0x5395),
+	.driver_info = IGNORE_DEVICE,
+	},
+
 	/* control interfaces without any protocol set */
 	{ USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_ACM,
 		USB_CDC_PROTO_NONE) },
diff --git a/drivers/usb/class/usblp.c b/drivers/usb/class/usblp.c
index c9f6e97582885..f27b4aecff3d4 100644
--- a/drivers/usb/class/usblp.c
+++ b/drivers/usb/class/usblp.c
@@ -494,16 +494,24 @@ static int usblp_release(struct inode *inode, struct file *file)
 /* No kernel lock - fine */
 static __poll_t usblp_poll(struct file *file, struct poll_table_struct *wait)
 {
-	__poll_t ret;
+	struct usblp *usblp = file->private_data;
+	__poll_t ret = 0;
 	unsigned long flags;
 
-	struct usblp *usblp = file->private_data;
 	/* Should we check file->f_mode & FMODE_WRITE before poll_wait()? */
 	poll_wait(file, &usblp->rwait, wait);
 	poll_wait(file, &usblp->wwait, wait);
+
+	mutex_lock(&usblp->mut);
+	if (!usblp->present)
+		ret |= EPOLLHUP;
+	mutex_unlock(&usblp->mut);
+
 	spin_lock_irqsave(&usblp->lock, flags);
-	ret = ((usblp->bidir && usblp->rcomplete) ? EPOLLIN | EPOLLRDNORM : 0) |
-	      ((usblp->no_paper || usblp->wcomplete) ? EPOLLOUT | EPOLLWRNORM : 0);
+	if (usblp->bidir && usblp->rcomplete)
+		ret |= EPOLLIN | EPOLLRDNORM;
+	if (usblp->no_paper || usblp->wcomplete)
+		ret |= EPOLLOUT | EPOLLWRNORM;
 	spin_unlock_irqrestore(&usblp->lock, flags);
 	return ret;
 }
diff --git a/drivers/usb/core/usb.c b/drivers/usb/core/usb.c
index 9b4ac4415f1a4..db4de5367737a 100644
--- a/drivers/usb/core/usb.c
+++ b/drivers/usb/core/usb.c
@@ -748,6 +748,38 @@ void usb_put_intf(struct usb_interface *intf)
 }
 EXPORT_SYMBOL_GPL(usb_put_intf);
 
+/**
+ * usb_intf_get_dma_device - acquire a reference on the usb interface's DMA endpoint
+ * @intf: the usb interface
+ *
+ * While a USB device cannot perform DMA operations by itself, many USB
+ * controllers can. A call to usb_intf_get_dma_device() returns the DMA endpoint
+ * for the given USB interface, if any. The returned device structure must be
+ * released with put_device().
+ *
+ * See also usb_get_dma_device().
+ *
+ * Returns: A reference to the usb interface's DMA endpoint; or NULL if none
+ * exists.
+ */
+struct device *usb_intf_get_dma_device(struct usb_interface *intf)
+{
+	struct usb_device *udev = interface_to_usbdev(intf);
+	struct device *dmadev;
+
+	if (!udev->bus)
+		return NULL;
+
+	dmadev = get_device(udev->bus->sysdev);
+	if (!dmadev || !dmadev->dma_mask) {
+		put_device(dmadev);
+		return NULL;
+	}
+
+	return dmadev;
+}
+EXPORT_SYMBOL_GPL(usb_intf_get_dma_device);
+
 /* USB device locking
  *
  * USB devices and interfaces are locked using the semaphore in their
diff --git a/drivers/usb/dwc3/dwc3-qcom.c b/drivers/usb/dwc3/dwc3-qcom.c
index c703d552bbcfc..c00c4fa139b88 100644
--- a/drivers/usb/dwc3/dwc3-qcom.c
+++ b/drivers/usb/dwc3/dwc3-qcom.c
@@ -60,12 +60,14 @@ struct dwc3_acpi_pdata {
 	int			dp_hs_phy_irq_index;
 	int			dm_hs_phy_irq_index;
 	int			ss_phy_irq_index;
+	bool			is_urs;
 };
 
 struct dwc3_qcom {
 	struct device		*dev;
 	void __iomem		*qscratch_base;
 	struct platform_device	*dwc3;
+	struct platform_device	*urs_usb;
 	struct clk		**clks;
 	int			num_clocks;
 	struct reset_control	*resets;
@@ -356,8 +358,10 @@ static int dwc3_qcom_suspend(struct dwc3_qcom *qcom)
 	if (ret)
 		dev_warn(qcom->dev, "failed to disable interconnect: %d\n", ret);
 
+	if (device_may_wakeup(qcom->dev))
+		dwc3_qcom_enable_interrupts(qcom);
+
 	qcom->is_suspended = true;
-	dwc3_qcom_enable_interrupts(qcom);
 
 	return 0;
 }
@@ -370,7 +374,8 @@ static int dwc3_qcom_resume(struct dwc3_qcom *qcom)
 	if (!qcom->is_suspended)
 		return 0;
 
-	dwc3_qcom_disable_interrupts(qcom);
+	if (device_may_wakeup(qcom->dev))
+		dwc3_qcom_disable_interrupts(qcom);
 
 	for (i = 0; i < qcom->num_clocks; i++) {
 		ret = clk_prepare_enable(qcom->clks[i]);
@@ -429,13 +434,15 @@ static void dwc3_qcom_select_utmi_clk(struct dwc3_qcom *qcom)
 static int dwc3_qcom_get_irq(struct platform_device *pdev,
 			     const char *name, int num)
 {
+	struct dwc3_qcom *qcom = platform_get_drvdata(pdev);
+	struct platform_device *pdev_irq = qcom->urs_usb ? qcom->urs_usb : pdev;
 	struct device_node *np = pdev->dev.of_node;
 	int ret;
 
 	if (np)
-		ret = platform_get_irq_byname(pdev, name);
+		ret = platform_get_irq_byname(pdev_irq, name);
 	else
-		ret = platform_get_irq(pdev, num);
+		ret = platform_get_irq(pdev_irq, num);
 
 	return ret;
 }
@@ -568,6 +575,8 @@ static int dwc3_qcom_acpi_register_core(struct platform_device *pdev)
 	struct dwc3_qcom *qcom = platform_get_drvdata(pdev);
 	struct device *dev = &pdev->dev;
 	struct resource *res, *child_res = NULL;
+	struct platform_device *pdev_irq = qcom->urs_usb ? qcom->urs_usb :
+							   pdev;
 	int irq;
 	int ret;
 
@@ -597,7 +606,7 @@ static int dwc3_qcom_acpi_register_core(struct platform_device *pdev)
 	child_res[0].end = child_res[0].start +
 		qcom->acpi_pdata->dwc3_core_base_size;
 
-	irq = platform_get_irq(pdev, 0);
+	irq = platform_get_irq(pdev_irq, 0);
 	child_res[1].flags = IORESOURCE_IRQ;
 	child_res[1].start = child_res[1].end = irq;
 
@@ -639,16 +648,46 @@ static int dwc3_qcom_of_register_core(struct platform_device *pdev)
 	ret = of_platform_populate(np, NULL, NULL, dev);
 	if (ret) {
 		dev_err(dev, "failed to register dwc3 core - %d\n", ret);
-		return ret;
+		goto node_put;
 	}
 
 	qcom->dwc3 = of_find_device_by_node(dwc3_np);
 	if (!qcom->dwc3) {
+		ret = -ENODEV;
 		dev_err(dev, "failed to get dwc3 platform device\n");
-		return -ENODEV;
 	}
 
-	return 0;
+node_put:
+	of_node_put(dwc3_np);
+
+	return ret;
+}
+
+static struct platform_device *
+dwc3_qcom_create_urs_usb_platdev(struct device *dev)
+{
+	struct fwnode_handle *fwh;
+	struct acpi_device *adev;
+	char name[8];
+	int ret;
+	int id;
+
+	/* Figure out device id */
+	ret = sscanf(fwnode_get_name(dev->fwnode), "URS%d", &id);
+	if (!ret)
+		return NULL;
+
+	/* Find the child using name */
+	snprintf(name, sizeof(name), "USB%d", id);
+	fwh = fwnode_get_named_child_node(dev->fwnode, name);
+	if (!fwh)
+		return NULL;
+
+	adev = to_acpi_device_node(fwh);
+	if (!adev)
+		return NULL;
+
+	return acpi_create_platform_device(adev, NULL);
 }
 
 static int dwc3_qcom_probe(struct platform_device *pdev)
@@ -715,6 +754,14 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
 			qcom->acpi_pdata->qscratch_base_offset;
 		parent_res->end = parent_res->start +
 			qcom->acpi_pdata->qscratch_base_size;
+
+		if (qcom->acpi_pdata->is_urs) {
+			qcom->urs_usb = dwc3_qcom_create_urs_usb_platdev(dev);
+			if (!qcom->urs_usb) {
+				dev_err(dev, "failed to create URS USB platdev\n");
+				return -ENODEV;
+			}
+		}
 	}
 
 	qcom->qscratch_base = devm_ioremap_resource(dev, parent_res);
@@ -877,8 +924,22 @@ static const struct dwc3_acpi_pdata sdm845_acpi_pdata = {
 	.ss_phy_irq_index = 2
 };
 
+static const struct dwc3_acpi_pdata sdm845_acpi_urs_pdata = {
+	.qscratch_base_offset = SDM845_QSCRATCH_BASE_OFFSET,
+	.qscratch_base_size = SDM845_QSCRATCH_SIZE,
+	.dwc3_core_base_size = SDM845_DWC3_CORE_SIZE,
+	.hs_phy_irq_index = 1,
+	.dp_hs_phy_irq_index = 4,
+	.dm_hs_phy_irq_index = 3,
+	.ss_phy_irq_index = 2,
+	.is_urs = true,
+};
+
 static const struct acpi_device_id dwc3_qcom_acpi_match[] = {
 	{ "QCOM2430", (unsigned long)&sdm845_acpi_pdata },
+	{ "QCOM0304", (unsigned long)&sdm845_acpi_urs_pdata },
+	{ "QCOM0497", (unsigned long)&sdm845_acpi_urs_pdata },
+	{ "QCOM04A6", (unsigned long)&sdm845_acpi_pdata },
 	{ },
 };
 MODULE_DEVICE_TABLE(acpi, dwc3_qcom_acpi_match);
diff --git a/drivers/usb/gadget/function/f_uac1.c b/drivers/usb/gadget/function/f_uac1.c
index 00d346965f7a5..560382e0a8f38 100644
--- a/drivers/usb/gadget/function/f_uac1.c
+++ b/drivers/usb/gadget/function/f_uac1.c
@@ -499,6 +499,7 @@ static void f_audio_disable(struct usb_function *f)
 	uac1->as_out_alt = 0;
 	uac1->as_in_alt = 0;
 
+	u_audio_stop_playback(&uac1->g_audio);
 	u_audio_stop_capture(&uac1->g_audio);
 }
 
diff --git a/drivers/usb/gadget/function/f_uac2.c b/drivers/usb/gadget/function/f_uac2.c
index 5d960b6603b6f..6f03e944e0e31 100644
--- a/drivers/usb/gadget/function/f_uac2.c
+++ b/drivers/usb/gadget/function/f_uac2.c
@@ -478,7 +478,7 @@ static int set_ep_max_packet_size(const struct f_uac2_opts *uac2_opts,
 	}
 
 	max_size_bw = num_channels(chmask) * ssize *
-		DIV_ROUND_UP(srate, factor / (1 << (ep_desc->bInterval - 1)));
+		((srate / (factor / (1 << (ep_desc->bInterval - 1)))) + 1);
 	ep_desc->wMaxPacketSize = cpu_to_le16(min_t(u16, max_size_bw,
 						    max_size_ep));
 
diff --git a/drivers/usb/gadget/function/u_ether_configfs.h b/drivers/usb/gadget/function/u_ether_configfs.h
index bd92b57030131..f982e18a5a789 100644
--- a/drivers/usb/gadget/function/u_ether_configfs.h
+++ b/drivers/usb/gadget/function/u_ether_configfs.h
@@ -169,12 +169,11 @@ out: \
 					       size_t len)		\
 	{								\
 		struct f_##_f_##_opts *opts = to_f_##_f_##_opts(item);	\
-		int ret;						\
+		int ret = -EINVAL;					\
 		u8 val;							\
 									\
 		mutex_lock(&opts->lock);				\
-		ret = sscanf(page, "%02hhx", &val);			\
-		if (ret > 0) {						\
+		if (sscanf(page, "%02hhx", &val) > 0) {			\
 			opts->_n_ = val;				\
 			ret = len;					\
 		}							\
diff --git a/drivers/usb/gadget/udc/s3c2410_udc.c b/drivers/usb/gadget/udc/s3c2410_udc.c
index f1ea51476add0..1d3ebb07ccd4d 100644
--- a/drivers/usb/gadget/udc/s3c2410_udc.c
+++ b/drivers/usb/gadget/udc/s3c2410_udc.c
@@ -1773,8 +1773,8 @@ static int s3c2410_udc_probe(struct platform_device *pdev)
 	udc_info = dev_get_platdata(&pdev->dev);
 
 	base_addr = devm_platform_ioremap_resource(pdev, 0);
-	if (!base_addr) {
-		retval = -ENOMEM;
+	if (IS_ERR(base_addr)) {
+		retval = PTR_ERR(base_addr);
 		goto err_mem;
 	}
 
diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
index 84da8406d5b42..5bbccc9a0179f 100644
--- a/drivers/usb/host/xhci-pci.c
+++ b/drivers/usb/host/xhci-pci.c
@@ -66,6 +66,7 @@
 #define PCI_DEVICE_ID_ASMEDIA_1042A_XHCI	0x1142
 #define PCI_DEVICE_ID_ASMEDIA_1142_XHCI		0x1242
 #define PCI_DEVICE_ID_ASMEDIA_2142_XHCI		0x2142
+#define PCI_DEVICE_ID_ASMEDIA_3242_XHCI		0x3242
 
 static const char hcd_name[] = "xhci_hcd";
 
@@ -276,11 +277,14 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
 	    pdev->device == PCI_DEVICE_ID_ASMEDIA_1042_XHCI)
 		xhci->quirks |= XHCI_BROKEN_STREAMS;
 	if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
-	    pdev->device == PCI_DEVICE_ID_ASMEDIA_1042A_XHCI)
+	    pdev->device == PCI_DEVICE_ID_ASMEDIA_1042A_XHCI) {
 		xhci->quirks |= XHCI_TRUST_TX_LENGTH;
+		xhci->quirks |= XHCI_NO_64BIT_SUPPORT;
+	}
 	if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
 	    (pdev->device == PCI_DEVICE_ID_ASMEDIA_1142_XHCI ||
-	     pdev->device == PCI_DEVICE_ID_ASMEDIA_2142_XHCI))
+	     pdev->device == PCI_DEVICE_ID_ASMEDIA_2142_XHCI ||
+	     pdev->device == PCI_DEVICE_ID_ASMEDIA_3242_XHCI))
 		xhci->quirks |= XHCI_NO_64BIT_SUPPORT;
 
 	if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
@@ -295,6 +299,11 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
 	    pdev->device == 0x9026)
 		xhci->quirks |= XHCI_RESET_PLL_ON_DISCONNECT;
 
+	if (pdev->vendor == PCI_VENDOR_ID_AMD &&
+	    (pdev->device == PCI_DEVICE_ID_AMD_PROMONTORYA_2 ||
+	     pdev->device == PCI_DEVICE_ID_AMD_PROMONTORYA_4))
+		xhci->quirks |= XHCI_NO_SOFT_RETRY;
+
 	if (xhci->quirks & XHCI_RESET_ON_RESUME)
 		xhci_dbg_trace(xhci, trace_xhci_dbg_quirks,
 				"QUIRK: Resetting on resume");
diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index 061d5c51405fb..054840a69eb4a 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -2307,7 +2307,8 @@ static int process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_td *td,
 		remaining = 0;
 		break;
 	case COMP_USB_TRANSACTION_ERROR:
-		if ((ep_ring->err_count++ > MAX_SOFT_RETRY) ||
+		if (xhci->quirks & XHCI_NO_SOFT_RETRY ||
+		    (ep_ring->err_count++ > MAX_SOFT_RETRY) ||
 		    le32_to_cpu(slot_ctx->tt_info) & TT_SLOT)
 			break;
 		*status = 0;
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index d17bbb162810a..c449de6164b18 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -883,44 +883,42 @@ static void xhci_clear_command_ring(struct xhci_hcd *xhci)
 	xhci_set_cmd_ring_deq(xhci);
 }
 
-static void xhci_disable_port_wake_on_bits(struct xhci_hcd *xhci)
+/*
+ * Disable port wake bits if do_wakeup is not set.
+ *
+ * Also clear a possible internal port wake state left hanging for ports that
+ * detected termination but never successfully enumerated (trained to 0U).
+ * Internal wake causes immediate xHCI wake after suspend. PORT_CSC write done
+ * at enumeration clears this wake, force one here as well for unconnected ports
+ */
+
+static void xhci_disable_hub_port_wake(struct xhci_hcd *xhci,
+				       struct xhci_hub *rhub,
+				       bool do_wakeup)
 {
-	struct xhci_port **ports;
-	int port_index;
 	unsigned long flags;
 	u32 t1, t2, portsc;
+	int i;
 
 	spin_lock_irqsave(&xhci->lock, flags);
 
-	/* disable usb3 ports Wake bits */
-	port_index = xhci->usb3_rhub.num_ports;
-	ports = xhci->usb3_rhub.ports;
-	while (port_index--) {
-		t1 = readl(ports[port_index]->addr);
-		portsc = t1;
-		t1 = xhci_port_state_to_neutral(t1);
-		t2 = t1 & ~PORT_WAKE_BITS;
-		if (t1 != t2) {
-			writel(t2, ports[port_index]->addr);
-			xhci_dbg(xhci, "disable wake bits port %d-%d, portsc: 0x%x, write: 0x%x\n",
-				 xhci->usb3_rhub.hcd->self.busnum,
-				 port_index + 1, portsc, t2);
-		}
-	}
+	for (i = 0; i < rhub->num_ports; i++) {
+		portsc = readl(rhub->ports[i]->addr);
+		t1 = xhci_port_state_to_neutral(portsc);
+		t2 = t1;
+
+		/* clear wake bits if do_wake is not set */
+		if (!do_wakeup)
+			t2 &= ~PORT_WAKE_BITS;
+
+		/* Don't touch csc bit if connected or connect change is set */
+		if (!(portsc & (PORT_CSC | PORT_CONNECT)))
+			t2 |= PORT_CSC;
 
-	/* disable usb2 ports Wake bits */
-	port_index = xhci->usb2_rhub.num_ports;
-	ports = xhci->usb2_rhub.ports;
-	while (port_index--) {
-		t1 = readl(ports[port_index]->addr);
-		portsc = t1;
-		t1 = xhci_port_state_to_neutral(t1);
-		t2 = t1 & ~PORT_WAKE_BITS;
 		if (t1 != t2) {
-			writel(t2, ports[port_index]->addr);
-			xhci_dbg(xhci, "disable wake bits port %d-%d, portsc: 0x%x, write: 0x%x\n",
-				 xhci->usb2_rhub.hcd->self.busnum,
-				 port_index + 1, portsc, t2);
+			writel(t2, rhub->ports[i]->addr);
+			xhci_dbg(xhci, "config port %d-%d wake bits, portsc: 0x%x, write: 0x%x\n",
+				 rhub->hcd->self.busnum, i + 1, portsc, t2);
 		}
 	}
 	spin_unlock_irqrestore(&xhci->lock, flags);
@@ -983,8 +981,8 @@ int xhci_suspend(struct xhci_hcd *xhci, bool do_wakeup)
 		return -EINVAL;
 
 	/* Clear root port wake on bits if wakeup not allowed. */
-	if (!do_wakeup)
-		xhci_disable_port_wake_on_bits(xhci);
+	xhci_disable_hub_port_wake(xhci, &xhci->usb3_rhub, do_wakeup);
+	xhci_disable_hub_port_wake(xhci, &xhci->usb2_rhub, do_wakeup);
 
 	if (!HCD_HW_ACCESSIBLE(hcd))
 		return 0;
@@ -1088,6 +1086,7 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
 	struct usb_hcd		*secondary_hcd;
 	int			retval = 0;
 	bool			comp_timer_running = false;
+	bool			pending_portevent = false;
 
 	if (!hcd->state)
 		return 0;
@@ -1226,13 +1225,22 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
 
  done:
 	if (retval == 0) {
-		/* Resume root hubs only when have pending events. */
-		if (xhci_pending_portevent(xhci)) {
+		/*
+		 * Resume roothubs only if there are pending events.
+		 * USB 3 devices resend U3 LFPS wake after a 100ms delay if
+		 * the first wake signalling failed, give it that chance.
+		 */
+		pending_portevent = xhci_pending_portevent(xhci);
+		if (!pending_portevent) {
+			msleep(120);
+			pending_portevent = xhci_pending_portevent(xhci);
+		}
+
+		if (pending_portevent) {
 			usb_hcd_resume_root_hub(xhci->shared_hcd);
 			usb_hcd_resume_root_hub(hcd);
 		}
 	}
-
 	/*
 	 * If system is subject to the Quirk, Compliance Mode Timer needs to
 	 * be re-initialized Always after a system resume. Ports are subject
diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
index 045740ad9c1ec..d01241f1daf3b 100644
--- a/drivers/usb/host/xhci.h
+++ b/drivers/usb/host/xhci.h
@@ -1879,6 +1879,7 @@ struct xhci_hcd {
#define XHCI_SKIP_PHY_INIT	BIT_ULL(37)
#define XHCI_DISABLE_SPARSE	BIT_ULL(38)
#define XHCI_SG_TRB_CACHE_SIZE_QUIRK	BIT_ULL(39)
+#define XHCI_NO_SOFT_RETRY	BIT_ULL(40)
 
 	unsigned int		num_active_eps;
 	unsigned int		limit_active_eps;
diff --git a/drivers/usb/renesas_usbhs/pipe.c b/drivers/usb/renesas_usbhs/pipe.c
index e7334b7fb3a62..75fff2e4cbc65 100644
--- a/drivers/usb/renesas_usbhs/pipe.c
+++ b/drivers/usb/renesas_usbhs/pipe.c
@@ -746,6 +746,8 @@ struct usbhs_pipe *usbhs_pipe_malloc(struct usbhs_priv *priv,
 
 void usbhs_pipe_free(struct usbhs_pipe *pipe)
 {
+	usbhsp_pipe_select(pipe);
+	usbhsp_pipe_cfg_set(pipe, 0xFFFF, 0);
 	usbhsp_put_pipe(pipe);
 }
 
diff --git a/drivers/usb/serial/ch341.c b/drivers/usb/serial/ch341.c
index 28deaaec581f6..f26861246f653 100644
--- a/drivers/usb/serial/ch341.c
+++ b/drivers/usb/serial/ch341.c
@@ -86,6 +86,7 @@ static const struct usb_device_id id_table[] = {
 	{ USB_DEVICE(0x1a86, 0x7522) },
 	{ USB_DEVICE(0x1a86, 0x7523) },
 	{ USB_DEVICE(0x4348, 0x5523) },
+	{ USB_DEVICE(0x9986, 0x7523) },
 	{ },
 };
 MODULE_DEVICE_TABLE(usb, id_table);
diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
index bf11f86896837..b5f4e584f3c9e 100644
--- a/drivers/usb/serial/cp210x.c
+++ b/drivers/usb/serial/cp210x.c
@@ -149,6 +149,7 @@ static const struct usb_device_id id_table[] = {
 	{ USB_DEVICE(0x10C4, 0x8857) },	/* CEL EM357 ZigBee USB Stick */
 	{ USB_DEVICE(0x10C4, 0x88A4) }, /* MMB Networks ZigBee USB Device */
 	{ USB_DEVICE(0x10C4, 0x88A5) }, /* Planet Innovation Ingeni ZigBee USB Device */
+	{ USB_DEVICE(0x10C4, 0x88D8) }, /* Acuity Brands nLight Air Adapter */
 	{ USB_DEVICE(0x10C4, 0x88FB) }, /* CESINEL MEDCAL STII Network Analyzer */
 	{ USB_DEVICE(0x10C4, 0x8938) }, /* CESINEL MEDCAL S II Network Analyzer */
 	{ USB_DEVICE(0x10C4, 0x8946) }, /* Ketra N1 Wireless Interface */
@@ -205,6 +206,8 @@ static const struct usb_device_id id_table[] = {
 	{ USB_DEVICE(0x1901, 0x0194) },	/* GE Healthcare Remote Alarm Box */
 	{ USB_DEVICE(0x1901, 0x0195) },	/* GE B850/B650/B450 CP2104 DP UART interface */
 	{ USB_DEVICE(0x1901, 0x0196) },	/* GE B850 CP2105 DP UART interface */
+	{ USB_DEVICE(0x1901, 0x0197) },	/* GE CS1000 Display serial interface */
+	{ USB_DEVICE(0x1901, 0x0198) },	/* GE CS1000 M.2 Key E serial interface */
 	{ USB_DEVICE(0x199B, 0xBA30) },	/* LORD WSDA-200-USB */
 	{ USB_DEVICE(0x19CF, 0x3000) },	/* Parrot NMEA GPS Flight Recorder */
 	{ USB_DEVICE(0x1ADB, 0x0001) },	/* Schweitzer Engineering C662 Cable */
diff --git a/drivers/usb/serial/io_edgeport.c b/drivers/usb/serial/io_edgeport.c
index ba5d8df695189..4b48ef4adbeb6 100644
--- a/drivers/usb/serial/io_edgeport.c
+++ b/drivers/usb/serial/io_edgeport.c
@@ -3003,26 +3003,32 @@ static int edge_startup(struct usb_serial *serial)
 			response = -ENODEV;
 		}
 
-		usb_free_urb(edge_serial->interrupt_read_urb);
-		kfree(edge_serial->interrupt_in_buffer);
-
-		usb_free_urb(edge_serial->read_urb);
-		kfree(edge_serial->bulk_in_buffer);
-
-		kfree(edge_serial);
-
-		return response;
+		goto error;
 	}
 
 	/* start interrupt read for this edgeport this interrupt will
	 * continue as long as the edgeport is connected */
	response = usb_submit_urb(edge_serial->interrupt_read_urb,
			GFP_KERNEL);
-	if (response)
+	if (response) {
		dev_err(ddev, "%s - Error %d submitting control urb\n",
			__func__, response);
+
+		goto error;
+	}
	}
	return response;
+
+error:
+	usb_free_urb(edge_serial->interrupt_read_urb);
+	kfree(edge_serial->interrupt_in_buffer);
+
+	usb_free_urb(edge_serial->read_urb);
+	kfree(edge_serial->bulk_in_buffer);
+
+	kfree(edge_serial);
+
+	return response;
 }
 
 
diff --git a/drivers/usb/usbip/stub_dev.c b/drivers/usb/usbip/stub_dev.c
index 2305d425e6c9a..8f1de1fbbeedf 100644
--- a/drivers/usb/usbip/stub_dev.c
+++ b/drivers/usb/usbip/stub_dev.c
@@ -46,6 +46,8 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
        int sockfd = 0;
        struct socket *socket;
        int rv;
+       struct task_struct *tcp_rx = NULL;
+       struct task_struct *tcp_tx = NULL;
 
        if (!sdev) {
                dev_err(dev, "sdev is null\n");
@@ -69,23 +71,47 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
                }
 
                socket = sockfd_lookup(sockfd, &err);
-               if (!socket)
+               if (!socket) {
+                       dev_err(dev, "failed to lookup sock");
                        goto err;
+               }
 
-               sdev->ud.tcp_socket = socket;
-               sdev->ud.sockfd = sockfd;
+               if (socket->type != SOCK_STREAM) {
+                       dev_err(dev, "Expecting SOCK_STREAM - found %d",
+                               socket->type);
+                       goto sock_err;
+               }
 
+               /* unlock and create threads and get tasks */
                spin_unlock_irq(&sdev->ud.lock);
+               tcp_rx = kthread_create(stub_rx_loop, &sdev->ud, "stub_rx");
+               if (IS_ERR(tcp_rx)) {
+                       sockfd_put(socket);
+                       return -EINVAL;
+               }
+               tcp_tx = kthread_create(stub_tx_loop, &sdev->ud, "stub_tx");
+               if (IS_ERR(tcp_tx)) {
+                       kthread_stop(tcp_rx);
+                       sockfd_put(socket);
+                       return -EINVAL;
+               }
 
-               sdev->ud.tcp_rx = kthread_get_run(stub_rx_loop, &sdev->ud,
-                                                 "stub_rx");
-               sdev->ud.tcp_tx = kthread_get_run(stub_tx_loop, &sdev->ud,
-                                                 "stub_tx");
+               /* get task structs now */
+               get_task_struct(tcp_rx);
+               get_task_struct(tcp_tx);
 
+               /* lock and update sdev->ud state */
                spin_lock_irq(&sdev->ud.lock);
+               sdev->ud.tcp_socket = socket;
+               sdev->ud.sockfd = sockfd;
+               sdev->ud.tcp_rx = tcp_rx;
+               sdev->ud.tcp_tx = tcp_tx;
                sdev->ud.status = SDEV_ST_USED;
                spin_unlock_irq(&sdev->ud.lock);
 
+               wake_up_process(sdev->ud.tcp_rx);
+               wake_up_process(sdev->ud.tcp_tx);
+
        } else {
                dev_info(dev, "stub down\n");
 
@@ -100,6 +126,8 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
 
        return count;
 
+sock_err:
+       sockfd_put(socket);
 err:
        spin_unlock_irq(&sdev->ud.lock);
        return -EINVAL;
diff --git a/drivers/usb/usbip/vhci_sysfs.c b/drivers/usb/usbip/vhci_sysfs.c
index be37aec250c2b..e64ea314930be 100644
--- a/drivers/usb/usbip/vhci_sysfs.c
+++ b/drivers/usb/usbip/vhci_sysfs.c
@@ -312,6 +312,8 @@ static ssize_t attach_store(struct device *dev, struct device_attribute *attr,
        struct vhci *vhci;
        int err;
        unsigned long flags;
+       struct task_struct *tcp_rx = NULL;
+       struct task_struct *tcp_tx = NULL;
 
        /*
         * @rhport: port number of vhci_hcd
@@ -349,12 +351,35 @@ static ssize_t attach_store(struct device *dev, struct device_attribute *attr,
 
        /* Extract socket from fd. */
        socket = sockfd_lookup(sockfd, &err);
-       if (!socket)
+       if (!socket) {
+               dev_err(dev, "failed to lookup sock");
                return -EINVAL;
+       }
+       if (socket->type != SOCK_STREAM) {
+               dev_err(dev, "Expecting SOCK_STREAM - found %d",
+                       socket->type);
+               sockfd_put(socket);
+               return -EINVAL;
+       }
+
+       /* create threads before locking */
+       tcp_rx = kthread_create(vhci_rx_loop, &vdev->ud, "vhci_rx");
+       if (IS_ERR(tcp_rx)) {
+               sockfd_put(socket);
+               return -EINVAL;
+       }
+       tcp_tx = kthread_create(vhci_tx_loop, &vdev->ud, "vhci_tx");
+       if (IS_ERR(tcp_tx)) {
+               kthread_stop(tcp_rx);
+               sockfd_put(socket);
+               return -EINVAL;
+       }
 
-       /* now need lock until setting vdev status as used */
+       /* get task structs now */
+       get_task_struct(tcp_rx);
+       get_task_struct(tcp_tx);
 
-       /* begin a lock */
+       /* now begin lock until setting vdev status set */
        spin_lock_irqsave(&vhci->lock, flags);
        spin_lock(&vdev->ud.lock);
 
@@ -364,6 +389,8 @@ static ssize_t attach_store(struct device *dev, struct device_attribute *attr,
                spin_unlock_irqrestore(&vhci->lock, flags);
 
                sockfd_put(socket);
+               kthread_stop_put(tcp_rx);
+               kthread_stop_put(tcp_tx);
 
                dev_err(dev, "port %d already used\n", rhport);
                /*
@@ -382,14 +409,16 @@ static ssize_t attach_store(struct device *dev, struct device_attribute *attr,
        vdev->speed = speed;
        vdev->ud.sockfd = sockfd;
        vdev->ud.tcp_socket = socket;
+       vdev->ud.tcp_rx = tcp_rx;
+       vdev->ud.tcp_tx = tcp_tx;
        vdev->ud.status = VDEV_ST_NOTASSIGNED;
 
        spin_unlock(&vdev->ud.lock);
        spin_unlock_irqrestore(&vhci->lock, flags);
        /* end the lock */
 
-       vdev->ud.tcp_rx = kthread_get_run(vhci_rx_loop, &vdev->ud, "vhci_rx");
-       vdev->ud.tcp_tx = kthread_get_run(vhci_tx_loop, &vdev->ud, "vhci_tx");
+       wake_up_process(vdev->ud.tcp_rx);
+       wake_up_process(vdev->ud.tcp_tx);
 
        rh_port_connect(vdev, speed);
 
diff --git a/drivers/usb/usbip/vudc_sysfs.c b/drivers/usb/usbip/vudc_sysfs.c
index 100f680c572ae..a3ec39fc61778 100644
--- a/drivers/usb/usbip/vudc_sysfs.c
+++ b/drivers/usb/usbip/vudc_sysfs.c
@@ -90,8 +90,9 @@ unlock:
 }
 static BIN_ATTR_RO(dev_desc, sizeof(struct usb_device_descriptor));
 
-static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *attr,
-                    const char *in, size_t count)
+static ssize_t usbip_sockfd_store(struct device *dev,
+                                 struct device_attribute *attr,
+                                 const char *in, size_t count)
 {
        struct vudc *udc = (struct vudc *) dev_get_drvdata(dev);
        int rv;
@@ -100,6 +101,8 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
        struct socket *socket;
        unsigned long flags;
        int ret;
+       struct task_struct *tcp_rx = NULL;
+       struct task_struct *tcp_tx = NULL;
 
        rv = kstrtoint(in, 0, &sockfd);
        if (rv != 0)
@@ -138,24 +141,54 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
                        goto unlock_ud;
                }
 
-               udc->ud.tcp_socket = socket;
+               if (socket->type != SOCK_STREAM) {
+                       dev_err(dev, "Expecting SOCK_STREAM - found %d",
+                               socket->type);
+                       ret = -EINVAL;
+                       goto sock_err;
+               }
 
+               /* unlock and create threads and get tasks */
                spin_unlock_irq(&udc->ud.lock);
                spin_unlock_irqrestore(&udc->lock, flags);
 
-               udc->ud.tcp_rx = kthread_get_run(&v_rx_loop,
-                                                   &udc->ud, "vudc_rx");
-               udc->ud.tcp_tx = kthread_get_run(&v_tx_loop,
-                                                   &udc->ud, "vudc_tx");
+               tcp_rx = kthread_create(&v_rx_loop, &udc->ud, "vudc_rx");
+               if (IS_ERR(tcp_rx)) {
+                       sockfd_put(socket);
+                       return -EINVAL;
+               }
+               tcp_tx = kthread_create(&v_tx_loop, &udc->ud, "vudc_tx");
+               if (IS_ERR(tcp_tx)) {
+                       kthread_stop(tcp_rx);
+                       sockfd_put(socket);
+                       return -EINVAL;
+               }
+
+               /* get task structs now */
+               get_task_struct(tcp_rx);
+               get_task_struct(tcp_tx);
 
+               /* lock and update udc->ud state */
                spin_lock_irqsave(&udc->lock, flags);
                spin_lock_irq(&udc->ud.lock);
+
+               udc->ud.tcp_socket = socket;
+               udc->ud.tcp_rx = tcp_rx;
+               udc->ud.tcp_tx = tcp_tx;
                udc->ud.status = SDEV_ST_USED;
+
                spin_unlock_irq(&udc->ud.lock);
 
                ktime_get_ts64(&udc->start_time);
                v_start_timer(udc);
                udc->connected = 1;
+
+               spin_unlock_irqrestore(&udc->lock, flags);
+
+               wake_up_process(udc->ud.tcp_rx);
+               wake_up_process(udc->ud.tcp_tx);
+               return count;
+
        } else {
                if (!udc->connected) {
                        dev_err(dev, "Device not connected");
@@ -177,6 +210,8 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
 
        return count;
 
+sock_err:
+       sockfd_put(socket);
 unlock_ud:
        spin_unlock_irq(&udc->ud.lock);
 unlock:
diff --git a/drivers/xen/events/events_2l.c b/drivers/xen/events/events_2l.c
index da87f3a1e351b..b8f2f971c2f0f 100644
--- a/drivers/xen/events/events_2l.c
+++ b/drivers/xen/events/events_2l.c
@@ -47,6 +47,11 @@ static unsigned evtchn_2l_max_channels(void)
        return EVTCHN_2L_NR_CHANNELS;
 }
 
+static void evtchn_2l_remove(evtchn_port_t evtchn, unsigned int cpu)
+{
+       clear_bit(evtchn, BM(per_cpu(cpu_evtchn_mask, cpu)));
+}
+
 static void evtchn_2l_bind_to_cpu(evtchn_port_t evtchn, unsigned int cpu,
                                  unsigned int old_cpu)
 {
@@ -72,12 +77,6 @@ static bool evtchn_2l_is_pending(evtchn_port_t port)
        return sync_test_bit(port, BM(&s->evtchn_pending[0]));
 }
 
-static bool evtchn_2l_test_and_set_mask(evtchn_port_t port)
-{
-       struct shared_info *s = HYPERVISOR_shared_info;
-       return sync_test_and_set_bit(port, BM(&s->evtchn_mask[0]));
-}
-
 static void evtchn_2l_mask(evtchn_port_t port)
 {
        struct shared_info *s = HYPERVISOR_shared_info;
@@ -355,18 +354,27 @@ static void evtchn_2l_resume(void)
                     EVTCHN_2L_NR_CHANNELS/BITS_PER_EVTCHN_WORD);
 }
 
+static int evtchn_2l_percpu_deinit(unsigned int cpu)
+{
+       memset(per_cpu(cpu_evtchn_mask, cpu), 0, sizeof(xen_ulong_t) *
+                       EVTCHN_2L_NR_CHANNELS/BITS_PER_EVTCHN_WORD);
+
+       return 0;
+}
+
 static const struct evtchn_ops evtchn_ops_2l = {
        .max_channels      = evtchn_2l_max_channels,
        .nr_channels       = evtchn_2l_max_channels,
+       .remove            = evtchn_2l_remove,
        .bind_to_cpu       = evtchn_2l_bind_to_cpu,
        .clear_pending     = evtchn_2l_clear_pending,
        .set_pending       = evtchn_2l_set_pending,
        .is_pending        = evtchn_2l_is_pending,
-       .test_and_set_mask = evtchn_2l_test_and_set_mask,
        .mask              = evtchn_2l_mask,
        .unmask            = evtchn_2l_unmask,
        .handle_events     = evtchn_2l_handle_events,
        .resume            = evtchn_2l_resume,
+       .percpu_deinit     = evtchn_2l_percpu_deinit,
 };
 
 void __init xen_evtchn_2l_init(void)
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index bbebe248b7264..7bd03f6e0422f 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -96,13 +96,19 @@ struct irq_info {
        struct list_head eoi_list;
        short refcnt;
        short spurious_cnt;
-       enum xen_irq_type type; /* type */
+       short type;             /* type */
+       u8 mask_reason;         /* Why is event channel masked */
+#define EVT_MASK_REASON_EXPLICIT       0x01
+#define EVT_MASK_REASON_TEMPORARY      0x02
+#define EVT_MASK_REASON_EOI_PENDING    0x04
+       u8 is_active;           /* Is event just being handled? */
        unsigned irq;
        evtchn_port_t evtchn;   /* event channel */
        unsigned short cpu;     /* cpu bound */
        unsigned short eoi_cpu; /* EOI must happen on this cpu-1 */
        unsigned int irq_epoch; /* If eoi_cpu valid: irq_epoch of event */
        u64 eoi_time;           /* Time in jiffies when to EOI. */
+       spinlock_t lock;
 
        union {
                unsigned short virq;
@@ -151,6 +157,7 @@ static DEFINE_RWLOCK(evtchn_rwlock);
  *   evtchn_rwlock
  *     IRQ-desc lock
  *       percpu eoi_list_lock
+ *         irq_info->lock
  */
 
 static LIST_HEAD(xen_irq_list_head);
@@ -272,6 +279,8 @@ static int xen_irq_info_common_setup(struct irq_info *info,
        info->irq = irq;
        info->evtchn = evtchn;
        info->cpu = cpu;
+       info->mask_reason = EVT_MASK_REASON_EXPLICIT;
+       spin_lock_init(&info->lock);
 
        ret = set_evtchn_to_irq(evtchn, irq);
        if (ret < 0)
@@ -338,6 +347,7 @@ static int xen_irq_info_pirq_setup(unsigned irq,
 static void xen_irq_info_cleanup(struct irq_info *info)
 {
        set_evtchn_to_irq(info->evtchn, -1);
+       xen_evtchn_port_remove(info->evtchn, info->cpu);
        info->evtchn = 0;
 }
 
@@ -418,6 +428,34 @@ unsigned int cpu_from_evtchn(evtchn_port_t evtchn)
        return ret;
 }
 
+static void do_mask(struct irq_info *info, u8 reason)
+{
+       unsigned long flags;
+
+       spin_lock_irqsave(&info->lock, flags);
+
+       if (!info->mask_reason)
+               mask_evtchn(info->evtchn);
+
+       info->mask_reason |= reason;
+
+       spin_unlock_irqrestore(&info->lock, flags);
+}
+
+static void do_unmask(struct irq_info *info, u8 reason)
+{
+       unsigned long flags;
+
+       spin_lock_irqsave(&info->lock, flags);
+
+       info->mask_reason &= ~reason;
+
+       if (!info->mask_reason)
+               unmask_evtchn(info->evtchn);
+
+       spin_unlock_irqrestore(&info->lock, flags);
+}
+
 #ifdef CONFIG_X86
 static bool pirq_check_eoi_map(unsigned irq)
 {
@@ -545,7 +583,7 @@ static void xen_irq_lateeoi_locked(struct irq_info *info, bool spurious)
        }
 
        info->eoi_time = 0;
-       unmask_evtchn(evtchn);
+       do_unmask(info, EVT_MASK_REASON_EOI_PENDING);
 }
 
 static void xen_irq_lateeoi_worker(struct work_struct *work)
@@ -714,6 +752,12 @@ static void xen_evtchn_close(evtchn_port_t port)
                BUG();
 }
 
+static void event_handler_exit(struct irq_info *info)
+{
+       smp_store_release(&info->is_active, 0);
+       clear_evtchn(info->evtchn);
+}
+
 static void pirq_query_unmask(int irq)
 {
        struct physdev_irq_status_query irq_status;
@@ -732,7 +776,8 @@ static void pirq_query_unmask(int irq)
 
 static void eoi_pirq(struct irq_data *data)
 {
-       evtchn_port_t evtchn = evtchn_from_irq(data->irq);
+       struct irq_info *info = info_for_irq(data->irq);
+       evtchn_port_t evtchn = info ? info->evtchn : 0;
        struct physdev_eoi eoi = { .irq = pirq_from_irq(data->irq) };
        int rc = 0;
 
@@ -741,16 +786,15 @@ static void eoi_pirq(struct irq_data *data)
 
        if (unlikely(irqd_is_setaffinity_pending(data)) &&
            likely(!irqd_irq_disabled(data))) {
-               int masked = test_and_set_mask(evtchn);
+               do_mask(info, EVT_MASK_REASON_TEMPORARY);
 
-               clear_evtchn(evtchn);
+               event_handler_exit(info);
 
                irq_move_masked_irq(data);
 
-               if (!masked)
-                       unmask_evtchn(evtchn);
+               do_unmask(info, EVT_MASK_REASON_TEMPORARY);
        } else
-               clear_evtchn(evtchn);
+               event_handler_exit(info);
 
        if (pirq_needs_eoi(data->irq)) {
                rc = HYPERVISOR_physdev_op(PHYSDEVOP_eoi, &eoi);
@@ -801,7 +845,8 @@ static unsigned int __startup_pirq(unsigned int irq)
                goto err;
 
 out:
-       unmask_evtchn(evtchn);
+       do_unmask(info, EVT_MASK_REASON_EXPLICIT);
+
        eoi_pirq(irq_get_irq_data(irq));
 
        return 0;
@@ -828,7 +873,7 @@ static void shutdown_pirq(struct irq_data *data)
        if (!VALID_EVTCHN(evtchn))
                return;
 
-       mask_evtchn(evtchn);
+       do_mask(info, EVT_MASK_REASON_EXPLICIT);
        xen_evtchn_close(evtchn);
        xen_irq_info_cleanup(info);
 }
@@ -1565,6 +1610,8 @@ void handle_irq_for_port(evtchn_port_t port, struct evtchn_loop_ctrl *ctrl)
        }
 
        info = info_for_irq(irq);
+       if (xchg_acquire(&info->is_active, 1))
+               return;
 
        if (ctrl->defer_eoi) {
                info->eoi_cpu = smp_processor_id();
@@ -1655,10 +1702,10 @@ void rebind_evtchn_irq(evtchn_port_t evtchn, int irq)
 }
 
 /* Rebind an evtchn so that it gets delivered to a specific cpu */
-static int xen_rebind_evtchn_to_cpu(evtchn_port_t evtchn, unsigned int tcpu)
+static int xen_rebind_evtchn_to_cpu(struct irq_info *info, unsigned int tcpu)
 {
        struct evtchn_bind_vcpu bind_vcpu;
-       int masked;
+       evtchn_port_t evtchn = info ? info->evtchn : 0;
 
        if (!VALID_EVTCHN(evtchn))
                return -1;
@@ -1674,7 +1721,7 @@ static int xen_rebind_evtchn_to_cpu(evtchn_port_t evtchn, unsigned int tcpu)
         * Mask the event while changing the VCPU binding to prevent
         * it being delivered on an unexpected VCPU.
         */
-       masked = test_and_set_mask(evtchn);
+       do_mask(info, EVT_MASK_REASON_TEMPORARY);
 
        /*
         * If this fails, it usually just indicates that we're dealing with a
@@ -1684,8 +1731,7 @@ static int xen_rebind_evtchn_to_cpu(evtchn_port_t evtchn, unsigned int tcpu)
        if (HYPERVISOR_event_channel_op(EVTCHNOP_bind_vcpu, &bind_vcpu) >= 0)
                bind_evtchn_to_cpu(evtchn, tcpu);
 
-       if (!masked)
-               unmask_evtchn(evtchn);
+       do_unmask(info, EVT_MASK_REASON_TEMPORARY);
 
        return 0;
 }
@@ -1694,7 +1740,7 @@ static int set_affinity_irq(struct irq_data *data, const struct cpumask *dest,
                            bool force)
 {
        unsigned tcpu = cpumask_first_and(dest, cpu_online_mask);
-       int ret = xen_rebind_evtchn_to_cpu(evtchn_from_irq(data->irq), tcpu);
+       int ret = xen_rebind_evtchn_to_cpu(info_for_irq(data->irq), tcpu);
 
        if (!ret)
                irq_data_update_effective_affinity(data, cpumask_of(tcpu));
@@ -1713,39 +1759,41 @@ EXPORT_SYMBOL_GPL(xen_set_affinity_evtchn);
 
 static void enable_dynirq(struct irq_data *data)
 {
-       evtchn_port_t evtchn = evtchn_from_irq(data->irq);
+       struct irq_info *info = info_for_irq(data->irq);
+       evtchn_port_t evtchn = info ? info->evtchn : 0;
 
        if (VALID_EVTCHN(evtchn))
-               unmask_evtchn(evtchn);
+               do_unmask(info, EVT_MASK_REASON_EXPLICIT);
 }
 
 static void disable_dynirq(struct irq_data *data)
 {
-       evtchn_port_t evtchn = evtchn_from_irq(data->irq);
+       struct irq_info *info = info_for_irq(data->irq);
+       evtchn_port_t evtchn = info ? info->evtchn : 0;
 
        if (VALID_EVTCHN(evtchn))
-               mask_evtchn(evtchn);
+               do_mask(info, EVT_MASK_REASON_EXPLICIT);
 }
 
 static void ack_dynirq(struct irq_data *data)
 {
-       evtchn_port_t evtchn = evtchn_from_irq(data->irq);
+       struct irq_info *info = info_for_irq(data->irq);
+       evtchn_port_t evtchn = info ? info->evtchn : 0;
 
        if (!VALID_EVTCHN(evtchn))
                return;
 
        if (unlikely(irqd_is_setaffinity_pending(data)) &&
            likely(!irqd_irq_disabled(data))) {
-               int masked = test_and_set_mask(evtchn);
+               do_mask(info, EVT_MASK_REASON_TEMPORARY);
 
-               clear_evtchn(evtchn);
+               event_handler_exit(info);
 
                irq_move_masked_irq(data);
 
-               if (!masked)
-                       unmask_evtchn(evtchn);
+               do_unmask(info, EVT_MASK_REASON_TEMPORARY);
        } else
-               clear_evtchn(evtchn);
+               event_handler_exit(info);
 }
 
 static void mask_ack_dynirq(struct irq_data *data)
@@ -1754,18 +1802,39 @@ static void mask_ack_dynirq(struct irq_data *data)
        ack_dynirq(data);
 }
 
+static void lateeoi_ack_dynirq(struct irq_data *data)
+{
+       struct irq_info *info = info_for_irq(data->irq);
+       evtchn_port_t evtchn = info ? info->evtchn : 0;
+
+       if (VALID_EVTCHN(evtchn)) {
+               do_mask(info, EVT_MASK_REASON_EOI_PENDING);
+               event_handler_exit(info);
+       }
+}
+
+static void lateeoi_mask_ack_dynirq(struct irq_data *data)
+{
+       struct irq_info *info = info_for_irq(data->irq);
+       evtchn_port_t evtchn = info ? info->evtchn : 0;
+
+       if (VALID_EVTCHN(evtchn)) {
+               do_mask(info, EVT_MASK_REASON_EXPLICIT);
+               event_handler_exit(info);
+       }
+}
+
 static int retrigger_dynirq(struct irq_data *data)
 {
-       evtchn_port_t evtchn = evtchn_from_irq(data->irq);
-       int masked;
+       struct irq_info *info = info_for_irq(data->irq);
+       evtchn_port_t evtchn = info ? info->evtchn : 0;
 
        if (!VALID_EVTCHN(evtchn))
                return 0;
 
-       masked = test_and_set_mask(evtchn);
+       do_mask(info, EVT_MASK_REASON_TEMPORARY);
        set_evtchn(evtchn);
-       if (!masked)
-               unmask_evtchn(evtchn);
+       do_unmask(info, EVT_MASK_REASON_TEMPORARY);
 
        return 1;
 }
@@ -1862,10 +1931,11 @@ static void restore_cpu_ipis(unsigned int cpu)
 /* Clear an irq's pending state, in preparation for polling on it */
 void xen_clear_irq_pending(int irq)
 {
-       evtchn_port_t evtchn = evtchn_from_irq(irq);
+       struct irq_info *info = info_for_irq(irq);
+       evtchn_port_t evtchn = info ? info->evtchn : 0;
 
        if (VALID_EVTCHN(evtchn))
-               clear_evtchn(evtchn);
+               event_handler_exit(info);
 }
 EXPORT_SYMBOL(xen_clear_irq_pending);
 void xen_set_irq_pending(int irq)
@@ -1973,8 +2043,8 @@ static struct irq_chip xen_lateeoi_chip __read_mostly = {
        .irq_mask               = disable_dynirq,
        .irq_unmask             = enable_dynirq,
 
-       .irq_ack                = mask_ack_dynirq,
-       .irq_mask_ack           = mask_ack_dynirq,
+       .irq_ack                = lateeoi_ack_dynirq,
+       .irq_mask_ack           = lateeoi_mask_ack_dynirq,
 
        .irq_set_affinity       = set_affinity_irq,
        .irq_retrigger          = retrigger_dynirq,
diff --git a/drivers/xen/events/events_fifo.c b/drivers/xen/events/events_fifo.c
index b234f1766810c..ad9fe51d3fb33 100644
--- a/drivers/xen/events/events_fifo.c
+++ b/drivers/xen/events/events_fifo.c
@@ -209,12 +209,6 @@ static bool evtchn_fifo_is_pending(evtchn_port_t port)
        return sync_test_bit(EVTCHN_FIFO_BIT(PENDING, word), BM(word));
 }
 
-static bool evtchn_fifo_test_and_set_mask(evtchn_port_t port)
-{
-       event_word_t *word = event_word_from_port(port);
-       return sync_test_and_set_bit(EVTCHN_FIFO_BIT(MASKED, word), BM(word));
-}
-
 static void evtchn_fifo_mask(evtchn_port_t port)
 {
        event_word_t *word = event_word_from_port(port);
@@ -423,7 +417,6 @@ static const struct evtchn_ops evtchn_ops_fifo = {
        .clear_pending     = evtchn_fifo_clear_pending,
        .set_pending       = evtchn_fifo_set_pending,
        .is_pending        = evtchn_fifo_is_pending,
-       .test_and_set_mask = evtchn_fifo_test_and_set_mask,
        .mask              = evtchn_fifo_mask,
        .unmask            = evtchn_fifo_unmask,
        .handle_events     = evtchn_fifo_handle_events,
diff --git a/drivers/xen/events/events_internal.h b/drivers/xen/events/events_internal.h
index 0a97c0549db76..4d3398eff9cdf 100644
--- a/drivers/xen/events/events_internal.h
+++ b/drivers/xen/events/events_internal.h
@@ -14,13 +14,13 @@ struct evtchn_ops {
        unsigned (*nr_channels)(void);
 
        int (*setup)(evtchn_port_t port);
+       void (*remove)(evtchn_port_t port, unsigned int cpu);
        void (*bind_to_cpu)(evtchn_port_t evtchn, unsigned int cpu,
                            unsigned int old_cpu);
 
        void (*clear_pending)(evtchn_port_t port);
        void (*set_pending)(evtchn_port_t port);
        bool (*is_pending)(evtchn_port_t port);
-       bool (*test_and_set_mask)(evtchn_port_t port);
        void (*mask)(evtchn_port_t port);
        void (*unmask)(evtchn_port_t port);
 
@@ -54,6 +54,13 @@ static inline int xen_evtchn_port_setup(evtchn_port_t evtchn)
        return 0;
 }
 
+static inline void xen_evtchn_port_remove(evtchn_port_t evtchn,
+                                         unsigned int cpu)
+{
+       if (evtchn_ops->remove)
+               evtchn_ops->remove(evtchn, cpu);
+}
+
 static inline void xen_evtchn_port_bind_to_cpu(evtchn_port_t evtchn,
                                               unsigned int cpu,
                                               unsigned int old_cpu)
@@ -76,11 +83,6 @@ static inline bool test_evtchn(evtchn_port_t port)
        return evtchn_ops->is_pending(port);
 }
 
-static inline bool test_and_set_mask(evtchn_port_t port)
-{
-       return evtchn_ops->test_and_set_mask(port);
-}
-
 static inline void mask_evtchn(evtchn_port_t port)
 {
        return evtchn_ops->mask(port);
diff --git a/fs/binfmt_misc.c b/fs/binfmt_misc.c
index 3880a82da1dc5..11b5bf2419555 100644
--- a/fs/binfmt_misc.c
+++ b/fs/binfmt_misc.c
@@ -647,12 +647,24 @@ static ssize_t bm_register_write(struct file *file, const char __user *buffer,
        struct super_block *sb = file_inode(file)->i_sb;
        struct dentry *root = sb->s_root, *dentry;
        int err = 0;
+       struct file *f = NULL;
 
        e = create_entry(buffer, count);
 
        if (IS_ERR(e))
                return PTR_ERR(e);
 
+       if (e->flags & MISC_FMT_OPEN_FILE) {
+               f = open_exec(e->interpreter);
+               if (IS_ERR(f)) {
+                       pr_notice("register: failed to install interpreter file %s\n",
+                                e->interpreter);
+                       kfree(e);
+                       return PTR_ERR(f);
+               }
+               e->interp_file = f;
+       }
+
        inode_lock(d_inode(root));
        dentry = lookup_one_len(e->name, root, strlen(e->name));
        err = PTR_ERR(dentry);
@@ -676,21 +688,6 @@ static ssize_t bm_register_write(struct file *file, const char __user *buffer,
                goto out2;
        }
 
-       if (e->flags & MISC_FMT_OPEN_FILE) {
-               struct file *f;
-
-               f = open_exec(e->interpreter);
-               if (IS_ERR(f)) {
-                       err = PTR_ERR(f);
-                       pr_notice("register: failed to install interpreter file %s\n", e->interpreter);
-                       simple_release_fs(&bm_mnt, &entry_count);
-                       iput(inode);
-                       inode = NULL;
-                       goto out2;
-               }
-               e->interp_file = f;
-       }
-
        e->dentry = dget(dentry);
        inode->i_private = e;
        inode->i_fop = &bm_entry_operations;
@@ -707,6 +704,8 @@ out:
        inode_unlock(d_inode(root));
 
        if (err) {
+               if (f)
+                       filp_close(f, NULL);
                kfree(e);
                return err;
        }
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 2ea189c1b4ffe..fe201b757baa4 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -123,12 +123,21 @@ int truncate_bdev_range(struct block_device *bdev, fmode_t mode,
                err = bd_prepare_to_claim(bdev, claimed_bdev,
                                          truncate_bdev_range);
                if (err)
-                       return err;
+                       goto invalidate;
        }
        truncate_inode_pages_range(bdev->bd_inode->i_mapping, lstart, lend);
        if (claimed_bdev)
                bd_abort_claiming(bdev, claimed_bdev, truncate_bdev_range);
        return 0;
+
+invalidate:
+       /*
+        * Someone else has handle exclusively open. Try invalidating instead.
+        * The 'end' argument is inclusive so the rounding is safe.
+        */
+       return invalidate_inode_pages2_range(bdev->bd_inode->i_mapping,
+                                            lstart >> PAGE_SHIFT,
+                                            lend >> PAGE_SHIFT);
 }
 EXPORT_SYMBOL(truncate_bdev_range);
 
diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
|
|
index 472cb7777e3e9..f0ed29a9a6f11 100644
|
|
--- a/fs/cifs/cifsfs.c
|
|
+++ b/fs/cifs/cifsfs.c
|
|
@@ -286,7 +286,7 @@ cifs_statfs(struct dentry *dentry, struct kstatfs *buf)
|
|
rc = server->ops->queryfs(xid, tcon, cifs_sb, buf);
|
|
|
|
free_xid(xid);
|
|
- return 0;
|
|
+ return rc;
|
|
}
|
|
|
|
static long cifs_fallocate(struct file *file, int mode, loff_t off, loff_t len)
|
|
diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
|
|
index 484ec2d8c5c95..3295516af2aec 100644
|
|
--- a/fs/cifs/cifsglob.h
|
|
+++ b/fs/cifs/cifsglob.h
|
|
@@ -256,7 +256,7 @@ struct smb_version_operations {
|
|
/* verify the message */
|
|
int (*check_message)(char *, unsigned int, struct TCP_Server_Info *);
|
|
bool (*is_oplock_break)(char *, struct TCP_Server_Info *);
|
|
- int (*handle_cancelled_mid)(char *, struct TCP_Server_Info *);
|
|
+ int (*handle_cancelled_mid)(struct mid_q_entry *, struct TCP_Server_Info *);
|
|
void (*downgrade_oplock)(struct TCP_Server_Info *server,
|
|
struct cifsInodeInfo *cinode, __u32 oplock,
|
|
unsigned int epoch, bool *purge_cache);
|
|
@@ -1785,10 +1785,11 @@ static inline bool is_retryable_error(int error)
|
|
#define CIFS_NO_RSP_BUF 0x040 /* no response buffer required */
|
|
|
|
/* Type of request operation */
|
|
-#define CIFS_ECHO_OP 0x080 /* echo request */
|
|
-#define CIFS_OBREAK_OP 0x0100 /* oplock break request */
|
|
-#define CIFS_NEG_OP 0x0200 /* negotiate request */
|
|
-#define CIFS_OP_MASK 0x0380 /* mask request type */
|
|
+#define CIFS_ECHO_OP 0x080 /* echo request */
|
|
+#define CIFS_OBREAK_OP 0x0100 /* oplock break request */
|
|
+#define CIFS_NEG_OP 0x0200 /* negotiate request */
|
|
+#define CIFS_CP_CREATE_CLOSE_OP 0x0400 /* compound create+close request */
|
|
+#define CIFS_OP_MASK 0x0780 /* mask request type */
|
|
|
|
#define CIFS_HAS_CREDITS 0x0400 /* already has credits */
|
|
#define CIFS_TRANSFORM_REQ 0x0800 /* transform request before sending */
|
|
diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
|
|
index ad3ecda1314d9..fa359f473e3db 100644
|
|
--- a/fs/cifs/connect.c
|
|
+++ b/fs/cifs/connect.c
|
|
@@ -2629,6 +2629,11 @@ smbd_connected:
|
|
tcp_ses->min_offload = volume_info->min_offload;
|
|
tcp_ses->tcpStatus = CifsNeedNegotiate;
|
|
|
|
+ if ((volume_info->max_credits < 20) || (volume_info->max_credits > 60000))
|
|
+ tcp_ses->max_credits = SMB2_MAX_CREDITS_AVAILABLE;
|
|
+ else
|
|
+ tcp_ses->max_credits = volume_info->max_credits;
|
|
+
|
|
tcp_ses->nr_targets = 1;
|
|
tcp_ses->ignore_signature = volume_info->ignore_signature;
|
|
/* thread spawned, put it on the list */
|
|
@@ -4077,11 +4082,6 @@ static int mount_get_conns(struct smb_vol *vol, struct cifs_sb_info *cifs_sb,
|
|
|
|
*nserver = server;
|
|
|
|
- if ((vol->max_credits < 20) || (vol->max_credits > 60000))
|
|
- server->max_credits = SMB2_MAX_CREDITS_AVAILABLE;
|
|
- else
|
|
- server->max_credits = vol->max_credits;
|
|
-
|
|
/* get a reference to a SMB session */
|
|
ses = cifs_get_smb_ses(server, vol);
|
|
if (IS_ERR(ses)) {
|
|
diff --git a/fs/cifs/sess.c b/fs/cifs/sess.c
|
|
index de564368a887c..c2fe85ca2ded3 100644
|
|
--- a/fs/cifs/sess.c
|
|
+++ b/fs/cifs/sess.c
|
|
@@ -224,6 +224,7 @@ cifs_ses_add_channel(struct cifs_ses *ses, struct cifs_server_iface *iface)
|
|
vol.noautotune = ses->server->noautotune;
|
|
vol.sockopt_tcp_nodelay = ses->server->tcp_nodelay;
|
|
vol.echo_interval = ses->server->echo_interval / HZ;
|
|
+ vol.max_credits = ses->server->max_credits;
|
|
|
|
/*
|
|
* This will be used for encoding/decoding user/domain/pw
|
|
diff --git a/fs/cifs/smb2inode.c b/fs/cifs/smb2inode.c
|
|
index 1f900b81c34ae..a718dc77e604e 100644
|
|
--- a/fs/cifs/smb2inode.c
|
|
+++ b/fs/cifs/smb2inode.c
|
|
@@ -358,6 +358,7 @@ smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
|
|
if (cfile)
|
|
goto after_close;
|
|
/* Close */
|
|
+ flags |= CIFS_CP_CREATE_CLOSE_OP;
|
|
rqst[num_rqst].rq_iov = &vars->close_iov[0];
|
|
rqst[num_rqst].rq_nvec = 1;
|
|
rc = SMB2_close_init(tcon, server,
|
|
diff --git a/fs/cifs/smb2misc.c b/fs/cifs/smb2misc.c
|
|
index 2da6b41cb5526..db22d686c61ff 100644
|
|
--- a/fs/cifs/smb2misc.c
|
|
+++ b/fs/cifs/smb2misc.c
|
|
@@ -835,14 +835,14 @@ smb2_handle_cancelled_close(struct cifs_tcon *tcon, __u64 persistent_fid,
|
|
}
|
|
|
|
int
|
|
-smb2_handle_cancelled_mid(char *buffer, struct TCP_Server_Info *server)
|
|
+smb2_handle_cancelled_mid(struct mid_q_entry *mid, struct TCP_Server_Info *server)
|
|
{
|
|
- struct smb2_sync_hdr *sync_hdr = (struct smb2_sync_hdr *)buffer;
|
|
- struct smb2_create_rsp *rsp = (struct smb2_create_rsp *)buffer;
|
|
+ struct smb2_sync_hdr *sync_hdr = mid->resp_buf;
|
|
+ struct smb2_create_rsp *rsp = mid->resp_buf;
|
|
struct cifs_tcon *tcon;
|
|
int rc;
|
|
|
|
- if (sync_hdr->Command != SMB2_CREATE ||
|
|
+ if ((mid->optype & CIFS_CP_CREATE_CLOSE_OP) || sync_hdr->Command != SMB2_CREATE ||
|
|
sync_hdr->Status != STATUS_SUCCESS)
|
|
return 0;
|
|
|
|
diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
index 22f1d8dc12b00..02998c79bb907 100644
--- a/fs/cifs/smb2ops.c
+++ b/fs/cifs/smb2ops.c
@@ -1137,7 +1137,7 @@ smb2_set_ea(const unsigned int xid, struct cifs_tcon *tcon,
 	struct TCP_Server_Info *server = cifs_pick_channel(ses);
 	__le16 *utf16_path = NULL;
 	int ea_name_len = strlen(ea_name);
-	int flags = 0;
+	int flags = CIFS_CP_CREATE_CLOSE_OP;
 	int len;
 	struct smb_rqst rqst[3];
 	int resp_buftype[3];
@@ -1515,7 +1515,7 @@ smb2_ioctl_query_info(const unsigned int xid,
 	struct smb_query_info qi;
 	struct smb_query_info __user *pqi;
 	int rc = 0;
-	int flags = 0;
+	int flags = CIFS_CP_CREATE_CLOSE_OP;
 	struct smb2_query_info_rsp *qi_rsp = NULL;
 	struct smb2_ioctl_rsp *io_rsp = NULL;
 	void *buffer = NULL;
@@ -2482,7 +2482,7 @@ smb2_query_info_compound(const unsigned int xid, struct cifs_tcon *tcon,
 {
 	struct cifs_ses *ses = tcon->ses;
 	struct TCP_Server_Info *server = cifs_pick_channel(ses);
-	int flags = 0;
+	int flags = CIFS_CP_CREATE_CLOSE_OP;
 	struct smb_rqst rqst[3];
 	int resp_buftype[3];
 	struct kvec rsp_iov[3];
@@ -2880,7 +2880,7 @@ smb2_query_symlink(const unsigned int xid, struct cifs_tcon *tcon,
 	unsigned int sub_offset;
 	unsigned int print_len;
 	unsigned int print_offset;
-	int flags = 0;
+	int flags = CIFS_CP_CREATE_CLOSE_OP;
 	struct smb_rqst rqst[3];
 	int resp_buftype[3];
 	struct kvec rsp_iov[3];
@@ -3062,7 +3062,7 @@ smb2_query_reparse_tag(const unsigned int xid, struct cifs_tcon *tcon,
 	struct cifs_open_parms oparms;
 	struct cifs_fid fid;
 	struct TCP_Server_Info *server = cifs_pick_channel(tcon->ses);
-	int flags = 0;
+	int flags = CIFS_CP_CREATE_CLOSE_OP;
 	struct smb_rqst rqst[3];
 	int resp_buftype[3];
 	struct kvec rsp_iov[3];
diff --git a/fs/cifs/smb2proto.h b/fs/cifs/smb2proto.h
index d4110447ee3a8..4eb0ca84355a6 100644
--- a/fs/cifs/smb2proto.h
+++ b/fs/cifs/smb2proto.h
@@ -246,8 +246,7 @@ extern int SMB2_oplock_break(const unsigned int xid, struct cifs_tcon *tcon,
 extern int smb2_handle_cancelled_close(struct cifs_tcon *tcon,
				       __u64 persistent_fid,
				       __u64 volatile_fid);
-extern int smb2_handle_cancelled_mid(char *buffer,
-				     struct TCP_Server_Info *server);
+extern int smb2_handle_cancelled_mid(struct mid_q_entry *mid, struct TCP_Server_Info *server);
 void smb2_cancelled_close_fid(struct work_struct *work);
 extern int SMB2_QFS_info(const unsigned int xid, struct cifs_tcon *tcon,
			 u64 persistent_file_id, u64 volatile_file_id,
diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
index 9391cd17a2b55..0b9f1a0cba1a3 100644
--- a/fs/cifs/transport.c
+++ b/fs/cifs/transport.c
@@ -101,7 +101,7 @@ static void _cifs_mid_q_entry_release(struct kref *refcount)
 	if (midEntry->resp_buf && (midEntry->mid_flags & MID_WAIT_CANCELLED) &&
 	    midEntry->mid_state == MID_RESPONSE_RECEIVED &&
 	    server->ops->handle_cancelled_mid)
-		server->ops->handle_cancelled_mid(midEntry->resp_buf, server);
+		server->ops->handle_cancelled_mid(midEntry, server);
 
 	midEntry->mid_state = MID_FREE;
 	atomic_dec(&midCount);
diff --git a/fs/configfs/file.c b/fs/configfs/file.c
index 1f0270229d7b7..da8351d1e4552 100644
--- a/fs/configfs/file.c
+++ b/fs/configfs/file.c
@@ -378,7 +378,7 @@ static int __configfs_open_file(struct inode *inode, struct file *file, int type
 
 	attr = to_attr(dentry);
 	if (!attr)
-		goto out_put_item;
+		goto out_free_buffer;
 
 	if (type & CONFIGFS_ITEM_BIN_ATTR) {
 		buffer->bin_attr = to_bin_attr(dentry);
@@ -391,7 +391,7 @@ static int __configfs_open_file(struct inode *inode, struct file *file, int type
 	/* Grab the module reference for this attribute if we have one */
 	error = -ENODEV;
 	if (!try_module_get(buffer->owner))
-		goto out_put_item;
+		goto out_free_buffer;
 
 	error = -EACCES;
 	if (!buffer->item->ci_type)
@@ -435,8 +435,6 @@ static int __configfs_open_file(struct inode *inode, struct file *file, int type
 
 out_put_module:
 	module_put(buffer->owner);
-out_put_item:
-	config_item_put(buffer->item);
 out_free_buffer:
 	up_read(&frag->frag_sem);
 	kfree(buffer);
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index ea5aefa23a20a..e30bf8f342c2a 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -4876,7 +4876,6 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
 
 	set_task_ioprio(sbi->s_journal->j_task, journal_ioprio);
 
-	sbi->s_journal->j_commit_callback = ext4_journal_commit_callback;
 	sbi->s_journal->j_submit_inode_data_buffers =
 		ext4_journal_submit_inode_data_buffers;
 	sbi->s_journal->j_finish_inode_data_buffers =
 		ext4_journal_finish_inode_data_buffers;
@@ -4993,6 +4992,14 @@ no_journal:
 		goto failed_mount5;
 	}
 
+	/*
+	 * We can only set up the journal commit callback once
+	 * mballoc is initialized
+	 */
+	if (sbi->s_journal)
+		sbi->s_journal->j_commit_callback =
+			ext4_journal_commit_callback;
+
 	block = ext4_count_free_clusters(sb);
 	ext4_free_blocks_count_set(sbi->s_es,
 				   EXT4_C2B(sbi, block));
diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
index 4e011adaf9670..c837675cd395a 100644
--- a/fs/nfs/dir.c
+++ b/fs/nfs/dir.c
@@ -1202,6 +1202,15 @@ out_force:
 	goto out;
 }
 
+static void nfs_mark_dir_for_revalidate(struct inode *inode)
+{
+	struct nfs_inode *nfsi = NFS_I(inode);
+
+	spin_lock(&inode->i_lock);
+	nfsi->cache_validity |= NFS_INO_REVAL_PAGECACHE;
+	spin_unlock(&inode->i_lock);
+}
+
 /*
  * We judge how long we want to trust negative
  * dentries by looking at the parent inode mtime.
@@ -1236,19 +1245,14 @@ nfs_lookup_revalidate_done(struct inode *dir, struct dentry *dentry,
			__func__, dentry);
 		return 1;
 	case 0:
-		nfs_mark_for_revalidate(dir);
-		if (inode && S_ISDIR(inode->i_mode)) {
-			/* Purge readdir caches. */
-			nfs_zap_caches(inode);
-			/*
-			 * We can't d_drop the root of a disconnected tree:
-			 * its d_hash is on the s_anon list and d_drop() would hide
-			 * it from shrink_dcache_for_unmount(), leading to busy
-			 * inodes on unmount and further oopses.
-			 */
-			if (IS_ROOT(dentry))
-				return 1;
-		}
+		/*
+		 * We can't d_drop the root of a disconnected tree:
+		 * its d_hash is on the s_anon list and d_drop() would hide
+		 * it from shrink_dcache_for_unmount(), leading to busy
+		 * inodes on unmount and further oopses.
+		 */
+		if (inode && IS_ROOT(dentry))
+			return 1;
 		dfprintk(LOOKUPCACHE, "NFS: %s(%pd2) is invalid\n",
				__func__, dentry);
 		return 0;
@@ -1326,6 +1330,13 @@ out:
 	nfs_free_fattr(fattr);
 	nfs_free_fhandle(fhandle);
 	nfs4_label_free(label);
+
+	/*
+	 * If the lookup failed despite the dentry change attribute being
+	 * a match, then we should revalidate the directory cache.
+	 */
+	if (!ret && nfs_verify_change_attribute(dir, dentry->d_time))
+		nfs_mark_dir_for_revalidate(dir);
 	return nfs_lookup_revalidate_done(dir, dentry, inode, ret);
 }
 
@@ -1368,7 +1379,7 @@ nfs_do_lookup_revalidate(struct inode *dir, struct dentry *dentry,
 	error = nfs_lookup_verify_inode(inode, flags);
 	if (error) {
 		if (error == -ESTALE)
-			nfs_zap_caches(dir);
+			nfs_mark_dir_for_revalidate(dir);
 		goto out_bad;
 	}
 	nfs_advise_use_readdirplus(dir);
@@ -1865,7 +1876,6 @@ out:
 	dput(parent);
 	return d;
 out_error:
-	nfs_mark_for_revalidate(dir);
 	d = ERR_PTR(error);
 	goto out;
 }
diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index a811d42ffbd11..ba2dfba4854bf 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -5967,7 +5967,7 @@ static int _nfs4_get_security_label(struct inode *inode, void *buf,
 		return ret;
 	if (!(fattr.valid & NFS_ATTR_FATTR_V4_SECURITY_LABEL))
 		return -ENOENT;
-	return 0;
+	return label.len;
 }
 
 static int nfs4_get_security_label(struct inode *inode, void *buf,
diff --git a/fs/pnode.h b/fs/pnode.h
index 26f74e092bd98..988f1aa9b02ae 100644
--- a/fs/pnode.h
+++ b/fs/pnode.h
@@ -12,7 +12,7 @@
 
 #define IS_MNT_SHARED(m) ((m)->mnt.mnt_flags & MNT_SHARED)
 #define IS_MNT_SLAVE(m) ((m)->mnt_master)
-#define IS_MNT_NEW(m)  (!(m)->mnt_ns)
+#define IS_MNT_NEW(m)  (!(m)->mnt_ns || is_anon_ns((m)->mnt_ns))
 #define CLEAR_MNT_SHARED(m) ((m)->mnt.mnt_flags &= ~MNT_SHARED)
 #define IS_MNT_UNBINDABLE(m) ((m)->mnt.mnt_flags & MNT_UNBINDABLE)
 #define IS_MNT_MARKED(m) ((m)->mnt.mnt_flags & MNT_MARKED)
diff --git a/fs/udf/inode.c b/fs/udf/inode.c
index bb89c3e43212b..0dd2f93ac0480 100644
--- a/fs/udf/inode.c
+++ b/fs/udf/inode.c
@@ -544,11 +544,14 @@ static int udf_do_extend_file(struct inode *inode,
 
 		udf_write_aext(inode, last_pos, &last_ext->extLocation,
				last_ext->extLength, 1);
+
 		/*
-		 * We've rewritten the last extent but there may be empty
-		 * indirect extent after it - enter it.
+		 * We've rewritten the last extent. If we are going to add
+		 * more extents, we may need to enter possible following
+		 * empty indirect extent.
 		 */
-		udf_next_aext(inode, last_pos, &tmploc, &tmplen, 0);
+		if (new_block_bytes || prealloc_len)
+			udf_next_aext(inode, last_pos, &tmploc, &tmplen, 0);
 	}
 
 	/* Managed to do everything necessary? */
diff --git a/include/linux/acpi.h b/include/linux/acpi.h
index 5b1dc1ad4fb32..9e173c6f312dc 100644
--- a/include/linux/acpi.h
+++ b/include/linux/acpi.h
@@ -1072,19 +1072,25 @@ void __acpi_handle_debug(struct _ddebug *descriptor, acpi_handle handle, const c
 #if defined(CONFIG_ACPI) && defined(CONFIG_GPIOLIB)
 bool acpi_gpio_get_irq_resource(struct acpi_resource *ares,
				struct acpi_resource_gpio **agpio);
-int acpi_dev_gpio_irq_get(struct acpi_device *adev, int index);
+int acpi_dev_gpio_irq_get_by(struct acpi_device *adev, const char *name, int index);
 #else
 static inline bool acpi_gpio_get_irq_resource(struct acpi_resource *ares,
					      struct acpi_resource_gpio **agpio)
 {
 	return false;
 }
-static inline int acpi_dev_gpio_irq_get(struct acpi_device *adev, int index)
+static inline int acpi_dev_gpio_irq_get_by(struct acpi_device *adev,
+					   const char *name, int index)
 {
 	return -ENXIO;
 }
 #endif
 
+static inline int acpi_dev_gpio_irq_get(struct acpi_device *adev, int index)
+{
+	return acpi_dev_gpio_irq_get_by(adev, NULL, index);
+}
+
 /* Device properties */
 
 #ifdef CONFIG_ACPI
diff --git a/include/linux/can/skb.h b/include/linux/can/skb.h
index fc61cf4eff1c9..ce7393d397e18 100644
--- a/include/linux/can/skb.h
+++ b/include/linux/can/skb.h
@@ -49,8 +49,12 @@ static inline void can_skb_reserve(struct sk_buff *skb)
 
 static inline void can_skb_set_owner(struct sk_buff *skb, struct sock *sk)
 {
-	if (sk) {
-		sock_hold(sk);
+	/* If the socket has already been closed by user space, the
+	 * refcount may already be 0 (and the socket will be freed
+	 * after the last TX skb has been freed). So only increase
+	 * socket refcount if the refcount is > 0.
+	 */
+	if (sk && refcount_inc_not_zero(&sk->sk_refcnt)) {
 		skb->destructor = sock_efree;
 		skb->sk = sk;
 	}
diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h
index 98cff1b4b088c..189149de77a9d 100644
--- a/include/linux/compiler-clang.h
+++ b/include/linux/compiler-clang.h
@@ -41,6 +41,12 @@
 #define __no_sanitize_thread
 #endif
 
+#if defined(CONFIG_ARCH_USE_BUILTIN_BSWAP)
+#define __HAVE_BUILTIN_BSWAP32__
+#define __HAVE_BUILTIN_BSWAP64__
+#define __HAVE_BUILTIN_BSWAP16__
+#endif /* CONFIG_ARCH_USE_BUILTIN_BSWAP */
+
 #if __has_feature(undefined_behavior_sanitizer)
 /* GCC does not have __SANITIZE_UNDEFINED__ */
 #define __no_sanitize_undefined \
diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
index 474f29638d2c9..7dff07713a073 100644
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -341,8 +341,26 @@ void irqentry_enter_from_user_mode(struct pt_regs *regs);
 void irqentry_exit_to_user_mode(struct pt_regs *regs);
 
 #ifndef irqentry_state
+/**
+ * struct irqentry_state - Opaque object for exception state storage
+ * @exit_rcu: Used exclusively in the irqentry_*() calls; signals whether the
+ *            exit path has to invoke rcu_irq_exit().
+ * @lockdep: Used exclusively in the irqentry_nmi_*() calls; ensures that
+ *           lockdep state is restored correctly on exit from nmi.
+ *
+ * This opaque object is filled in by the irqentry_*_enter() functions and
+ * must be passed back into the corresponding irqentry_*_exit() functions
+ * when the exception is complete.
+ *
+ * Callers of irqentry_*_[enter|exit]() must consider this structure opaque
+ * and all members private. Descriptions of the members are provided to aid in
+ * the maintenance of the irqentry_*() functions.
+ */
 typedef struct irqentry_state {
-	bool	exit_rcu;
+	union {
+		bool	exit_rcu;
+		bool	lockdep;
+	};
 } irqentry_state_t;
 #endif
 
@@ -402,4 +420,23 @@ void irqentry_exit_cond_resched(void);
  */
 void noinstr irqentry_exit(struct pt_regs *regs, irqentry_state_t state);
 
+/**
+ * irqentry_nmi_enter - Handle NMI entry
+ * @regs:	Pointer to currents pt_regs
+ *
+ * Similar to irqentry_enter() but taking care of the NMI constraints.
+ */
+irqentry_state_t noinstr irqentry_nmi_enter(struct pt_regs *regs);
+
+/**
+ * irqentry_nmi_exit - Handle return from NMI handling
+ * @regs:	Pointer to pt_regs (NMI entry regs)
+ * @irq_state:	Return value from matching call to irqentry_nmi_enter()
+ *
+ * Last action before returning to the low level assmenbly code.
+ *
+ * Counterpart to irqentry_nmi_enter().
+ */
+void noinstr irqentry_nmi_exit(struct pt_regs *regs, irqentry_state_t irq_state);
+
 #endif
diff --git a/include/linux/gpio/consumer.h b/include/linux/gpio/consumer.h
index 901aab89d025f..79f450e93abfd 100644
--- a/include/linux/gpio/consumer.h
+++ b/include/linux/gpio/consumer.h
@@ -674,6 +674,8 @@ struct acpi_gpio_mapping {
  * get GpioIo type explicitly, this quirk may be used.
  */
 #define ACPI_GPIO_QUIRK_ONLY_GPIOIO		BIT(1)
+/* Use given pin as an absolute GPIO number in the system */
+#define ACPI_GPIO_QUIRK_ABSOLUTE_NUMBER		BIT(2)
 
 	unsigned int quirks;
 };
diff --git a/include/linux/memory.h b/include/linux/memory.h
index 439a89e758d87..4da95e684e20f 100644
--- a/include/linux/memory.h
+++ b/include/linux/memory.h
@@ -27,9 +27,8 @@ struct memory_block {
 	unsigned long start_section_nr;
 	unsigned long state;		/* serialized by the dev->lock */
 	int online_type;		/* for passing data to online routine */
-	int phys_device;		/* to which fru does this belong? */
-	struct device dev;
 	int nid;			/* NID for this memory block */
+	struct device dev;
 };
 
 int arch_get_memory_phys_device(unsigned long start_pfn);
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 96450f6fb1de8..22ce0604b4480 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -606,6 +606,7 @@ struct swevent_hlist {
 #define PERF_ATTACH_TASK	0x04
 #define PERF_ATTACH_TASK_DATA	0x08
 #define PERF_ATTACH_ITRACE	0x10
+#define PERF_ATTACH_SCHED_CB	0x20
 
 struct perf_cgroup;
 struct perf_buffer;
@@ -872,6 +873,7 @@ struct perf_cpu_context {
 	struct list_head		cgrp_cpuctx_entry;
 #endif
 
+	struct list_head		sched_cb_entry;
 	int				sched_cb_usage;
 
 	int				online;
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index e237004d498d6..7c869ea8dffc8 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -857,6 +857,10 @@ static inline void ptep_modify_prot_commit(struct vm_area_struct *vma,
 #define pgprot_device pgprot_noncached
 #endif
 
+#ifndef pgprot_mhp
+#define pgprot_mhp(prot)	(prot)
+#endif
+
 #ifdef CONFIG_MMU
 #ifndef pgprot_modify
 #define pgprot_modify pgprot_modify
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index d5ece7a9a403f..dc1f4dcd9a825 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -140,7 +140,8 @@ static inline bool in_vfork(struct task_struct *tsk)
 	 * another oom-unkillable task does this it should blame itself.
	 */
 	rcu_read_lock();
-	ret = tsk->vfork_done && tsk->real_parent->mm == tsk->mm;
+	ret = tsk->vfork_done &&
+			rcu_dereference(tsk->real_parent)->mm == tsk->mm;
 	rcu_read_unlock();
 
 	return ret;
diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index cbfc78b92b654..1ac20d75b0618 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -659,10 +659,7 @@ typedef struct {
  * seqcount_latch_init() - runtime initializer for seqcount_latch_t
  * @s: Pointer to the seqcount_latch_t instance
  */
-static inline void seqcount_latch_init(seqcount_latch_t *s)
-{
-	seqcount_init(&s->seqcount);
-}
+#define seqcount_latch_init(s) seqcount_init(&(s)->seqcount)
 
 /**
  * raw_read_seqcount_latch() - pick even/odd latch data copy
diff --git a/include/linux/stop_machine.h b/include/linux/stop_machine.h
index 76d8b09384a7a..63ea9aff368f0 100644
--- a/include/linux/stop_machine.h
+++ b/include/linux/stop_machine.h
@@ -123,7 +123,7 @@ int stop_machine_from_inactive_cpu(cpu_stop_fn_t fn, void *data,
				   const struct cpumask *cpus);
 #else	/* CONFIG_SMP || CONFIG_HOTPLUG_CPU */
 
-static inline int stop_machine_cpuslocked(cpu_stop_fn_t fn, void *data,
+static __always_inline int stop_machine_cpuslocked(cpu_stop_fn_t fn, void *data,
					  const struct cpumask *cpus)
 {
 	unsigned long flags;
@@ -134,14 +134,15 @@ static inline int stop_machine_cpuslocked(cpu_stop_fn_t fn, void *data,
 	return ret;
 }
 
-static inline int stop_machine(cpu_stop_fn_t fn, void *data,
-			       const struct cpumask *cpus)
+static __always_inline int
+stop_machine(cpu_stop_fn_t fn, void *data, const struct cpumask *cpus)
 {
 	return stop_machine_cpuslocked(fn, data, cpus);
 }
 
-static inline int stop_machine_from_inactive_cpu(cpu_stop_fn_t fn, void *data,
-						 const struct cpumask *cpus)
+static __always_inline int
+stop_machine_from_inactive_cpu(cpu_stop_fn_t fn, void *data,
			       const struct cpumask *cpus)
 {
 	return stop_machine(fn, data, cpus);
 }
diff --git a/include/linux/usb.h b/include/linux/usb.h
index 7d72c4e0713c1..d6a41841b93e4 100644
--- a/include/linux/usb.h
+++ b/include/linux/usb.h
@@ -746,6 +746,8 @@ extern int usb_lock_device_for_reset(struct usb_device *udev,
 extern int usb_reset_device(struct usb_device *dev);
 extern void usb_queue_reset_device(struct usb_interface *dev);
 
+extern struct device *usb_intf_get_dma_device(struct usb_interface *intf);
+
 #ifdef CONFIG_ACPI
 extern int usb_acpi_set_power_state(struct usb_device *hdev, int index,
				    bool enable);
diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
index e8a924eeea3d0..6b5fcfa1e5553 100644
--- a/include/linux/virtio_net.h
+++ b/include/linux/virtio_net.h
@@ -79,8 +79,13 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
 	if (gso_type && skb->network_header) {
 		struct flow_keys_basic keys;
 
-		if (!skb->protocol)
+		if (!skb->protocol) {
+			__be16 protocol = dev_parse_header_protocol(skb);
+
			virtio_net_hdr_set_proto(skb, hdr);
+			if (protocol && protocol != skb->protocol)
+				return -EINVAL;
+		}
 retry:
 		if (!skb_flow_dissect_flow_keys_basic(NULL, skb, &keys,
						      NULL, 0, 0, 0,
diff --git a/include/media/rc-map.h b/include/media/rc-map.h
index 7dbb91c601a77..c3effcdf2a641 100644
--- a/include/media/rc-map.h
+++ b/include/media/rc-map.h
@@ -175,6 +175,13 @@ struct rc_map_list {
 	struct rc_map map;
 };
 
+#ifdef CONFIG_MEDIA_CEC_RC
+/*
+ * rc_map_list from rc-cec.c
+ */
+extern struct rc_map_list cec_map;
+#endif
+
 /* Routines from rc-map.c */
 
 /**
diff --git a/include/target/target_core_backend.h b/include/target/target_core_backend.h
index 6336780d83a75..ce2fba49c95da 100644
--- a/include/target/target_core_backend.h
+++ b/include/target/target_core_backend.h
@@ -72,6 +72,7 @@ int transport_backend_register(const struct target_backend_ops *);
 void target_backend_unregister(const struct target_backend_ops *);
 
 void target_complete_cmd(struct se_cmd *, u8);
+void target_set_cmd_data_length(struct se_cmd *, int);
 void target_complete_cmd_with_length(struct se_cmd *, u8, int);
 
 void transport_copy_sense_to_cmd(struct se_cmd *, unsigned char *);
diff --git a/include/uapi/linux/l2tp.h b/include/uapi/linux/l2tp.h
index 30c80d5ba4bfc..bab8c97086111 100644
--- a/include/uapi/linux/l2tp.h
+++ b/include/uapi/linux/l2tp.h
@@ -145,6 +145,7 @@ enum {
 	L2TP_ATTR_RX_ERRORS,		/* u64 */
 	L2TP_ATTR_STATS_PAD,
 	L2TP_ATTR_RX_COOKIE_DISCARDS,	/* u64 */
+	L2TP_ATTR_RX_INVALID,		/* u64 */
 	__L2TP_ATTR_STATS_MAX,
 };
 
diff --git a/include/uapi/linux/netfilter/nfnetlink_cthelper.h b/include/uapi/linux/netfilter/nfnetlink_cthelper.h
index a13137afc4299..70af02092d16e 100644
--- a/include/uapi/linux/netfilter/nfnetlink_cthelper.h
+++ b/include/uapi/linux/netfilter/nfnetlink_cthelper.h
@@ -5,7 +5,7 @@
 #define NFCT_HELPER_STATUS_DISABLED	0
 #define NFCT_HELPER_STATUS_ENABLED	1
 
-enum nfnl_acct_msg_types {
+enum nfnl_cthelper_msg_types {
	NFNL_MSG_CTHELPER_NEW,
	NFNL_MSG_CTHELPER_GET,
	NFNL_MSG_CTHELPER_DEL,
diff --git a/kernel/entry/common.c b/kernel/entry/common.c
index e9e2df3f3f9ee..e289e67732926 100644
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -397,3 +397,39 @@ noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state)
		rcu_irq_exit();
	}
 }
+
+irqentry_state_t noinstr irqentry_nmi_enter(struct pt_regs *regs)
+{
+	irqentry_state_t irq_state;
+
+	irq_state.lockdep = lockdep_hardirqs_enabled();
+
+	__nmi_enter();
+	lockdep_hardirqs_off(CALLER_ADDR0);
+	lockdep_hardirq_enter();
+	rcu_nmi_enter();
+
+	instrumentation_begin();
+	trace_hardirqs_off_finish();
+	ftrace_nmi_enter();
+	instrumentation_end();
+
+	return irq_state;
+}
+
+void noinstr irqentry_nmi_exit(struct pt_regs *regs, irqentry_state_t irq_state)
+{
+	instrumentation_begin();
+	ftrace_nmi_exit();
+	if (irq_state.lockdep) {
+		trace_hardirqs_on_prepare();
+		lockdep_hardirqs_on_prepare(CALLER_ADDR0);
+	}
+	instrumentation_end();
+
+	rcu_nmi_exit();
+	lockdep_hardirq_exit();
+	if (irq_state.lockdep)
+		lockdep_hardirqs_on(CALLER_ADDR0);
+	__nmi_exit();
+}
diff --git a/kernel/events/core.c b/kernel/events/core.c
index c3ba29d058b73..4af161b3f322f 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -383,6 +383,7 @@ static DEFINE_MUTEX(perf_sched_mutex);
 static atomic_t perf_sched_count;
 
 static DEFINE_PER_CPU(atomic_t, perf_cgroup_events);
+static DEFINE_PER_CPU(int, perf_sched_cb_usages);
 static DEFINE_PER_CPU(struct pmu_event_list, pmu_sb_events);
 
 static atomic_t nr_mmap_events __read_mostly;
@@ -3466,11 +3467,16 @@ unlock:
	}
 }
 
+static DEFINE_PER_CPU(struct list_head, sched_cb_list);
+
 void perf_sched_cb_dec(struct pmu *pmu)
 {
	struct perf_cpu_context *cpuctx = this_cpu_ptr(pmu->pmu_cpu_context);
 
-	--cpuctx->sched_cb_usage;
+	this_cpu_dec(perf_sched_cb_usages);
+
+	if (!--cpuctx->sched_cb_usage)
+		list_del(&cpuctx->sched_cb_entry);
 }
 
 
@@ -3478,7 +3484,10 @@ void perf_sched_cb_inc(struct pmu *pmu)
 {
	struct perf_cpu_context *cpuctx = this_cpu_ptr(pmu->pmu_cpu_context);
 
-	cpuctx->sched_cb_usage++;
+	if (!cpuctx->sched_cb_usage++)
+		list_add(&cpuctx->sched_cb_entry, this_cpu_ptr(&sched_cb_list));
+
+	this_cpu_inc(perf_sched_cb_usages);
 }
 
 /*
@@ -3507,6 +3516,24 @@ static void __perf_pmu_sched_task(struct perf_cpu_context *cpuctx, bool sched_in
	perf_ctx_unlock(cpuctx, cpuctx->task_ctx);
 }
 
+static void perf_pmu_sched_task(struct task_struct *prev,
+				struct task_struct *next,
+				bool sched_in)
+{
+	struct perf_cpu_context *cpuctx;
+
+	if (prev == next)
+		return;
+
+	list_for_each_entry(cpuctx, this_cpu_ptr(&sched_cb_list), sched_cb_entry) {
+		/* will be handled in perf_event_context_sched_in/out */
+		if (cpuctx->task_ctx)
+			continue;
+
+		__perf_pmu_sched_task(cpuctx, sched_in);
+	}
+}
+
 static void perf_event_switch(struct task_struct *task,
			      struct task_struct *next_prev, bool sched_in);
 
@@ -3529,6 +3556,9 @@ void __perf_event_task_sched_out(struct task_struct *task,
 {
	int ctxn;
 
+	if (__this_cpu_read(perf_sched_cb_usages))
+		perf_pmu_sched_task(task, next, false);
+
	if (atomic_read(&nr_switch_events))
		perf_event_switch(task, next, false);
 
@@ -3837,6 +3867,9 @@ void __perf_event_task_sched_in(struct task_struct *prev,
 
	if (atomic_read(&nr_switch_events))
		perf_event_switch(task, prev, true);
+
+	if (__this_cpu_read(perf_sched_cb_usages))
+		perf_pmu_sched_task(prev, task, true);
 }
 
 static u64 perf_calculate_period(struct perf_event *event, u64 nsec, u64 count)
@@ -4661,7 +4694,7 @@ static void unaccount_event(struct perf_event *event)
	if (event->parent)
		return;
 
-	if (event->attach_state & PERF_ATTACH_TASK)
+	if (event->attach_state & (PERF_ATTACH_TASK | PERF_ATTACH_SCHED_CB))
		dec = true;
	if (event->attr.mmap || event->attr.mmap_data)
		atomic_dec(&nr_mmap_events);
@@ -11056,7 +11089,7 @@ static void account_event(struct perf_event *event)
	if (event->parent)
		return;
 
-	if (event->attach_state & PERF_ATTACH_TASK)
+	if (event->attach_state & (PERF_ATTACH_TASK | PERF_ATTACH_SCHED_CB))
		inc = true;
	if (event->attr.mmap || event->attr.mmap_data)
		atomic_inc(&nr_mmap_events);
@@ -12848,6 +12881,7 @@ static void __init perf_event_init_all_cpus(void)
 #ifdef CONFIG_CGROUP_PERF
		INIT_LIST_HEAD(&per_cpu(cgrp_cpuctx_list, cpu));
 #endif
+		INIT_LIST_HEAD(&per_cpu(sched_cb_list, cpu));
	}
 }
 
diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
index 9d8df34bea75b..16f57e71f9c44 100644
--- a/kernel/sched/membarrier.c
+++ b/kernel/sched/membarrier.c
@@ -332,9 +332,7 @@ static int sync_runqueues_membarrier_state(struct mm_struct *mm)
	}
	rcu_read_unlock();
 
-	preempt_disable();
-	smp_call_function_many(tmpmask, ipi_sync_rq_state, mm, 1);
-	preempt_enable();
+	on_each_cpu_mask(tmpmask, ipi_sync_rq_state, mm, true);
 
	free_cpumask_var(tmpmask);
	cpus_read_unlock();
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index afad085960b81..b9306d2bb4269 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -2951,7 +2951,7 @@ static struct ctl_table vm_table[] = {
		.data		= &block_dump,
		.maxlen		= sizeof(block_dump),
		.mode		= 0644,
-		.proc_handler	= proc_dointvec,
+		.proc_handler	= proc_dointvec_minmax,
		.extra1		= SYSCTL_ZERO,
	},
	{
@@ -2959,7 +2959,7 @@ static struct ctl_table vm_table[] = {
		.data		= &sysctl_vfs_cache_pressure,
		.maxlen		= sizeof(sysctl_vfs_cache_pressure),
		.mode		= 0644,
-		.proc_handler	= proc_dointvec,
+		.proc_handler	= proc_dointvec_minmax,
		.extra1		= SYSCTL_ZERO,
	},
 #if defined(HAVE_ARCH_PICK_MMAP_LAYOUT) || \
@@ -2969,7 +2969,7 @@ static struct ctl_table vm_table[] = {
		.data		= &sysctl_legacy_va_layout,
		.maxlen		= sizeof(sysctl_legacy_va_layout),
		.mode		= 0644,
-		.proc_handler	= proc_dointvec,
+		.proc_handler	= proc_dointvec_minmax,
		.extra1		= SYSCTL_ZERO,
	},
 #endif
@@ -2979,7 +2979,7 @@ static struct ctl_table vm_table[] = {
		.data		= &node_reclaim_mode,
		.maxlen		= sizeof(node_reclaim_mode),
		.mode		= 0644,
-		.proc_handler	= proc_dointvec,
+		.proc_handler	= proc_dointvec_minmax,
		.extra1		= SYSCTL_ZERO,
	},
	{
diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
index 387b4bef7dd14..4416f5d72c11e 100644
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -546,8 +546,11 @@ static ktime_t __hrtimer_next_event_base(struct hrtimer_cpu_base *cpu_base,
 }
 
 /*
- * Recomputes cpu_base::*next_timer and returns the earliest expires_next but
- * does not set cpu_base::*expires_next, that is done by hrtimer_reprogram.
+ * Recomputes cpu_base::*next_timer and returns the earliest expires_next
+ * but does not set cpu_base::*expires_next, that is done by
+ * hrtimer[_force]_reprogram and hrtimer_interrupt only. When updating
+ * cpu_base::*expires_next right away, reprogramming logic would no longer
+ * work.
  *
  * When a softirq is pending, we can ignore the HRTIMER_ACTIVE_SOFT bases,
  * those timers will get run whenever the softirq gets handled, at the end of
@@ -588,6 +591,37 @@ __hrtimer_get_next_event(struct hrtimer_cpu_base *cpu_base, unsigned int active_
	return expires_next;
 }
 
+static ktime_t hrtimer_update_next_event(struct hrtimer_cpu_base *cpu_base)
+{
+	ktime_t expires_next, soft = KTIME_MAX;
+
+	/*
+	 * If the soft interrupt has already been activated, ignore the
+	 * soft bases. They will be handled in the already raised soft
+	 * interrupt.
+	 */
+	if (!cpu_base->softirq_activated) {
+		soft = __hrtimer_get_next_event(cpu_base, HRTIMER_ACTIVE_SOFT);
+		/*
+		 * Update the soft expiry time. clock_settime() might have
+		 * affected it.
+		 */
+		cpu_base->softirq_expires_next = soft;
+	}
+
+	expires_next = __hrtimer_get_next_event(cpu_base, HRTIMER_ACTIVE_HARD);
+	/*
+	 * If a softirq timer is expiring first, update cpu_base->next_timer
+	 * and program the hardware with the soft expiry time.
+	 */
+	if (expires_next > soft) {
+		cpu_base->next_timer = cpu_base->softirq_next_timer;
+		expires_next = soft;
+	}
+
+	return expires_next;
+}
+
 static inline ktime_t hrtimer_update_base(struct hrtimer_cpu_base *base)
 {
	ktime_t *offs_real = &base->clock_base[HRTIMER_BASE_REALTIME].offset;
@@ -628,23 +662,7 @@ hrtimer_force_reprogram(struct hrtimer_cpu_base *cpu_base, int skip_equal)
 {
	ktime_t expires_next;
 
-	/*
-	 * Find the current next expiration time.
-	 */
-	expires_next = __hrtimer_get_next_event(cpu_base, HRTIMER_ACTIVE_ALL);
-
-	if (cpu_base->next_timer && cpu_base->next_timer->is_soft) {
-		/*
-		 * When the softirq is activated, hrtimer has to be
-		 * programmed with the first hard hrtimer because soft
-		 * timer interrupt could occur too late.
-		 */
-		if (cpu_base->softirq_activated)
-			expires_next = __hrtimer_get_next_event(cpu_base,
-								HRTIMER_ACTIVE_HARD);
-		else
-			cpu_base->softirq_expires_next = expires_next;
-	}
+	expires_next = hrtimer_update_next_event(cpu_base);
 
	if (skip_equal && expires_next == cpu_base->expires_next)
		return;
@@ -1644,8 +1662,8 @@ retry:
 
	__hrtimer_run_queues(cpu_base, now, flags, HRTIMER_ACTIVE_HARD);
 
-	/* Reevaluate the clock bases for the next expiry */
-	expires_next = __hrtimer_get_next_event(cpu_base, HRTIMER_ACTIVE_ALL);
+	/* Reevaluate the clock bases for the [soft] next expiry */
+	expires_next = hrtimer_update_next_event(cpu_base);
	/*
	 * Store the new expiry value so the migration code can verify
	 * against it.
diff --git a/lib/logic_pio.c b/lib/logic_pio.c
index f32fe481b4922..07b4b9a1f54b6 100644
--- a/lib/logic_pio.c
+++ b/lib/logic_pio.c
@@ -28,6 +28,8 @@ static DEFINE_MUTEX(io_range_mutex);
  * @new_range: pointer to the IO range to be registered.
  *
  * Returns 0 on success, the error code in case of failure.
+ * If the range already exists, -EEXIST will be returned, which should be
+ * considered a success.
  *
  * Register a new IO range node in the IO range list.
  */
@@ -51,6 +53,7 @@ int logic_pio_register_range(struct logic_pio_hwaddr *new_range)
	list_for_each_entry(range, &io_range_list, list) {
		if (range->fwnode == new_range->fwnode) {
			/* range already there */
+			ret = -EEXIST;
			goto end_register;
		}
		if (range->flags == LOGIC_PIO_CPU_MMIO &&
diff --git a/lib/test_kasan.c b/lib/test_kasan.c
index 662f862702fc8..400507f1e5db0 100644
--- a/lib/test_kasan.c
+++ b/lib/test_kasan.c
@@ -737,13 +737,13 @@ static void kasan_bitops_tags(struct kunit *test)
		return;
	}
 
-	/* Allocation size will be rounded to up granule size, which is 16. */
-	bits = kzalloc(sizeof(*bits), GFP_KERNEL);
+	/* kmalloc-64 cache will be used and the last 16 bytes will be the redzone. */
+	bits = kzalloc(48, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, bits);
 
-	/* Do the accesses past the 16 allocated bytes. */
-	kasan_bitops_modify(test, BITS_PER_LONG, &bits[1]);
-	kasan_bitops_test_and_modify(test, BITS_PER_LONG + BITS_PER_BYTE, &bits[1]);
+	/* Do the accesses past the 48 allocated bytes, but within the redone. */
+	kasan_bitops_modify(test, BITS_PER_LONG, (void *)bits + 48);
+	kasan_bitops_test_and_modify(test, BITS_PER_LONG + BITS_PER_BYTE, (void *)bits + 48);
 
	kfree(bits);
 }
diff --git a/mm/madvise.c b/mm/madvise.c
index 9abf4c5f2bce2..24abc79f8914e 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -1202,12 +1202,22 @@ SYSCALL_DEFINE5(process_madvise, int, pidfd, const struct iovec __user *, vec,
 		goto release_task;
 	}
 
-	mm = mm_access(task, PTRACE_MODE_ATTACH_FSCREDS);
+	/* Require PTRACE_MODE_READ to avoid leaking ASLR metadata. */
+	mm = mm_access(task, PTRACE_MODE_READ_FSCREDS);
 	if (IS_ERR_OR_NULL(mm)) {
 		ret = IS_ERR(mm) ? PTR_ERR(mm) : -ESRCH;
 		goto release_task;
 	}
 
+	/*
+	 * Require CAP_SYS_NICE for influencing process performance. Note that
+	 * only non-destructive hints are currently supported.
+	 */
+	if (!capable(CAP_SYS_NICE)) {
+		ret = -EPERM;
+		goto release_mm;
+	}
+
 	total_len = iov_iter_count(&iter);
 
 	while (iov_iter_count(&iter)) {
@@ -1222,6 +1232,7 @@ SYSCALL_DEFINE5(process_madvise, int, pidfd, const struct iovec __user *, vec,
 	if (ret == 0)
 		ret = total_len - iov_iter_count(&iter);
 
+release_mm:
 	mmput(mm);
 release_task:
 	put_task_struct(task);
diff --git a/mm/memory.c b/mm/memory.c
index 827d42f9ebf7c..4d565d7c80169 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3090,6 +3090,14 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
 		return handle_userfault(vmf, VM_UFFD_WP);
 	}
 
+	/*
+	 * Userfaultfd write-protect can defer flushes. Ensure the TLB
+	 * is flushed in this case before copying.
+	 */
+	if (unlikely(userfaultfd_wp(vmf->vma) &&
+		     mm_tlb_flush_pending(vmf->vma->vm_mm)))
+		flush_tlb_page(vmf->vma, vmf->address);
+
 	vmf->page = vm_normal_page(vma, vmf->address, vmf->orig_pte);
 	if (!vmf->page) {
 		/*
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index aa453a4331437..b9de2df5b8358 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1020,7 +1020,7 @@ static int online_memory_block(struct memory_block *mem, void *arg)
  */
 int __ref add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags)
 {
-	struct mhp_params params = { .pgprot = PAGE_KERNEL };
+	struct mhp_params params = { .pgprot = pgprot_mhp(PAGE_KERNEL) };
 	u64 start, size;
 	bool new_node = false;
 	int ret;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 88639706ae177..690f79c781cf7 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6189,13 +6189,66 @@ static void __meminit zone_init_free_lists(struct zone *zone)
 	}
 }
 
+#if !defined(CONFIG_FLAT_NODE_MEM_MAP)
+/*
+ * Only struct pages that correspond to ranges defined by memblock.memory
+ * are zeroed and initialized by going through __init_single_page() during
+ * memmap_init_zone().
+ *
+ * But, there could be struct pages that correspond to holes in
+ * memblock.memory. This can happen because of the following reasons:
+ *  - physical memory bank size is not necessarily the exact multiple of the
+ *    arbitrary section size
+ *  - early reserved memory may not be listed in memblock.memory
+ *  - memory layouts defined with memmap= kernel parameter may not align
+ *    nicely with memmap sections
+ *
+ * Explicitly initialize those struct pages so that:
+ *  - PG_Reserved is set
+ *  - zone and node links point to zone and node that span the page if the
+ *    hole is in the middle of a zone
+ *  - zone and node links point to adjacent zone/node if the hole falls on
+ *    the zone boundary; the pages in such holes will be prepended to the
+ *    zone/node above the hole except for the trailing pages in the last
+ *    section that will be appended to the zone/node below.
+ */
+static u64 __meminit init_unavailable_range(unsigned long spfn,
+					    unsigned long epfn,
+					    int zone, int node)
+{
+	unsigned long pfn;
+	u64 pgcnt = 0;
+
+	for (pfn = spfn; pfn < epfn; pfn++) {
+		if (!pfn_valid(ALIGN_DOWN(pfn, pageblock_nr_pages))) {
+			pfn = ALIGN_DOWN(pfn, pageblock_nr_pages)
+				+ pageblock_nr_pages - 1;
+			continue;
+		}
+		__init_single_page(pfn_to_page(pfn), pfn, zone, node);
+		__SetPageReserved(pfn_to_page(pfn));
+		pgcnt++;
+	}
+
+	return pgcnt;
+}
+#else
+static inline u64 init_unavailable_range(unsigned long spfn, unsigned long epfn,
+					 int zone, int node)
+{
+	return 0;
+}
+#endif
+
 void __meminit __weak memmap_init(unsigned long size, int nid,
 				  unsigned long zone,
 				  unsigned long range_start_pfn)
 {
+	static unsigned long hole_pfn;
 	unsigned long start_pfn, end_pfn;
 	unsigned long range_end_pfn = range_start_pfn + size;
 	int i;
+	u64 pgcnt = 0;
 
 	for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
 		start_pfn = clamp(start_pfn, range_start_pfn, range_end_pfn);
@@ -6206,7 +6259,29 @@ void __meminit __weak memmap_init(unsigned long size, int nid,
 			memmap_init_zone(size, nid, zone, start_pfn, range_end_pfn,
 					 MEMINIT_EARLY, NULL, MIGRATE_MOVABLE);
 		}
+
+		if (hole_pfn < start_pfn)
+			pgcnt += init_unavailable_range(hole_pfn, start_pfn,
+							zone, nid);
+		hole_pfn = end_pfn;
 	}
+
+#ifdef CONFIG_SPARSEMEM
+	/*
+	 * Initialize the hole in the range [zone_end_pfn, section_end].
+	 * If zone boundary falls in the middle of a section, this hole
+	 * will be re-initialized during the call to this function for the
+	 * higher zone.
+	 */
+	end_pfn = round_up(range_end_pfn, PAGES_PER_SECTION);
+	if (hole_pfn < end_pfn)
+		pgcnt += init_unavailable_range(hole_pfn, end_pfn,
+						zone, nid);
+#endif
+
+	if (pgcnt)
+		pr_info("  %s zone: %llu pages in unavailable ranges\n",
+			zone_names[zone], pgcnt);
 }
 
 static int zone_batchsize(struct zone *zone)
@@ -6999,88 +7074,6 @@ void __init free_area_init_memoryless_node(int nid)
 	free_area_init_node(nid);
 }
 
-#if !defined(CONFIG_FLAT_NODE_MEM_MAP)
-/*
- * Initialize all valid struct pages in the range [spfn, epfn) and mark them
- * PageReserved(). Return the number of struct pages that were initialized.
- */
-static u64 __init init_unavailable_range(unsigned long spfn, unsigned long epfn)
-{
-	unsigned long pfn;
-	u64 pgcnt = 0;
-
-	for (pfn = spfn; pfn < epfn; pfn++) {
-		if (!pfn_valid(ALIGN_DOWN(pfn, pageblock_nr_pages))) {
-			pfn = ALIGN_DOWN(pfn, pageblock_nr_pages)
-				+ pageblock_nr_pages - 1;
-			continue;
-		}
-		/*
-		 * Use a fake node/zone (0) for now. Some of these pages
-		 * (in memblock.reserved but not in memblock.memory) will
-		 * get re-initialized via reserve_bootmem_region() later.
-		 */
-		__init_single_page(pfn_to_page(pfn), pfn, 0, 0);
-		__SetPageReserved(pfn_to_page(pfn));
-		pgcnt++;
-	}
-
-	return pgcnt;
-}
-
-/*
- * Only struct pages that are backed by physical memory are zeroed and
- * initialized by going through __init_single_page(). But, there are some
- * struct pages which are reserved in memblock allocator and their fields
- * may be accessed (for example page_to_pfn() on some configuration accesses
- * flags). We must explicitly initialize those struct pages.
- *
- * This function also addresses a similar issue where struct pages are left
- * uninitialized because the physical address range is not covered by
- * memblock.memory or memblock.reserved. That could happen when memblock
- * layout is manually configured via memmap=, or when the highest physical
- * address (max_pfn) does not end on a section boundary.
- */
-static void __init init_unavailable_mem(void)
-{
-	phys_addr_t start, end;
-	u64 i, pgcnt;
-	phys_addr_t next = 0;
-
-	/*
-	 * Loop through unavailable ranges not covered by memblock.memory.
-	 */
-	pgcnt = 0;
-	for_each_mem_range(i, &start, &end) {
-		if (next < start)
-			pgcnt += init_unavailable_range(PFN_DOWN(next),
-							PFN_UP(start));
-		next = end;
-	}
-
-	/*
-	 * Early sections always have a fully populated memmap for the whole
-	 * section - see pfn_valid(). If the last section has holes at the
-	 * end and that section is marked "online", the memmap will be
-	 * considered initialized. Make sure that memmap has a well defined
-	 * state.
-	 */
-	pgcnt += init_unavailable_range(PFN_DOWN(next),
-					round_up(max_pfn, PAGES_PER_SECTION));
-
-	/*
-	 * Struct pages that do not have backing memory. This could be because
-	 * firmware is using some of this memory, or for some other reasons.
-	 */
-	if (pgcnt)
-		pr_info("Zeroed struct page in unavailable ranges: %lld pages", pgcnt);
-}
-#else
-static inline void __init init_unavailable_mem(void)
-{
-}
-#endif /* !CONFIG_FLAT_NODE_MEM_MAP */
-
 #if MAX_NUMNODES > 1
 /*
  * Figure out the number of possible node ids.
@@ -7504,7 +7497,6 @@ void __init free_area_init(unsigned long *max_zone_pfn)
 	/* Initialise every node */
 	mminit_verify_pageflags_layout();
 	setup_nr_node_ids();
-	init_unavailable_mem();
 	for_each_online_node(nid) {
 		pg_data_t *pgdat = NODE_DATA(nid);
 		free_area_init_node(nid);
diff --git a/mm/slub.c b/mm/slub.c
index 7b378e2ce270d..fbc415c340095 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1971,7 +1971,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 
 		t = acquire_slab(s, n, page, object == NULL, &objects);
 		if (!t)
-			continue; /* cmpxchg raced */
+			break;
 
 		available += objects;
 		if (!object) {
diff --git a/net/dsa/slave.c b/net/dsa/slave.c
index 3bc5ca40c9fbb..c6806eef906f9 100644
--- a/net/dsa/slave.c
+++ b/net/dsa/slave.c
@@ -548,6 +548,30 @@ netdev_tx_t dsa_enqueue_skb(struct sk_buff *skb, struct net_device *dev)
 }
 EXPORT_SYMBOL_GPL(dsa_enqueue_skb);
 
+static int dsa_realloc_skb(struct sk_buff *skb, struct net_device *dev)
+{
+	int needed_headroom = dev->needed_headroom;
+	int needed_tailroom = dev->needed_tailroom;
+
+	/* For tail taggers, we need to pad short frames ourselves, to ensure
+	 * that the tail tag does not fail at its role of being at the end of
+	 * the packet, once the master interface pads the frame. Account for
+	 * that pad length here, and pad later.
+	 */
+	if (unlikely(needed_tailroom && skb->len < ETH_ZLEN))
+		needed_tailroom += ETH_ZLEN - skb->len;
+	/* skb_headroom() returns unsigned int... */
+	needed_headroom = max_t(int, needed_headroom - skb_headroom(skb), 0);
+	needed_tailroom = max_t(int, needed_tailroom - skb_tailroom(skb), 0);
+
+	if (likely(!needed_headroom && !needed_tailroom && !skb_cloned(skb)))
+		/* No reallocation needed, yay! */
+		return 0;
+
+	return pskb_expand_head(skb, needed_headroom, needed_tailroom,
+				GFP_ATOMIC);
+}
+
 static netdev_tx_t dsa_slave_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct dsa_slave_priv *p = netdev_priv(dev);
@@ -567,6 +591,17 @@ static netdev_tx_t dsa_slave_xmit(struct sk_buff *skb, struct net_device *dev)
 	 */
 	dsa_skb_tx_timestamp(p, skb);
 
+	if (dsa_realloc_skb(skb, dev)) {
+		dev_kfree_skb_any(skb);
+		return NETDEV_TX_OK;
+	}
+
+	/* needed_tailroom should still be 'warm' in the cache line from
+	 * dsa_realloc_skb(), which has also ensured that padding is safe.
+	 */
+	if (dev->needed_tailroom)
+		eth_skb_pad(skb);
+
 	/* Transmit function may have to reallocate the original SKB,
 	 * in which case it must have freed it. Only free it here on error.
 	 */
@@ -1791,6 +1826,16 @@ int dsa_slave_create(struct dsa_port *port)
 	slave_dev->netdev_ops = &dsa_slave_netdev_ops;
 	if (ds->ops->port_max_mtu)
 		slave_dev->max_mtu = ds->ops->port_max_mtu(ds, port->index);
+	if (cpu_dp->tag_ops->tail_tag)
+		slave_dev->needed_tailroom = cpu_dp->tag_ops->overhead;
+	else
+		slave_dev->needed_headroom = cpu_dp->tag_ops->overhead;
+	/* Try to save one extra realloc later in the TX path (in the master)
+	 * by also inheriting the master's needed headroom and tailroom.
+	 * The 8021q driver also does this.
+	 */
+	slave_dev->needed_headroom += master->needed_headroom;
+	slave_dev->needed_tailroom += master->needed_tailroom;
 	SET_NETDEV_DEVTYPE(slave_dev, &dsa_type);
 
 	netdev_for_each_tx_queue(slave_dev, dsa_slave_set_lockdep_class_one,
diff --git a/net/dsa/tag_ar9331.c b/net/dsa/tag_ar9331.c
index 55b00694cdba1..002cf7f952e2d 100644
--- a/net/dsa/tag_ar9331.c
+++ b/net/dsa/tag_ar9331.c
@@ -31,9 +31,6 @@ static struct sk_buff *ar9331_tag_xmit(struct sk_buff *skb,
 	__le16 *phdr;
 	u16 hdr;
 
-	if (skb_cow_head(skb, AR9331_HDR_LEN) < 0)
-		return NULL;
-
 	phdr = skb_push(skb, AR9331_HDR_LEN);
 
 	hdr = FIELD_PREP(AR9331_HDR_VERSION_MASK, AR9331_HDR_VERSION);
diff --git a/net/dsa/tag_brcm.c b/net/dsa/tag_brcm.c
index ad72dff8d5242..e934dace39227 100644
--- a/net/dsa/tag_brcm.c
+++ b/net/dsa/tag_brcm.c
@@ -66,9 +66,6 @@ static struct sk_buff *brcm_tag_xmit_ll(struct sk_buff *skb,
 	u16 queue = skb_get_queue_mapping(skb);
 	u8 *brcm_tag;
 
-	if (skb_cow_head(skb, BRCM_TAG_LEN) < 0)
-		return NULL;
-
 	/* The Ethernet switch we are interfaced with needs packets to be at
 	 * least 64 bytes (including FCS) otherwise they will be discarded when
 	 * they enter the switch port logic. When Broadcom tags are enabled, we
diff --git a/net/dsa/tag_dsa.c b/net/dsa/tag_dsa.c
index 0b756fae68a5f..63d690a0fca6f 100644
--- a/net/dsa/tag_dsa.c
+++ b/net/dsa/tag_dsa.c
@@ -23,9 +23,6 @@ static struct sk_buff *dsa_xmit(struct sk_buff *skb, struct net_device *dev)
 	 * the ethertype field for untagged packets.
 	 */
 	if (skb->protocol == htons(ETH_P_8021Q)) {
-		if (skb_cow_head(skb, 0) < 0)
-			return NULL;
-
 		/*
 		 * Construct tagged FROM_CPU DSA tag from 802.1q tag.
 		 */
@@ -41,8 +38,6 @@ static struct sk_buff *dsa_xmit(struct sk_buff *skb, struct net_device *dev)
 			dsa_header[2] &= ~0x10;
 		}
 	} else {
-		if (skb_cow_head(skb, DSA_HLEN) < 0)
-			return NULL;
 		skb_push(skb, DSA_HLEN);
 
 		memmove(skb->data, skb->data + DSA_HLEN, 2 * ETH_ALEN);
diff --git a/net/dsa/tag_edsa.c b/net/dsa/tag_edsa.c
index 1206142403197..abf70a29deb43 100644
--- a/net/dsa/tag_edsa.c
+++ b/net/dsa/tag_edsa.c
@@ -35,8 +35,6 @@ static struct sk_buff *edsa_xmit(struct sk_buff *skb, struct net_device *dev)
 	 * current ethertype field if the packet is untagged.
 	 */
 	if (skb->protocol == htons(ETH_P_8021Q)) {
-		if (skb_cow_head(skb, DSA_HLEN) < 0)
-			return NULL;
 		skb_push(skb, DSA_HLEN);
 
 		memmove(skb->data, skb->data + DSA_HLEN, 2 * ETH_ALEN);
@@ -60,8 +58,6 @@ static struct sk_buff *edsa_xmit(struct sk_buff *skb, struct net_device *dev)
 			edsa_header[6] &= ~0x10;
 		}
 	} else {
-		if (skb_cow_head(skb, EDSA_HLEN) < 0)
-			return NULL;
 		skb_push(skb, EDSA_HLEN);
 
 		memmove(skb->data, skb->data + EDSA_HLEN, 2 * ETH_ALEN);
diff --git a/net/dsa/tag_gswip.c b/net/dsa/tag_gswip.c
index 408d4af390a0e..2f5bd5e338ab5 100644
--- a/net/dsa/tag_gswip.c
+++ b/net/dsa/tag_gswip.c
@@ -60,13 +60,8 @@ static struct sk_buff *gswip_tag_xmit(struct sk_buff *skb,
 				      struct net_device *dev)
 {
 	struct dsa_port *dp = dsa_slave_to_port(dev);
-	int err;
 	u8 *gswip_tag;
 
-	err = skb_cow_head(skb, GSWIP_TX_HEADER_LEN);
-	if (err)
-		return NULL;
-
 	skb_push(skb, GSWIP_TX_HEADER_LEN);
 
 	gswip_tag = skb->data;
diff --git a/net/dsa/tag_ksz.c b/net/dsa/tag_ksz.c
index 0a5aa982c60d9..4820dbcedfa2d 100644
--- a/net/dsa/tag_ksz.c
+++ b/net/dsa/tag_ksz.c
@@ -14,46 +14,6 @@
 #define KSZ_EGRESS_TAG_LEN	1
 #define KSZ_INGRESS_TAG_LEN	1
 
-static struct sk_buff *ksz_common_xmit(struct sk_buff *skb,
-				       struct net_device *dev, int len)
-{
-	struct sk_buff *nskb;
-	int padlen;
-
-	padlen = (skb->len >= ETH_ZLEN) ? 0 : ETH_ZLEN - skb->len;
-
-	if (skb_tailroom(skb) >= padlen + len) {
-		/* Let dsa_slave_xmit() free skb */
-		if (__skb_put_padto(skb, skb->len + padlen, false))
-			return NULL;
-
-		nskb = skb;
-	} else {
-		nskb = alloc_skb(NET_IP_ALIGN + skb->len +
-				 padlen + len, GFP_ATOMIC);
-		if (!nskb)
-			return NULL;
-		skb_reserve(nskb, NET_IP_ALIGN);
-
-		skb_reset_mac_header(nskb);
-		skb_set_network_header(nskb,
-				       skb_network_header(skb) - skb->head);
-		skb_set_transport_header(nskb,
-					 skb_transport_header(skb) - skb->head);
-		skb_copy_and_csum_dev(skb, skb_put(nskb, skb->len));
-
-		/* Let skb_put_padto() free nskb, and let dsa_slave_xmit() free
-		 * skb
-		 */
-		if (skb_put_padto(nskb, nskb->len + padlen))
-			return NULL;
-
-		consume_skb(skb);
-	}
-
-	return nskb;
-}
-
 static struct sk_buff *ksz_common_rcv(struct sk_buff *skb,
 				      struct net_device *dev,
 				      unsigned int port, unsigned int len)
@@ -90,23 +50,18 @@ static struct sk_buff *ksz_common_rcv(struct sk_buff *skb,
 static struct sk_buff *ksz8795_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct dsa_port *dp = dsa_slave_to_port(dev);
-	struct sk_buff *nskb;
 	u8 *tag;
 	u8 *addr;
 
-	nskb = ksz_common_xmit(skb, dev, KSZ_INGRESS_TAG_LEN);
-	if (!nskb)
-		return NULL;
-
 	/* Tag encoding */
-	tag = skb_put(nskb, KSZ_INGRESS_TAG_LEN);
-	addr = skb_mac_header(nskb);
+	tag = skb_put(skb, KSZ_INGRESS_TAG_LEN);
+	addr = skb_mac_header(skb);
 
 	*tag = 1 << dp->index;
 	if (is_link_local_ether_addr(addr))
 		*tag |= KSZ8795_TAIL_TAG_OVERRIDE;
 
-	return nskb;
+	return skb;
 }
 
 static struct sk_buff *ksz8795_rcv(struct sk_buff *skb, struct net_device *dev,
@@ -156,18 +111,13 @@ static struct sk_buff *ksz9477_xmit(struct sk_buff *skb,
 				    struct net_device *dev)
 {
 	struct dsa_port *dp = dsa_slave_to_port(dev);
-	struct sk_buff *nskb;
 	__be16 *tag;
 	u8 *addr;
 	u16 val;
 
-	nskb = ksz_common_xmit(skb, dev, KSZ9477_INGRESS_TAG_LEN);
-	if (!nskb)
-		return NULL;
-
 	/* Tag encoding */
-	tag = skb_put(nskb, KSZ9477_INGRESS_TAG_LEN);
-	addr = skb_mac_header(nskb);
+	tag = skb_put(skb, KSZ9477_INGRESS_TAG_LEN);
+	addr = skb_mac_header(skb);
 
 	val = BIT(dp->index);
 
@@ -176,7 +126,7 @@ static struct sk_buff *ksz9477_xmit(struct sk_buff *skb,
 
 	*tag = cpu_to_be16(val);
 
-	return nskb;
+	return skb;
 }
 
 static struct sk_buff *ksz9477_rcv(struct sk_buff *skb, struct net_device *dev,
@@ -213,24 +163,19 @@ static struct sk_buff *ksz9893_xmit(struct sk_buff *skb,
 				    struct net_device *dev)
 {
 	struct dsa_port *dp = dsa_slave_to_port(dev);
-	struct sk_buff *nskb;
 	u8 *addr;
 	u8 *tag;
 
-	nskb = ksz_common_xmit(skb, dev, KSZ_INGRESS_TAG_LEN);
-	if (!nskb)
-		return NULL;
-
 	/* Tag encoding */
-	tag = skb_put(nskb, KSZ_INGRESS_TAG_LEN);
-	addr = skb_mac_header(nskb);
+	tag = skb_put(skb, KSZ_INGRESS_TAG_LEN);
+	addr = skb_mac_header(skb);
 
 	*tag = BIT(dp->index);
 
 	if (is_link_local_ether_addr(addr))
 		*tag |= KSZ9893_TAIL_TAG_OVERRIDE;
 
-	return nskb;
+	return skb;
 }
 
 static const struct dsa_device_ops ksz9893_netdev_ops = {
diff --git a/net/dsa/tag_lan9303.c b/net/dsa/tag_lan9303.c
index ccfb6f641bbfb..aa1318dccaf0a 100644
--- a/net/dsa/tag_lan9303.c
+++ b/net/dsa/tag_lan9303.c
@@ -58,15 +58,6 @@ static struct sk_buff *lan9303_xmit(struct sk_buff *skb, struct net_device *dev)
 	__be16 *lan9303_tag;
 	u16 tag;
 
-	/* insert a special VLAN tag between the MAC addresses
-	 * and the current ethertype field.
-	 */
-	if (skb_cow_head(skb, LAN9303_TAG_LEN) < 0) {
-		dev_dbg(&dev->dev,
-			"Cannot make room for the special tag. Dropping packet\n");
-		return NULL;
-	}
-
 	/* provide 'LAN9303_TAG_LEN' bytes additional space */
 	skb_push(skb, LAN9303_TAG_LEN);
 
diff --git a/net/dsa/tag_mtk.c b/net/dsa/tag_mtk.c
index 4cdd9cf428fbf..59748487664fe 100644
--- a/net/dsa/tag_mtk.c
+++ b/net/dsa/tag_mtk.c
@@ -13,6 +13,7 @@
 #define MTK_HDR_LEN		4
 #define MTK_HDR_XMIT_UNTAGGED		0
 #define MTK_HDR_XMIT_TAGGED_TPID_8100	1
+#define MTK_HDR_XMIT_TAGGED_TPID_88A8	2
 #define MTK_HDR_RECV_SOURCE_PORT_MASK	GENMASK(2, 0)
 #define MTK_HDR_XMIT_DP_BIT_MASK	GENMASK(5, 0)
 #define MTK_HDR_XMIT_SA_DIS		BIT(6)
@@ -21,8 +22,8 @@ static struct sk_buff *mtk_tag_xmit(struct sk_buff *skb,
 				    struct net_device *dev)
 {
 	struct dsa_port *dp = dsa_slave_to_port(dev);
+	u8 xmit_tpid;
 	u8 *mtk_tag;
-	bool is_vlan_skb = true;
 	unsigned char *dest = eth_hdr(skb)->h_dest;
 	bool is_multicast_skb = is_multicast_ether_addr(dest) &&
 				!is_broadcast_ether_addr(dest);
@@ -33,13 +34,17 @@ static struct sk_buff *mtk_tag_xmit(struct sk_buff *skb,
 	 * the both special and VLAN tag at the same time and then look up VLAN
 	 * table with VID.
 	 */
-	if (!skb_vlan_tagged(skb)) {
-		if (skb_cow_head(skb, MTK_HDR_LEN) < 0)
-			return NULL;
-
+	switch (skb->protocol) {
+	case htons(ETH_P_8021Q):
+		xmit_tpid = MTK_HDR_XMIT_TAGGED_TPID_8100;
+		break;
+	case htons(ETH_P_8021AD):
+		xmit_tpid = MTK_HDR_XMIT_TAGGED_TPID_88A8;
+		break;
+	default:
+		xmit_tpid = MTK_HDR_XMIT_UNTAGGED;
 		skb_push(skb, MTK_HDR_LEN);
 		memmove(skb->data, skb->data + MTK_HDR_LEN, 2 * ETH_ALEN);
-		is_vlan_skb = false;
 	}
 
 	mtk_tag = skb->data + 2 * ETH_ALEN;
@@ -47,8 +52,7 @@ static struct sk_buff *mtk_tag_xmit(struct sk_buff *skb,
 	/* Mark tag attribute on special tag insertion to notify hardware
 	 * whether that's a combined special tag with 802.1Q header.
 	 */
-	mtk_tag[0] = is_vlan_skb ? MTK_HDR_XMIT_TAGGED_TPID_8100 :
-		     MTK_HDR_XMIT_UNTAGGED;
+	mtk_tag[0] = xmit_tpid;
 	mtk_tag[1] = (1 << dp->index) & MTK_HDR_XMIT_DP_BIT_MASK;
 
 	/* Disable SA learning for multicast frames */
@@ -56,7 +60,7 @@ static struct sk_buff *mtk_tag_xmit(struct sk_buff *skb,
 		mtk_tag[1] |= MTK_HDR_XMIT_SA_DIS;
 
 	/* Tag control information is kept for 802.1Q */
-	if (!is_vlan_skb) {
+	if (xmit_tpid == MTK_HDR_XMIT_UNTAGGED) {
 		mtk_tag[2] = 0;
 		mtk_tag[3] = 0;
 	}
diff --git a/net/dsa/tag_ocelot.c b/net/dsa/tag_ocelot.c
index 3b468aca5c53f..16a1afd5b8e14 100644
--- a/net/dsa/tag_ocelot.c
+++ b/net/dsa/tag_ocelot.c
@@ -143,13 +143,6 @@ static struct sk_buff *ocelot_xmit(struct sk_buff *skb,
 	struct ocelot_port *ocelot_port;
 	u8 *prefix, *injection;
 	u64 qos_class, rew_op;
-	int err;
-
-	err = skb_cow_head(skb, OCELOT_TOTAL_TAG_LEN);
-	if (unlikely(err < 0)) {
-		netdev_err(netdev, "Cannot make room for tag.\n");
-		return NULL;
-	}
 
 	ocelot_port = ocelot->ports[dp->index];
 
diff --git a/net/dsa/tag_qca.c b/net/dsa/tag_qca.c
index 1b9e8507112b5..88181b52f480b 100644
--- a/net/dsa/tag_qca.c
+++ b/net/dsa/tag_qca.c
@@ -34,9 +34,6 @@ static struct sk_buff *qca_tag_xmit(struct sk_buff *skb, struct net_device *dev)
 	__be16 *phdr;
 	u16 hdr;
 
-	if (skb_cow_head(skb, QCA_HDR_LEN) < 0)
-		return NULL;
-
 	skb_push(skb, QCA_HDR_LEN);
 
 	memmove(skb->data, skb->data + QCA_HDR_LEN, 2 * ETH_ALEN);
diff --git a/net/dsa/tag_rtl4_a.c b/net/dsa/tag_rtl4_a.c
index c17d39b4a1a04..e9176475bac89 100644
--- a/net/dsa/tag_rtl4_a.c
+++ b/net/dsa/tag_rtl4_a.c
@@ -35,14 +35,12 @@ static struct sk_buff *rtl4a_tag_xmit(struct sk_buff *skb,
 				      struct net_device *dev)
 {
 	struct dsa_port *dp = dsa_slave_to_port(dev);
+	__be16 *p;
 	u8 *tag;
-	u16 *p;
 	u16 out;
 
 	/* Pad out to at least 60 bytes */
-	if (unlikely(eth_skb_pad(skb)))
-		return NULL;
-	if (skb_cow_head(skb, RTL4_A_HDR_LEN) < 0)
+	if (unlikely(__skb_put_padto(skb, ETH_ZLEN, false)))
 		return NULL;
 
 	netdev_dbg(dev, "add realtek tag to package to port %d\n",
@@ -53,13 +51,13 @@ static struct sk_buff *rtl4a_tag_xmit(struct sk_buff *skb,
 	tag = skb->data + 2 * ETH_ALEN;
 
 	/* Set Ethertype */
-	p = (u16 *)tag;
+	p = (__be16 *)tag;
 	*p = htons(RTL4_A_ETHERTYPE);
 
 	out = (RTL4_A_PROTOCOL_RTL8366RB << 12) | (2 << 8);
-	/* The lower bits is the port numer */
+	/* The lower bits is the port number */
 	out |= (u8)dp->index;
-	p = (u16 *)(tag + 2);
+	p = (__be16 *)(tag + 2);
 	*p = htons(out);
 
 	return skb;
diff --git a/net/dsa/tag_trailer.c b/net/dsa/tag_trailer.c
index 3a1cc24a4f0a5..5b97ede56a0fd 100644
--- a/net/dsa/tag_trailer.c
+++ b/net/dsa/tag_trailer.c
@@ -13,42 +13,15 @@
 static struct sk_buff *trailer_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct dsa_port *dp = dsa_slave_to_port(dev);
-	struct sk_buff *nskb;
-	int padlen;
 	u8 *trailer;
 
-	/*
-	 * We have to make sure that the trailer ends up as the very
-	 * last 4 bytes of the packet. This means that we have to pad
-	 * the packet to the minimum ethernet frame size, if necessary,
-	 * before adding the trailer.
-	 */
-	padlen = 0;
-	if (skb->len < 60)
-		padlen = 60 - skb->len;
-
-	nskb = alloc_skb(NET_IP_ALIGN + skb->len + padlen + 4, GFP_ATOMIC);
-	if (!nskb)
-		return NULL;
-	skb_reserve(nskb, NET_IP_ALIGN);
-
-	skb_reset_mac_header(nskb);
-	skb_set_network_header(nskb, skb_network_header(skb) - skb->head);
-	skb_set_transport_header(nskb, skb_transport_header(skb) - skb->head);
-	skb_copy_and_csum_dev(skb, skb_put(nskb, skb->len));
-	consume_skb(skb);
-
-	if (padlen) {
-		skb_put_zero(nskb, padlen);
-	}
-
-	trailer = skb_put(nskb, 4);
+	trailer = skb_put(skb, 4);
 	trailer[0] = 0x80;
 	trailer[1] = 1 << dp->index;
 	trailer[2] = 0x10;
 	trailer[3] = 0x00;
 
-	return nskb;
+	return skb;
 }
 
 static struct sk_buff *trailer_rcv(struct sk_buff *skb, struct net_device *dev,
diff --git a/net/ethtool/channels.c b/net/ethtool/channels.c
index 25a9e566ef5cd..6a070dc8e4b0d 100644
--- a/net/ethtool/channels.c
+++ b/net/ethtool/channels.c
@@ -116,10 +116,9 @@ int ethnl_set_channels(struct sk_buff *skb, struct genl_info *info)
 	struct ethtool_channels channels = {};
 	struct ethnl_req_info req_info = {};
 	struct nlattr **tb = info->attrs;
-	const struct nlattr *err_attr;
+	u32 err_attr, max_rx_in_use = 0;
 	const struct ethtool_ops *ops;
 	struct net_device *dev;
-	u32 max_rx_in_use = 0;
 	int ret;
 
 	ret = ethnl_parse_header_dev_get(&req_info,
@@ -157,34 +156,35 @@ int ethnl_set_channels(struct sk_buff *skb, struct genl_info *info)
 
 	/* ensure new channel counts are within limits */
 	if (channels.rx_count > channels.max_rx)
-		err_attr = tb[ETHTOOL_A_CHANNELS_RX_COUNT];
+		err_attr = ETHTOOL_A_CHANNELS_RX_COUNT;
 	else if (channels.tx_count > channels.max_tx)
-		err_attr = tb[ETHTOOL_A_CHANNELS_TX_COUNT];
+		err_attr = ETHTOOL_A_CHANNELS_TX_COUNT;
 	else if (channels.other_count > channels.max_other)
-		err_attr = tb[ETHTOOL_A_CHANNELS_OTHER_COUNT];
+		err_attr = ETHTOOL_A_CHANNELS_OTHER_COUNT;
 	else if (channels.combined_count > channels.max_combined)
-		err_attr = tb[ETHTOOL_A_CHANNELS_COMBINED_COUNT];
+		err_attr = ETHTOOL_A_CHANNELS_COMBINED_COUNT;
 	else
-		err_attr = NULL;
+		err_attr = 0;
 	if (err_attr) {
 		ret = -EINVAL;
-		NL_SET_ERR_MSG_ATTR(info->extack, err_attr,
+		NL_SET_ERR_MSG_ATTR(info->extack, tb[err_attr],
 				    "requested channel count exceeds maximum");
 		goto out_ops;
 	}
 
 	/* ensure there is at least one RX and one TX channel */
 	if (!channels.combined_count && !channels.rx_count)
-		err_attr = tb[ETHTOOL_A_CHANNELS_RX_COUNT];
+		err_attr = ETHTOOL_A_CHANNELS_RX_COUNT;
 	else if (!channels.combined_count && !channels.tx_count)
-		err_attr = tb[ETHTOOL_A_CHANNELS_TX_COUNT];
+		err_attr = ETHTOOL_A_CHANNELS_TX_COUNT;
 	else
-		err_attr = NULL;
+		err_attr = 0;
 	if (err_attr) {
 		if (mod_combined)
-			err_attr = tb[ETHTOOL_A_CHANNELS_COMBINED_COUNT];
+			err_attr = ETHTOOL_A_CHANNELS_COMBINED_COUNT;
 		ret = -EINVAL;
-		NL_SET_ERR_MSG_ATTR(info->extack, err_attr, "requested channel counts would result in no RX or TX channel being configured");
+		NL_SET_ERR_MSG_ATTR(info->extack, tb[err_attr],
+				    "requested channel counts would result in no RX or TX channel being configured");
 		goto out_ops;
 	}
 
diff --git a/net/ipv4/cipso_ipv4.c b/net/ipv4/cipso_ipv4.c
index 471d33a0d095f..be09c7669a799 100644
--- a/net/ipv4/cipso_ipv4.c
+++ b/net/ipv4/cipso_ipv4.c
@@ -519,16 +519,10 @@ int cipso_v4_doi_remove(u32 doi, struct netlbl_audit *audit_info)
 		ret_val = -ENOENT;
 		goto doi_remove_return;
 	}
-	if (!refcount_dec_and_test(&doi_def->refcount)) {
-		spin_unlock(&cipso_v4_doi_list_lock);
-		ret_val = -EBUSY;
-		goto doi_remove_return;
-	}
 	list_del_rcu(&doi_def->list);
 	spin_unlock(&cipso_v4_doi_list_lock);
 
-	cipso_v4_cache_invalidate();
-	call_rcu(&doi_def->rcu, cipso_v4_doi_free_rcu);
+	cipso_v4_doi_putdef(doi_def);
 	ret_val = 0;
 
 doi_remove_return:
@@ -585,9 +579,6 @@ void cipso_v4_doi_putdef(struct cipso_v4_doi *doi_def)
 
 	if (!refcount_dec_and_test(&doi_def->refcount))
 		return;
-	spin_lock(&cipso_v4_doi_list_lock);
-	list_del_rcu(&doi_def->list);
-	spin_unlock(&cipso_v4_doi_list_lock);
 
 	cipso_v4_cache_invalidate();
 	call_rcu(&doi_def->rcu, cipso_v4_doi_free_rcu);
diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
index 76a420c76f16e..f6cc26de5ed30 100644
--- a/net/ipv4/ip_tunnel.c
+++ b/net/ipv4/ip_tunnel.c
@@ -502,8 +502,7 @@ static int tnl_update_pmtu(struct net_device *dev, struct sk_buff *skb,
 		if (!skb_is_gso(skb) &&
 		    (inner_iph->frag_off & htons(IP_DF)) &&
 		    mtu < pkt_size) {
-			memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
-			icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, htonl(mtu));
+			icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, htonl(mtu));
 			return -E2BIG;
 		}
 	}
@@ -527,7 +526,7 @@ static int tnl_update_pmtu(struct net_device *dev, struct sk_buff *skb,
 
 		if (!skb_is_gso(skb) && mtu >= IPV6_MIN_MTU &&
 		    mtu < pkt_size) {
-			icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
+			icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
 			return -E2BIG;
 		}
 	}
diff --git a/net/ipv4/ip_vti.c b/net/ipv4/ip_vti.c
index b957cbee2cf7b..84a818b09beeb 100644
--- a/net/ipv4/ip_vti.c
+++ b/net/ipv4/ip_vti.c
@@ -238,13 +238,13 @@ static netdev_tx_t vti_xmit(struct sk_buff *skb, struct net_device *dev,
 	if (skb->len > mtu) {
 		skb_dst_update_pmtu_no_confirm(skb, mtu);
 		if (skb->protocol == htons(ETH_P_IP)) {
-			icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
-				  htonl(mtu));
+			icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
+				      htonl(mtu));
 		} else {
 			if (mtu < IPV6_MIN_MTU)
 				mtu = IPV6_MIN_MTU;
 
-			icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
+			icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
 		}
 
 		dst_release(dst);
diff --git a/net/ipv4/nexthop.c b/net/ipv4/nexthop.c
index f63f7ada51b36..f2d313c5900df 100644
--- a/net/ipv4/nexthop.c
+++ b/net/ipv4/nexthop.c
@@ -1182,7 +1182,7 @@ out:
 
 /* rtnl */
 /* remove all nexthops tied to a device being deleted */
-static void nexthop_flush_dev(struct net_device *dev)
+static void nexthop_flush_dev(struct net_device *dev, unsigned long event)
 {
 	unsigned int hash = nh_dev_hashfn(dev->ifindex);
 	struct net *net = dev_net(dev);
@@ -1194,6 +1194,10 @@ static void nexthop_flush_dev(struct net_device *dev)
 		if (nhi->fib_nhc.nhc_dev != dev)
 			continue;
 
+		if (nhi->reject_nh &&
+		    (event == NETDEV_DOWN || event == NETDEV_CHANGE))
+			continue;
+
 		remove_nexthop(net, nhi->nh_parent, NULL);
 	}
 }
@@ -1940,11 +1944,11 @@ static int nh_netdev_event(struct notifier_block *this,
 	switch (event) {
 	case NETDEV_DOWN:
 	case NETDEV_UNREGISTER:
-		nexthop_flush_dev(dev);
+		nexthop_flush_dev(dev, event);
 		break;
 	case NETDEV_CHANGE:
 		if (!(dev_get_flags(dev) & (IFF_RUNNING | IFF_LOWER_UP)))
-			nexthop_flush_dev(dev);
+			nexthop_flush_dev(dev, event);
 		break;
 	case NETDEV_CHANGEMTU:
 		info_ext = ptr;
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 41d03683b13d6..2384ac048bead 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -3164,16 +3164,23 @@ static int do_tcp_setsockopt(struct sock *sk, int level, int optname,
 		break;
 
 	case TCP_QUEUE_SEQ:
-		if (sk->sk_state != TCP_CLOSE)
+		if (sk->sk_state != TCP_CLOSE) {
 			err = -EPERM;
-		else if (tp->repair_queue == TCP_SEND_QUEUE)
-			WRITE_ONCE(tp->write_seq, val);
-		else if (tp->repair_queue == TCP_RECV_QUEUE) {
-			WRITE_ONCE(tp->rcv_nxt, val);
-			WRITE_ONCE(tp->copied_seq, val);
-		}
-		else
+		} else if (tp->repair_queue == TCP_SEND_QUEUE) {
+			if (!tcp_rtx_queue_empty(sk))
+				err = -EPERM;
+			else
+				WRITE_ONCE(tp->write_seq, val);
+		} else if (tp->repair_queue == TCP_RECV_QUEUE) {
+			if (tp->rcv_nxt != tp->copied_seq) {
+				err = -EPERM;
+			} else {
+				WRITE_ONCE(tp->rcv_nxt, val);
+				WRITE_ONCE(tp->copied_seq, val);
+			}
+		} else {
 			err = -EINVAL;
+		}
 		break;
 
 	case TCP_REPAIR_OPTIONS:
@@ -3829,7 +3836,8 @@ static int do_tcp_getsockopt(struct sock *sk, int level,
 
 		if (get_user(len, optlen))
 			return -EFAULT;
-		if (len < offsetofend(struct tcp_zerocopy_receive, length))
+		if (len < 0 ||
+		    len < offsetofend(struct tcp_zerocopy_receive, length))
 			return -EINVAL;
 		if (len > sizeof(zc)) {
 			len = sizeof(zc);
diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
index cfdaac4a57e41..6e2b02cf78418 100644
--- a/net/ipv4/udp_offload.c
+++ b/net/ipv4/udp_offload.c
@@ -522,7 +522,7 @@ struct sk_buff *udp_gro_receive(struct list_head *head, struct sk_buff *skb,
 	}
 
 	if (!sk || NAPI_GRO_CB(skb)->encap_mark ||
-	    (skb->ip_summed != CHECKSUM_PARTIAL &&
+	    (uh->check && skb->ip_summed != CHECKSUM_PARTIAL &&
 	     NAPI_GRO_CB(skb)->csum_cnt == 0 &&
 	     !NAPI_GRO_CB(skb)->csum_valid) ||
 	    !udp_sk(sk)->gro_receive)
diff --git a/net/ipv6/calipso.c b/net/ipv6/calipso.c
index 78f766019b7e0..0ea66e9db2495 100644
--- a/net/ipv6/calipso.c
+++ b/net/ipv6/calipso.c
@@ -83,6 +83,9 @@ struct calipso_map_cache_entry {
 
 static struct calipso_map_cache_bkt *calipso_cache;
 
+static void calipso_cache_invalidate(void);
+static void calipso_doi_putdef(struct calipso_doi *doi_def);
+
 /* Label Mapping Cache Functions
 */
 
@@ -444,15 +447,10 @@ static int calipso_doi_remove(u32 doi, struct netlbl_audit *audit_info)
 		ret_val = -ENOENT;
 		goto doi_remove_return;
 	}
-	if (!refcount_dec_and_test(&doi_def->refcount)) {
-		spin_unlock(&calipso_doi_list_lock);
-		ret_val = -EBUSY;
-		goto doi_remove_return;
-	}
 	list_del_rcu(&doi_def->list);
 	spin_unlock(&calipso_doi_list_lock);
 
-	call_rcu(&doi_def->rcu, calipso_doi_free_rcu);
+	calipso_doi_putdef(doi_def);
 	ret_val = 0;
 
 doi_remove_return:
@@ -508,10 +506,8 @@ static void calipso_doi_putdef(struct calipso_doi *doi_def)
 
 	if (!refcount_dec_and_test(&doi_def->refcount))
 		return;
-	spin_lock(&calipso_doi_list_lock);
-	list_del_rcu(&doi_def->list);
-	spin_unlock(&calipso_doi_list_lock);
 
+	calipso_cache_invalidate();
 	call_rcu(&doi_def->rcu, calipso_doi_free_rcu);
 }
 
diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
index cf6e1380b527c..640f71a7b29d9 100644
--- a/net/ipv6/ip6_gre.c
+++ b/net/ipv6/ip6_gre.c
@@ -678,8 +678,8 @@ static int prepare_ip6gre_xmit_ipv6(struct sk_buff *skb,
 
 		tel = (struct ipv6_tlv_tnl_enc_lim *)&skb_network_header(skb)[offset];
 		if (tel->encap_limit == 0) {
-			icmpv6_send(skb, ICMPV6_PARAMPROB,
-				    ICMPV6_HDR_FIELD, offset + 2);
+			icmpv6_ndo_send(skb, ICMPV6_PARAMPROB,
+					ICMPV6_HDR_FIELD, offset + 2);
 			return -1;
 		}
 		*encap_limit = tel->encap_limit - 1;
@@ -805,8 +805,8 @@ static inline int ip6gre_xmit_ipv4(struct sk_buff *skb, struct net_device *dev)
 	if (err != 0) {
 		/* XXX: send ICMP error even if DF is not set. */
 		if (err == -EMSGSIZE)
-			icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
-				  htonl(mtu));
+			icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
+				      htonl(mtu));
 		return -1;
 	}
 
@@ -837,7 +837,7 @@ static inline int ip6gre_xmit_ipv6(struct sk_buff *skb, struct net_device *dev)
 					    &mtu, skb->protocol);
 	if (err != 0) {
 		if (err == -EMSGSIZE)
-			icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
+			icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
 		return -1;
 	}
 
@@ -1063,10 +1063,10 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
 		/* XXX: send ICMP error even if DF is not set. */
 		if (err == -EMSGSIZE) {
 			if (skb->protocol == htons(ETH_P_IP))
-				icmp_send(skb, ICMP_DEST_UNREACH,
-					  ICMP_FRAG_NEEDED, htonl(mtu));
+				icmp_ndo_send(skb, ICMP_DEST_UNREACH,
+					      ICMP_FRAG_NEEDED, htonl(mtu));
 			else
-				icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
+				icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
 		}
 
 		goto tx_err;
diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
index 648db3fe508f0..5d27b5c631217 100644
--- a/net/ipv6/ip6_tunnel.c
+++ b/net/ipv6/ip6_tunnel.c
@@ -1363,8 +1363,8 @@ ipxip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev,
 
 			tel = (void *)&skb_network_header(skb)[offset];
 			if (tel->encap_limit == 0) {
-				icmpv6_send(skb, ICMPV6_PARAMPROB,
-					    ICMPV6_HDR_FIELD, offset + 2);
+				icmpv6_ndo_send(skb, ICMPV6_PARAMPROB,
+						ICMPV6_HDR_FIELD, offset + 2);
 				return -1;
 			}
 			encap_limit = tel->encap_limit - 1;
@@ -1416,11 +1416,11 @@ ipxip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev,
 		if (err == -EMSGSIZE)
 			switch (protocol) {
 			case IPPROTO_IPIP:
-				icmp_send(skb, ICMP_DEST_UNREACH,
-					  ICMP_FRAG_NEEDED, htonl(mtu));
+				icmp_ndo_send(skb, ICMP_DEST_UNREACH,
+					      ICMP_FRAG_NEEDED, htonl(mtu));
 				break;
 			case IPPROTO_IPV6:
-				icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
+				icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
 				break;
 			default:
 				break;
diff --git a/net/ipv6/ip6_vti.c b/net/ipv6/ip6_vti.c
index 5f9c4fdc120d6..ecfeffc06c55c 100644
--- a/net/ipv6/ip6_vti.c
+++ b/net/ipv6/ip6_vti.c
@@ -520,10 +520,10 @@ vti6_xmit(struct sk_buff *skb, struct net_device *dev, struct flowi *fl)
 			if (mtu < IPV6_MIN_MTU)
 				mtu = IPV6_MIN_MTU;
 
-			icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
+			icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
 		} else {
-			icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
-				  htonl(mtu));
+			icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
+				      htonl(mtu));
 		}
 
 		err = -EMSGSIZE;
diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
index ff048cb8d8074..b26f469a3fb8c 100644
--- a/net/ipv6/sit.c
+++ b/net/ipv6/sit.c
@@ -987,7 +987,7 @@ static netdev_tx_t ipip6_tunnel_xmit(struct sk_buff *skb,
 			skb_dst_update_pmtu_no_confirm(skb, mtu);
 
 		if (skb->len > mtu && !skb_is_gso(skb)) {
-			icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
+			icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
 			ip_rt_put(rt);
 			goto tx_error;
 		}
diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
index 7be5103ff2a84..203890e378cb0 100644
--- a/net/l2tp/l2tp_core.c
+++ b/net/l2tp/l2tp_core.c
@@ -649,9 +649,9 @@ void l2tp_recv_common(struct l2tp_session *session, struct sk_buff *skb,
 	/* Parse and check optional cookie */
 	if (session->peer_cookie_len > 0) {
 		if (memcmp(ptr, &session->peer_cookie[0], session->peer_cookie_len)) {
-			pr_warn_ratelimited("%s: cookie mismatch (%u/%u). Discarding.\n",
-					    tunnel->name, tunnel->tunnel_id,
-					    session->session_id);
+			pr_debug_ratelimited("%s: cookie mismatch (%u/%u). Discarding.\n",
+					     tunnel->name, tunnel->tunnel_id,
+					     session->session_id);
 			atomic_long_inc(&session->stats.rx_cookie_discards);
 			goto discard;
 		}
@@ -702,8 +702,8 @@ void l2tp_recv_common(struct l2tp_session *session, struct sk_buff *skb,
 		 * If user has configured mandatory sequence numbers, discard.
 		 */
 		if (session->recv_seq) {
-			pr_warn_ratelimited("%s: recv data has no seq numbers when required. Discarding.\n",
-					    session->name);
+			pr_debug_ratelimited("%s: recv data has no seq numbers when required. Discarding.\n",
+					     session->name);
 			atomic_long_inc(&session->stats.rx_seq_discards);
 			goto discard;
 		}
@@ -718,8 +718,8 @@ void l2tp_recv_common(struct l2tp_session *session, struct sk_buff *skb,
 			session->send_seq = 0;
 			l2tp_session_set_header_len(session, tunnel->version);
 		} else if (session->send_seq) {
-			pr_warn_ratelimited("%s: recv data has no seq numbers when required. Discarding.\n",
-					    session->name);
+			pr_debug_ratelimited("%s: recv data has no seq numbers when required. Discarding.\n",
+					     session->name);
 			atomic_long_inc(&session->stats.rx_seq_discards);
 			goto discard;
 		}
@@ -809,9 +809,9 @@ static int l2tp_udp_recv_core(struct l2tp_tunnel *tunnel, struct sk_buff *skb)
 
 	/* Short packet? */
 	if (!pskb_may_pull(skb, L2TP_HDR_SIZE_MAX)) {
-		pr_warn_ratelimited("%s: recv short packet (len=%d)\n",
-				    tunnel->name, skb->len);
-		goto error;
+		pr_debug_ratelimited("%s: recv short packet (len=%d)\n",
+				     tunnel->name, skb->len);
+		goto invalid;
 	}
 
 	/* Point to L2TP header */
@@ -824,9 +824,9 @@ static int l2tp_udp_recv_core(struct l2tp_tunnel *tunnel, struct sk_buff *skb)
 	/* Check protocol version */
 	version = hdrflags & L2TP_HDR_VER_MASK;
 	if (version != tunnel->version) {
-		pr_warn_ratelimited("%s: recv protocol version mismatch: got %d expected %d\n",
-				    tunnel->name, version, tunnel->version);
-		goto error;
+		pr_debug_ratelimited("%s: recv protocol version mismatch: got %d expected %d\n",
+				     tunnel->name, version, tunnel->version);
+		goto invalid;
 	}
 
 	/* Get length of L2TP packet */
@@ -834,7 +834,7 @@ static int l2tp_udp_recv_core(struct l2tp_tunnel *tunnel, struct sk_buff *skb)
 
 	/* If type is control packet, it is handled by userspace. */
 	if (hdrflags & L2TP_HDRFLAG_T)
-		goto error;
+		goto pass;
 
 	/* Skip flags */
 	ptr += 2;
@@ -863,21 +863,24 @@ static int l2tp_udp_recv_core(struct l2tp_tunnel *tunnel, struct sk_buff *skb)
 		l2tp_session_dec_refcount(session);
 
 		/* Not found? Pass to userspace to deal with */
-		pr_warn_ratelimited("%s: no session found (%u/%u). Passing up.\n",
-				    tunnel->name, tunnel_id, session_id);
-		goto error;
+		pr_debug_ratelimited("%s: no session found (%u/%u). Passing up.\n",
+				     tunnel->name, tunnel_id, session_id);
+		goto pass;
 	}
 
 	if (tunnel->version == L2TP_HDR_VER_3 &&
 	    l2tp_v3_ensure_opt_in_linear(session, skb, &ptr, &optr))
-		goto error;
+		goto invalid;
 
 	l2tp_recv_common(session, skb, ptr, optr, hdrflags, length);
 	l2tp_session_dec_refcount(session);
 
 	return 0;
 
-error:
+invalid:
+	atomic_long_inc(&tunnel->stats.rx_invalid);
+
+pass:
 	/* Put UDP header back */
 	__skb_push(skb, sizeof(struct udphdr));
 
diff --git a/net/l2tp/l2tp_core.h b/net/l2tp/l2tp_core.h
index cb21d906343e8..98ea98eb9567b 100644
--- a/net/l2tp/l2tp_core.h
+++ b/net/l2tp/l2tp_core.h
@@ -39,6 +39,7 @@ struct l2tp_stats {
 	atomic_long_t rx_oos_packets;
 	atomic_long_t rx_errors;
 	atomic_long_t rx_cookie_discards;
+	atomic_long_t rx_invalid;
 };
 
 struct l2tp_tunnel;
diff --git a/net/l2tp/l2tp_netlink.c b/net/l2tp/l2tp_netlink.c
index 83956c9ee1fcc..96eb91be9238b 100644
--- a/net/l2tp/l2tp_netlink.c
+++ b/net/l2tp/l2tp_netlink.c
@@ -428,6 +428,9 @@ static int l2tp_nl_tunnel_send(struct sk_buff *skb, u32 portid, u32 seq, int fla
 			      L2TP_ATTR_STATS_PAD) ||
 	    nla_put_u64_64bit(skb, L2TP_ATTR_RX_ERRORS,
 			      atomic_long_read(&tunnel->stats.rx_errors),
+			      L2TP_ATTR_STATS_PAD) ||
+	    nla_put_u64_64bit(skb, L2TP_ATTR_RX_INVALID,
+			      atomic_long_read(&tunnel->stats.rx_invalid),
 			      L2TP_ATTR_STATS_PAD))
 		goto nla_put_failure;
 	nla_nest_end(skb, nest);
@@ -771,6 +774,9 @@ static int l2tp_nl_session_send(struct sk_buff *skb, u32 portid, u32 seq, int fl
 			      L2TP_ATTR_STATS_PAD) ||
 	    nla_put_u64_64bit(skb, L2TP_ATTR_RX_ERRORS,
 			      atomic_long_read(&session->stats.rx_errors),
+			      L2TP_ATTR_STATS_PAD) ||
+	    nla_put_u64_64bit(skb, L2TP_ATTR_RX_INVALID,
+			      atomic_long_read(&session->stats.rx_invalid),
 			      L2TP_ATTR_STATS_PAD))
 		goto nla_put_failure;
 	nla_nest_end(skb, nest);
diff --git a/net/mpls/mpls_gso.c b/net/mpls/mpls_gso.c
index b1690149b6fa0..1482259de9b5d 100644
--- a/net/mpls/mpls_gso.c
+++ b/net/mpls/mpls_gso.c
@@ -14,6 +14,7 @@
 #include <linux/netdev_features.h>
 #include <linux/netdevice.h>
 #include <linux/skbuff.h>
+#include <net/mpls.h>
 
 static struct sk_buff *mpls_gso_segment(struct sk_buff *skb,
 					netdev_features_t features)
@@ -27,6 +28,8 @@ static struct sk_buff *mpls_gso_segment(struct sk_buff *skb,
 
 	skb_reset_network_header(skb);
 	mpls_hlen = skb_inner_network_header(skb) - skb_network_header(skb);
+	if (unlikely(!mpls_hlen || mpls_hlen % MPLS_HLEN))
+		goto out;
 	if (unlikely(!pskb_may_pull(skb, mpls_hlen)))
 		goto out;
 
diff --git a/net/netfilter/nf_nat_proto.c b/net/netfilter/nf_nat_proto.c
index e87b6bd6b3cdb..4731d21fc3ad8 100644
--- a/net/netfilter/nf_nat_proto.c
+++ b/net/netfilter/nf_nat_proto.c
@@ -646,8 +646,8 @@ nf_nat_ipv4_fn(void *priv, struct sk_buff *skb,
 }
 
 static unsigned int
-nf_nat_ipv4_in(void *priv, struct sk_buff *skb,
-	       const struct nf_hook_state *state)
+nf_nat_ipv4_pre_routing(void *priv, struct sk_buff *skb,
+			const struct nf_hook_state *state)
 {
 	unsigned int ret;
 	__be32 daddr = ip_hdr(skb)->daddr;
@@ -659,6 +659,23 @@ nf_nat_ipv4_in(void *priv, struct sk_buff *skb,
 	return ret;
 }
 
+static unsigned int
+nf_nat_ipv4_local_in(void *priv, struct sk_buff *skb,
+		     const struct nf_hook_state *state)
+{
+	__be32 saddr = ip_hdr(skb)->saddr;
+	struct sock *sk = skb->sk;
+	unsigned int ret;
+
+	ret = nf_nat_ipv4_fn(priv, skb, state);
+
+	if (ret == NF_ACCEPT && sk && saddr != ip_hdr(skb)->saddr &&
+	    !inet_sk_transparent(sk))
+		skb_orphan(skb); /* TCP edemux obtained wrong socket */
+
+	return ret;
+}
+
 static unsigned int
 nf_nat_ipv4_out(void *priv, struct sk_buff *skb,
 		const struct nf_hook_state *state)
@@ -736,7 +753,7 @@ nf_nat_ipv4_local_fn(void *priv, struct sk_buff *skb,
 static const struct nf_hook_ops nf_nat_ipv4_ops[] = {
 	/* Before packet filtering, change destination */
 	{
-		.hook		= nf_nat_ipv4_in,
+		.hook		= nf_nat_ipv4_pre_routing,
 		.pf		= NFPROTO_IPV4,
 		.hooknum	= NF_INET_PRE_ROUTING,
 		.priority	= NF_IP_PRI_NAT_DST,
@@ -757,7 +774,7 @@ static const struct nf_hook_ops nf_nat_ipv4_ops[] = {
 	},
 	/* After packet filtering, change source */
 	{
-		.hook		= nf_nat_ipv4_fn,
+		.hook		= nf_nat_ipv4_local_in,
 		.pf		= NFPROTO_IPV4,
 		.hooknum	= NF_INET_LOCAL_IN,
 		.priority	= NF_IP_PRI_NAT_SRC,
diff --git a/net/netfilter/x_tables.c b/net/netfilter/x_tables.c
index acce622582e3d..bce6ca203d462 100644
--- a/net/netfilter/x_tables.c
+++ b/net/netfilter/x_tables.c
@@ -330,6 +330,7 @@ static int match_revfn(u8 af, const char *name, u8 revision, int *bestp)
 	const struct xt_match *m;
 	int have_rev = 0;
 
+	mutex_lock(&xt[af].mutex);
 	list_for_each_entry(m, &xt[af].match, list) {
 		if (strcmp(m->name, name) == 0) {
 			if (m->revision > *bestp)
@@ -338,6 +339,7 @@ static int match_revfn(u8 af, const char *name, u8 revision, int *bestp)
 				have_rev = 1;
 		}
 	}
+	mutex_unlock(&xt[af].mutex);
 
 	if (af != NFPROTO_UNSPEC && !have_rev)
 		return match_revfn(NFPROTO_UNSPEC, name, revision, bestp);
@@ -350,6 +352,7 @@ static int target_revfn(u8 af, const char *name, u8 revision, int *bestp)
 	const struct xt_target *t;
 	int have_rev = 0;
 
+	mutex_lock(&xt[af].mutex);
 	list_for_each_entry(t, &xt[af].target, list) {
 		if (strcmp(t->name, name) == 0) {
 			if (t->revision > *bestp)
@@ -358,6 +361,7 @@ static int target_revfn(u8 af, const char *name, u8 revision, int *bestp)
 				have_rev = 1;
 		}
 	}
+	mutex_unlock(&xt[af].mutex);
 
 	if (af != NFPROTO_UNSPEC && !have_rev)
 		return target_revfn(NFPROTO_UNSPEC, name, revision, bestp);
@@ -371,12 +375,10 @@ int xt_find_revision(u8 af, const char *name, u8 revision, int target,
 {
 	int have_rev, best = -1;
 
-	mutex_lock(&xt[af].mutex);
 	if (target == 1)
 		have_rev = target_revfn(af, name, revision, &best);
 	else
 		have_rev = match_revfn(af, name, revision, &best);
-	mutex_unlock(&xt[af].mutex);
 
 	/* Nothing at all? Return 0 to try loading module. */
 	if (best == -1) {
diff --git a/net/netlabel/netlabel_cipso_v4.c b/net/netlabel/netlabel_cipso_v4.c
index 726dda95934c6..4f50a64315cf0 100644
--- a/net/netlabel/netlabel_cipso_v4.c
+++ b/net/netlabel/netlabel_cipso_v4.c
@@ -575,6 +575,7 @@ list_start:
 
 		break;
 	}
+	cipso_v4_doi_putdef(doi_def);
 	rcu_read_unlock();
 
 	genlmsg_end(ans_skb, data);
@@ -583,12 +584,14 @@ list_start:
 list_retry:
 	/* XXX - this limit is a guesstimate */
 	if (nlsze_mult < 4) {
+		cipso_v4_doi_putdef(doi_def);
 		rcu_read_unlock();
 		kfree_skb(ans_skb);
 		nlsze_mult *= 2;
 		goto list_start;
 	}
 list_failure_lock:
+	cipso_v4_doi_putdef(doi_def);
 	rcu_read_unlock();
 list_failure:
 	kfree_skb(ans_skb);
diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
index d7134c558993c..38de24af24c44 100644
--- a/net/qrtr/qrtr.c
+++ b/net/qrtr/qrtr.c
@@ -935,8 +935,10 @@ static int qrtr_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
 	plen = (len + 3) & ~3;
 	skb = sock_alloc_send_skb(sk, plen + QRTR_HDR_MAX_SIZE,
 				  msg->msg_flags & MSG_DONTWAIT, &rc);
-	if (!skb)
+	if (!skb) {
+		rc = -ENOMEM;
 		goto out_node;
+	}
 
 	skb_reserve(skb, QRTR_HDR_MAX_SIZE);
 
diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
index 5e8e49c4ab5ca..54a8c363bcdda 100644
--- a/net/sched/sch_api.c
+++ b/net/sched/sch_api.c
@@ -2167,7 +2167,7 @@ static int tc_dump_tclass_qdisc(struct Qdisc *q, struct sk_buff *skb,
 
 static int tc_dump_tclass_root(struct Qdisc *root, struct sk_buff *skb,
 			       struct tcmsg *tcm, struct netlink_callback *cb,
-			       int *t_p, int s_t)
+			       int *t_p, int s_t, bool recur)
 {
 	struct Qdisc *q;
 	int b;
@@ -2178,7 +2178,7 @@ static int tc_dump_tclass_root(struct Qdisc *root, struct sk_buff *skb,
 	if (tc_dump_tclass_qdisc(root, skb, tcm, cb, t_p, s_t) < 0)
 		return -1;
 
-	if (!qdisc_dev(root))
+	if (!qdisc_dev(root) || !recur)
 		return 0;
 
 	if (tcm->tcm_parent) {
@@ -2213,13 +2213,13 @@ static int tc_dump_tclass(struct sk_buff *skb, struct netlink_callback *cb)
 	s_t = cb->args[0];
 	t = 0;
 
-	if (tc_dump_tclass_root(dev->qdisc, skb, tcm, cb, &t, s_t) < 0)
+	if (tc_dump_tclass_root(dev->qdisc, skb, tcm, cb, &t, s_t, true) < 0)
 		goto done;
 
 	dev_queue = dev_ingress_queue(dev);
 	if (dev_queue &&
 	    tc_dump_tclass_root(dev_queue->qdisc_sleeping, skb, tcm, cb,
-				&t, s_t) < 0)
+				&t, s_t, false) < 0)
 		goto done;
 
 done:
diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
index cf702a5f7fe5d..39ed0e0afe6d9 100644
--- a/net/sunrpc/sched.c
+++ b/net/sunrpc/sched.c
@@ -963,8 +963,11 @@ void rpc_execute(struct rpc_task *task)
 
 	rpc_set_active(task);
 	rpc_make_runnable(rpciod_workqueue, task);
-	if (!is_async)
+	if (!is_async) {
+		unsigned int pflags = memalloc_nofs_save();
 		__rpc_execute(task);
+		memalloc_nofs_restore(pflags);
+	}
 }
 
 static void rpc_async_schedule(struct work_struct *work)
diff --git a/samples/bpf/xdpsock_user.c b/samples/bpf/xdpsock_user.c
index 33c58de58626c..3edae90188936 100644
--- a/samples/bpf/xdpsock_user.c
+++ b/samples/bpf/xdpsock_user.c
@@ -1543,5 +1543,7 @@ int main(int argc, char **argv)
 
 	xdpsock_cleanup();
 
+	munmap(bufs, NUM_FRAMES * opt_xsk_frame_size);
+
 	return 0;
 }
diff --git a/security/commoncap.c b/security/commoncap.c
index b2a656947504d..a6c9bb4441d54 100644
--- a/security/commoncap.c
+++ b/security/commoncap.c
@@ -500,8 +500,7 @@ int cap_convert_nscap(struct dentry *dentry, void **ivalue, size_t size)
 	__u32 magic, nsmagic;
 	struct inode *inode = d_backing_inode(dentry);
 	struct user_namespace *task_ns = current_user_ns(),
-		*fs_ns = inode->i_sb->s_user_ns,
-		*ancestor;
+		*fs_ns = inode->i_sb->s_user_ns;
 	kuid_t rootid;
 	size_t newsize;
 
@@ -524,15 +523,6 @@ int cap_convert_nscap(struct dentry *dentry, void **ivalue, size_t size)
 	if (nsrootid == -1)
 		return -EINVAL;
 
-	/*
-	 * Do not allow allow adding a v3 filesystem capability xattr
-	 * if the rootid field is ambiguous.
-	 */
-	for (ancestor = task_ns->parent; ancestor; ancestor = ancestor->parent) {
-		if (from_kuid(ancestor, rootid) == 0)
-			return -EINVAL;
-	}
-
 	newsize = sizeof(struct vfs_ns_cap_data);
 	nscap = kmalloc(newsize, GFP_ATOMIC);
 	if (!nscap)
diff --git a/sound/pci/hda/hda_bind.c b/sound/pci/hda/hda_bind.c
index 6a85645663759..17a25e453f60c 100644
--- a/sound/pci/hda/hda_bind.c
+++ b/sound/pci/hda/hda_bind.c
@@ -47,6 +47,10 @@ static void hda_codec_unsol_event(struct hdac_device *dev, unsigned int ev)
 	if (codec->bus->shutdown)
 		return;
 
+	/* ignore unsol events during system suspend/resume */
+	if (codec->core.dev.power.power_state.event != PM_EVENT_ON)
+		return;
+
 	if (codec->patch_ops.unsol_event)
 		codec->patch_ops.unsol_event(codec, ev);
 }
diff --git a/sound/pci/hda/hda_controller.c b/sound/pci/hda/hda_controller.c
index 80016b7b6849e..b972d59eb1ec2 100644
--- a/sound/pci/hda/hda_controller.c
+++ b/sound/pci/hda/hda_controller.c
@@ -609,13 +609,6 @@ static int azx_pcm_open(struct snd_pcm_substream *substream)
 				     20,
 				     178000000);
 
-	/* by some reason, the playback stream stalls on PulseAudio with
-	 * tsched=1 when a capture stream triggers. Until we figure out the
-	 * real cause, disable tsched mode by telling the PCM info flag.
-	 */
-	if (chip->driver_caps & AZX_DCAPS_AMD_WORKAROUND)
-		runtime->hw.info |= SNDRV_PCM_INFO_BATCH;
-
 	if (chip->align_buffer_size)
 		/* constrain buffer sizes to be multiple of 128
 		   bytes. This is more efficient in terms of memory
diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
index 145f4ff47d54f..d244616d28d88 100644
--- a/sound/pci/hda/hda_intel.c
+++ b/sound/pci/hda/hda_intel.c
@@ -1026,6 +1026,8 @@ static int azx_prepare(struct device *dev)
 	chip = card->private_data;
 	chip->pm_prepared = 1;
 
+	flush_work(&azx_bus(chip)->unsol_work);
+
 	/* HDA controller always requires different WAKEEN for runtime suspend
 	 * and system suspend, so don't use direct-complete here.
 	 */
diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
index ee500e46dd4f6..f774b2ac9720c 100644
--- a/sound/pci/hda/patch_ca0132.c
+++ b/sound/pci/hda/patch_ca0132.c
@@ -1275,6 +1275,7 @@ static const struct snd_pci_quirk ca0132_quirks[] = {
 	SND_PCI_QUIRK(0x1102, 0x0013, "Recon3D", QUIRK_R3D),
 	SND_PCI_QUIRK(0x1102, 0x0018, "Recon3D", QUIRK_R3D),
 	SND_PCI_QUIRK(0x1102, 0x0051, "Sound Blaster AE-5", QUIRK_AE5),
+	SND_PCI_QUIRK(0x1102, 0x0191, "Sound Blaster AE-5 Plus", QUIRK_AE5),
 	SND_PCI_QUIRK(0x1102, 0x0081, "Sound Blaster AE-7", QUIRK_AE7),
 	{}
 };
diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
index d49cc4409d59c..a980a4eda51c9 100644
--- a/sound/pci/hda/patch_conexant.c
+++ b/sound/pci/hda/patch_conexant.c
@@ -149,6 +149,21 @@ static int cx_auto_vmaster_mute_led(struct led_classdev *led_cdev,
 	return 0;
 }
 
+static void cxt_init_gpio_led(struct hda_codec *codec)
+{
+	struct conexant_spec *spec = codec->spec;
+	unsigned int mask = spec->gpio_mute_led_mask | spec->gpio_mic_led_mask;
+
+	if (mask) {
+		snd_hda_codec_write(codec, 0x01, 0, AC_VERB_SET_GPIO_MASK,
+				    mask);
+		snd_hda_codec_write(codec, 0x01, 0, AC_VERB_SET_GPIO_DIRECTION,
+				    mask);
+		snd_hda_codec_write(codec, 0x01, 0, AC_VERB_SET_GPIO_DATA,
+				    spec->gpio_led);
+	}
+}
+
 static int cx_auto_init(struct hda_codec *codec)
 {
 	struct conexant_spec *spec = codec->spec;
@@ -156,6 +171,7 @@ static int cx_auto_init(struct hda_codec *codec)
 	if (!spec->dynamic_eapd)
 		cx_auto_turn_eapd(codec, spec->num_eapds, spec->eapds, true);
 
+	cxt_init_gpio_led(codec);
 	snd_hda_apply_fixup(codec, HDA_FIXUP_ACT_INIT);
 
 	return 0;
@@ -215,6 +231,7 @@ enum {
 	CXT_FIXUP_HP_SPECTRE,
 	CXT_FIXUP_HP_GATE_MIC,
 	CXT_FIXUP_MUTE_LED_GPIO,
+	CXT_FIXUP_HP_ZBOOK_MUTE_LED,
 	CXT_FIXUP_HEADSET_MIC,
 	CXT_FIXUP_HP_MIC_NO_PRESENCE,
 };
@@ -654,31 +671,36 @@ static int cxt_gpio_micmute_update(struct led_classdev *led_cdev,
 	return 0;
 }
 
-
-static void cxt_fixup_mute_led_gpio(struct hda_codec *codec,
-				    const struct hda_fixup *fix, int action)
+static void cxt_setup_mute_led(struct hda_codec *codec,
+			       unsigned int mute, unsigned int mic_mute)
 {
 	struct conexant_spec *spec = codec->spec;
-	static const struct hda_verb gpio_init[] = {
-		{ 0x01, AC_VERB_SET_GPIO_MASK, 0x03 },
-		{ 0x01, AC_VERB_SET_GPIO_DIRECTION, 0x03 },
-		{}
-	};
 
-	if (action == HDA_FIXUP_ACT_PRE_PROBE) {
+	spec->gpio_led = 0;
+	spec->mute_led_polarity = 0;
+	if (mute) {
 		snd_hda_gen_add_mute_led_cdev(codec, cxt_gpio_mute_update);
-		spec->gpio_led = 0;
-		spec->mute_led_polarity = 0;
-		spec->gpio_mute_led_mask = 0x01;
-		spec->gpio_mic_led_mask = 0x02;
+		spec->gpio_mute_led_mask = mute;
+	}
+	if (mic_mute) {
 		snd_hda_gen_add_micmute_led_cdev(codec, cxt_gpio_micmute_update);
+		spec->gpio_mic_led_mask = mic_mute;
 	}
-	snd_hda_add_verbs(codec, gpio_init);
-	if (spec->gpio_led)
-		snd_hda_codec_write(codec, 0x01, 0, AC_VERB_SET_GPIO_DATA,
-				    spec->gpio_led);
 }
 
+static void cxt_fixup_mute_led_gpio(struct hda_codec *codec,
+				    const struct hda_fixup *fix, int action)
+{
+	if (action == HDA_FIXUP_ACT_PRE_PROBE)
+		cxt_setup_mute_led(codec, 0x01, 0x02);
+}
+
+static void cxt_fixup_hp_zbook_mute_led(struct hda_codec *codec,
+					const struct hda_fixup *fix, int action)
+{
+	if (action == HDA_FIXUP_ACT_PRE_PROBE)
+		cxt_setup_mute_led(codec, 0x10, 0x20);
+}
 
 /* ThinkPad X200 & co with cxt5051 */
 static const struct hda_pintbl cxt_pincfg_lenovo_x200[] = {
@@ -839,6 +861,10 @@ static const struct hda_fixup cxt_fixups[] = {
 		.type = HDA_FIXUP_FUNC,
 		.v.func = cxt_fixup_mute_led_gpio,
 	},
+	[CXT_FIXUP_HP_ZBOOK_MUTE_LED] = {
+		.type = HDA_FIXUP_FUNC,
+		.v.func = cxt_fixup_hp_zbook_mute_led,
+	},
 	[CXT_FIXUP_HEADSET_MIC] = {
 		.type = HDA_FIXUP_FUNC,
 		.v.func = cxt_fixup_headset_mic,
@@ -917,6 +943,7 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
 	SND_PCI_QUIRK(0x103c, 0x8299, "HP 800 G3 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE),
 	SND_PCI_QUIRK(0x103c, 0x829a, "HP 800 G3 DM", CXT_FIXUP_HP_MIC_NO_PRESENCE),
 	SND_PCI_QUIRK(0x103c, 0x8402, "HP ProBook 645 G4", CXT_FIXUP_MUTE_LED_GPIO),
+	SND_PCI_QUIRK(0x103c, 0x8427, "HP ZBook Studio G5", CXT_FIXUP_HP_ZBOOK_MUTE_LED),
 	SND_PCI_QUIRK(0x103c, 0x8455, "HP Z2 G4", CXT_FIXUP_HP_MIC_NO_PRESENCE),
 	SND_PCI_QUIRK(0x103c, 0x8456, "HP Z2 G4 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE),
 	SND_PCI_QUIRK(0x103c, 0x8457, "HP Z2 G4 mini", CXT_FIXUP_HP_MIC_NO_PRESENCE),
@@ -956,6 +983,7 @@ static const struct hda_model_fixup cxt5066_fixup_models[] = {
 	{ .id = CXT_FIXUP_MUTE_LED_EAPD, .name = "mute-led-eapd" },
 	{ .id = CXT_FIXUP_HP_DOCK, .name = "hp-dock" },
 	{ .id = CXT_FIXUP_MUTE_LED_GPIO, .name = "mute-led-gpio" },
+	{ .id = CXT_FIXUP_HP_ZBOOK_MUTE_LED, .name = "hp-zbook-mute-led" },
 	{ .id = CXT_FIXUP_HP_MIC_NO_PRESENCE, .name = "hp-mic-fix" },
 	{}
 };
diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
index c67d5915ce243..8c6f10cbced32 100644
--- a/sound/pci/hda/patch_hdmi.c
+++ b/sound/pci/hda/patch_hdmi.c
@@ -2475,6 +2475,18 @@ static void generic_hdmi_free(struct hda_codec *codec)
 }
 
 #ifdef CONFIG_PM
+static int generic_hdmi_suspend(struct hda_codec *codec)
+{
+	struct hdmi_spec *spec = codec->spec;
+	int pin_idx;
+
+	for (pin_idx = 0; pin_idx < spec->num_pins; pin_idx++) {
+		struct hdmi_spec_per_pin *per_pin = get_pin(spec, pin_idx);
+		cancel_delayed_work_sync(&per_pin->work);
+	}
+	return 0;
+}
+
 static int generic_hdmi_resume(struct hda_codec *codec)
 {
 	struct hdmi_spec *spec = codec->spec;
@@ -2498,6 +2510,7 @@ static const struct hda_codec_ops generic_hdmi_patch_ops = {
 	.build_controls = generic_hdmi_build_controls,
 	.unsol_event = hdmi_unsol_event,
 #ifdef CONFIG_PM
+	.suspend = generic_hdmi_suspend,
 	.resume = generic_hdmi_resume,
 #endif
 };
diff --git a/sound/usb/card.c b/sound/usb/card.c
index 57d6d4ff01e08..fc7c359ae215a 100644
--- a/sound/usb/card.c
+++ b/sound/usb/card.c
@@ -830,6 +830,9 @@ static int usb_audio_probe(struct usb_interface *intf,
 		snd_media_device_create(chip, intf);
 	}
 
+	if (quirk)
+		chip->quirk_type = quirk->type;
+
 	usb_chip[chip->index] = chip;
 	chip->intf[chip->num_interfaces] = intf;
 	chip->num_interfaces++;
@@ -904,6 +907,9 @@ static void usb_audio_disconnect(struct usb_interface *intf)
 		}
 	}
 
+	if (chip->quirk_type & QUIRK_SETUP_DISABLE_AUTOSUSPEND)
+		usb_enable_autosuspend(interface_to_usbdev(intf));
+
 	chip->num_interfaces--;
 	if (chip->num_interfaces <= 0) {
 		usb_chip[chip->index] = NULL;
diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
index f82c2ab809c1d..10b3a8006bdb3 100644
--- a/sound/usb/quirks.c
+++ b/sound/usb/quirks.c
@@ -523,7 +523,7 @@ static int setup_disable_autosuspend(struct snd_usb_audio *chip,
 				     struct usb_driver *driver,
 				     const struct snd_usb_audio_quirk *quirk)
 {
-	driver->supports_autosuspend = 0;
+	usb_disable_autosuspend(interface_to_usbdev(iface));
 	return 1;	/* Continue with creating streams and mixer */
 }
 
@@ -1520,6 +1520,7 @@ bool snd_usb_get_sample_rate_quirk(struct snd_usb_audio *chip)
 	case USB_ID(0x1901, 0x0191): /* GE B850V3 CP2114 audio interface */
 	case USB_ID(0x21b4, 0x0081): /* AudioQuest DragonFly */
 	case USB_ID(0x2912, 0x30c8): /* Audioengine D1 */
+	case USB_ID(0x413c, 0xa506): /* Dell AE515 sound bar */
 		return true;
 	}
 
@@ -1672,6 +1673,14 @@ void snd_usb_ctl_msg_quirk(struct usb_device *dev, unsigned int pipe,
 	    && (requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS)
 		msleep(20);
 
+	/*
+	 * Plantronics headsets (C320, C320-M, etc) need a delay to avoid
+	 * random microphone failures.
+	 */
+	if (USB_ID_VENDOR(chip->usb_id) == 0x047f &&
+	    (requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS)
+		msleep(20);
+
 	/* Zoom R16/24, many Logitech(at least H650e/H570e/BCC950),
 	 * Jabra 550a, Kingston HyperX needs a tiny delay here,
 	 * otherwise requests like get/set frequency return
diff --git a/sound/usb/usbaudio.h b/sound/usb/usbaudio.h
index 0805b7f21272f..9667060ff92be 100644
--- a/sound/usb/usbaudio.h
+++ b/sound/usb/usbaudio.h
@@ -27,6 +27,7 @@ struct snd_usb_audio {
 	struct snd_card *card;
 	struct usb_interface *intf[MAX_CARD_INTERFACES];
 	u32 usb_id;
+	uint16_t quirk_type;
 	struct mutex mutex;
 	unsigned int system_suspend;
 	atomic_t active;
diff --git a/tools/bpf/resolve_btfids/main.c b/tools/bpf/resolve_btfids/main.c
index dfa540d8a02d6..d636643ddd358 100644
--- a/tools/bpf/resolve_btfids/main.c
+++ b/tools/bpf/resolve_btfids/main.c
@@ -258,6 +258,11 @@ static struct btf_id *add_symbol(struct rb_root *root, char *name, size_t size)
 	return btf_id__add(root, id, false);
 }
 
+/* Older libelf.h and glibc elf.h might not yet define the ELF compression types. */
+#ifndef SHF_COMPRESSED
+#define SHF_COMPRESSED (1 << 11) /* Section with compressed data. */
+#endif
+
 /*
  * The data of compressed section should be aligned to 4
  * (for 32bit) or 8 (for 64 bit) bytes. The binutils ld
diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
index 9bc537d0b92da..f6e8831673f9b 100644
--- a/tools/lib/bpf/xsk.c
+++ b/tools/lib/bpf/xsk.c
@@ -535,15 +535,16 @@ static int xsk_lookup_bpf_maps(struct xsk_socket *xsk)
 		if (fd < 0)
 			continue;
 
+		memset(&map_info, 0, map_len);
 		err = bpf_obj_get_info_by_fd(fd, &map_info, &map_len);
 		if (err) {
 			close(fd);
 			continue;
 		}
 
-		if (!strcmp(map_info.name, "xsks_map")) {
+		if (!strncmp(map_info.name, "xsks_map", sizeof(map_info.name))) {
 			ctx->xsks_map_fd = fd;
-			continue;
+			break;
 		}
 
 		close(fd);
diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
index 62f3deb1d3a8b..e41a8f9b99d2d 100644
--- a/tools/perf/Makefile.perf
+++ b/tools/perf/Makefile.perf
@@ -600,7 +600,7 @@ arch_errno_hdr_dir := $(srctree)/tools
 arch_errno_tbl := $(srctree)/tools/perf/trace/beauty/arch_errno_names.sh
 
 $(arch_errno_name_array): $(arch_errno_tbl)
-	$(Q)$(SHELL) '$(arch_errno_tbl)' $(firstword $(CC)) $(arch_errno_hdr_dir) > $@
+	$(Q)$(SHELL) '$(arch_errno_tbl)' '$(patsubst -%,,$(CC))' $(arch_errno_hdr_dir) > $@
 
 sync_file_range_arrays := $(beauty_outdir)/sync_file_range_arrays.c
 sync_file_range_tbls := $(srctree)/tools/perf/trace/beauty/sync_file_range.sh
diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
index d42339df20f8d..8a3b7d5a47376 100644
--- a/tools/perf/util/sort.c
+++ b/tools/perf/util/sort.c
@@ -3003,7 +3003,7 @@ int output_field_add(struct perf_hpp_list *list, char *tok)
 		if (strncasecmp(tok, sd->name, strlen(tok)))
 			continue;
 
-		if (sort__mode != SORT_MODE__MEMORY)
+		if (sort__mode != SORT_MODE__BRANCH)
 			return -EINVAL;
 
 		return __sort_dimension__add_output(list, sd);
@@ -3015,7 +3015,7 @@ int output_field_add(struct perf_hpp_list *list, char *tok)
 		if (strncasecmp(tok, sd->name, strlen(tok)))
 			continue;
 
-		if (sort__mode != SORT_MODE__BRANCH)
+		if (sort__mode != SORT_MODE__MEMORY)
 			return -EINVAL;
 
 		return __sort_dimension__add_output(list, sd);
diff --git a/tools/perf/util/trace-event-read.c b/tools/perf/util/trace-event-read.c
index f507dff713c9f..8a01af783310a 100644
--- a/tools/perf/util/trace-event-read.c
+++ b/tools/perf/util/trace-event-read.c
@@ -361,6 +361,7 @@ static int read_saved_cmdline(struct tep_handle *pevent)
 		pr_debug("error reading saved cmdlines\n");
 		goto out;
 	}
+	buf[ret] = '\0';
 
 	parse_saved_cmdline(pevent, buf, size);
 	ret = 0;
diff --git a/tools/testing/selftests/bpf/progs/netif_receive_skb.c b/tools/testing/selftests/bpf/progs/netif_receive_skb.c
index 6b670039ea679..1d8918dfbd3ff 100644
--- a/tools/testing/selftests/bpf/progs/netif_receive_skb.c
+++ b/tools/testing/selftests/bpf/progs/netif_receive_skb.c
@@ -16,6 +16,13 @@ bool skip = false;
 #define STRSIZE 2048
 #define EXPECTED_STRSIZE 256
 
+#if defined(bpf_target_s390)
+/* NULL points to a readable struct lowcore on s390, so take the last page */
+#define BADPTR ((void *)0xFFFFFFFFFFFFF000ULL)
+#else
+#define BADPTR 0
+#endif
+
 #ifndef ARRAY_SIZE
 #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
 #endif
@@ -113,11 +120,11 @@ int BPF_PROG(trace_netif_receive_skb, struct sk_buff *skb)
 	}
 
 	/* Check invalid ptr value */
-	p.ptr = 0;
+	p.ptr = BADPTR;
 	__ret = bpf_snprintf_btf(str, STRSIZE, &p, sizeof(p), 0);
 	if (__ret >= 0) {
-		bpf_printk("printing NULL should generate error, got (%d)",
-			   __ret);
+		bpf_printk("printing %llx should generate error, got (%d)",
+			   (unsigned long long)BADPTR, __ret);
 		ret = -ERANGE;
 	}
 
diff --git a/tools/testing/selftests/bpf/progs/test_tunnel_kern.c b/tools/testing/selftests/bpf/progs/test_tunnel_kern.c
index a621b58ab079d..9afe947cfae95 100644
--- a/tools/testing/selftests/bpf/progs/test_tunnel_kern.c
+++ b/tools/testing/selftests/bpf/progs/test_tunnel_kern.c
@@ -446,10 +446,8 @@ int _geneve_get_tunnel(struct __sk_buff *skb)
 	}
 
 	ret = bpf_skb_get_tunnel_opt(skb, &gopt, sizeof(gopt));
-	if (ret < 0) {
-		ERROR(ret);
-		return TC_ACT_SHOT;
-	}
+	if (ret < 0)
+		gopt.opt_class = 0;
 
 	bpf_trace_printk(fmt, sizeof(fmt),
 			 key.tunnel_id, key.remote_ipv4, gopt.opt_class);
diff --git a/tools/testing/selftests/bpf/verifier/array_access.c b/tools/testing/selftests/bpf/verifier/array_access.c
index bed53b561e044..1b138cd2b187d 100644
--- a/tools/testing/selftests/bpf/verifier/array_access.c
+++ b/tools/testing/selftests/bpf/verifier/array_access.c
@@ -250,12 +250,13 @@
 	BPF_MOV64_IMM(BPF_REG_5, 0),
 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
 		     BPF_FUNC_csum_diff),
+	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 0xffff),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
 	.fixup_map_array_ro = { 3 },
 	.result = ACCEPT,
-	.retval = -29,
+	.retval = 65507,
 },
 {
 	"invalid write map access into a read-only array 1",
diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh
index 197e769c2ed16..f8cda822c1cec 100755
--- a/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh
+++ b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh
@@ -86,11 +86,20 @@ test_ip6gretap()
 
 test_gretap_stp()
 {
+	# Sometimes after mirror installation, the neighbor's state is not valid.
+	# The reason is that there is no SW datapath activity related to the
+	# neighbor for the remote GRE address. Therefore whether the corresponding
+	# neighbor will be valid is a matter of luck, and the test is thus racy.
+	# Set the neighbor's state to permanent, so it would be always valid.
+	ip neigh replace 192.0.2.130 lladdr $(mac_get $h3) \
+		nud permanent dev br2
 	full_test_span_gre_stp gt4 $swp3.555 "mirror to gretap"
 }
 
 test_ip6gretap_stp()
 {
+	ip neigh replace 2001:db8:2::2 lladdr $(mac_get $h3) \
+		nud permanent dev br2
 	full_test_span_gre_stp gt6 $swp3.555 "mirror to ip6gretap"
 }