With the size of the frame limited, we can now write to an offset within the
buffer instead of having to write at the very start of the buffer. The
advantage of this is that it allows us to leave padding room for things
like supporting XDP in the future.
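As a rough standalone sketch of the idea (the buffer size and headroom values
are assumptions for illustration, not the driver's constants), the receive
path simply places the frame at a fixed offset so the bytes in front of it
stay free for headroom such as a future XDP header expansion:

#include <stdio.h>
#include <string.h>

#define RX_BUF_SIZE 2048	/* assumed fixed Rx buffer size */
#define RX_HEADROOM  192	/* assumed padding reserved at the front */

/* Place the received frame at an offset into the buffer instead of at
 * the very start, leaving RX_HEADROOM bytes of padding in front of it. */
static size_t rx_write_at_offset(unsigned char *buf, const void *frame, size_t len)
{
	if (len > RX_BUF_SIZE - RX_HEADROOM)
		return 0;	/* frame too large for the limited buffer */
	memcpy(buf + RX_HEADROOM, frame, len);
	return RX_HEADROOM;	/* offset where the frame data now starts */
}

int main(void)
{
	unsigned char buf[RX_BUF_SIZE];
	const char frame[] = "example frame";

	printf("frame starts at offset %zu of %d\n",
	       rx_write_at_offset(buf, frame, sizeof(frame)), RX_BUF_SIZE);
	return 0;
}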
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch adds support for using 3K buffers in order-1 pages the same way
we were using 2K buffers in 4K pages. We are reserving 1K of room for now
to have space available for future headroom and tailroom when we enable
build_skb support.
One side effect of this patch is that we can end up using a larger buffer
if jumbo frames are enabled. The impact shouldn't be too great, but it
could hurt small-packet performance for UDP workloads when jumbo frames are
enabled, as the truesize of frames will be larger.
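As a back-of-the-envelope sketch of the buffer accounting described above
(using only the generic page and buffer sizes from the text), two 3K buffers
per 8K order-1 page leave 1K of spare room per buffer, just as two 2K buffers
exactly fill a 4K page:

#include <stdio.h>

int main(void)
{
	const unsigned int page_4k = 4096, page_8k = 8192;	/* order-0 and order-1 pages */
	const unsigned int buf_2k = 2048, buf_3k = 3072;

	/* Two buffers are carved out of each page; whatever is left over is
	 * the room available per buffer for future headroom/tailroom. */
	printf("4K page, 2K buffers: %u bytes spare per buffer\n",
	       (page_4k - 2 * buf_2k) / 2);
	printf("8K page, 3K buffers: %u bytes spare per buffer\n",
	       (page_8k - 2 * buf_3k) / 2);
	return 0;
}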
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Since there are potential drawbacks to the new Rx allocation approach I
thought it best to add a "chicken bit" so that we can turn the feature off
in the event that a problem is found.
It also provides a means of validating the legacy Rx path in the event that
we are forced to fall back. At some point in the future when we are
convinced we don't need it anymore we might be able to drop the legacy-rx
flag.
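A minimal sketch of how such a chicken bit is commonly wired up as an
ethtool private flag; this is a generic illustration with made-up structure
and function names, not the igb implementation:

/* Sketch only: generic ethtool private-flag plumbing for a "legacy-rx" bit. */
#define FLAG_LEGACY_RX	BIT(0)

static const char example_priv_flag_strings[][ETH_GSTRING_LEN] = {
	"legacy-rx",
};

static u32 example_get_priv_flags(struct net_device *netdev)
{
	struct example_adapter *adapter = netdev_priv(netdev);

	return adapter->flags & FLAG_LEGACY_RX ? FLAG_LEGACY_RX : 0;
}

static int example_set_priv_flags(struct net_device *netdev, u32 priv_flags)
{
	struct example_adapter *adapter = netdev_priv(netdev);

	if (priv_flags & FLAG_LEGACY_RX)
		adapter->flags |= FLAG_LEGACY_RX;
	else
		adapter->flags &= ~FLAG_LEGACY_RX;

	/* A real driver would rebuild its Rx rings here so the selected
	 * allocation path actually takes effect. */
	return 0;
}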
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Update the handling of page addresses so that we always refer to them using
a void pointer, and try to consistently use the name va to indicate that
we are working with a virtual address.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
We only need to sync the size of the frame that is read for the test; we
don't need to sync the entire Rx buffer. This way the testing is more consistent
with how we handle things in the receive path.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
In order to support the use of build_skb going forward it will be necessary
to place a maximum limit on the amount of data we can receive when jumbo
frames are not enabled. In order to do this I am adding a new upper limit
for receive based on the size of a 2K buffer minus padding.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
In the case of the Tx rings we only need to clear the Tx buffer_info when
we are resetting the rings. Ideally we do this when we configure the ring
to bring it back up, rather than when we are taking it down, in order to avoid
dirtying pages we don't need to.
In addition we don't need to clear the Tx descriptor ring since we will
fully repopulate it when we begin transmitting frames and next_to_watch can
be cleared to prevent the ring from being cleaned beyond that point instead
of needing to touch anything in the Tx descriptor ring.
Finally with these changes we can avoid having to reset the skb member of
the Tx buffer_info structure in the cleanup path since the skb will always
be associated with the first buffer which has next_to_watch set.
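The cleanup behavior described above can be sketched in a few lines of
standalone C (the ring structure is illustrative, not the driver's):
cleaning stops at the first entry whose next_to_watch is not set, so
clearing next_to_watch alone fences off the rest of the ring and nothing
beyond that point needs to be touched or re-zeroed.

#include <stdio.h>

#define RING_SIZE 8

struct tx_buffer {
	const void *next_to_watch;	/* descriptor to watch; NULL = not in use */
};

/* Walk the ring and clean completed entries, stopping as soon as
 * next_to_watch is NULL so nothing past that point is ever dirtied. */
static unsigned int clean_tx_ring(struct tx_buffer *ring, unsigned int ntc)
{
	unsigned int cleaned = 0;

	while (ring[ntc].next_to_watch) {
		/* ... unmap buffers and free the skb for this packet ... */
		ring[ntc].next_to_watch = NULL;	/* prevents cleaning past here */
		ntc = (ntc + 1) % RING_SIZE;
		cleaned++;
	}
	return cleaned;
}

int main(void)
{
	static const int done;
	struct tx_buffer ring[RING_SIZE] = { { &done }, { &done } };

	printf("cleaned %u entries\n", clean_tx_ring(ring, 0));
	return 0;
}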
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change makes it so that instead of going through the entire ring on Rx
cleanup we only go through the region that was designated to be cleaned up
and stop when we reach the region where new allocations should start.
In addition we can avoid having to perform a memset on the Rx buffer_info
structures until we are about to start using the ring again. By deferring
this we can avoid dirtying the cache any more than we have to which can
help to improve the time needed to bring the interface down and then back
up again in a reset or suspend/resume cycle.
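A small standalone sketch of the same idea (illustrative types, not the
driver's): walk only the populated region between next_to_clean and
next_to_alloc on teardown, and leave the full memset of buffer_info for the
next ring configuration.

#include <stdio.h>

#define RING_SIZE 16

struct rx_buffer {
	void *page;	/* stand-in for the mapped page */
};

/* Free only the region that actually holds buffers, [next_to_clean,
 * next_to_alloc); the rest of the ring was never populated, so its
 * cache lines are not dirtied on the way down. */
static void clean_rx_ring(struct rx_buffer *ring,
			  unsigned int next_to_clean,
			  unsigned int next_to_alloc)
{
	unsigned int i = next_to_clean;

	while (i != next_to_alloc) {
		ring[i].page = NULL;	/* release this buffer */
		i = (i + 1) % RING_SIZE;
	}
	/* Zeroing the whole buffer_info array is deferred until the ring
	 * is configured for use again. */
}

int main(void)
{
	struct rx_buffer ring[RING_SIZE] = { { 0 } };

	clean_rx_ring(ring, 3, 7);	/* only entries 3..6 are touched */
	printf("cleaned entries 3..6 only\n");
	return 0;
}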
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change makes it so that we use the length of the packet instead of the
DD status bit to determine if a new descriptor is ready to be processed.
The obvious advantage is that it cuts down on reads, as we don't really
need the DD bit if the size going from zero to a non-zero value is enough
to tell us that the packet has been completed.
In addition I have updated the code so that we only reset the Rx descriptor
length for descriptor zero when resetting a ring instead of having to do a
memset with 0 over the entire ring. By doing this we can save some time on
initialization.
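A standalone sketch of the descriptor handling this describes (the layout is
illustrative, not the hardware's): a write-back length of zero means the
descriptor is still pending, so the length field doubles as the completion
test, and a ring reset only has to clear descriptor zero because the refill
path clears each length as the descriptor is handed back to hardware.

#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 4

struct rx_desc_wb {
	uint16_t length;	/* written back by hardware; 0 while pending */
};

/* A non-zero length is enough to know the descriptor completed; no
 * separate DD status bit needs to be read. */
static int rx_desc_done(const struct rx_desc_wb *desc)
{
	return desc->length != 0;
}

/* On reset, only descriptor 0 needs its length cleared; the refill path
 * zeroes each descriptor's length before giving it back to hardware, so
 * a memset over the whole ring is unnecessary. */
static void rx_ring_reset(struct rx_desc_wb *ring)
{
	ring[0].length = 0;
}

int main(void)
{
	struct rx_desc_wb ring[RING_SIZE] = { { 64 } };

	rx_ring_reset(ring);
	printf("descriptor 0 done after reset? %d\n", rx_desc_done(&ring[0]));
	return 0;
}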
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Since we are already using DMA attributes in igb for Rx, there is no reason
why we can't also apply DMA_ATTR_WEAK_ORDERING, which is needed on some
platforms to improve performance.
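A hedged sketch of what the attribute looks like at a mapping call site;
the ring/page variables and sizes are placeholders, while
dma_map_page_attrs(), dma_unmap_page_attrs() and DMA_ATTR_WEAK_ORDERING
are the generic DMA API names:

/* Sketch: map an Rx page allowing weak ordering where the platform supports it. */
dma_addr_t dma;

dma = dma_map_page_attrs(rx_ring->dev, page, 0, PAGE_SIZE,
			 DMA_FROM_DEVICE, DMA_ATTR_WEAK_ORDERING);
if (dma_mapping_error(rx_ring->dev, dma))
	return false;	/* caller recycles the page and retries later */

/* ... later, unmap with the same attributes ... */
dma_unmap_page_attrs(rx_ring->dev, dma, PAGE_SIZE,
		     DMA_FROM_DEVICE, DMA_ATTR_WEAK_ORDERING);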
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Merge tag 'for-linus-4.11b-rc3-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip
Pull xen fix from Juergen Gross:
"A minor fix for using the appropriate refcount_t instead of atomic_t"
* tag 'for-linus-4.11b-rc3-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
drivers, xen: convert grant_map.users from atomic_t to refcount_t
Merge tag 'drm-fixes-for-v4.11-rc3' of git://people.freedesktop.org/~airlied/linux
Pull drm fixes from Dave Airlie:
"Bunch of fixes across the drivers, in a St Patrick's day pull request
(please turn terminal colors to green on black or black on green for
full effect).
On the arm side, tilcdc, omap and malidp got fixes, while amd has some
power management fixes, and intel has a set of fixes across the driver.
Nothing seems too bad or scary at this point"
* tag 'drm-fixes-for-v4.11-rc3' of git://people.freedesktop.org/~airlied/linux: (27 commits)
drm/amd/amdgpu: Fix debugfs reg read/write address width
drm/amdgpu/si: add dpm quirk for Oland
drm/radeon/si: add dpm quirk for Oland
drm: amd: remove broken include path
drm/amd/powerplay: fix copy error in smu7_clockpoweragting.c
drm/tilcdc: Set framebuffer DMA address to HW only if CRTC is enabled
drm/tilcdc: Fix hardcoded fail-return value in tilcdc_crtc_create()
drm/i915: Fix forcewake active domain tracking
drm/i915: Nuke skl_update_plane debug message from the pipe update critical section
drm/i915: use correct node for handling cache domain eviction
uapi: fix drm/omap_drm.h userspace compilation errors
drm/omap: fix dmabuf mmap for dma_alloc'ed buffers
drm/amdgpu: fix parser init error path to avoid crash in parser fini
drm/amd/amdgpu: Disable GFX_PG on Carrizo until compute issues solved
drm: mali-dp: Fix smart layer not going to composition
drm: mali-dp: Remove mclk rate management
drm/i915: Drain the freed state from the tail of the next commit
drm/i915: Nuke debug messages from the pipe update critical section
drm/i915: Use pagecache write to prepopulate shmemfs from pwrite-ioctl
drm/i915: Store a permanent error in obj->mm.pages
...
Merge tag 'perf-urgent-for-mingo-4.11-20170317' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/urgent
Pull perf/urgent fix from Arnaldo Carvalho de Melo:
- Fix symbols__fixup_end heuristic for corner cases, such as JITted eBPF
programs, that are loaded at page aligned addresses, just after the
kernel proper (Daniel Borkmann)
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The current symbols__fixup_end() heuristic for the last entry in the rb
tree is suboptimal as it leads to not being able to recognize the symbol
in the call graph in a couple of corner cases, for example:
i) If the symbol has a start address (f.e. exposed via kallsyms)
that is at a page boundary, then the roundup(curr->start, 4096)
for the last entry will result in curr->start == curr->end with
a symbol length of zero.
ii) If the symbol has a start address that is shortly before a page
boundary, then here too, curr->end - curr->start will only be a
very few bytes, which is unrealistically little to match against.
Instead, change the heuristic to roundup(curr->start, 4096) + 4096, so
that we can catch such corner cases and have a better chance to find
that specific symbol. It's still just best effort as the real end of the
symbol is unknown to us (and could even be at a larger offset than the
current range), but better than the current situation.
Alexei reported that he recently ran into case i) with a JITed eBPF
program (these are all page aligned) as the last symbol, which wasn't
properly shown in the call graph (while other eBPF program symbols in
the rb tree were displayed correctly). Since this is a generic issue,
let's try to improve the heuristic a bit.
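A small standalone sketch of the two heuristics for a page-aligned start
address (case i), showing why the old rounding yields a zero-length last
symbol while the new one guarantees at least one page:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SZ 4096ULL

static uint64_t roundup_to(uint64_t v, uint64_t a)
{
	return (v + a - 1) / a * a;
}

int main(void)
{
	uint64_t start = 0xffffffffa0002000ULL;	/* page-aligned last symbol */
	uint64_t old_end = roundup_to(start, PAGE_SZ);			/* == start */
	uint64_t new_end = roundup_to(start, PAGE_SZ) + PAGE_SZ;	/* start + 4096 */

	printf("old heuristic symbol length: %llu\n",
	       (unsigned long long)(old_end - start));
	printf("new heuristic symbol length: %llu\n",
	       (unsigned long long)(new_end - start));
	return 0;
}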
Reported-and-Tested-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Fixes: 2e538c4a18 ("perf tools: Improve kernel/modules symbol lookup")
Link: http://lkml.kernel.org/r/bb5c80d27743be6f12afc68405f1956a330e1bc9.1489614365.git.daniel@iogearbox.net
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
refcount_t type and corresponding API (see include/linux/refcount.h)
should be used instead of atomic_t when the variable is used as
a reference counter. This allows us to avoid accidental
refcounter overflows that might lead to use-after-free
situations.
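For reference, a minimal sketch of the conversion pattern with made-up
object names (not the grant_map code); the point is that the refcount_t API
saturates instead of wrapping, which is what closes the
overflow-to-use-after-free class of bugs:

#include <linux/refcount.h>
#include <linux/slab.h>

struct example_map {
	refcount_t users;	/* was: atomic_t users */
};

static void example_init(struct example_map *map)
{
	refcount_set(&map->users, 1);	/* was: atomic_set(&map->users, 1) */
}

static void example_get(struct example_map *map)
{
	refcount_inc(&map->users);	/* was: atomic_inc(&map->users) */
}

static void example_put(struct example_map *map)
{
	/* was: if (atomic_dec_and_test(&map->users)) */
	if (refcount_dec_and_test(&map->users))
		kfree(map);
}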
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Hans Liljestrand <ishkamiel@gmail.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: David Windsor <dwindsor@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
When an interrupt occurs on an MCP23S08 chip, the INTF register will flag
only one bit as causing the interrupt. If two or more pins change at
the same time on the chip, this causes one of the pins to not be reported.
This patch fixes the logic for checking whether a pin has changed, so that
simultaneous changes on multiple pins are all reported.
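The general shape of such a fix is to derive the set of changed pins by
comparing the freshly read pin state against a previously captured state,
rather than trusting INTF to flag every pin; a standalone sketch of that
idea (not the driver's exact logic):

#include <stdint.h>
#include <stdio.h>

/* Determine which pins changed by XOR-ing the current GPIO state against
 * the previously cached state, instead of relying only on INTF, which may
 * flag just one of several simultaneously changed pins. */
static uint8_t changed_pins(uint8_t cached_gpio, uint8_t current_gpio)
{
	return cached_gpio ^ current_gpio;
}

int main(void)
{
	uint8_t cached = 0x05;	/* pins 0 and 2 were high */
	uint8_t now = 0x0a;	/* pins 1 and 3 are high now */
	uint8_t diff = changed_pins(cached, now);
	int pin;

	for (pin = 0; pin < 8; pin++)
		if (diff & (1u << pin))
			printf("pin %d changed\n", pin);
	return 0;
}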
Cc: stable@vger.kernel.org
Signed-off-by: Robert Middleton <robert.middleton@rm5248.com>
Tested-by: Phil Reid <preid@electromag.com.au>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Naively, it looks racy, but ->mmap_sem saves it. Add a comment and a
lockdep assertion.
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Borislav Petkov <bpetkov@suse.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Link: http://lkml.kernel.org/r/03a1e629063899168dfc4707f3bb6e581e21f5c6.1489694270.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
If one thread mmaps a perf event while another thread in the same mm
is in some context where active_mm != mm (which can happen in the
scheduler, for example), refresh_pce() would write the wrong value
to CR4.PCE. This broke some PAPI tests.
Reported-and-tested-by: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Borislav Petkov <bpetkov@suse.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Fixes: 7911d3f7af ("perf/x86: Only allow rdpmc if a perf_event is mapped")
Link: http://lkml.kernel.org/r/0c5b38a76ea50e405f9abe07a13dfaef87c173a1.1489694270.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
As of commit 438cc81a41 ("powerpc/pseries: Automatically resize HPT
for memory hot add/remove"), when running on the pseries platform, we
always attempt to use the PAPR extension to resize the hashed page
table (HPT) when we add or remove memory.
This is fine, but when the extension is not available we'll give a
harmless but scary warning. Instead, check if the firmware supports HPT
resizing before populating the mmu_hash_ops.resize_hpt pointer.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Information reported to ethtool about link modes is wrong for the 25G NIC. Fix
it by checking for the presence of a 25G NIC, checking the link speed reported
by the NIC firmware, and then assigning proper values to the
ethtool_link_ksettings struct.
Signed-off-by: Manish Awasthi <manish.awasthi@cavium.com>
Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Stephen Hemminger says:
====================
netvsc: small changes for net-next
One bugfix, and two non-code patches
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Add a short description of how the callbacks and NAPI interoperate.
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Change the argument to channel callback from the channel pointer
to the internal data structure containing per-channel info.
This avoids any possible races when callback happens during
initialization and makes IRQ code simpler.
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Commit b369e7fd41 ("tcp: make TCP_INFO more consistent") moved
lock_sock_fast() earlier in tcp_get_info()
This has the minor effect that the jiffies value sampled at the
beginning of tcp_get_info() is more likely to be off by one, and we
report big tcpi_last_data_sent values (like 0xFFFFFFFF).
Since we lock the socket, fetching tcp_time_stamp right before
doing the jiffies_to_msecs() calls is enough to remove these
wrong values.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When device is being setup on boot, there is a small race where
network device callback is registered, but the netvsc_device pointer
is not set yet. This can cause a NULL ptr dereference if packet
arrives during this window.
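A hedged sketch of the defensive pattern for this kind of init-time window
(illustrative names, not the netvsc code): the callback re-reads the device
pointer and simply returns if it has not been published yet.

/* Sketch: the callback may run before the device pointer is set. */
static void example_channel_cb(void *context)
{
	struct example_net *net = context;
	struct example_device *dev = READ_ONCE(net->device);

	if (!dev)
		return;		/* setup not finished yet; ignore this event */

	/* ... normal receive processing using dev ... */
}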
Fixes: 46b4f7f5d1 ("netvsc: eliminate per-device outstanding send counter")
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Andrei reported a false lockdep alarm at net/bridge/br_fdb.c:109. This is
because, in Andrei's case, a spin_bug() had already been triggered before
this, so debug_locks was turned off and lockdep_is_held() is no longer
accurate after that. We should use lockdep_assert_held_once()
instead of lockdep_is_held() to respect debug_locks.
Fixes: 410b3d48f5 ("bridge: fdb: add proper lock checks in searching functions")
Reported-by: Andrei Vagin <avagin@gmail.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
If we receive a BUSY packet for a call we think we've just completed, the
packet is handed off to the connection processor to deal with - but the
connection processor doesn't expect a BUSY packet and so flags a protocol
error.
Fix this by simply ignoring the BUSY packet for the moment.
The symptom of this may appear as a system call failing with EPROTO. This
may be triggered by pressing ctrl-C under some circumstances.
This comes about because we abort calls due to interruption by a signal (which
we shouldn't do, but that's going to be a large fix and mostly in fs/afs/).
What happens is that we abort the call and may also abort follow-up calls
too (this needs offloading somehow). So we see a transmission of something
like the following sequence of packets:
DATA for call N
ABORT call N
DATA for call N+1
ABORT call N+1
in very quick succession on the same channel. However, the peer may have
deferred the processing of the ABORT from the call N to a background thread
and thus sees the DATA message from the call N+1 coming in before it has
cleared the channel. Thus it sends a BUSY packet[*].
[*] Note that some implementations (OpenAFS, for example) mark the BUSY
packet with one plus the callNumber of the call prior to call N.
Ordinarily, this would be call N, but there's no requirement for the
calls on a channel to be numbered strictly sequentially (the number is
required to increase).
This is wrong and means that the callNumber in the BUSY packet should
be ignored (it really ought to be N+1 since that's what it's in
response to).
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The pointer array for the tx/rx sub crqs should be freed when
releasing the tx/rx sub crqs.
Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexei Starovoitov says:
====================
bpf: inline bpf_map_lookup_elem()
bpf_map_lookup_elem() is one of the most frequently used helper functions.
Improve JITed program performance by inlining this helper.
bpf_map_type before after
hash 58M 74M
array 174M 280M
The values are number of lookups per second in ideal conditions
measured by micro-benchmark in patch 6.
The 'perf report' for HASH map type:
before:
54.23% map_perf_test [kernel.kallsyms] [k] __htab_map_lookup_elem
14.24% map_perf_test [kernel.kallsyms] [k] lookup_elem_raw
8.84% map_perf_test [kernel.kallsyms] [k] htab_map_lookup_elem
5.93% map_perf_test [kernel.kallsyms] [k] bpf_map_lookup_elem
2.30% map_perf_test [kernel.kallsyms] [k] bpf_prog_da4fc6a3f41761a2
1.49% map_perf_test [kernel.kallsyms] [k] kprobe_ftrace_handler
after:
60.03% map_perf_test [kernel.kallsyms] [k] __htab_map_lookup_elem
18.07% map_perf_test [kernel.kallsyms] [k] lookup_elem_raw
2.91% map_perf_test [kernel.kallsyms] [k] bpf_prog_da4fc6a3f41761a2
1.94% map_perf_test [kernel.kallsyms] [k] _einittext
1.90% map_perf_test [kernel.kallsyms] [k] __audit_syscall_exit
1.72% map_perf_test [kernel.kallsyms] [k] kprobe_ftrace_handler
so the cost of htab_map_lookup_elem() and bpf_map_lookup_elem()
is gone after inlining.
'per-cpu' and 'lru' map types can be optimized similarly in the future.
Note that sparse will complain that bpf is addictive ;)
kernel/bpf/hashtab.c:438:19: sparse: subtraction of functions? Share your drugs
kernel/bpf/verifier.c:3342:38: sparse: subtraction of functions? Share your drugs
it's not a new warning, just in new places.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
$ map_perf_test 128
speed of HASH bpf_map_lookup_elem() in lookups per second
w/o JIT w/JIT
before 46M 58M
after 42M 74M
perf report
before:
54.23% map_perf_test [kernel.kallsyms] [k] __htab_map_lookup_elem
14.24% map_perf_test [kernel.kallsyms] [k] lookup_elem_raw
8.84% map_perf_test [kernel.kallsyms] [k] htab_map_lookup_elem
5.93% map_perf_test [kernel.kallsyms] [k] bpf_map_lookup_elem
2.30% map_perf_test [kernel.kallsyms] [k] bpf_prog_da4fc6a3f41761a2
1.49% map_perf_test [kernel.kallsyms] [k] kprobe_ftrace_handler
after:
60.03% map_perf_test [kernel.kallsyms] [k] __htab_map_lookup_elem
18.07% map_perf_test [kernel.kallsyms] [k] lookup_elem_raw
2.91% map_perf_test [kernel.kallsyms] [k] bpf_prog_da4fc6a3f41761a2
1.94% map_perf_test [kernel.kallsyms] [k] _einittext
1.90% map_perf_test [kernel.kallsyms] [k] __audit_syscall_exit
1.72% map_perf_test [kernel.kallsyms] [k] kprobe_ftrace_handler
Notice that bpf_map_lookup_elem() and htab_map_lookup_elem() are trivial
functions, yet they take a sizeable amount of cpu time.
htab_map_gen_lookup() removes bpf_map_lookup_elem() and converts
htab_map_lookup_elem() into three BPF insns, which causes the cpu time
for bpf_prog_da4fc6a3f41761a2() to increase slightly.
$ map_perf_test 256
speed of ARRAY bpf_map_lookup_elem() in lookups per second
w/o JIT w/JIT
before 97M 174M
after 64M 280M
before:
37.33% map_perf_test [kernel.kallsyms] [k] array_map_lookup_elem
13.95% map_perf_test [kernel.kallsyms] [k] bpf_map_lookup_elem
6.54% map_perf_test [kernel.kallsyms] [k] bpf_prog_da4fc6a3f41761a2
4.57% map_perf_test [kernel.kallsyms] [k] kprobe_ftrace_handler
after:
32.86% map_perf_test [kernel.kallsyms] [k] bpf_prog_da4fc6a3f41761a2
6.54% map_perf_test [kernel.kallsyms] [k] kprobe_ftrace_handler
array_map_gen_lookup() removes calls to array_map_lookup_elem()
and bpf_map_lookup_elem() and replaces them with 7 bpf insns.
The performance without JIT is slower, since executing extra insns
in the interpreter is slower than running native C code,
but with JIT the performance gains are obvious,
since native C->x86 code is replaced with fewer bpf->x86 instructions.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Optimize:
bpf_call
bpf_map_lookup_elem
map->ops->map_lookup_elem
htab_map_lookup_elem
__htab_map_lookup_elem
into:
bpf_call
__htab_map_lookup_elem
to improve performance of JITed programs.
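A plain C analogy of what the rewrite buys (the functions below are
stand-ins, not the BPF code): the generic path reaches the worker through an
indirect call and a thin wrapper, while the optimized call site invokes the
worker directly, which is also what allows the JIT to emit a direct call.

#include <stdio.h>

struct map_ops {
	void *(*lookup)(long key);
};

static void *__example_htab_lookup(long key)	/* the function doing the work */
{
	return key == 42 ? (void *)1 : NULL;
}

static void *example_htab_lookup(long key)	/* thin wrapper */
{
	return __example_htab_lookup(key);
}

static const struct map_ops htab_ops = { .lookup = example_htab_lookup };

int main(void)
{
	/* Before: helper -> indirect call through ops -> wrapper -> worker. */
	void *slow = htab_ops.lookup(42);

	/* After: the call site is rewritten to call the worker directly. */
	void *fast = __example_htab_lookup(42);

	printf("%p %p\n", slow, fast);
	return 0;
}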
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Optimize bpf_call -> bpf_map_lookup_elem() -> array_map_lookup_elem()
into a sequence of bpf instructions.
When the JIT is on, that sequence of bpf instructions becomes a sequence
of native cpu instructions, which performs significantly better than an
indirect call plus two functions' prologue/epilogue.
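What that emitted sequence computes is essentially a bounds check plus
pointer arithmetic; a standalone C rendering of the straight-line logic
(the element size handling is simplified for the sketch):

#include <stdint.h>
#include <stdio.h>

struct example_array_map {
	uint32_t max_entries;
	uint32_t elem_size;	/* rounded up to 8 bytes here */
	uint64_t data[4];	/* value storage, inline */
};

/* Straight-line lookup: range-check the index, then compute the element
 * address directly, with no helper call and no ops-table indirection. */
static void *array_lookup_inline(struct example_array_map *map, uint32_t index)
{
	if (index >= map->max_entries)
		return NULL;
	return (uint8_t *)map->data + (uint64_t)index * map->elem_size;
}

int main(void)
{
	struct example_array_map map = {
		.max_entries = 4,
		.elem_size = sizeof(uint64_t),
	};
	uint64_t *value = array_lookup_inline(&map, 2);

	if (value) {
		*value = 123;
		printf("value at index 2: %llu\n", (unsigned long long)*value);
	}
	return 0;
}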
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
convert_ctx_accesses() replaces a single bpf instruction with a set of
instructions. Adjust the corresponding insn_aux_data while patching.
This is needed to make sure subsequent 'for (all insn)' loops
have matching insn and insn_aux_data.
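The bookkeeping problem is generic: when one slot in an instruction array
becomes several, the parallel metadata array has to grow and shift by the
same amount so that later passes still see insn[i] paired with aux[i]. A
standalone sketch of that adjustment with simplified types (not the
verifier's code):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Replace insns[pos] with patch[0..patch_len) and keep the parallel aux
 * array aligned: entries after pos shift by (patch_len - 1) and the new
 * slots take over the metadata of the instruction they replaced. */
static int patch_with_aux(int **insns, int **aux, int *len,
			  int pos, const int *patch, int patch_len)
{
	int new_len = *len + patch_len - 1;
	int *ni = malloc(sizeof(*ni) * new_len);
	int *na = malloc(sizeof(*na) * new_len);
	int i;

	if (!ni || !na) {
		free(ni);
		free(na);
		return -1;
	}

	memcpy(ni, *insns, sizeof(*ni) * pos);
	memcpy(ni + pos, patch, sizeof(*ni) * patch_len);
	memcpy(ni + pos + patch_len, *insns + pos + 1, sizeof(*ni) * (*len - pos - 1));

	memcpy(na, *aux, sizeof(*na) * pos);
	for (i = 0; i < patch_len; i++)
		na[pos + i] = (*aux)[pos];
	memcpy(na + pos + patch_len, *aux + pos + 1, sizeof(*na) * (*len - pos - 1));

	free(*insns);
	free(*aux);
	*insns = ni;
	*aux = na;
	*len = new_len;
	return 0;
}

int main(void)
{
	int *insns = malloc(3 * sizeof(*insns));
	int *aux = malloc(3 * sizeof(*aux));
	int len = 3, i;
	int patch[2] = { 7, 8 };

	for (i = 0; i < 3; i++) {
		insns[i] = i;
		aux[i] = i * 10;
	}
	patch_with_aux(&insns, &aux, &len, 1, patch, 2);

	for (i = 0; i < len; i++)
		printf("insn %d  aux %d\n", insns[i], aux[i]);
	free(insns);
	free(aux);
	return 0;
}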
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Reduce the indentation and make it iterate over instructions similarly to
convert_ctx_accesses(). Also convert the hard BUG_ON into a soft verifier error.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
No functional change: move fixup_bpf_calls() to verifier.c;
it's being refactored in the next patch.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Anycast routes have the RTF_ANYCAST flag set, but when dumping routes
for userspace the route type is not set to RTN_ANYCAST. Make it so.
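The fix amounts to mapping the flag to the route type when filling the
netlink message; a hedged sketch of the shape of that test (the surrounding
dump code is omitted):

/* Sketch: when dumping an IPv6 route, report anycast routes as such. */
if (rt->rt6i_flags & RTF_ANYCAST)
	rtm->rtm_type = RTN_ANYCAST;
else if (rt->rt6i_flags & RTF_LOCAL)
	rtm->rtm_type = RTN_LOCAL;
else
	rtm->rtm_type = RTN_UNICAST;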
Fixes: 58c4fb86ea ("[IPV6]: Flag RTF_ANYCAST for anycast routes")
CC: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The tcp_tw_recycle was already broken for connections
behind NAT, since the per-destination timestamp is not
monotonically increasing for multiple machines behind
a single destination address.
After the randomization of TCP timestamp offsets
in commit 8a5bd45f6616 (tcp: randomize tcp timestamp offsets
for each connection), the tcp_tw_recycle is broken for all
types of connections for the same reason: the timestamps
received from a single machine are not monotonically increasing
anymore.
Remove tcp_tw_recycle, since it is not functional. Also, remove
the PAWSPassive SNMP counter since it is only used for
tcp_tw_recycle, and simplify tcp_v4_route_req and tcp_v6_route_req
since the strict argument is only set when tcp_tw_recycle is
enabled.
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Cc: Lutz Vieweg <lvml@5t9.de>
Cc: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Commit 8a5bd45f6616 (tcp: randomize tcp timestamp offsets for each connection)
randomizes TCP timestamps per connection. After this commit,
there is no guarantee that the timestamps received from the
same destination are monotonically increasing. As a result,
the per-destination timestamp cache in TCP metrics (i.e., tcpm_ts
in struct tcp_metrics_block) is broken and cannot be relied upon.
Remove the per-destination timestamp cache and all related code
paths.
Note that this cache was already broken for caching timestamps of
multiple machines behind a NAT sharing the same address.
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Cc: Lutz Vieweg <lvml@5t9.de>
Cc: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Shannon Nelson says:
====================
sunvnet: better connection management
These patches remove some problems in handling of carrier state
with the ldmvsw vswitch, remove an xoff misuse in sunvnet, and
add stats for debug and tracking of point-to-point connections
between the ldom VMs.
v2:
- added ldmvsw ndo_open to reset the LDC channel
- updated copyrights
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
The sunvnet netdev is connected to the controlling ldom's vswitch
for network bridging. However, for higher performance between ldoms,
there also is a channel between each client ldom. These connections are
represented in the sunvnet driver by a queue for each ldom. The driver
uses select_queue to tell the stack which queue to use by tracking the mac
addresses on the other end of each port. When a connected ldom shuts down,
the driver receives an LDC_EVENT_RESET and the port is removed from the
driver, thus a queue with no ldom on the other end will never be selected
for Tx.
The driver was trying to reinforce the "don't use this queue" notion with
netif_tx_stop_queue() and netif_tx_wake_queue(), which really should only
be used to signal a Tx queue is full (aka XOFF). This misuse of queue
state resulted in NETDEV WATCHDOG messages and lots of unnecessary calls
into the driver's tx_timeout handler. Simply removing these takes care
of the problem.
Orabug: 25190537
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Make sure multicast packets get counted in the device.
Orabug: 25190537
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Track our used and unused queue indices correctly. Otherwise, as ports
dropped out and returned, they all eventually ended up with the same
queue index.
Orabug: 25190537
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In this driver, there is a "port" created for the connection to each of
the other ldoms; a netdev queue is mapped to each port, and they are
collected under a single netdev. The generic netdev statistics show
us all the traffic in and out of our network device, but don't show
individual queue/port stats. This patch breaks out the traffic counts
for the individual ports and gives us a little view into the state of
those connections.
Orabug: 25190537
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When an ldom VM is bound, the network vswitch infrastructure is set up for
it, but was being forced 'UP' by the userland switch configuration script.
When 'UP' but not actually connected to a running VM, the ipv6 neighbor
probes fail (not a horrible thing) and start cluttering up the kernel logs.
Funny thing: these are debug messages that never actually show up, but
we do see the net_ratelimited messages that say N callbacks were
suppressed.
This patch defers the netif_carrier_on() until an actual link has been
established with the VM, as indicated by receiving an LDC_EVENT_UP from
the underlying LDC protocol. Similarly, we take the link down when we
see the LDC_EVENT_RESET. Now when we see the ndo_open(), we reset the
link to get things talking again.
Orabug: 25525312
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Cut-n-paste enablement of 802.3ad bonding on 25G NICs, which currently
report 0 as their bandwidth.
CC: Jay Vosburgh <j.vosburgh@gmail.com>
CC: Veaceslav Falico <vfalico@gmail.com>
CC: Andy Gospodarek <andy@greyhouse.net>
CC: netdev@vger.kernel.org
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Acked-by: Andy Gospodarek <andy@greyhouse.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Replace commas with semicolons in tcp_westwood_info().
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alive tracking of nexthops can account for a link twice if the carrier
goes down followed by an admin down of the same link, rendering multipath
routes useless. This is similar to 79099aab38 for UNREGISTER events and
DOWN events.
Fix by tracking the number of alive nexthops in mpls_ifdown, similar to the
logic in mpls_ifup. Checking the flags per nexthop once after all events
have been processed is simpler than trying to maintain a running count
through all event combinations.
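A standalone sketch of the per-nexthop recount (illustrative flag names and
types): after the event's flags have been applied, count the nexthops that
are neither dead nor link-down in one pass, instead of nudging a running
counter per event.

#include <stdio.h>

#define NH_FLAG_DEAD		0x1
#define NH_FLAG_LINKDOWN	0x2

struct nexthop {
	unsigned int flags;
};

/* Recount alive nexthops from scratch once all flags are up to date; this
 * cannot double-count a link that saw both a carrier-down and an admin-down
 * event, unlike an incremental decrement per event. */
static unsigned int count_alive(const struct nexthop *nh, unsigned int n)
{
	unsigned int alive = 0, i;

	for (i = 0; i < n; i++)
		if (!(nh[i].flags & (NH_FLAG_DEAD | NH_FLAG_LINKDOWN)))
			alive++;
	return alive;
}

int main(void)
{
	struct nexthop nhs[3] = {
		{ 0 },					/* alive */
		{ NH_FLAG_LINKDOWN },			/* carrier down */
		{ NH_FLAG_DEAD | NH_FLAG_LINKDOWN },	/* carrier down, then admin down */
	};

	printf("alive nexthops: %u\n", count_alive(nhs, 3));
	return 0;
}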
Also, WRITE_ONCE is used instead of ACCESS_ONCE to set rt_nhn_alive
per a comment from checkpatch:
WARNING: Prefer WRITE_ONCE(<FOO>, <BAR>) over ACCESS_ONCE(<FOO>) = <BAR>
Fixes: c89359a42e ("mpls: support for dead routes")
Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Acked-by: Robert Shearman <rshearma@brocade.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When the aquantia device mtu is changed the net_device structure is not
updated. As a result the ip command does not properly reflect the mtu change.
Commit 5513e16421 incorrectly assumed that __dev_set_mtu() was making the
assignment ndev->mtu = new_mtu; This is not true in the case where the driver
has an ndo_change_mtu routine.
Fixes: 5513e16421 ("net: ethernet: aquantia: Fixes for aq_ndev_change_mtu")
Cc: Pavel Belous <Pavel.Belous@aquantia.com>
Signed-off-by: David Arcari <darcari@redhat.com>
Tested-by: Pavel Belous <pavel.belous@aquantia.com>
Reviewed-by: Jarod Wilson <jarod@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
All IRQs owned by the PF and VF drivers share the same nondescript name
"octeon"; this makes it difficult to setup interrupt affinity.
Change the IRQ names to reflect their specific purpose:
LiquidIO<id>-<func>-<type>-<queue pair num>
Examples:
LiquidIO0-pf0-rxtx-3
LiquidIO1-vf1-rxtx-0
LiquidIO0-pf0-aux
We cannot use netdev->name for naming the IRQs because:
1. Early during init, the PF and VF drivers require interrupts to
send/receive control data from the NIC firmware; so the PF and VF
must request IRQs long before the netdev struct is registered.
2. The IRQ name can only be specified at the time it is requested.
It cannot be changed after that.
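A hedged sketch of the mechanics (placeholder structure and function names):
format the name into storage that outlives the request, since request_irq()
keeps the pointer rather than copying the string, and then pass it when
requesting the vector.

/* Sketch: build a descriptive per-queue IRQ name before request_irq().
 * The name buffer must live for as long as the IRQ stays requested. */
struct example_ioq_vector {
	char irq_name[32];	/* persistent storage for the name */
	/* ... per-queue state ... */
};

static int example_request_queue_irq(struct example_ioq_vector *v,
				     unsigned int irq, unsigned int dev_id,
				     bool is_pf, unsigned int func,
				     unsigned int q_no, irq_handler_t handler)
{
	snprintf(v->irq_name, sizeof(v->irq_name), "LiquidIO%u-%s%u-rxtx-%u",
		 dev_id, is_pf ? "pf" : "vf", func, q_no);

	return request_irq(irq, handler, 0, v->irq_name, v);
}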
Signed-off-by: Rick Farrington <ricardo.farrington@cavium.com>
Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com>
Signed-off-by: Satanand Burla <satananda.burla@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>