Yonghong Song [Sat, 25 May 2019 18:57:53 +0000 (11:57 -0700)]
bpf: check signal validity in nmi for bpf_send_signal() helper
Commit
8b401f9ed244 ("bpf: implement bpf_send_signal() helper")
introduced the bpf_send_signal() helper. If the context is nmi,
the signal-sending work needs to be deferred to irq_work.
If the signal is invalid, the error will surface in the irq_work
and won't be propagated to the user.
This patch adds an early check in the helper itself to notify the
user of an invalid signal, as suggested by Daniel.
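A minimal sketch of such a check, assuming the helper body has access
to the signal number "sig" (the exact placement within the helper is
illustrative):

    /* Validate the signal number up front so an invalid signal
     * fails with -EINVAL at the call site, instead of the error
     * being swallowed inside the deferred irq_work.
     */
    if (unlikely(!valid_signal(sig)))
            return -EINVAL;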
Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Andrii Nakryiko [Sun, 26 May 2019 00:01:01 +0000 (17:01 -0700)]
bpftool: auto-complete BTF IDs for btf dump
Auto-complete BTF IDs for the `btf dump id` sub-command. The list of
possible BTF IDs is scavenged from loaded BPF programs that have
associated BTFs, as there is currently no API in libbpf to fetch the
list of all BTFs in the system.
Suggested-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Matteo Croce [Fri, 24 May 2019 19:59:12 +0000 (21:59 +0200)]
samples: bpf: add ibumad sample to .gitignore
This commit adds the ibumad sample binary to .gitignore, from which
it is currently omitted.
Signed-off-by: Matteo Croce <mcroce@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Alexei Starovoitov [Sat, 25 May 2019 01:58:59 +0000 (18:58 -0700)]
Merge branch 'optimize-zext'
Jiong Wang says:
====================
v9:
- Split patch 5 in v8: make the bpf uapi header file sync a separate
patch. (Alexei)
v8:
- For stack slot reads, mark them as REG_LIVE_READ64. (Alexei)
- Change DEF_NOT_SUBREG from -1 to 0. (Alexei)
- Rebased on top of latest bpf-next.
v7:
- Drop the first patch in v6, the one adding 32-bit return value and
argument type. (Alexei)
- Rename bpf_jit_hardware_zext to bpf_jit_needs_zext. (Alexei)
- Use mov32 with imm == 1 to indicate it is zext. (Alexei)
- JIT back-ends peephole next insn to optimize out unnecessary zext
inserted by verifier. (Alexei)
- Testing:
+ patch set tested (bpf selftest) on x64 host with llvm 9.0
no regression observed on both JIT and interpreter modes.
+ patch set tested (bpf selftest) on x32 host.
By Yanqing Wang, thanks!
no regression observed on both JIT and interpreter modes.
+ patch set tested (bpf selftest) on RV64 host with llvm 9.0,
By Björn Töpel, thanks!
no regression observed before and after this set with JIT_ALWAYS_ON.
test_progs_32 also enabled as LLVM 9.0 is used by Björn.
+ cross compiled the other affected targets, arm, PowerPC, SPARC, S390.
v6:
- Fixed s390 kbuild test robot error. (kbuild)
- Make comment style in backends patches more consistent.
v5:
- Adjusted several test_verifier helpers to make them work on hosts
w/ and w/o hardware zext. (Naveen)
- Make sure the zext flag is not set when the verifier is bypassed,
for example, libtest_bpf.ko. (Naveen)
- Conservatively mark bpf main return value as 64-bit. (Alexei)
- Make sure read flag is either READ64 or READ32, not the mix of both.
(Alexei)
- Merged patch 1 and 2 in v4. (Alexei)
- Fixed kbuild test robot warning on NFP. (kbuild)
- Proposed new BPF_ZEXT insn to have optimal code-gen for various JIT
back-ends.
- Conservatively set zext flags for patched insns.
- Fixed return value zext for helper function calls.
- Also adjusted the test_verifier scalability unit test to avoid
triggering too many insn patches, which would hang the computer.
- re-tested on x86 host with llvm 9.0, no regression on test_verifier,
test_progs, test_progs_32.
- re-tested offload target (nfp), no regression on local testsuite.
v4:
- added the two missing fixes which address two of Jakub's reviews on v3.
- rebase on top of bpf-next.
v3:
- remove redundant check in "propagate_liveness_reg". (Jakub)
- add extra check in "mark_reg_read" to prune more search. (Jakub)
- re-implemented "prog_flags" passing mechanism, removed use of
global switch inside libbpf.
- enabled high 32-bit randomization beyond "test_verifier" and
"test_progs". Now it should have been enabled for all possible
tests. Re-run all tests, haven't noticed regression.
- remove RFC tag.
v2:
- rebased on top of bpf-next master.
- added comments on what the sub-register def index is. (Edward, Alexei)
- removed patch 1 which turns bit mask from enum to macro. (Alexei)
- removed sysctl/bpf_jit_32bit_opt. (Alexei)
- merged sub-register def insn index into reg state. (Alexei)
- change test methodology (Alexei):
+ instead of simple unit tests on x86_64, where this optimization is
not enabled due to hardware support, poison the high 32-bit of any
def identified as safe to do so. This lets the correctness of this
patch set be checked whenever the daily bpf selftest runs, which
delivers a very stressful test on host machines like x86_64.
+ hi32 poisoning is gated by a new BPF_F_TEST_RND_HI32 prog flags.
+ BPF_F_TEST_RND_HI32 is enabled for all tests of "test_progs" and
"test_verifier", the latter needs minor tweak on two unit tests,
please see the patch for the change.
+ introduced a new global variable "libbpf_test_mode" into libbpf.
Once it is set to true, BPF_F_TEST_RND_HI32 is set for all later
PROG_LOAD syscalls; the goal is to ease enabling hi32 poisoning on
the existing testsuite.
We could also introduce new APIs, for example "bpf_prog_test_load",
then use -Dbpf_prog_load=bpf_prog_test_load to migrate tests under
test_progs, but there are several load APIs, and such new APIs would
need some changes to structures like "struct bpf_prog_load_attr".
+ removed the old unit tests. They were based on insn scanning and
required quite a few generic test_verifier code changes. Given hi32
randomization offers good test coverage, the unit tests don't add
much extra test value.
- enhanced the register width check ("is_reg64") when recording
sub-register writes; it now returns a more accurate width.
- Re-run all tests under "test_progs" and "test_verifier" on x86_64, no
regression. Fixed a couple of bugs exposed:
1. ctx field size transformation was not taken into account.
2. insn patching could cause loss of original aux data, which is
important for ctx field conversion.
3. the return value of propagate_liveness was wrong and caused a
regression in the number of processed insns.
4. helper call args weren't handled properly, so path pruning could
lose 64-bit read info in the pruned path.
- Re-run Cilium bpf prog for processed-insn-number benchmarking, no
regression.
v1:
- Fixed the missing handling of callee-saved registers for bpf-to-bpf
calls; sub-register defs therefore moved to frame state. (Jakub Kicinski)
- Removed redundant "cross_reg". (Jakub Kicinski)
- Various coding styles & grammar fixes. (Jakub Kicinski, Quentin Monnet)
The eBPF ISA specification requires the high 32-bit to be cleared when
the low 32-bit sub-register is written. This applies to the destination
register of ALU32 etc. JIT back-ends must guarantee this semantic when
doing code-gen. The x86_64 and AArch64 ISAs have the same semantics, so
the corresponding JIT back-ends don't need to do extra work.
However, 32-bit arches (arm, x86, nfp etc.) and some other 64-bit arches
(PowerPC, SPARC etc) need to do explicit zero extension to meet this
requirement, otherwise code like the following will fail.
u64_value = (u64) u32_value
... other uses of u64_value
This is because the compiler can exploit the semantics described above
and omit the zero extensions when extending u32_value to u64_value;
these JIT back-ends are then expected to guarantee it by inserting
extra zero extensions, which however can significantly increase code size.
Some benchmarks show there could be ~40% sub-register writes out of total
insns, meaning at least ~40% extra code-gen.
One observation is these extra zero extensions are not always necessary.
Take the above code snippet for example: if u32_value is never cast
into a u64, the high 32-bit of u32_value can be ignored and the extra
zero extension eliminated.
This patch set implements this idea: insns defining sub-registers are
marked when the high 32-bit of the defined sub-register matters. For
unmarked insns, it is safe to eliminate the high 32-bit clearance.
Algo
====
We could use insn-scan-based static analysis to tell whether a
sub-register def doesn't need zero extension. However, with such static
analysis we must make conservative assumptions at branching points,
where multiple uses could be introduced. So, any sub-register def that
is active at a branching point has to be marked as needing zero
extension. This could introduce quite a few false alarms, for example
~25% on Cilium bpf_lxc.
It is far better to use the dynamic data-flow tracking that the
verifier fortunately already has, and which can easily be extended to
serve the purpose of this patch set.
- Split read flags into READ32 and READ64.
- Record index of insn that does sub-register write. Keep the index inside
reg state and update it during verifier insn walking.
- A full register read on a sub-register marks its definition insn as
needing zero extension on dst register.
A new sub-register write overrides the old one.
- When propagating read64 during path pruning, also mark any insn defining
a sub-register that is read in the pruned path as full-register.
Benchmark
=========
- I estimate the JITed image could be 10% ~ 30% smaller on the affected
arches (nfp, arm, x32, riscv, ppc, sparc, s390), depending on the prog.
- For Cilium bpf_lxc, there are ~11500 insns in the compiled binary
(using the latest LLVM snapshot, with -mcpu=v3 -mattr=+alu32 enabled),
4460 of which have sub-register writes (~40%). Calculated by:
cat dump | grep -P "\tw" | wc -l (ALU32)
cat dump | grep -P "r.*=.*u32" | wc -l (READ_W)
cat dump | grep -P "r.*=.*u16" | wc -l (READ_H)
cat dump | grep -P "r.*=.*u8" | wc -l (READ_B)
With this patch set enabled, > 25% of those 4460 can be identified as
not needing zero extension on the destination, and the percentage can
go up beyond 50% with some follow-up optimizations based on the
infrastructure offered by this set. This leads to significant savings
in JITed image size.
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jiong Wang [Fri, 24 May 2019 22:25:28 +0000 (23:25 +0100)]
nfp: bpf: eliminate zero extension code-gen
This patch eliminates zero extension code-gen for instructions
including both alu and load/store. The only exception is ctx load: the
offload target doesn't go through the host ctx conversion logic, so we
do a customized load and ignore the zext flag set by the verifier.
Cc: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jiong Wang [Fri, 24 May 2019 22:25:27 +0000 (23:25 +0100)]
riscv: bpf: eliminate zero extension code-gen
Cc: Björn Töpel <bjorn.topel@gmail.com>
Acked-by: Björn Töpel <bjorn.topel@gmail.com>
Tested-by: Björn Töpel <bjorn.topel@gmail.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jiong Wang [Fri, 24 May 2019 22:25:26 +0000 (23:25 +0100)]
x32: bpf: eliminate zero extension code-gen
Cc: Wang YanQing <udknight@gmail.com>
Tested-by: Wang YanQing <udknight@gmail.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jiong Wang [Fri, 24 May 2019 22:25:25 +0000 (23:25 +0100)]
sparc: bpf: eliminate zero extension code-gen
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jiong Wang [Fri, 24 May 2019 22:25:24 +0000 (23:25 +0100)]
s390: bpf: eliminate zero extension code-gen
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jiong Wang [Fri, 24 May 2019 22:25:23 +0000 (23:25 +0100)]
powerpc: bpf: eliminate zero extension code-gen
Cc: Naveen N. Rao <naveen.n.rao@linux.ibm.com>
Cc: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jiong Wang [Fri, 24 May 2019 22:25:22 +0000 (23:25 +0100)]
arm: bpf: eliminate zero extension code-gen
Cc: Shubham Bansal <illusionist.neo@gmail.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jiong Wang [Fri, 24 May 2019 22:25:21 +0000 (23:25 +0100)]
selftests: bpf: enable hi32 randomization for all tests
The previous libbpf patch allows the user to specify "prog_flags" in
the bpf program load APIs. To enable high 32-bit randomization for a
test, we need to set BPF_F_TEST_RND_HI32 in "prog_flags".
To enable such randomization for all tests, we need to make sure all
places pass BPF_F_TEST_RND_HI32. Changing them one by one is not
convenient; also, it would be better if a test could be switched to
"normal" running mode without code changes.
Given the program load APIs used across bpf selftests are mostly:
bpf_prog_load: load from file
bpf_load_program: load from raw insns
A test_stub.c is implemented for bpf selftests; it offers two functions
for testing purposes:
bpf_prog_test_load
bpf_test_load_program
They are the same as "bpf_prog_load" and "bpf_load_program", except
they also set BPF_F_TEST_RND_HI32. Given the *_xattr functions are the
APIs for customizing any "prog_flags", it makes little sense to put
these two functions into libbpf.
Then, the following CFLAGS are passed to compilations for host programs:
-Dbpf_prog_load=bpf_prog_test_load
-Dbpf_load_program=bpf_test_load_program
They migrate the used load APIs to the test version, hence enabling
high 32-bit randomization for these tests without changing source code.
Besides all these, several testcases use "bpf_prog_load_attr"
directly; their call sites are updated to pass BPF_F_TEST_RND_HI32.
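As a hedged sketch, such a test wrapper can be as small as the
following (names mirror the ones above; the actual test_stub.c may
differ in details):

    /* same signature as bpf_prog_load(), but forces the test flag */
    int bpf_prog_test_load(const char *file, enum bpf_prog_type type,
                           struct bpf_object **pobj, int *prog_fd)
    {
            struct bpf_prog_load_attr attr = {
                    .file       = file,
                    .prog_type  = type,
                    .prog_flags = BPF_F_TEST_RND_HI32,
            };

            return bpf_prog_load_xattr(&attr, pobj, prog_fd);
    }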
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jiong Wang [Fri, 24 May 2019 22:25:20 +0000 (23:25 +0100)]
selftests: bpf: adjust several test_verifier helpers for insn insertion
- bpf_fill_ld_abs_vlan_push_pop:
Prevent zext from happening inside the PUSH_CNT loop. This could happen because
of BPF_LD_ABS (32-bit def) + BPF_JMP (64-bit use), or BPF_LD_ABS +
EXIT (64-bit use of R0). So, change BPF_JMP to BPF_JMP32 and redefine
R0 at exit path to cut off the data-flow from inside the loop.
- bpf_fill_jump_around_ld_abs:
Jump range is limited to 16 bits. Every ld_abs is replaced by 6 insns,
but on arches like arm, ppc etc., there will be one BPF_ZEXT inserted
to extend the error value of the inlined ld_abs sequence, which then
contains 7 insns. So, set the dividend to 7 so the testcase works on
all arches.
- bpf_fill_scale1/bpf_fill_scale2:
Both contain ~1M BPF_ALU32_IMM insns, which will trigger ~1M insn
patcher calls because of the hi32 randomization applied later when
BPF_F_TEST_RND_HI32 is set for bpf selftests. The insn patcher is
inefficient enough that 1M calls to it will hang the computer. So,
change to BPF_ALU64_IMM to avoid hi32 randomization.
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jiong Wang [Fri, 24 May 2019 22:25:19 +0000 (23:25 +0100)]
libbpf: add "prog_flags" to bpf_program/bpf_prog_load_attr/bpf_load_program_attr
libbpf doesn't allow passing "prog_flags" during bpf program load in a
couple of load-related APIs: "bpf_load_program_xattr", "load_program"
and "bpf_prog_load_xattr".
It makes sense to allow passing "prog_flags", which is useful for
customizing program loading.
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jiong Wang [Fri, 24 May 2019 22:25:18 +0000 (23:25 +0100)]
bpf: verifier: randomize high 32-bit when BPF_F_TEST_RND_HI32 is set
This patch randomizes the high 32-bit of a definition when
BPF_F_TEST_RND_HI32 is set.
Suggested-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jiong Wang [Fri, 24 May 2019 22:25:17 +0000 (23:25 +0100)]
tools: bpf: sync uapi header bpf.h
Sync new bpf prog load flag "BPF_F_TEST_RND_HI32" to tools/.
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jiong Wang [Fri, 24 May 2019 22:25:16 +0000 (23:25 +0100)]
bpf: introduce new bpf prog load flags "BPF_F_TEST_RND_HI32"
x86_64 and AArch64 are perhaps the two arches that run the bpf
testsuite most frequently; however, the zero extension insertion pass
is not enabled for them because of their hardware support.
It is critical to guarantee the pass's correctness, as it is supposed
to be enabled by default for a couple of other arches, for example
PowerPC, SPARC, arm, NFP etc. Therefore, it would be very useful if
there were a way to test this pass on, for example, x86_64.
The test methodology employed by this set is "poisoning" useless bits.
The high 32-bit of a definition is randomized if it is identified as
not used by any later insn. Such randomization is only enabled under
testing mode, which is gated by the new bpf prog load flag
"BPF_F_TEST_RND_HI32".
Suggested-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jiong Wang [Fri, 24 May 2019 22:25:15 +0000 (23:25 +0100)]
bpf: verifier: insert zero extension according to analysis result
After the previous patches, the verifier will mark an insn if it
really needs zero extension on dst_reg.
It is then for back-ends to decide how to use such information to eliminate
unnecessary zero extension code-gen during JIT compilation.
One approach is for the verifier to insert explicit zero extensions
for those insns that need them in a generic way; JIT back-ends then do
not generate zero extensions for sub-register writes by default.
However, only back-ends which do not have hardware zero extension want
this optimization. Back-ends like x86_64 and AArch64 have hardware zero
extension support, for which the insertion should be disabled.
This patch introduces a new target hook "bpf_jit_needs_zext" which
returns false by default, meaning verifier zero extension insertion is
disabled by default. A back-end can override this hook to return true
if it doesn't have hardware support and wants the verifier to insert
zero extensions explicitly.
Offload targets do not use this native target hook; instead, they can
get the optimization results using bpf_prog_offload_ops.finalize.
NOTE: arches can have diverse features; it is possible for one arch to
have hardware zero extension support for some sub-register write insns
but not all. For example, PowerPC and SPARC have zero-extended loads,
but not for alu32. So when verifier zero extension insertion is
enabled, these JIT back-ends need to peephole insns to remove the zero
extensions inserted for insns that actually have hardware zero
extension support. The peephole can be as simple as looking at the next
insn: if it is the special zero extension insn, then it is safe to
eliminate it when the current insn has hardware zero extension support.
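As a rough sketch of the two sides (the weak default matches the
description above; the peephole fragment is illustrative):

    /* core: weak default, overridden by JITs lacking hardware zext */
    bool __weak bpf_jit_needs_zext(void)
    {
            return false;
    }

    /* JIT code-gen loop: if the current insn already zero extends in
     * hardware, the verifier-inserted zext that follows is redundant */
    if (insn_is_zext(&insn[1]))
            i++;    /* skip the redundant zext insn */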
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jiong Wang [Fri, 24 May 2019 22:25:14 +0000 (23:25 +0100)]
bpf: introduce new mov32 variant for doing explicit zero extension
The encoding for this new variant is based on the BPF_X format. The
"imm" field could previously only be 0; now it can also be 1, which
means doing zero extension unconditionally:
.code = BPF_ALU | BPF_MOV | BPF_X
.dst_reg = DST
.src_reg = SRC
.imm = 1
We use this new form for doing zero extension, for which the verifier
will guarantee SRC == DST.
Implications on JIT back-ends when doing code-gen for
BPF_ALU | BPF_MOV | BPF_X:
1. No change if hardware already does zero extension unconditionally for
sub-register write.
2. Otherwise, when seeing imm == 1, just generate insns to clear high
32-bit. No need to generate insns for the move because when imm == 1,
dst_reg is the same as src_reg at the moment.
The interpreter doesn't need changes either; it already does
unconditional zero extension for mov32.
One helper macro, BPF_ZEXT_REG, is added to help create a zero
extension insn using this new mov32 variant.
One helper function, insn_is_zext, is added for checking whether an
insn is a zero extension on dst. It will be widely used by a few JIT
back-ends in later patches in this set.
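Based on the encoding above, the two helpers amount to the following
(a sketch consistent with the described mov32 variant):

    #define BPF_ZEXT_REG(DST)                               \
            ((struct bpf_insn) {                            \
                    .code  = BPF_ALU | BPF_MOV | BPF_X,     \
                    .dst_reg = DST,                         \
                    .src_reg = DST,                         \
                    .off   = 0,                             \
                    .imm   = 1 })

    static inline bool insn_is_zext(const struct bpf_insn *insn)
    {
            return insn->code == (BPF_ALU | BPF_MOV | BPF_X) &&
                   insn->imm == 1;
    }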
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jiong Wang [Fri, 24 May 2019 22:25:13 +0000 (23:25 +0100)]
bpf: verifier: mark patched-insn with sub-register zext flag
Patched insns do not go through generic verification and therefore
don't have zero extension information collected during insn walking.
We don't bother analyzing them at the moment; any sub-register def that
comes from them is just conservatively marked as needing zero extension.
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Jiong Wang [Fri, 24 May 2019 22:25:12 +0000 (23:25 +0100)]
bpf: verifier: mark verified-insn with sub-register zext flag
The eBPF ISA specification requires the high 32-bit to be cleared when
the low 32-bit sub-register is written. This applies to the destination
register of ALU32 etc. JIT back-ends must guarantee this semantic when
doing code-gen. The x86_64 and AArch64 ISAs have the same semantics, so
the corresponding JIT back-ends don't need to do extra work.
However, 32-bit arches (arm, x86, nfp etc.) and some other 64-bit arches
(PowerPC, SPARC etc) need to do explicit zero extension to meet this
requirement, otherwise code like the following will fail.
u64_value = (u64) u32_value
... other uses of u64_value
This is because the compiler can exploit the semantics described above
and omit the zero extensions when extending u32_value to u64_value;
these JIT back-ends are then expected to guarantee it by inserting
extra zero extensions, which however can significantly increase code size.
Some benchmarks show there could be ~40% sub-register writes out of total
insns, meaning at least ~40% extra code-gen.
One observation is these extra zero extensions are not always necessary.
Take the above code snippet for example: if u32_value is never cast
into a u64, the high 32-bit of u32_value can be ignored and the extra
zero extension eliminated.
This patch implements this idea: insns defining sub-registers are
marked when the high 32-bit of the defined sub-register matters. For
unmarked insns, it is safe to eliminate the high 32-bit clearance.
Algo:
- Split read flags into READ32 and READ64.
- Record index of insn that does sub-register write. Keep the index inside
reg state and update it during verifier insn walking.
- A full register read on a sub-register marks its definition insn as
needing zero extension on dst register.
A new sub-register write overrides the old one.
- When propagating read64 during path pruning, also mark any insn defining
a sub-register that is read in the pruned path as full-register.
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Fri, 24 May 2019 21:26:49 +0000 (23:26 +0200)]
Merge branch 'bpf-send-sig'
Yonghong Song says:
====================
This patch set tries to solve the following specific use case.
Currently, a bpf program can already collect stack traces through the
kernel function get_perf_callchain() when certain events happen (e.g.,
a cache miss counter or cpu clock counter overflows). But such stack
traces are not enough for jitted programs, e.g., hhvm (jitted php).
To get the real stack trace, the jit engine's internal data structures
need to be traversed in order to get the real user functions. The bpf
program itself may not be the best place to traverse the jit engine, as
the traversing logic could be complex and it is not a stable interface
either.
Instead, hhvm implements a signal handler, e.g. for SIGALRM, and a set
of program locations at which it can dump stack traces. When it
receives a signal, it will dump the stack at the next such program
location.
This patch set implements the bpf_send_signal() helper to send a
signal to hhvm in real time, resulting in the intended stack traces.
Patch #1 implemented the bpf_send_signal() helper in the kernel.
Patch #2 synced uapi header bpf.h to tools directory.
Patch #3 added a self test which covers tracepoint
and perf_event bpf programs.
Changelogs:
v4 => v5:
. pass the "current" task struct to irq_work as well
since the current task struct may change between
nmi and subsequent irq_work_interrupt.
Discovered by Daniel.
v3 => v4:
. fix one typo and declare "const char *id_path = ..."
to avoid directly using the long string in the func body
in Patch #3.
v2 => v3:
. change the standalone test to be part of prog_tests.
RFC v1 => v2:
. the previous version allowed sending a signal to an arbitrary
pid. This version just sends the signal to the current
task, to avoid unstable pids and potential races between
sending signals and task state changes for the pid.
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Yonghong Song [Thu, 23 May 2019 21:47:47 +0000 (14:47 -0700)]
tools/bpf: add selftest in test_progs for bpf_send_signal() helper
The test covered both nmi and tracepoint perf events.
$ ./test_progs
...
test_send_signal_tracepoint:PASS:tracepoint 0 nsec
...
test_send_signal_common:PASS:tracepoint 0 nsec
...
test_send_signal_common:PASS:perf_event 0 nsec
...
test_send_signal:OK
Acked-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Yonghong Song [Thu, 23 May 2019 21:47:46 +0000 (14:47 -0700)]
tools/bpf: sync bpf uapi header bpf.h to tools directory
The bpf uapi header include/uapi/linux/bpf.h is sync'ed
to tools/include/uapi/linux/bpf.h.
Acked-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Yonghong Song [Thu, 23 May 2019 21:47:45 +0000 (14:47 -0700)]
bpf: implement bpf_send_signal() helper
This patch tries to solve the following specific use case.
Currently, a bpf program can already collect stack traces through the
kernel function get_perf_callchain() when certain events happen (e.g.,
a cache miss counter or cpu clock counter overflows). But such stack
traces are not enough for jitted programs, e.g., hhvm (jitted php).
To get the real stack trace, the jit engine's internal data structures
need to be traversed in order to get the real user functions. The bpf
program itself may not be the best place to traverse the jit engine, as
the traversing logic could be complex and it is not a stable interface
either.
Instead, hhvm implements a signal handler, e.g. for SIGALRM, and a set
of program locations at which it can dump stack traces. When it
receives a signal, it will dump the stack at the next such program
location.
Such a mechanism can be implemented in the following way:
. a perf ring buffer is created between bpf program
and tracing app.
. once a particular event happens, bpf program writes
to the ring buffer and the tracing app gets notified.
. the tracing app sends a SIGALRM signal to the hhvm.
But this method could have large delays, skewing the profiling
results.
This patch implements the bpf_send_signal() helper to send a signal to
hhvm in real time, resulting in the intended stack traces.
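A minimal BPF-side usage sketch (hypothetical program; section name
and signal choice are just examples):

    #include <linux/bpf.h>
    #include <signal.h>
    #include "bpf_helpers.h"

    SEC("perf_event")
    int send_signal_prog(void *ctx)
    {
            /* deliver SIGUSR1 to the current task in real time */
            bpf_send_signal(SIGUSR1);
            return 0;
    }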
Acked-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Fri, 24 May 2019 21:05:58 +0000 (14:05 -0700)]
Merge branch 'btf2c-converter'
Andrii Nakryiko says:
====================
This patch set adds BTF-to-C dumping APIs to libbpf, allowing output of
a subset of BTF types as compilable C type definitions. This is useful
by itself, as raw BTF output is not easy to inspect and comprehend. But
it's also a big part of the BPF CO-RE (compile once - run everywhere)
initiative, aimed at making it possible to write relocatable BPF
programs that won't require on-the-host kernel headers (and that would
be able to inspect internal kernel structures not exposed through
kernel headers).
This patch set consists of three groups of patches and one pre-patch, with the
BTF-to-C dumper API depending on the first two groups.
Pre-patch #1 fixes issue with libbpf_internal.h.
btf__parse_elf() API patches:
- patch #2 adds the btf__parse_elf() API to libbpf, allowing loading of
BTF and/or BTF.ext from an ELF file;
- patch #3 utilizes btf__parse_elf() from bpftool for the `btf dump file` command;
- patch #4 switches test_btf.c to use btf__parse_elf() to check for presence
of BTF data in object file.
libbpf's internal hashmap patches:
- patch #5 adds resizeable non-thread safe generic hashmap to libbpf;
- patch #6 adds tests for that hashmap;
- patch #7 migrates btf_dedup()'s dedup_table to use hashmap w/ APPEND.
BTF-to-C dumper API patches:
- patch #8 adds btf_dump APIs with all the logic for laying out type
definitions in correct order and emitting C syntax for them;
- patch #9 adds lots of tests for common and quirky parts of C type system;
- patch #10 adds support for C-syntax btf dumping to bpftool;
- patch #11 updates bpftool documentation to mention C-syntax dump option;
- patch #12 updates bash-completion for the btf dump sub-command.
v2->v3:
- fix bpftool-btf.rst formatting (Quentin);
- simplify bash autocompletion script (Quentin);
- better error message in btf dump (Quentin);
v1->v2:
- removed unuseful file header (Jakub);
- removed inlines in .c (Jakub);
- added 'format {c|raw}' keyword/option (Jakub);
- re-use i var for iteration in btf_dump_c() (Jakub);
- bumped libbpf version to 0.0.4;
v0->v1:
- fix bug in hashmap__for_each_bucket_entry() not handling empty hashmap;
- removed `btf dump`-specific libbpf logging hook up (Quentin has more generic
patchset);
- change btf__parse_elf() to always load .BTF and return it as a result, with
.BTF.ext being optional and returned through struct btf_ext** arg (Alexei);
- endianness check to use __BYTE_ORDER__ (Alexei);
- bool:1 to __u8:1 in type_aux_state (Alexei);
- added HASHMAP_APPEND strategy to hashmap, changed
hashmap__for_each_key_entry() to also check for key equality during
iteration (multimap iteration for key);
- added new tests for empty hashmap and hashmap as a multimap;
- tried to clarify weak/strong dependency ordering comments (Alexei)
- btf dump test's expected output - support better commenting approach (Alexei);
- added bash-completion for a new "c" option (Alexei).
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Fri, 24 May 2019 18:59:07 +0000 (11:59 -0700)]
bpftool: update bash-completion w/ new c option for btf dump
Add bash completion for new C btf dump option.
Cc: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Fri, 24 May 2019 18:59:06 +0000 (11:59 -0700)]
bpftool/docs: add description of btf dump C option
Document optional **c** option for btf dump subcommand.
Cc: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Fri, 24 May 2019 18:59:05 +0000 (11:59 -0700)]
bpftool: add C output format option to btf dump subcommand
Utilize libbpf's new btf_dump API to emit BTF as C definitions.
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Fri, 24 May 2019 18:59:04 +0000 (11:59 -0700)]
selftests/bpf: add btf_dump BTF-to-C conversion tests
Add new test_btf_dump set of tests, validating BTF-to-C conversion
correctness. Tests rely on clang to generate BTF from provided C test
cases.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Fri, 24 May 2019 18:59:03 +0000 (11:59 -0700)]
libbpf: add btf_dump API for BTF-to-C conversion
BTF contains enough type information to allow generating a valid,
compilable C header with correct layout of structs/unions and all the
typedef/enum definitions. This patch adds a new "object" - btf_dump -
to facilitate dumping BTF as valid C. btf_dump__dump_type() is the main
API, which takes care of dumping out (through a user-provided
printf-like callback function) C definitions for a given type ID and
its required dependencies. This allows for not just dumping out the
entirety of BTF types, but also selective filtering based on
user-provided criteria, with a minimal set of dependent types.
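A hedged usage sketch (error handling elided; the callback signature
follows the printf-like contract described above):

    static void print_cb(void *ctx, const char *fmt, va_list args)
    {
            vprintf(fmt, args);
    }

    /* emit C definitions for one type ID plus its dependencies */
    struct btf_dump *d = btf_dump__new(btf, NULL, NULL, print_cb);

    btf_dump__dump_type(d, type_id);
    btf_dump__free(d);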
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Fri, 24 May 2019 18:59:02 +0000 (11:59 -0700)]
libbpf: switch btf_dedup() to hashmap for dedup table
Utilize libbpf's hashmap as a multimap for the dedup_table implementation.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Fri, 24 May 2019 18:59:01 +0000 (11:59 -0700)]
selftests/bpf: add tests for libbpf's hashmap
Test all APIs for internal hashmap implementation.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Fri, 24 May 2019 18:59:00 +0000 (11:59 -0700)]
libbpf: add resizable non-thread safe internal hashmap
There is a need for fast point lookups inside libbpf for multiple use
cases (e.g., name resolution for BTF-to-C conversion, by-name lookups in
BTF for upcoming BPF CO-RE relocation support, etc). This patch
implements a simple resizable, non-thread-safe hashmap using singly
linked list chains.
Four different insert strategies are supported:
- HASHMAP_ADD - only add key/value if key doesn't exist yet;
- HASHMAP_SET - add key/value pair if key doesn't exist yet; otherwise,
update value;
- HASHMAP_UPDATE - update value, if key already exists; otherwise, do
nothing and return -ENOENT;
- HASHMAP_APPEND - always add key/value pair, even if key already exists.
This turns the hashmap into a multimap by allowing multiple values to
be associated with the same key. The most useful read API for such a
hashmap is hashmap__for_each_key_entry() iteration. If hashmap__find()
is still used, it will return the last inserted key/value entry (the
first in a bucket chain).
For HASHMAP_SET and HASHMAP_UPDATE, the old key/value pair is returned,
so that the calling code can handle proper memory management, if
necessary.
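A hedged usage sketch of the strategies above (the hash/equality
callbacks are deliberately simplistic):

    size_t hash_fn(const void *k, void *ctx) { return (size_t)k; }
    bool equal_fn(const void *a, const void *b, void *ctx) { return a == b; }

    const void *old_key;
    void *old_val;
    struct hashmap *map = hashmap__new(hash_fn, equal_fn, NULL);

    hashmap__add(map, key, val1);                     /* fails if key exists */
    hashmap__set(map, key, val2, &old_key, &old_val); /* upsert */
    hashmap__append(map, key, val3);                  /* multimap: keeps both */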
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Fri, 24 May 2019 18:58:59 +0000 (11:58 -0700)]
selftests/bpf: use btf__parse_elf to check presence of BTF/BTF.ext
Switch test_btf.c to rely on btf__parse_elf to check presence of BTF and
BTF.ext data, instead of implementing its own ELF parsing.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Fri, 24 May 2019 18:58:58 +0000 (11:58 -0700)]
bpftool: use libbpf's btf__parse_elf API
Use btf__parse_elf() API, provided by libbpf, instead of implementing
ELF parsing by itself.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Fri, 24 May 2019 18:58:57 +0000 (11:58 -0700)]
libbpf: add btf__parse_elf API to load .BTF and .BTF.ext
Loading BTF and BTF.ext from an ELF file is a common need. Instead of
requiring every user to re-implement it, let's provide this API from
libbpf itself. It's mostly copy/paste from the `bpftool btf dump`
implementation, which will be switched to libbpf's version in the next
patch. btf__parse_elf allows loading BTF and, optionally, BTF.ext.
This is also useful for tests that need to load/work with BTF loaded
from test ELF files.
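Hedged usage sketch (per the description above, .BTF is always loaded
and .BTF.ext is returned through an out parameter; "prog.o" is a
placeholder):

    struct btf_ext *btf_ext = NULL;
    struct btf *btf = btf__parse_elf("prog.o", &btf_ext);

    if (libbpf_get_error(btf))
            /* no .BTF section, or parse failure */;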
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko [Fri, 24 May 2019 18:58:56 +0000 (11:58 -0700)]
libbpf: ensure libbpf.h is included along libbpf_internal.h
libbpf_internal.h relies on a number of definitions from libbpf.h.
This patch makes sure that libbpf.h is always included.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Michal Rostecki [Thu, 23 May 2019 12:53:55 +0000 (14:53 +0200)]
samples: bpf: Do not define bpf_printk macro
The bpf_printk macro was moved to bpf_helpers.h which is included in all
example programs.
Signed-off-by: Michal Rostecki <mrostecki@opensuse.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Michal Rostecki [Thu, 23 May 2019 12:53:54 +0000 (14:53 +0200)]
selftests: bpf: Move bpf_printk to bpf_helpers.h
bpf_printk is a macro which is commonly used to print out debug messages
in BPF programs and it was copied in many selftests and samples. Since
all of them include bpf_helpers.h, this change moves the macro there.
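For reference, the macro in question is essentially the following
(a sketch matching the common bpf_helpers.h definition of the time):

    #define bpf_printk(fmt, ...)                            \
    ({                                                      \
            char ____fmt[] = fmt;                           \
            bpf_trace_printk(____fmt, sizeof(____fmt),      \
                             ##__VA_ARGS__);                \
    })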
Signed-off-by: Michal Rostecki <mrostecki@opensuse.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Daniel Borkmann [Thu, 23 May 2019 23:46:26 +0000 (01:46 +0200)]
Merge branch 'bpf-explored-states'
Alexei Starovoitov says:
====================
Convert explored_states array into hash table and use simple hash
to reduce verifier peak memory consumption for programs with bpf2bpf
calls. More details in patch 3.
v1->v2: fixed Jakub's small nit in patch 1
====================
Acked-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Wed, 22 May 2019 03:17:07 +0000 (20:17 -0700)]
bpf: convert explored_states to hash table
All prune points inside a callee bpf function most likely will have
different callsites. For example, if function foo() is called from
two callsites, half of the explored states in all prune points in foo()
will be useless for the subsequent walking of one of those callsites.
Fortunately explored_states pruning heuristics keeps the number of states
per prune point small, but walking these states is still a waste of cpu
time when the callsite of the current state is different from the callsite
of the explored state.
To improve the pruning logic, convert explored_states into a hash table
and use a simple insn_idx ^ callsite hash to select the hash bucket.
This optimization has no effect on programs without bpf2bpf calls
and drastically improves programs with calls.
In the latter case it reduces total memory consumption in 1M scale tests
by almost 3 times (peak_states drops from 5752 to 2016).
Care should be taken when comparing the states for equivalency.
Since the same hash bucket can now contain states with different
indices, the insn_idx has to be part of verifier_state and compared.
Different hash table sizes and different hash functions were explored,
but the results were not significantly better vs this patch.
They can be improved in the future.
Hit/miss heuristic is not counting index miscompare as a miss.
Otherwise verifier stats become unstable when experimenting
with different hash functions.
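A sketch of the bucket selection described above (illustrative; it
assumes the callsite is kept in the current frame's state):

    /* hash by insn index xor'ed with the callsite so that states
     * from different callsites of the same prune point land in
     * different buckets */
    static struct bpf_verifier_state_list **explored_state(
                            struct bpf_verifier_env *env, int idx)
    {
            struct bpf_verifier_state *cur = env->cur_state;
            struct bpf_func_state *state = cur->frame[cur->curframe];

            return &env->explored_states[(idx ^ state->callsite) %
                                         state_htab_size(env)];
    }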
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Wed, 22 May 2019 03:17:06 +0000 (20:17 -0700)]
bpf: split explored_states
Split explored_states into a prune_point boolean mark
and a linked list of explored states.
This removes the STATE_LIST_MARK hack and allows marks to be separate from states.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Wed, 22 May 2019 03:17:05 +0000 (20:17 -0700)]
bpf: cleanup explored_states
Clean up explored_states to prepare for the introduction of a hashtable.
No functional changes.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Daniel Borkmann [Thu, 23 May 2019 14:20:58 +0000 (16:20 +0200)]
Merge branch 'bpf-jmp-seq-limit'
Alexei Starovoitov says:
====================
Patch 1 - jmp sequence limit
Patch 2 - improve existing tests
Patch 3 - add pyperf-based realistic bpf program that takes
advantage of higher limit and use it as a stress test
v1->v2: fixed nit in patch 3. added Andrii's acks
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Wed, 22 May 2019 03:14:21 +0000 (20:14 -0700)]
selftests/bpf: add pyperf scale test
Add a snippet of pyperf bpf program used to collect python stack traces
as a scale test for the verifier.
At 189 loop iterations, llvm 9.0 starts ignoring '#pragma unroll'
and generates a partially unrolled loop instead.
Hence use 50, 100, and 180 loop iterations to stress test.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Wed, 22 May 2019 03:14:20 +0000 (20:14 -0700)]
selftests/bpf: adjust verifier scale test
Adjust scale tests to check for new jmp sequence limit.
BPF_JGT had to be changed to BPF_JEQ because the verifier was
too smart. It tracked the known safe range of R0 values
and pruned the search early, before hitting the exact 8192 limit.
bpf_semi_rand_get() was too (un)?lucky.
k = 0; was missing in bpf_fill_scale2.
It was testing a slightly shorter sequence of jumps than intended.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Alexei Starovoitov [Wed, 22 May 2019 03:14:19 +0000 (20:14 -0700)]
bpf: bump jmp sequence limit
The limit of 1024 subsequent jumps was causing otherwise valid
programs to be rejected. Bump it to 8192 and make the error more verbose.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Andrii Nakryiko [Wed, 22 May 2019 17:51:28 +0000 (10:51 -0700)]
libbpf: emit diff of mismatched public API, if any
It's easy to have a mismatch of "intended to be public" vs really
exposed API functions. While Makefile does check for this mismatch, if
it actually occurs it's not trivial to determine which functions are
accidentally exposed. This patch dumps out a diff showing what's not
supposed to be exposed, facilitating easier fixing.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Sunil Muthuswamy [Wed, 22 May 2019 23:10:44 +0000 (23:10 +0000)]
hv_sock: perf: loop in send() to maximize bandwidth
Currently, the hv_sock send() iterates once over the buffer, puts data into
the VMBUS channel and returns. It doesn't maximize on the case when there
is a simultaneous reader draining data from the channel. In such a case,
the send() can maximize the bandwidth (and consequently minimize the cpu
cycles) by iterating until the channel is found to be full.
Perf data:
Total Data Transfer: 10GB/iteration
Single threaded reader/writer, Linux hvsocket writer with Windows hvsocket
reader
Packet size: 64KB
CPU sys time was captured using the 'time' command for the writer to send
10GB of data.
'Send Buffer Loop' is with the patch applied.
The values below are over 10 iterations.
|--------------------------------------------------------|
| | Current | Send Buffer Loop |
|--------------------------------------------------------|
| | Throughput | CPU sys | Throughput | CPU sys |
| | (MB/s) | time (s) | (MB/s) | time (s) |
|--------------------------------------------------------|
| Min | 407 | 7.048 | 401 | 5.958 |
|--------------------------------------------------------|
| Max | 455 | 7.563 | 542 | 6.993 |
|--------------------------------------------------------|
| Avg | 440 | 7.411 | 451 | 6.639 |
|--------------------------------------------------------|
| Median | 446 | 7.417 | 447 | 6.761 |
|--------------------------------------------------------|
Observation:
1. The avg throughput doesn't really change much with this change for this
scenario. This is most probably because the bottleneck on throughput is
somewhere else.
2. The average system (or kernel) cpu time goes down by 10%+ with this
change, for the same amount of data transfer.
Signed-off-by: Sunil Muthuswamy <sunilmut@microsoft.com>
Reviewed-by: Dexuan Cui <decui@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sunil Muthuswamy [Wed, 22 May 2019 22:56:07 +0000 (22:56 +0000)]
hv_sock: perf: Allow the socket buffer size options to influence the actual socket buffers
Currently, the hv_sock buffer size is static and can't scale to the
bandwidth requirements of the application. This change allows the
applications to influence the socket buffer sizes using the SO_SNDBUF and
the SO_RCVBUF socket options.
A few interesting points to note:
1. Since the VMBUS does not allow a resize operation of the ring size, the
socket buffer size option should be set prior to establishing the
connection for it to take effect.
2. Setting the socket option comes with the cost of that much memory being
reserved/allocated by the kernel, for the lifetime of the connection.
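A hedged userspace sketch of point 1 (endpoint setup elided; the
buffer size is an example value):

    int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
    int sz = 4 * 1024 * 1024;

    /* must happen before connect(): the VMBUS ring size is fixed
     * once the connection is established */
    setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sz, sizeof(sz));
    /* ... connect(fd, ...) ... */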
Perf data:
Total Data Transfer: 1GB
Single threaded reader/writer
Results below are summarized over 10 iterations.
Linux hvsocket writer + Windows hvsocket reader:
|---------------------------------------------------------------------------------------------|
|Packet size -> | 128B | 1KB | 4KB | 64KB |
|---------------------------------------------------------------------------------------------|
|SO_SNDBUF size | | Throughput in MB/s (min/max/avg/median): |
| v | |
|---------------------------------------------------------------------------------------------|
| Default | 109/118/114/116 | 636/774/701/700 | 435/507/480/476 | 410/491/462/470 |
| 16KB | 110/116/112/111 | 575/705/662/671 | 749/900/854/869 | 592/824/692/676 |
| 32KB | 108/120/115/115 | 703/823/767/772 | 718/878/850/866 | 1593/2124/2000/2085 |
| 64KB | 108/119/114/114 | 592/732/683/688 | 805/934/903/911 | 1784/1943/1862/1843 |
|---------------------------------------------------------------------------------------------|
Windows hvsocket writer + Linux hvsocket reader:
|---------------------------------------------------------------------------------------------|
|Packet size -> | 128B | 1KB | 4KB | 64KB |
|---------------------------------------------------------------------------------------------|
|SO_RCVBUF size | | Throughput in MB/s (min/max/avg/median): |
| v | |
|---------------------------------------------------------------------------------------------|
| Default | 69/82/75/73 | 313/343/333/336 | 418/477/446/445 | 659/701/676/678 |
| 16KB | 69/83/76/77 | 350/401/375/382 | 506/548/517/516 | 602/624/615/615 |
| 32KB | 62/83/73/73 | 471/529/496/494 | 830/1046/935/939 | 944/1180/1070/1100 |
| 64KB | 64/70/68/69 | 467/533/501/497 | 1260/1590/1430/1431 | 1605/1819/1670/1660 |
|---------------------------------------------------------------------------------------------|
Signed-off-by: Sunil Muthuswamy <sunilmut@microsoft.com>
Reviewed-by: Dexuan Cui <decui@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Wed, 22 May 2019 22:00:25 +0000 (15:00 -0700)]
ipv4/igmp: shrink struct ip_sf_list
Removing two 4-byte holes allows using the kmalloc-32
kmem cache instead of kmalloc-64 on 64-bit kernels.
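The idea, as a hedged illustration (the field grouping is the point;
the exact member list follows include/linux/igmp.h):

    /* grouping the __be32 and the chars after the pointer-sized
     * members removes the two 4-byte holes, shrinking the struct
     * to fit kmalloc-32 on 64-bit */
    struct ip_sf_list {
            struct ip_sf_list *sf_next;
            unsigned long     sf_count[2];  /* include/exclude counts */
            __be32            sf_inaddr;
            unsigned char     sf_gsresp;    /* include in g & s response? */
            unsigned char     sf_oldin;     /* change state */
            unsigned char     sf_crcount;   /* retrans. left to send */
    };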
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Wed, 22 May 2019 19:22:21 +0000 (12:22 -0700)]
neighbor: Add tracepoint to __neigh_create
Add tracepoint to __neigh_create to enable debugging of new entries.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Wed, 22 May 2019 19:11:06 +0000 (12:11 -0700)]
selftests: pmtu: Simplify cleanup and namespace names
The point of the pause-on-fail argument is to leave the setup as is after
a test fails, to allow a user to debug why it failed. Move the cleanup
to after posting the result to the user to make it so.
Random names for the namespaces are not user friendly when trying to
debug a failure. Make them simpler and more direct for the tests. Run
cleanup at the beginning to ensure they are cleaned up if they already
exist.
Remove cleanup_done. There is no harm in doing cleanup twice; just
ignore any errors related to not existing - which is already done.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Wed, 22 May 2019 19:09:16 +0000 (12:09 -0700)]
selftests: fib-onlink: Make quiet by default
Add VERBOSE argument to fib-onlink-tests.sh and make output quiet by
default. Add getopt parsing of inputs and support for -v (verbose) and
-p (pause on fail).
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Wed, 22 May 2019 19:07:43 +0000 (12:07 -0700)]
net: Set strict_start_type for routes and rules
New userspace on an older kernel can send unknown and unsupported
attributes, resulting in an incomplete config which is almost
always wrong for routing (the few exceptions are passthrough settings
like the protocol that installed the route).
Set strict_start_type in the policies for IPv4 and IPv6 routes and
rules to detect new, unsupported attributes and fail the route add.
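A hedged sketch of what this looks like in an nla_policy (the boundary
attribute shown is a placeholder for the last attribute the kernel
understands):

    static const struct nla_policy rtm_ipv4_policy[RTA_MAX + 1] = {
            /* attribute types at or above the start type are
             * rejected under strict validation instead of being
             * silently ignored */
            [RTA_UNSPEC] = { .strict_start_type = RTA_DPORT + 1 },
            /* ... existing per-attribute policies ... */
    };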
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 23 May 2019 00:48:44 +0000 (17:48 -0700)]
Merge branch 'net-Export-functions-for-nexthop-code'
David Ahern says:
====================
net: Export functions for nexthop code
This set exports ipv4 and ipv6 fib functions for use by the nexthop
code. It also adds new ones to send route notifications if a nexthop
configuration changes.
v2
- repost of patches dropped at the end of the last dev window;
added patch 8, which exports nh_update_mtu, since it is in line with
the other patches
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Wed, 22 May 2019 19:04:46 +0000 (12:04 -0700)]
ipv4: Rename and export nh_update_mtu
Rename nh_update_mtu to fib_nhc_update_mtu and export for use by the
nexthop code.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Wed, 22 May 2019 19:04:45 +0000 (12:04 -0700)]
ipv4: export fib_info_update_nh_saddr
Add scope as an input argument rather than relying on the fib_info
reference in fib_nh, and export fib_info_update_nh_saddr.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Wed, 22 May 2019 19:04:44 +0000 (12:04 -0700)]
ipv4: export fib_flush
As nexthops are deleted, fib entries referencing them are marked dead.
Export fib_flush so those entries can be removed in a timely manner.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Wed, 22 May 2019 19:04:43 +0000 (12:04 -0700)]
ipv4: export fib_check_nh
Change fib_check_nh to take net, table and scope as input arguments
instead of struct fib_config, and export it for use by the nexthop code.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Wed, 22 May 2019 19:04:42 +0000 (12:04 -0700)]
ipv4: Add function to send route updates
Add fib_info_notify_update to walk the fib and send RTM_NEWROUTE
notifications with NLM_F_REPLACE set for entries linked to a fib_info
that have the nh_updated flag set. This helper will be used by the nexthop
code to notify userspace of routes that are impacted when a nexthop
config is updated via replace. The new function and its helper are
similar to how fib_flush and fib_table_flush work for address delete
and link down events.
This notification is needed for legacy apps that do not understand
the new nexthop object. Apps that are nexthop aware can use the
RTA_NH_ID attribute in the route notification to just ignore it.
In the future this should be wrapped in a sysctl to allow OSes that
are fully updated to avoid the notification storm.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Wed, 22 May 2019 19:04:41 +0000 (12:04 -0700)]
ipv6: export function to send route updates
Add fib6_rt_update to send RTM_NEWROUTE with NLM_F_REPLACE set. This
helper will be used by the nexthop code to notify userspace of routes
that are impacted when a nexthop config is updated via replace.
This notification is needed for legacy apps that do not understand
the new nexthop object. Apps that are nexthop aware can use the
RTA_NH_ID attribute in the route notification to just ignore it.
In the future this should be wrapped in a sysctl to allow OSes that
are fully updated to avoid the notification storm.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Wed, 22 May 2019 19:04:40 +0000 (12:04 -0700)]
ipv6: Add hook to bump sernum for a route to stubs
Add hook to ipv6 stub to bump the sernum up to the root node for a
route. This is needed by the nexthop code when a nexthop config changes.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Wed, 22 May 2019 19:04:39 +0000 (12:04 -0700)]
ipv6: Add delete route hook to stubs
Add ip6_del_rt to the IPv6 stub. The hook is needed by the nexthop
code to remove entries linked to a nexthop that is getting deleted.
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 23 May 2019 00:46:28 +0000 (17:46 -0700)]
Merge branch 'net-phy-T1-support'
Andrew Lunn says:
====================
net: phy: T1 support
T1 PHYs make use of a single twisted pair, rather than the traditional
2 pairs for 100BaseT or 4 pairs for 1000BaseT. This patchset adds link
modes for 100BaseT1 and 1000BaseT1, and then makes use of 100BaseT1 in
the list of PHY features used by current T1 drivers.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Andrew Lunn [Wed, 22 May 2019 18:47:04 +0000 (20:47 +0200)]
net: phy: Make phy_basic_t1_features use base100t1.
Now that there is a link mode for 100BaseT1, use it in
phy_basic_t1_features so T1 PHY drivers will indicate this mode via
the Ethtool API.
Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Andrew Lunn [Wed, 22 May 2019 18:47:03 +0000 (20:47 +0200)]
net: phy: Add support for 100BaseT1 and 1000BaseT1
Add link modes for 100Mbps and 1Gbps over a single pair.
Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Trent Piepho [Wed, 22 May 2019 18:43:27 +0000 (18:43 +0000)]
net: phy: dp83867: Allocate state struct in probe
This was being done in the config function the first time the phy was
configured. It should be done in the probe method.
Cc: Andrew Lunn <andrew@lunn.ch>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Trent Piepho <tpiepho@impinj.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Trent Piepho [Wed, 22 May 2019 18:43:26 +0000 (18:43 +0000)]
net: phy: dp83867: Validate FIFO depth property
Ensure the property is in a valid range and fail when reading the DT
if it is not. Also add an error message for the existing failure when
a required property is not present.
Cc: Andrew Lunn <andrew@lunn.ch>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Trent Piepho <tpiepho@impinj.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Trent Piepho [Wed, 22 May 2019 18:43:25 +0000 (18:43 +0000)]
net: phy: dp83867: IO impedance is not dependent on RGMII delay
The driver would only set the IO impedance value when RGMII internal
delays were enabled. There is no reason for this. Move the IO
impedance block out of the RGMII delay block.
Cc: Andrew Lunn <andrew@lunn.ch>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Trent Piepho <tpiepho@impinj.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Trent Piepho [Wed, 22 May 2019 18:43:24 +0000 (18:43 +0000)]
net: phy: dp83867: Use unsigned variables to store unsigned properties
The variables used to store u32 DT properties were signed ints. This
wouldn't work properly if the value of the property were to overflow.
Use unsigned variables so this doesn't happen.
Cc: Andrew Lunn <andrew@lunn.ch>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Trent Piepho <tpiepho@impinj.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Trent Piepho [Wed, 22 May 2019 18:43:23 +0000 (18:43 +0000)]
net: phy: dp83867: Rework RGMII delay handling
The code was assuming the reset default of the delay control register
was to have delay disabled. This is what the datasheet shows as the
register's initial value. However, that's not actually true: the
default is controlled by the PHY's pin strapping.
If the interface mode is selected as RX or TX delay only, ensure the
other direction's delay is disabled.
If the interface mode is just "rgmii", with neither TX nor RX internal
delay, one might expect the driver to disable both delays. But this is
not what the driver does: it leaves the setting at the PHY's strapping
default. And that default, when no pins have strapping resistors, is to
have delay enabled at 2.00 ns.
Rather than change this behavior, I've kept it the same and documented
it. Having no delay will most likely not work and would break ethernet
on any board using "rgmii" mode. If the board is strapped to have a
delay and is configured to use "rgmii" mode, a warning is generated
that "rgmii-id" should have been used.
Also validate the delay values and fail if they are not in range.
Cc: Andrew Lunn <andrew@lunn.ch>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Trent Piepho <tpiepho@impinj.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
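A sketch of the resulting checks; constant and field names here are
illustrative, and delay_ctrl stands for the value read from the RGMII
delay control register:

    /* fail on out-of-range delay values from DT */
    if (dp83867->rx_id_delay > DP83867_RGMII_RX_CLK_DELAY_MAX ||
        dp83867->tx_id_delay > DP83867_RGMII_TX_CLK_DELAY_MAX)
            return -EINVAL;

    /* plain "rgmii" keeps the strap default; warn if that default
     * actually has an internal delay enabled */
    if (phydev->interface == PHY_INTERFACE_MODE_RGMII &&
        (delay_ctrl & (DP83867_RGMII_TX_CLK_DELAY_EN |
                       DP83867_RGMII_RX_CLK_DELAY_EN)))
            phydev_warn(phydev,
                        "PHY is strapped for internal delay; use \"rgmii-id\"\n");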
Trent Piepho [Wed, 22 May 2019 18:43:22 +0000 (18:43 +0000)]
net: phy: dp83867: Add ability to disable output clock
Generally, the output clock pin is only used for testing and afterwards
only serves as a source of RF noise. It could be used to daisy-chain
PHYs, but this is uncommon. Since the PHY can disable the output, make
doing so an option. I do this by adding another enumeration to the
allowed values of ti,clk-output-sel.
The code was not using the value DP83867_CLK_O_SEL_REF_CLK as one might
expect: to select the REF_CLK as the output. Rather it meant "keep
clock output setting as is", which, depending on PHY strapping, might
not be outputting REF_CLK.
Change this so DP83867_CLK_O_SEL_REF_CLK means enable REF_CLK output.
Omitting the property will leave the setting as is (which was the
previous behavior in this case).
Out of range values were silently converted into
DP83867_CLK_O_SEL_REF_CLK. Change this so they generate an error.
Cc: Andrew Lunn <andrew@lunn.ch>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Trent Piepho <tpiepho@impinj.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
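A sketch of the resulting property handling; the _OFF value stands for
the new "disable" enumeration, and the names here are illustrative:

    ret = of_property_read_u32(of_node, "ti,clk-output-sel",
                               &dp83867->clk_output_sel);
    if (ret) {
            /* property omitted: leave the strap/default setting as is */
            dp83867->set_clk_output = false;
    } else {
            dp83867->set_clk_output = true;
            if (dp83867->clk_output_sel > DP83867_CLK_O_SEL_REF_CLK &&
                dp83867->clk_output_sel != DP83867_CLK_O_SEL_OFF)
                    return -EINVAL; /* no more silent fallback to REF_CLK */
    }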
Trent Piepho [Wed, 22 May 2019 18:43:21 +0000 (18:43 +0000)]
dt-bindings: phy: dp83867: Add documentation for disabling clock output
The clock output is generally only used for testing and development and
not used to daisy-chain PHYs. It's just a source of RF noise afterward.
Add a mux value for "off". I've added it as another enumeration to the
output property. In the actual PHY, the mux and the output enable are
independently controllable. However, it doesn't seem useful to be able
to describe the mux setting when the output is disabled.
Document that the PHY's default setting will be left as-is if the
property is omitted.
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Trent Piepho <tpiepho@impinj.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Trent Piepho [Wed, 22 May 2019 18:43:19 +0000 (18:43 +0000)]
dt-bindings: phy: dp83867: Describe how driver behaves w.r.t rgmii delay
Add a note to make it more clear how the driver behaves when "rgmii" vs
"rgmii-id", "rgmii-idrx", or "rgmii-idtx" interface modes are selected.
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Trent Piepho <tpiepho@impinj.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vishal Kulkarni [Wed, 22 May 2019 16:16:12 +0000 (21:46 +0530)]
cxgb4: Enable hash filter with offload
Hash (exact-match) filters used for offloading flows share the
same active region resources on the chip with upper layer drivers,
like iw_cxgb4, chcr, etc. Currently, either Hash filters
or ULDs can use the active region resources, but not both. Hence,
use the new firmware configuration parameters (when available)
to allow both the Hash filters and ULDs to share the
active region simultaneously.
Signed-off-by: Vishal Kulkarni <vishal@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Baruch Siach [Wed, 22 May 2019 13:56:15 +0000 (16:56 +0300)]
net: fec: remove redundant ipg clock disable
Don't disable the ipg clock in the regulator error path. The clock is
disabled unconditionally two lines below the failed_regulator label.
Signed-off-by: Baruch Siach <baruch@tkos.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net>
Felipe Gasper [Tue, 21 May 2019 00:43:51 +0000 (19:43 -0500)]
net: Add UNIX_DIAG_UID to Netlink UNIX socket diagnostics.
This adds the ability for Netlink to report a socket's UID along with the
other UNIX diagnostic information that is already available. This will
allow diagnostic tools greater insight into which users control which
sockets.
To test this, do the following as a non-root user:
unshare -U -r bash
nc -l -U user.socket.$$ &
.. and verify from within that same session that Netlink UNIX socket
diagnostics report the socket's UID as 0. Also verify that Netlink UNIX
socket diagnostics report the socket's UID as the user's UID from an
unprivileged process in a different session. Verify the same from
a root process.
Signed-off-by: Felipe Gasper <felipe@felipegasper.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
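For consumers, a minimal sketch of requesting the new attribute over the
sock_diag netlink interface. The UDIAG_SHOW_UID request flag is assumed
here to mirror the attribute name; check the final uapi header:

    #include <linux/netlink.h>
    #include <linux/sock_diag.h>
    #include <linux/unix_diag.h>
    #include <sys/socket.h>

    /* dump request for all AF_UNIX sockets, asking for the owner UID */
    static struct {
            struct nlmsghdr nlh;
            struct unix_diag_req req;
    } msg = {
            .nlh = {
                    .nlmsg_len   = sizeof(msg),
                    .nlmsg_type  = SOCK_DIAG_BY_FAMILY,
                    .nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP,
            },
            .req = {
                    .sdiag_family = AF_UNIX,
                    .udiag_states = -1,              /* all socket states */
                    .udiag_show   = UDIAG_SHOW_UID,  /* ask for the UID */
            },
    };

Each reply message then carries a UNIX_DIAG_UID attribute holding the
socket owner's UID as a __u32.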
Linus Torvalds [Wed, 22 May 2019 15:36:16 +0000 (08:36 -0700)]
Merge tag 'arm64-fixes' of git://git./linux/kernel/git/arm64/linux
Pull arm64 fixes from Will Deacon:
- Fix SPE probe failure when backing auxbuf with high-order pages
- Fix handling of DMA allocations from outside of the vmalloc area
- Fix generation of build-id ELF section for vDSO object
- Disable huge I/O mappings if kernel page table dumping is enabled
- A few other minor fixes (comments, kconfig etc)
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: vdso: Explicitly add build-id option
arm64/mm: Inhibit huge-vmap with ptdump
arm64: Print physical address of page table base in show_pte()
arm64: don't trash config with compat symbol if COMPAT is disabled
arm64: assembler: Update comment above cond_yield_neon() macro
drivers/perf: arm_spe: Don't error on high-order pages for aux buf
arm64/iommu: handle non-remapped addresses in ->mmap and ->get_sgtable
Linus Torvalds [Wed, 22 May 2019 15:31:09 +0000 (08:31 -0700)]
Merge tag 'gfs2-5.1.fixes2' of git://git./linux/kernel/git/gfs2/linux-gfs2
Pull gfs2 fix from Andreas Gruenbacher:
"Fix a gfs2 sign extension bug introduced in v4.3"
* tag 'gfs2-5.1.fixes2' of git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2:
gfs2: Fix sign extension bug in gfs2_update_stats
Linus Torvalds [Wed, 22 May 2019 15:28:16 +0000 (08:28 -0700)]
Merge git://git./linux/kernel/git/davem/net
Pull networking fixes from David Miller:
1) Clear up some recent tipc regressions because of registration
ordering. Fix from Junwei Hu.
2) tipc's TLV_SET() can read past the end of the supplied buffer during
the copy. From Chris Packham.
3) ptp example program doesn't match the kernel, from Richard Cochran.
4) Outgoing message type fix in qrtr, from Bjorn Andersson.
5) Flow control regression in stmmac, from Tan Tee Min.
6) Fix inband autonegotiation in phylink, from Russell King.
7) Fix sk_bound_dev_if handling in rawv6_bind(), from Mike Manning.
8) Fix usbnet crash after disconnect, from Kloetzke Jan.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (21 commits)
usbnet: fix kernel crash after disconnect
selftests: fib_rule_tests: use pre-defined DEV_ADDR
net-next: net: Fix typos in ip-sysctl.txt
ipv6: Consider sk_bound_dev_if when binding a raw socket to an address
net: phylink: ensure inband AN works correctly
usbnet: ipheth: fix racing condition
net: stmmac: dma channel control register need to be init first
net: stmmac: fix ethtool flow control not able to get/set
net: qrtr: Fix message type of outgoing packets
networking: fix typos in code comments
ptp: Fix example program to match kernel.
fddi: fix typos in code comments
selftests: fib_rule_tests: enable forwarding before ipv4 from/iif test
selftests: fib_rule_tests: fix local IPv4 address typo
tipc: Avoid copying bytes beyond the supplied data
net: xilinx_emaclite: use readx_poll_timeout() in mdio wait function
net: axienet: use readx_poll_timeout() in mdio wait function
vlan: Mark expected switch fall-through
macvlan: Mark expected switch fall-through
net/mlx4_en: ethtool, Remove unsupported SFP EEPROM high pages query
...
Linus Torvalds [Wed, 22 May 2019 15:10:35 +0000 (08:10 -0700)]
Merge tag 'for-5.2/dm-fix-1' of git://git./linux/kernel/git/device-mapper/linux-dm
Pull device mapper fix from Mike Snitzer:
"Fix a particularly glaring oversight in a DM core commit from 5.1 that
doesn't properly trim special IOs (e.g. discards) relative to the
corresponding target's max_io_len_target_boundary()"
* tag 'for-5.2/dm-fix-1' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
dm: make sure to obey max_io_len_target_boundary
Andreas Gruenbacher [Fri, 17 May 2019 18:18:43 +0000 (19:18 +0100)]
gfs2: Fix sign extension bug in gfs2_update_stats
Commit 4d207133e9c3 changed the types of the statistic values in struct
gfs2_lkstats from s64 to u64. Because of that, what should be a signed
value in gfs2_update_stats turned into an unsigned value. When shifted
right, we end up with a large positive value instead of a small negative
value, which results in an incorrect variance estimate.
Fixes: 4d207133e9c3 ("gfs2: Make statistics unsigned, suitable for use with do_div()")
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Cc: stable@vger.kernel.org # v4.4+
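The bug class is easy to reproduce in isolation; a small stand-alone
example (illustrative only, not the gfs2 code):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            int64_t  sdelta = -8;                /* small negative delta */
            uint64_t udelta = (uint64_t)sdelta;  /* after the s64->u64 change */

            /* arithmetic shift keeps the sign (on common compilers): -2 */
            printf("%lld\n", (long long)(sdelta >> 2));
            /* logical shift on the unsigned copy: 4611686018427387902 */
            printf("%llu\n", (unsigned long long)(udelta >> 2));
            return 0;
    }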
Michael Lass [Tue, 21 May 2019 19:58:07 +0000 (21:58 +0200)]
dm: make sure to obey max_io_len_target_boundary
Commit 61697a6abd24 ("dm: eliminate 'split_discard_bios' flag from DM
target interface") incorrectly removed code from
__send_changing_extent_only() that is required to impose a per-target IO
boundary on IO that exceeds max_io_len_target_boundary(). Otherwise
"special" IO (e.g. DISCARD, WRITE SAME, WRITE ZEROES) can write beyond
where allowed.
Fix this by restoring the max_io_len_target_boundary() limit in
__send_changing_extent_only().
Fixes: 61697a6abd24 ("dm: eliminate 'split_discard_bios' flag from DM target interface")
Cc: stable@vger.kernel.org # 5.1+
Signed-off-by: Michael Lass <bevan@bi-co.net>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
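A sketch of the restored clamp, abbreviated and following the helper's
5.1-era signature (surrounding logic omitted):

    static int __send_changing_extent_only(struct clone_info *ci,
                                           struct dm_target *ti,
                                           unsigned num_bios)
    {
            unsigned len;

            /* ... */

            /* never cross the per-target boundary, even for special IO */
            len = min((sector_t)ci->sector_count,
                      max_io_len_target_boundary(ci->sector, ti));

            __send_duplicate_bios(ci, ti, num_bios, &len);

            ci->sector += len;
            ci->sector_count -= len;
            return 0;
    }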
Kloetzke Jan [Tue, 21 May 2019 13:18:40 +0000 (13:18 +0000)]
usbnet: fix kernel crash after disconnect
When disconnecting cdc_ncm the kernel sporadically crashes shortly
after the disconnect:
[ 57.868812] Unable to handle kernel NULL pointer dereference at virtual address 00000000
...
[ 58.006653] PC is at 0x0
[ 58.009202] LR is at call_timer_fn+0xec/0x1b4
[ 58.013567] pc : [<0000000000000000>] lr : [<ffffff80080f5130>] pstate: 00000145
[ 58.020976] sp : ffffff8008003da0
[ 58.024295] x29: ffffff8008003da0 x28: 0000000000000001
[ 58.029618] x27: 000000000000000a x26: 0000000000000100
[ 58.034941] x25: 0000000000000000 x24: ffffff8008003e68
[ 58.040263] x23: 0000000000000000 x22: 0000000000000000
[ 58.045587] x21: 0000000000000000 x20: ffffffc68fac1808
[ 58.050910] x19: 0000000000000100 x18: 0000000000000000
[ 58.056232] x17: 0000007f885aff8c x16: 0000007f883a9f10
[ 58.061556] x15: 0000000000000001 x14: 000000000000006e
[ 58.066878] x13: 0000000000000000 x12: 00000000000000ba
[ 58.072201] x11: ffffffc69ff1db30 x10: 0000000000000020
[ 58.077524] x9 : 8000100008001000 x8 : 0000000000000001
[ 58.082847] x7 : 0000000000000800 x6 : ffffff8008003e70
[ 58.088169] x5 : ffffffc69ff17a28 x4 : 00000000ffff138b
[ 58.093492] x3 : 0000000000000000 x2 : 0000000000000000
[ 58.098814] x1 : 0000000000000000 x0 : 0000000000000000
...
[ 58.205800] [< (null)>] (null)
[ 58.210521] [<ffffff80080f5298>] expire_timers+0xa0/0x14c
[ 58.215937] [<ffffff80080f542c>] run_timer_softirq+0xe8/0x128
[ 58.221702] [<ffffff8008081120>] __do_softirq+0x298/0x348
[ 58.227118] [<ffffff80080a6304>] irq_exit+0x74/0xbc
[ 58.232009] [<ffffff80080e17dc>] __handle_domain_irq+0x78/0xac
[ 58.237857] [<ffffff8008080cf4>] gic_handle_irq+0x80/0xac
...
The crash happens roughly 125..130ms after the disconnect. This
correlates with the 'delay' timer that is started on certain USB tx/rx
errors in the URB completion handler.
The problem is a race between usbnet_stop() and usbnet_start_xmit(). In
usbnet_stop() we call usbnet_terminate_urbs() to cancel all URBs in
flight. This only makes sense if no new URBs are submitted
concurrently, though. But usbnet_start_xmit() can run at the same
time on another CPU and almost unconditionally submits a URB. The
error callback of the new URB will then schedule the timer after it was
already stopped.
The fix adds a check if the tx queue is stopped after the tx list lock
has been taken. This should reliably prevent the submission of new URBs
while usbnet_terminate_urbs() does its job. The same thing is done on
the rx side even though it might be safe due to other flags that are
checked there.
Signed-off-by: Jan Klötzke <Jan.Kloetzke@preh.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
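A sketch of the tx-side check, abbreviated from usbnet_start_xmit()
(labels and surrounding error handling omitted):

    spin_lock_irqsave(&dev->txq.lock, flags);

    /* usbnet_stop() has stopped the queue and is cancelling URBs in
     * usbnet_terminate_urbs(); do not submit a new one behind its back */
    if (netif_queue_stopped(net)) {
            spin_unlock_irqrestore(&dev->txq.lock, flags);
            goto drop;
    }

    retval = usb_submit_urb(urb, GFP_ATOMIC);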
Hangbin Liu [Tue, 21 May 2019 06:40:47 +0000 (14:40 +0800)]
selftests: fib_rule_tests: use pre-defined DEV_ADDR
DEV_ADDR is defined but not used. Use it in address setting.
Do the same with IPv6 for consistency.
Reported-by: David Ahern <dsahern@gmail.com>
Fixes: fc82d93e57e3 ("selftests: fib_rule_tests: fix local IPv4 address typo")
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Masanari Iida [Tue, 21 May 2019 03:41:15 +0000 (12:41 +0900)]
net-next: net: Fix typos in ip-sysctl.txt
This patch fixes some spelling typos found in ip-sysctl.txt
Signed-off-by: Masanari Iida <standby24x7@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Mike Manning [Mon, 20 May 2019 18:57:17 +0000 (19:57 +0100)]
ipv6: Consider sk_bound_dev_if when binding a raw socket to an address
IPv6 does not consider if the socket is bound to a device when binding
to an address. The result is that a socket can be bound to eth0 and
then bound to the address of eth1. If the device is a VRF, the result
is that a socket can only be bound to an address in the default VRF.
Resolve by considering the device if sk_bound_dev_if is set.
Signed-off-by: Mike Manning <mmanning@vyatta.att-mail.com>
Reviewed-by: David Ahern <dsahern@gmail.com>
Tested-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
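A sketch of the added consideration in rawv6_bind(), abbreviated; the
real patch wires the bound device into the existing address checks:

    if (sk->sk_bound_dev_if) {
            err = -ENODEV;
            dev = dev_get_by_index_rcu(sock_net(sk),
                                       sk->sk_bound_dev_if);
            if (!dev)
                    goto out_unlock;
    }

    /* 'dev' (possibly NULL) is then passed to the address validation,
     * e.g. ipv6_chk_addr(), so a socket bound to eth0 can no longer
     * bind to an address that only exists on eth1 */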
Russell King [Mon, 20 May 2019 15:07:20 +0000 (16:07 +0100)]
net: phylink: ensure inband AN works correctly
Do not update the link interface mode while the link is down to avoid
spurious link interface changes.
Always call mac_config if we have a PHY to propagate the pause mode
settings to the MAC.
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Bernd Eckstein [Mon, 20 May 2019 15:31:09 +0000 (17:31 +0200)]
usbnet: ipheth: fix racing condition
Fix a race condition in ipheth.c that can lead to slow performance.
Bug: In ipheth_tx(), netif_wake_queue() may be called from the callback
ipheth_sndbulk_callback() _before_ netif_stop_queue() is called.
When this happens, the queue is stopped longer than it needs to be,
thus reducing network performance.
Fix: Move netif_stop_queue() in front of usb_submit_urb(). Now the order
is always correct. In case usb_submit_urb() fails, the queue is woken up
again, as the callback will not fire.
Testing: This race condition is usually not noticeable, as it has to
occur very frequently to slow down the network. The callback from the
USB stack is usually triggered slowly enough, so the situation does not
appear. However, on Ubuntu Linux in a VMware Workstation VM running on
Windows 10, we lose the race quite often and the following speedup can
be noticed:
Without this patch: Download: 4.10 Mbit/s, Upload: 4.01 Mbit/s
With this patch: Download: 36.23 Mbit/s, Upload: 17.61 Mbit/s
Signed-off-by: Oliver Zweigle <Oliver.Zweigle@faro.com>
Signed-off-by: Bernd Eckstein <3ernd.Eckstein@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
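A sketch of the reordered tail of ipheth_tx(), abbreviated:

    netif_stop_queue(net);              /* stop before submitting */
    retval = usb_submit_urb(dev->tx_urb, GFP_ATOMIC);
    if (retval) {
            dev_err(&dev->intf->dev, "%s: usb_submit_urb: %d\n",
                    __func__, retval);
            dev->net->stats.tx_errors++;
            dev_kfree_skb_any(skb);
            netif_wake_queue(net);      /* callback won't fire; wake up */
    } else {
            dev->net->stats.tx_packets++;
            dev->net->stats.tx_bytes += skb->len;
            dev_consume_skb_any(skb);
    }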
Linus Torvalds [Tue, 21 May 2019 19:51:20 +0000 (12:51 -0700)]
Merge tag 'selinux-pr-20190521' of git://git./linux/kernel/git/pcmoore/selinux
Pull SELinux fix from Paul Moore:
"One small SELinux patch to fix a problem when disconnecting a SCTP
socket with connect(AF_UNSPEC)"
* tag 'selinux-pr-20190521' of git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/selinux:
selinux: do not report error on connect(AF_UNSPEC)
Linus Torvalds [Tue, 21 May 2019 19:33:38 +0000 (12:33 -0700)]
Merge tag 'spdx-5.2-rc2' of git://git./linux/kernel/git/gregkh/driver-core
Pull SPDX update from Greg KH:
"Here is a series of patches that add SPDX tags to different kernel
files, based on two different things:
- SPDX entries are added to a bunch of files that we missed a year
ago that do not have any license information at all.
These were either missed because the tool saw the MODULE_LICENSE()
tag, or some EXPORT_SYMBOL tags, and got confused and thought the
file had a real license, or the files have been added since the
last big sweep, or they were Makefile/Kconfig files, which we
didn't touch last time.
- Add GPL-2.0-only or GPL-2.0-or-later tags to files where our scan
tools can determine the license text in the file itself. Where this
happens, the license text is removed, in order to cut down on the
700+ different ways we have in the kernel today, in a quest to get
rid of all of these.
These patches have been out for review on the linux-spdx@vger mailing
list, and while they were created by automatic tools, they were
hand-verified by a bunch of different people, all of whose names are on
the patches as reviewers.
The reason for these "large" patches is that if we were to continue to
progress at the current rate of change in the kernel, adding license
tags to individual files in different subsystems, we would be finished
in about 10 years at the earliest.
There will be more series of these types of patches coming over the
next few weeks as the tools and reviewers crunch through the more
"odd" variants of how to say "GPLv2" that developers have come up with
over the years, combined with other fun oddities (GPL + a BSD
disclaimer?) that are being unearthed, with the goal for the whole
kernel to be cleaned up.
These diffstats are not small, 3840 files are touched, over 10k lines
removed in just 24 patches"
* tag 'spdx-5.2-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core: (24 commits)
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 25
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 24
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 23
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 22
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 21
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 20
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 19
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 18
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 17
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 15
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 14
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 13
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 12
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 11
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 10
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 9
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 7
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 5
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 4
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 3
...
Linus Torvalds [Tue, 21 May 2019 19:24:24 +0000 (12:24 -0700)]
Merge branch 'linus' of git://git./linux/kernel/git/herbert/crypto-2.6
Pull crypto fixes from Herbert Xu:
- Two long-standing bugs in the powerpc assembly of vmx
- Stack overrun caused by HASH_MAX_DESCSIZE being too small
- Regression in caam
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: vmx - ghash: do nosimd fallback manually
crypto: vmx - CTR: always increment IV as quadword
crypto: hash - fix incorrect HASH_MAX_DESCSIZE
crypto: caam - fix typo in i.MX6 devices list for errata
Thomas Gleixner [Sun, 19 May 2019 13:51:55 +0000 (15:51 +0200)]
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 25
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version this program is distributed in the
hope that it would be useful but without any warranty without even
the implied warranty of merchantability or fitness for a particular
purpose see the gnu general public license for more details
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 6 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Steve Winslow <swinslow@gmail.com>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Jilayne Lovejoy <opensource@jilayne.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190519154043.007767574@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Gleixner [Sun, 19 May 2019 13:51:54 +0000 (15:51 +0200)]
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 24
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or any
later version this program is distributed in the hope that it will
be useful but without any warranty without even the implied warranty
of merchantability or fitness for a particular purpose see the gnu
general public license for more details
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 50 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Jilayne Lovejoy <opensource@jilayne.com>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Steve Winslow <swinslow@gmail.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190519154042.917228456@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Gleixner [Sun, 19 May 2019 13:51:53 +0000 (15:51 +0200)]
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 23
Based on 1 normalized pattern(s):
this software may be redistributed and or modified under the terms
of the gnu general public license as published by the free software
foundation either version 2 of the license or any later version this
program is distributed in the hope that it will be useful but
without any warranty without even the implied warranty of
merchantability or fitness for a particular purpose see the gnu
general public license for more details
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 2 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Steve Winslow <swinslow@gmail.com>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Jilayne Lovejoy <opensource@jilayne.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190519154042.800979546@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Gleixner [Sun, 19 May 2019 13:51:52 +0000 (15:51 +0200)]
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 22
Based on 1 normalized pattern(s):
the code contained herein is licensed under the gnu general public
license you may obtain a copy of the gnu general public license
version 2 or later at the following locations
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 4 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Jilayne Lovejoy <opensource@jilayne.com>
Reviewed-by: Steve Winslow <swinslow@gmail.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190519154042.707528683@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Gleixner [Sun, 19 May 2019 13:51:51 +0000 (15:51 +0200)]
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 21
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version this program is distributed in the
hope that it will be useful but without any warranty without even
the implied warranty of merchantability or fitness for a particular
purpose see the gnu general public license for more details you
should have received a copy of the gnu general public license along
with this program if not write to the free software foundation
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 2 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Steve Winslow <swinslow@gmail.com>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Jilayne Lovejoy <opensource@jilayne.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190519154042.615184352@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Gleixner [Sun, 19 May 2019 13:51:50 +0000 (15:51 +0200)]
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 20
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version this program is distributed in the
hope that it will be useful but without any warranty without even
the implied warranty of merchantability or fitness for a particular
purpose see the gnu general public license for more details you
should have received a copy of the gnu general public license along
with this program see the file copying if not write to the free
software foundation inc 51 franklin steet fifth floor boston ma
02110 1301 usa
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 41 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Jilayne Lovejoy <opensource@jilayne.com>
Reviewed-by: Steve Winslow <swinslow@gmail.com>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190519154042.524645346@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>